In this lesson you'll learn how to get the most out of your server hardware using virtualization and containers.
In the past, servers were mainframes that took up a lot of space. CPU time was valuable, so programmers tried to make use of as many CPU clock cycles as possible. They used batch processing to ensure the CPU always had something to work on. Batch processing is where jobs are lined up one after another; when one finishes, the next starts. Over time they switched to a time-sharing model where multiple people could run jobs at the same time. Time sharing enabled the CPU to switch between tasks, which allowed users to connect remotely and run tasks in real time while the server switched between users' tasks. Accounting software kept track of how much time each person used so they could be billed for it.
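The difference between the two models can be sketched in a few lines of Python. This is only a toy illustration (the job names and the one-unit time slice are made up, not from any real system): batch processing runs each job to completion before starting the next, while time sharing cycles through the jobs one small slice at a time.

```python
from collections import deque

def run_batch(jobs):
    """Batch processing: run each job to completion, one after another.
    Returns the order in which units of work were executed."""
    order = []
    for name, units in jobs:
        order.extend([name] * units)
    return order

def run_time_shared(jobs, quantum=1):
    """Time sharing: give each job a small slice of CPU time (the
    quantum), then move to the next, cycling until all jobs finish."""
    order = []
    queue = deque(jobs)
    while queue:
        name, units = queue.popleft()
        slice_ = min(quantum, units)
        order.extend([name] * slice_)
        if units - slice_ > 0:
            queue.append((name, units - slice_))  # job isn't done; requeue it
    return order

jobs = [("payroll", 3), ("report", 2)]
print(run_batch(jobs))        # payroll finishes before report even starts
print(run_time_shared(jobs))  # the CPU alternates between the two jobs
```

With time sharing, a short job no longer has to wait for a long job ahead of it to finish, which is what made interactive, real-time use of a shared machine possible.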
Eventually server hardware dropped in both price and size, and companies could afford to have their own servers in house instead of renting time on someone else's. As time went on, it became common practice to split services across multiple dedicated servers: one server for email, another for your website, and another for internal network services. A company could end up with many servers to handle all the different services it needed.
Server hardware continued to improve: CPUs got faster, RAM capacities grew, and hard drives held more data. Eventually we ended up with server rooms full of servers, each dedicated to an individual task and spending most of its time dormant. The servers became so fast that they would quickly respond to a request, then sit and wait for the next one.
Virtualization helps with this problem. With virtualization you can create multiple virtual machines and have them all run on a single physical server.
A virtual machine is a software-based computer, created by software designed to emulate physical hardware and take advantage of the physical hardware's virtualization features. Virtualization software is installed on a physical server called a host, and is used to create virtual machines called guests. Each guest can run its own independent operating system and acts like its own server.
The interface between the guest and the host is the hypervisor, or VMM (Virtual Machine Monitor). The hypervisor is software and hardware working together to create separate operating environments for the guest operating systems. Early hypervisors were entirely software based: they would intercept every guest hardware request and dynamically translate it to the physical hardware on the host. Over time, CPU manufacturers started adding hardware-based virtualization features. This improved performance because tasks previously handled by the slower software-based hypervisor could now be done in faster hardware.
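As a rough illustration of the software-based "intercept and translate" idea, here's a toy Python sketch. The class, guest names, and file paths are all hypothetical, invented for this example: the point is only that a guest's request to what it believes is its own local disk gets redirected by the hypervisor to a separate backing resource on the host.

```python
class TinyHypervisor:
    """Toy model of software-based trap-and-translate. Each guest's
    virtual disk request is intercepted and mapped to a distinct
    backing file on the host. Real hypervisors do this for CPU,
    memory, and devices, and today lean on hardware support."""

    def __init__(self):
        # (guest, virtual_disk) -> backing file on the host
        self.disk_map = {}

    def attach_disk(self, guest, virtual_disk, host_file):
        self.disk_map[(guest, virtual_disk)] = host_file

    def guest_read(self, guest, virtual_disk, offset):
        # The guest thinks it is talking to raw hardware; the
        # hypervisor redirects the request to that guest's backing file.
        host_file = self.disk_map[(guest, virtual_disk)]
        return f"read {host_file} at offset {offset}"

hv = TinyHypervisor()
hv.attach_disk("guest-a", "sda", "/var/vm/guest-a.img")
hv.attach_disk("guest-b", "sda", "/var/vm/guest-b.img")
print(hv.guest_read("guest-a", "sda", 0))  # each guest sees "its own" disk
```

Both guests ask for the same device name ("sda"), but the hypervisor keeps them in separate operating environments by translating each request differently.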
There are two different types of hypervisors available. A type 1 hypervisor is also known as a bare-metal hypervisor. A type 1 hypervisor runs directly on the hardware. A type 2 hypervisor runs on top of the host's operating system.
An operating system can run unmodified inside a virtual machine; the hypervisor will intercept its hardware requests and translate them into actual hardware requests. Over time, an interface called paravirtualization was created that allows the guest operating system to talk directly to the hypervisor. The paravirtualization layer lives in the guest operating system and makes the virtual machine more efficient, because the hypervisor no longer has to intercept and interpret what the guest is trying to do. To support paravirtualization, the guest operating system has to be modified, typically by installing the virtualization vendor's tools in the guest.
Using virtual machines lets you reduce the number of physical servers you purchase. You save money on the servers themselves, on the electricity used to power them, and on the air conditioning used to cool them. Virtual machines also let you get the most out of your server hardware while maintaining separation between services.
We learned in the last lesson how servers are different from our client computers. Most of the differences are there to help make sure the servers keep running if there's a hardware failure. Unfortunately, we still experience failures that can cause a server to become unresponsive. It might appear that in a virtualized environment, a single hardware failure would take down multiple virtual machines at once. Fortunately, we have solutions to prevent this from happening.
A technology that's integral to the reliability of our virtual infrastructure is a SAN (Storage Area Network). A SAN is a high-speed network that connects your servers to a shared pool of drive space. When set up properly, a disk array connected to the SAN can be attached to multiple servers at the same time, and each server will see the disk array as a locally attached drive. We're used to computers with locally attached storage; each of our computers has some form of local storage, such as an HDD or SSD. A SAN lets us take that kind of storage and attach it to multiple servers.
This is different from a NAS (Network Attached Storage). A NAS is a network-connected hard drive or disk array that provides file services. Your computer mounts the remote storage as a network share, not local storage. In short, a SAN presents what appear to be local disks to the operating system, while a NAS presents network drives.
In a virtual environment, you would store your virtual machines' VHDs (Virtual Hard Drives) on the disk array. A VHD is a file used by the host machine to provide the guest operating system's hard drive. The virtual machine runs on the host while its virtual hard drive is stored on the disk array.
When your virtual environment is set up with a SAN, a disk array, and more than one host server, you can survive a physical server failure. If a host fails, all of its virtual machines will start running on another host server. Depending on the virtualization software and configuration, sometimes the virtual machines won't even power down. You can have a server fail in this environment without any of your users noticing that anything happened.
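The failover idea can be sketched in Python. The host and VM names below are made up for illustration, and real platforms also weigh capacity, affinity rules, and storage reachability when placing VMs; the key point is that because the virtual hard drives live on the shared disk array, any surviving host can start the VMs of a failed host.

```python
def fail_over(placement, failed_host, hosts):
    """Reassign the VMs from a failed host across the surviving hosts.
    `placement` maps vm -> host. Round-robin placement is a toy
    strategy; real virtualization platforms consider load, capacity,
    and affinity rules when choosing where each VM restarts."""
    survivors = [h for h in hosts if h != failed_host]
    moved = [vm for vm, host in placement.items() if host == failed_host]
    for i, vm in enumerate(moved):
        # The VM's VHD is on the shared SAN array, so any surviving
        # host can attach it and boot the guest.
        placement[vm] = survivors[i % len(survivors)]
    return placement

placement = {"mail": "host1", "web": "host1", "dns": "host2"}
print(fail_over(placement, "host1", ["host1", "host2", "host3"]))
```

After the call, nothing is left on the failed host: "mail" and "web" have been spread across the survivors, while "dns" never moved.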
You can also mirror your disk arrays in case one fails; that way, if something happens to one disk array, the second one is ready to go. You can even have disk arrays replicate across sites, so if you have a disaster at one site, all your data will be safe at another. If you maintain hosts at the remote site, they can take over when disaster strikes the primary site. Designing your network with this level of redundancy will help with disaster recovery.
Each virtual machine requires a full operating system installed in order to function. If you're creating 30 virtual machines, you'll have the operating system installed 30 times across your hosts. Recently, a newer technology has emerged that provides a more efficient solution. Software containers are environments where an application can run in an isolated compartment created by the operating system. The software used to create containers combines many isolation technologies that were independently developed in the operating system. The end result is a software container that can be moved easily: you can develop an application in a container on your desktop, move the container to a server or cloud service, and know it will work.
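Real container runtimes combine operating system features such as namespaces, cgroups, and filesystem isolation. As a very rough taste of the idea, the Python sketch below (the `run_isolated` helper is invented for this example) runs a command in its own throwaway working directory with a stripped-down environment. This is an illustration only, not actual containment.

```python
import os
import subprocess
import sys
import tempfile

def run_isolated(command):
    """Run a command in a throwaway working directory with a minimal
    environment -- a toy nod to the isolation ideas that real container
    runtimes build from kernel features. Illustration only: the child
    can still see the rest of the filesystem."""
    with tempfile.TemporaryDirectory() as sandbox:
        result = subprocess.run(
            command,
            cwd=sandbox,               # its own working directory
            env={"PATH": os.defpath},  # stripped-down environment
            capture_output=True,
            text=True,
        )
    return result.stdout

# The child process starts in the sandbox directory, not ours.
out = run_isolated([sys.executable, "-c", "import os; print(os.getcwd())"])
print(out.strip())
```

The payoff of the real technology is the same one described above: because the application is packaged together with its isolated environment, the container behaves the same on your desktop, a server, or a cloud service.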
Containers are still a newer technology, and there are some concerns about the security of sharing the same operating system. Hypervisors have been around for a while and have been hardened and tested from a security standpoint. If you need to run apps that require a high level of security, you may want to use virtual machines instead.