
Lesson 01 - Introduction to Servers


In this lesson we look at server hardware and how it differs from client hardware.

What is a Server?

A server is a network device that provides services to other network devices, called clients.  A server provides centralized access to these services, and there are many services a server can provide.  The material you're reading now is sitting on a web server: web server software is installed on the server, which provides web pages to clients running web browsers.

Any network device can provide network services and become a server, but typically we use specialized server hardware that is more robust than standard client devices.  If you've worked with PC hardware, you know PCs contain some basic components that work together: memory, processors, storage, and power supplies.  Server hardware has all the same components, but with differences that make the hardware more reliable.

Form Factors 

When you're purchasing a new client computer you have a few options.  You can pick a desktop computer, a laptop, an all-in-one, or a tablet.  Each has its advantages and disadvantages, and you pick the form factor that fits your needs.  When purchasing a new server you have similar choices.  You can choose from a tower, rack mount, or blade server.

A tower server looks like a vertical desktop computer.  You will find tower servers in smaller data centers.  They take up more room than the other two form factors.  

A rack mount server mounts in a standard 19-inch four-post rack.  Any equipment designed for a rack is the same width, but the height can vary.  A rack is split into 1.75-inch sections called rack units.  Below you can see two rack mount servers.  The first is a 1U server, the second is a 2U server.  The 2U server is twice as tall as the 1U server.  The more units a server uses, the more components you can fit in it.  Rack mount servers usually have cable management that keeps the wiring clean in the rack and allows the server to slide out without unplugging everything.
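The rack unit math above is simple enough to sketch.  Here's a small Python example showing how a device's unit count translates to physical height (the 42U figure is just a common full-height rack size used for illustration):

```python
def rack_height_inches(units):
    """Height of a rack mount device: each rack unit (U) is 1.75 inches."""
    return units * 1.75

print(rack_height_inches(1))   # 1U server -> 1.75 in
print(rack_height_inches(2))   # 2U server -> 3.5 in
print(rack_height_inches(42))  # a common full-height 42U rack -> 73.5 in
```

This is why a 2U server is exactly twice as tall as a 1U server: the unit is a fixed 1.75-inch increment.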

1U Rack Mount Server

2U Rack Mount Server

Tower Server
The third form factor is a blade server.  A blade system is made up of two components: the blade chassis and the blade servers.  The blade chassis holds the blade servers.  Each blade is an entire server minus the features provided by the chassis.  Each vendor has its own chassis design, so you can't swap blades between vendors.  Below we see a blade chassis with 16 blade servers.  The blades can come in different configurations as well.  Below we see two blades, one configured with 2.5 in drives, and the other with 1.8 in drives.

Blade Chassis
Two Blade Servers

Tower Blade Server
You'll end up picking the form factor that best fits your environment.  Sometimes you may find devices that merge two form factors into one.  To the left is an example of a tower server that takes blades.  On the right you'll see the same server in rack mount format.  These merged formats give you a lot of flexibility in your environments.

Rack Mount Blade Server

Differences in Server Hardware

Server hardware has some differences that make it more reliable and efficient than the client equivalent.  A lot of the improvements exist because servers are used differently than clients.  A client is typically used by a single user working on a handful of tasks at once; server hardware needs to support many people doing many different things at the same time.  We're going to look at some of the things that set server hardware apart from client hardware.  It should be noted that some of these technologies have trickled down to clients over the years.

If we look inside a rack mount server we can see it has many of the same components as our clients.  Let's look at each of the components and see what's different.

In modern computers the processor, or CPU (Central Processing Unit), runs at a higher speed than memory, or RAM (Random Access Memory).  This means our processors spend time waiting for memory to catch up.  If you have multiple processors, only one processor can access memory at a time, which means you have multiple processors waiting for the memory to catch up.  In the image below we can see what this looks like in a client computer.  If one CPU is accessing memory, the other processor has to wait.

In servers we have a technology called NUMA (Non-Uniform Memory Access), where the memory is split into multiple nodes, one per CPU.  Each processor can access its own node without waiting for the other processors.  Performance is increased because each CPU has independent access to its own portion of memory.

Modern processors can have multiple cores, where each core acts as a logical CPU, and NUMA has been extended to support these multicore designs.  Multiprocessor and multicore NUMA-enabled devices will run faster than their non-NUMA counterparts since they spend less time waiting for RAM access.
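The contention described above can be illustrated with a deliberately simplified toy model (the numbers are abstract "time units", not real hardware timings): without NUMA, CPUs serialize on one shared path to memory; with NUMA, each CPU works against its own node in parallel.

```python
def memory_wait_time(cpus, accesses_each, numa=False):
    """Toy model of memory contention -- abstract time units, not real timings.

    Without NUMA, every CPU shares one path to RAM, so accesses from
    different CPUs serialize.  With NUMA, each CPU reads its own node
    in parallel, so the total time is just one CPU's workload.
    """
    if numa:
        return accesses_each          # per-CPU nodes accessed in parallel
    return cpus * accesses_each       # CPUs take turns on one memory bus

# Two CPUs each making 1000 memory accesses:
print(memory_wait_time(2, 1000))             # 2000 units, shared memory
print(memory_wait_time(2, 1000, numa=True))  # 1000 units, one node per CPU
```

Real systems are messier (remote-node access is slower than local, caches absorb many accesses), but the model captures why splitting memory into per-CPU nodes helps.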

The RAM in our client computers is not perfect; we do experience bit flips, where a stored bit spontaneously changes its value.  The operating system is responsible for detecting and recovering from these bit flips.  The recovery process slows down the system, and it isn't always successful, sometimes resulting in a crash.  Our servers use a technology called ECC (Error-Correcting Code) RAM that corrects bit flips before sending the data to the operating system.  When you use ECC RAM you don't have to worry as much about memory errors slowing down or crashing the server.
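To show how a flipped bit can be found and fixed, here is a minimal Hamming(7,4) sketch: 4 data bits are protected by 3 parity bits, and the parity checks point directly at any single bad bit.  This is only illustrative; real ECC DIMMs use wider codes over whole memory words, but the principle is the same.

```python
def hamming_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(codeword):
    """Detect and correct a single flipped bit, returning the 4 data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

word = hamming_encode([1, 0, 1, 1])
word[2] ^= 1                          # simulate a bit flip in storage
print(hamming_correct(word))          # recovers [1, 0, 1, 1]
```

The memory controller on an ECC system does this correction in hardware, so the operating system never sees the flipped bit.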

Hard drives can be categorized using two different methods.  You can look at the interface used to connect the drive to the server, and the way data is stored on the drive.  

The interfaces used in computers and servers have evolved over the years.  In the past we used two interfaces, IDE (Integrated Drive Electronics) and SCSI (Small Computer System Interface).  Both technologies used cables that transmitted data in parallel.  IDE was the connector for ATA (Advanced Technology Attachment) hard drives and ATAPI (Advanced Technology Attachment Packet Interface) CD-ROMs.  The IDE connector was used in our client computers, and the SCSI interface was used on our servers.  The SCSI interface supported higher data rates and more devices per chain.

Both parallel technologies were replaced with serial connections that connect each drive directly to the main board.  The IDE connector was replaced with the SATA (Serial Advanced Technology Attachment) connector.  This caused a new name to be retroactively assigned to IDE drives: they are now known as PATA (Parallel Advanced Technology Attachment) drives.  The new serial SCSI interface is called SAS (Serial Attached SCSI).

You can purchase a server with either SATA or SAS drives.  SAS drives typically last longer, run faster, and are more reliable, but they cost more.  SATA drives are typically larger in capacity.  If you can't afford an all-SAS configuration, you may be able to mix and match drive types.

The next decision is whether to pick an HDD (Hard Disk Drive) or an SSD (Solid State Drive).  HDDs store the data on spinning platters.  The data is accessed by a head on an arm that floats above the platter as it spins.  SSDs store data in transistors on semiconductor chips.  The transistors sit at intersections of rows and columns, creating cells.

Hard disks have a lower MTBF (Mean Time Between Failures) than SSDs due to all the moving parts.  Hard disk drives are also typically slower than SSDs.  An HDD has to seek and find the data on the drive before it can read it: the head has to be positioned in the correct location, and then it has to wait for the data to pass under the head.  Drives can spin faster to reduce the amount of time it takes to find the data.  SSDs don't have to wait for a drive to spin; they access the correct column and row and the data is read almost instantly.
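The "wait for the data to pass under the head" delay is easy to quantify: on average the platter has to make half a revolution before the data arrives.  A quick sketch, using 7,200 and 15,000 RPM (both common drive speeds):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: on average the head waits half a spin."""
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 2 * 1000  # convert to milliseconds

print(round(avg_rotational_latency_ms(7200), 2))   # ~4.17 ms
print(round(avg_rotational_latency_ms(15000), 2))  # ~2.0 ms
```

This is why faster-spinning drives feel snappier, and why SSDs, which have no rotational latency at all, beat both.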

SSDs have some downsides as well.  Over time the cells wear out, and after a while they're unable to store a charge.  Modern SSDs implement wear leveling, where writes are spread out over the drive to even out the wear.  While this is a concern, the mean time between failures is still higher in SSDs, and a higher MTBF is better.

When choosing SSDs for your server you have a choice of SLC (Single Level Cell), MLC (Multi Level Cell), or TLC (Triple Level Cell).  The difference between the options is how the charge is stored in the cell.  With MLC and TLC the amount of charge determines the cell's value.  MLC uses 4 different charge levels, allowing 2 bits to be stored in a cell.  TLC uses 8 levels, supporting 3 bits per cell.  The advantage is that vendors can double or triple a drive's capacity without adding any more cells.  The downside is that the cells wear out faster.  In a server environment you need to determine which technology will work for you.  SLC drives last longer and are therefore more reliable, but the cost is much higher.
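The relationship between charge levels and bits per cell is a simple power of two, which this short sketch makes explicit:

```python
from math import log2

def bits_per_cell(charge_levels):
    """Bits a flash cell can store: n bits require 2**n charge levels."""
    return int(log2(charge_levels))

print(bits_per_cell(2))  # SLC: 2 levels -> 1 bit
print(bits_per_cell(4))  # MLC: 4 levels -> 2 bits
print(bits_per_cell(8))  # TLC: 8 levels -> 3 bits
```

Each extra bit doubles the number of charge levels the cell must distinguish, which is why the denser cell types wear out faster: the margin between adjacent levels shrinks.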

Hard disks do have a big advantage over SSDs: capacity.  An SSD's capacity is much lower than a hard disk's, so you can get a lot of storage cheaply with hard drives.

Server-grade SSDs are expensive and their capacities are lower than the HDD options.  You can mix and match drive technologies; for example, you may use SSDs for data that requires quick access and HDDs for long-term storage.

Below is a breakdown of the differences in speed versus capacity between the different interface and storage types.

Once you've decided what interface and type of drive you're using, the next question is how you want to configure them.  On our client computers we're used to a single drive that stores everything, but in the server world we have a technology called RAID (Redundant Array of Independent Disks) which allows us to build redundancy into the server.  Hard drives are one of the most common components to fail in a server, so we want to plan for these failures when designing our servers.  When multiple drives are connected together in a RAID configuration they're called an array.  The operating system sees the array of drives as one drive.  RAID can be configured using hardware or software; in most servers hardware RAID controllers are used.

There are different levels of RAID with different configurations and features.  The first is RAID 0, which spreads the data out over multiple drives.  With RAID 0 there is no redundancy; if a drive dies you lose all data.  RAID 0 uses all the available space for data storage.

RAID 1 is a technology that mirrors the drives.  All data is written to both drives, so if one drive dies you have a copy of all the data and can continue to run.  Server hardware supports hot-swapping drives, which means you can replace a drive without turning off the server.  RAID 1 uses half of the space for data and the other half for redundancy.

RAID 5 stripes the data across the drives like RAID 0, but it adds parity information that can be used to rebuild the data if a drive dies.  RAID 5 requires a minimum of 3 drives to operate properly.  You can lose one drive in a RAID 5 array and the server will continue to run.  The amount of parity information stored equals the capacity of one drive, so the usable space equals the total space minus one drive.

RAID 6 is the same as RAID 5, but with extra parity.  This means you can have up to two drives fail and continue to run.  The usable array capacity is the total space minus 2 drives.
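The usable-space rules for these four levels reduce to simple arithmetic.  Here's a sketch that assumes all drives in the array are identical (the 6-drive, 4 TB figures are just an example):

```python
def usable_capacity(level, drives, drive_tb):
    """Usable space of a RAID array, assuming all drives are identical."""
    if level == 0:
        return drives * drive_tb        # striping only, no redundancy
    if level == 1:
        return drives * drive_tb / 2    # mirrored: half the space is the copy
    if level == 5:
        return (drives - 1) * drive_tb  # one drive's worth of parity
    if level == 6:
        return (drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError("unsupported RAID level")

# Six 4 TB drives (24 TB raw) under each level:
for level in (0, 1, 5, 6):
    print(f"RAID {level}: {usable_capacity(level, 6, 4)} TB usable")
```

Notice the trade-off the lesson describes: RAID 0 gives you all 24 TB but no protection, while RAID 6 gives up two drives' capacity to survive two failures.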

You can combine RAID levels, making nested RAID levels that may work in your environment.  RAID 01 is a combination of RAID 0 and RAID 1: you mirror two striped arrays.

You can go the other way with it and stripe two mirrors with RAID 10.

RAID 50 is two RAID 5 arrays striped together.

RAID 60 is two RAID 6 arrays striped together.
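The nested levels follow the same capacity arithmetic as their building blocks.  A hedged sketch, again assuming identical drives and using a hypothetical twelve-drive, 4 TB example:

```python
def raid10_usable(drives, drive_tb):
    """RAID 10: stripe across mirrored pairs, so half the raw space is usable."""
    return drives * drive_tb / 2

def raid50_usable(groups, drives_per_group, drive_tb):
    """RAID 50: stripe across RAID 5 groups; each group gives up one drive."""
    return groups * (drives_per_group - 1) * drive_tb

def raid60_usable(groups, drives_per_group, drive_tb):
    """RAID 60: stripe across RAID 6 groups; each group gives up two drives."""
    return groups * (drives_per_group - 2) * drive_tb

# Twelve 4 TB drives (48 TB raw) arranged three ways:
print(raid10_usable(12, 4))    # 24.0 TB usable
print(raid50_usable(2, 6, 4))  # 40 TB usable
print(raid60_usable(2, 6, 4))  # 32 TB usable
```

The same twelve drives yield very different usable space and failure tolerance depending on the nesting, which is the real decision you're making when you pick a level.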

It's important to note that RAID technology is not a replacement for backups.  RAID will help protect you from hard drive failure, whereas a backup will help protect you from data corruption, overwrites, or deletion.  If you overwrite a document on a mirror, that overwrite occurs on both drives.  You'll need to revert to a backup copy to recover the data.

Power Supply
The PSU (Power Supply Unit) in a server is typically different from what you would find in a client device.  You'll usually find more than one PSU in a server, and they are redundant and hot swappable.  This means if a PSU fails the server will continue to run, and you can replace the dead PSU with a new one without turning off the server.

Cooling
Servers are typically designed with environmental controls that adjust the speed of the fans to properly cool the internal components.  The fans are typically designed to be hot swappable, so if one dies you can replace it easily without shutting down the server.  Some servers have filters that ensure clean air is circulated through the server.

Remote Access
Some servers contain a separate management card with its own network interface and a small operating system.  You can use this card to remotely access the server and perform tasks that you would otherwise have to do in person.  For example, if the server locks up you can use the card to remotely power cycle it.

Other Hardware Found in the Server Room

Besides servers and storage arrays you'll find other items in the server room that contribute to a successful server environment.  

A UPS (Uninterruptible Power Supply) is an item found in the server room that sits between the power outlet and the server's PSU.  It contains batteries that supply short-term power to the servers in the event of a power outage.  Shortly after power is lost, the UPS can send signals to each server telling them to shut down, ensuring they shut down properly.  A UPS can come in a tower or rack mount form factor, and many have expansion capabilities that let you extend the run time by adding more batteries.
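You can ballpark how long a UPS will carry a given load.  This is only a rough sketch: the battery capacity, load, and 90% efficiency figure below are made-up illustrative numbers, and real runtime curves are nonlinear, so always check the vendor's runtime chart.

```python
def ups_runtime_minutes(battery_wh, load_w, efficiency=0.9):
    """Rough UPS runtime estimate: usable energy divided by the load.

    battery_wh -- battery capacity in watt-hours (illustrative value)
    load_w     -- power drawn by the connected servers, in watts
    efficiency -- assumed inverter efficiency (0.9 is a placeholder)
    """
    return battery_wh * efficiency / load_w * 60

print(round(ups_runtime_minutes(1000, 600)))   # lighter load -> ~90 minutes
print(round(ups_runtime_minutes(1000, 1200)))  # heavier load -> ~45 minutes
```

The estimate is mainly useful for sizing: it tells you whether the UPS buys enough time for a clean shutdown, or for a generator to start.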

The next item may not be in the server room, but it affects the server room.  A generator can be used to provide continued power in the event of a power outage.  Once power is lost, the generator turns on and powers the server room.  There's a period of time when no power is supplied to the server room while the generator starts up; the UPSs keep the equipment running during this gap.  The UPS and generator work together to ensure your servers continue to run.

Most of the time you access servers using remote access technology, which allows you to perform tasks on a server from a different location.  You could be a room away, or many cities away.  Occasionally you'll need to access a server locally, but it doesn't make sense to have a monitor, keyboard, and mouse for each physical server in your environment.  A KVM (Keyboard, Video, Mouse) switch is a device that allows multiple physical servers to share a single keyboard, monitor, and mouse.  In the server room the KVM can be designed to fit in a rack, allowing easy access to multiple physical servers.  You can also purchase network-enabled KVMs, giving you console access even when you're away.

Backing up the data in your server room is important, and there are many different methods for doing it.  One method is to back up your data to a tape drive.  When purchasing a tape drive you can get one that holds one or two tapes, or you can get one with an autoloader that holds multiple tapes.  An autoloader automatically switches tapes based on your backup jobs.  A common practice is disk-to-disk-to-tape backup: your servers back up their data to a centralized drive, then those backups are written to tape.  You then store your tapes offsite in a secure location.  You can use backup software to back up virtual machines too.

When we have a server room running all this equipment, it gets pretty hot in there, so we need to provide proper cooling.  Currently the most common method for cooling a server room is to supply air conditioning either into the room or into the rack directly.  Whatever cooling method you choose, make sure it will cool all your equipment.  As your server room grows, don't forget to increase the cooling capacity at the same time.  If it gets too hot in the server room, servers and other equipment will start to shut down to prevent hardware damage.
