If you’re relatively new to servers, or coming from a home-built machine, you may have heard the term virtualisation before but never had the need, or the core count, to try it out. At SMB Servers we focus on the higher-end models of each range, so every server offers a large core count which, combined with virtualisation, lets you run fewer physical servers and get the most out of each one.
What is Virtualisation?
Virtualisation, to put it simply, is server emulation. A host application (a hypervisor) emulates one or more physical servers, each with a virtual BIOS that the software running inside it recognises as a physical device.
This essentially means you can split one server into many without having to purchase multiple physical servers. Each emulated server gets a full operating system install (like Windows or Linux) and can be accessed and managed through the hypervisor or a remote access protocol as if it were a full physical device.
This is how rental Virtual Private Servers (VPSs) work in the Cloud: you pay a company $10 a month and get a remotely accessible Linux instance. It’s a virtualised client on a shared server that may have 50 or 500 times the hardware you’re paying for, with that many other people renting the same. Except instead of paying $10 per month for 1 virtual core, 1GB of RAM and 25GB of storage, you’re paying $5000 once for 72 threads (and unlimited virtual cores), 256GB of RAM and 1TB of storage that you could then split out into your own 50-500 servers.
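To put rough numbers on that splitting idea, here’s a minimal sketch (using the illustrative figures above, not real specs or pricing) of how many VPS-sized slices one large host could carve out before any overprovisioning. Notice storage runs out first at 40 slices; getting anywhere near 500 relies on the overcommitment explained in the next section.

```python
# How many 1 vCPU / 1GB RAM / 25GB storage "VPS-sized" slices fit into one
# large host with no overprovisioning at all? Figures are illustrative only.
host = {"threads": 72, "ram_gb": 256, "storage_gb": 1000}
vps = {"threads": 1, "ram_gb": 1, "storage_gb": 25}

slices = {resource: host[resource] // vps[resource] for resource in host}
print(slices)                 # {'threads': 72, 'ram_gb': 256, 'storage_gb': 40}
print(min(slices.values()))   # 40 - storage is the first hard limit
```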
Wait, how can I run 500 servers on 72 threads?
It sounds unbelievable, and 500 is probably unrealistic, but it’s doable. Virtualisation takes advantage of the fact that, more often than not, a server will use nowhere near its total capacity, coupled with the fact that you can’t buy a modern processor below a certain level of performance.
For example, a mail server for a dozen people would happily run on a single core and a GB of RAM, and even then sit at only a few percent utilisation most of the time, but you can’t buy a processor with a single core, nor can you really buy a single 1GB stick of RAM anymore. So a bare metal mail server equipped with a quad core processor and 8GB of RAM would usually be wasting 3.5 cores and 7GB of RAM.
So, one thing you could do is install more applications on this server: make it not only a mail server, but also an app server and an SQL server. Maybe run five or ten applications on this one machine. That’s a real setup often seen in small businesses.
However, this creates a few nightmares. Some examples:
- If you need to restart the machine because one of the many applications requires it after updates, you lose all of those applications for the duration of the restart.
- If one application causes an OS crash, then again, all applications go down.
- If one application gets stuck in a loop, it can suck up ALL server resources, preventing the others from running correctly.
- There’s no realistic way to limit how much of the server one application can use, nor an easy way to add resources as one application grows faster than the others or than the machine as a whole.
This is where dedicated servers come in: by splitting each application out onto its own server, you resolve all of those issues.
- Each operating system has its own updates for its own application.
- Restarts only affect that application.
- If an application has an issue and crashes, or eats resources up to its limit, you can fix just that application without the stress of everything else being broken.
- And if you need more resources for just that one application, you can increase them.
And that’s where virtualisation shines: why have one full physical server per application when you can have one big server and virtualise a server for each application?
You get all of the above benefits, plus the ability to increase hardware allotment through a simple software-based configuration interface. And why stop at five applications on four cores? Why not run a 36 core, 72 thread server and put everything in a virtual environment? This is where the 500 servers come in. Modern hypervisors can be configured to only consume as much hardware as the application actually requires. So whilst you can set a virtual server to 4 cores and 16GB of RAM, if it’s only using 50% of one core and 1GB of RAM, then that’s all it consumes on the host device, and you could provision four, five, or ten other servers to share those underutilised resources. Or, in the case of 72 threads, 500 single core servers that are 99% of the time idling at 10% of a single core (as often seen in rental VPSs).
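As a back-of-the-envelope sketch of that overcommitment maths (the utilisation figures below are assumptions for illustration, not measurements), the expected load on the host is simply the number of guests multiplied by their average usage:

```python
# Back-of-the-envelope CPU overcommit estimate. All inputs are assumptions;
# real sizing should be based on measured utilisation over time.
host_threads = 72
vms = 500                 # single-vCPU guests
avg_util_per_vm = 0.10    # each guest idles around 10% of one core

expected_busy = vms * 1 * avg_util_per_vm        # ~50 threads' worth of work
print(f"Expected host load: {expected_busy:.0f} of {host_threads} threads "
      f"({expected_busy / host_threads:.0%})")   # ~50 of 72 threads (~69%)
```

The catch, of course, is that this only holds while the guests don’t all peak at once; that’s the trade-off you accept when you overprovision.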
Okay, so it’s efficient on hardware, are there other benefits?
A multitude! I’ll just list a bunch out, as this question-and-answer style is getting tiring.
- Clustering: not only is it easy to transfer servers between hosts, but you can run distributed instances (e.g. two identical servers on different machines taking half the load each) and high availability instances (e.g. one fails and an identical one on a different host takes over).
- Remote management (turning servers on and off, and managing them all in one place)
- Snapshots (copies of the disks and VM profiles, so you can roll back to points in time; see the sketch after this list)
- Backups
- Monitoring
- Local image repositories (eg keep all your OS ISOs etc in one place for quick and easy builds)
- Containerisation (hybrid of virtualisation and many apps on one OS, I won’t cover this here)
- Virtual switches and internal networking
- Rate limiting (e.g. if you’re renting out VPSs, you could spec out a 100Mb network interface, then offer an upgrade to 1000Mb, and it’s simply a soft change).
- And more (eg template installs, console views etc)
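To make the snapshot idea concrete, here’s a minimal sketch using the libvirt Python bindings against a local KVM host. The VM name and snapshot name are made up for illustration; every hypervisor exposes the same capability through its own UI or API.

```python
import libvirt  # pip install libvirt-python; assumes a local KVM/QEMU host

# The domain (VM) name and snapshot name below are purely illustrative.
SNAPSHOT_XML = """
<domainsnapshot>
  <name>before-os-update</name>
  <description>Taken before applying OS updates</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
dom = conn.lookupByName("mailserver")    # find the VM by name

# Create the snapshot; rolling back later is dom.revertToSnapshot(snap)
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
print("Created snapshot:", snap.getName())
conn.close()
```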
All in all, the best way to find out what it can do is to simply install a free hypervisor from one of the many vendors and give it a go.
What are the overheads? How do PCI-E cards like graphics cards work?
Virtualisation is an old technology that continues to improve, which is to say that there are definitely overheads, but these days they’re minimal and can be considered negligible compared to the efficiency benefits virtualisation brings.
Over the years, the processor manufacturers themselves have added hardware support for letting hypervisors pass transactions directly through to the VMs and back again. For example, Intel (the primary vendor for SMB Servers) calls its CPU virtualisation extensions VT-x, and its IO (like PCI-E) passthrough support VT-d. With these active (and they’re usually on by default on servers and supported by hypervisors) you should expect to see no perceivable performance loss on normal server workloads.
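If you want to check whether a host has these extensions before installing a hypervisor, a quick Linux-only sketch like this (reading /proc/cpuinfo and /sys) will tell you:

```python
# Quick Linux-only check for hardware virtualisation support.
# "vmx" is Intel VT-x; "svm" is AMD's equivalent (AMD-V).
import os

with open("/proc/cpuinfo") as f:
    flags = next((line for line in f if line.startswith("flags")), "")

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualisation extensions reported")

# VT-d / IOMMU (needed for PCI-E passthrough) shows up here once enabled in firmware:
groups_dir = "/sys/kernel/iommu_groups"
groups = os.listdir(groups_dir) if os.path.isdir(groups_dir) else []
print(f"IOMMU groups visible: {len(groups)}")
```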
I can’t say I’ve tried gaming on a virtual machine, so it’s certainly possible you’d lose a few FPS there due to the realtime work, but for typical server workloads you shouldn’t notice a difference.
Does SMB Servers use it?
Of course. SMB Servers runs its entire environment on virtualised devices, both onsite and offsite. We have three sites running 20+ core machines, each with as many as 17 servers on one device, including SQL, load balancing, app servers, dev servers and a whole host of others. One of our 20 core machines is overprovisioned to more than 60 virtual cores, and it’s still averaging no more than 20% utilisation.
Sitting at our desks, we can access each of them individually over RDP and SSH, and aside from not taking up ridiculous quantities of space in our server rooms, from a management perspective there’s nothing to indicate they’re virtualised.
I’m convinced I should at least give it a try, where do I start?
There are several free solutions to try out virtualisation, most of which then have paid solutions you can grow into later if your business hits enterprise levels.
Personally, I prefer Linux for my servers, but if you prefer Windows, a great place to start is VMware Player, which lets you virtualise one or two machines on your local desktop. If you’re going straight to a server, then Windows Server includes the Hyper-V role where you can try it out. From there, and depending on your Windows Server version, licensing will come into play past the evaluation stage, but I can never make heads or tails of Microsoft’s licensing stack so I can’t tell you what that will cost; from my experience, very few people in the world actually can. It’s the licensing in a business environment that makes Microsoft solutions so stressful for me, so I stick to Linux because I know I won’t find out years down the line that I’m out of compliance and owe Microsoft more annually than my business makes. There’s a reason they’re a 2.5 trillion dollar company.
Otherwise, if you want a simple, easy solution with no surprises, there are plenty of free, standalone, purpose-built options like ESXi Free Edition and Proxmox.
ESXi is the most popular, as it has been used in Enterprise for decades, and people with Enterprise virtualisation experience tend to stick to what they know. ESXi comes in two free versions: Free, which is limited to dual socket hosts and a maximum of 8 vCores per VM, and Evaluation, which lasts 60 days but has no restrictions compared to the paid licence.
ESXi is a very small hypervisor that VMware built from the ground up for exactly this purpose.
Meanwhile, Proxmox VE is a fully free solution from the provider of the same name; you can then opt to pay for a subscription for updates and support. Proxmox is based on a Debian Linux distribution and uses the Linux KVM hypervisor. If you’re interested in understanding what it’s doing and how it works, you can also just spin up your favourite Linux distribution and install a raw KVM instance (which is what I did before I started using Proxmox).
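If you do go the raw KVM route, the libvirt Python bindings give you a feel for how a hypervisor is driven under the hood. A minimal sketch (assuming KVM/QEMU and libvirt are installed locally) that lists the VMs on the host:

```python
import libvirt  # pip install libvirt-python; assumes local KVM/QEMU + libvirt

conn = libvirt.open("qemu:///system")   # talk to the local KVM hypervisor

# List every defined VM (domain) and whether it's currently running
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name():<20} {state}")

conn.close()
```

Graphical tools like virt-manager are essentially friendly front ends over this same API.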
Personally, I would recommend Proxmox to get started, and the reason is that not only is it a fully featured, easy to use system, but you can grow into a heavy virtualisation user without having to switch your licensing or start paying unless you want to. Compared to the others, it’s also very reasonably priced and there are no hidden costs. Fully featured also means you have snapshotting capability, and a back-end OS you can extend yourself (as I do with a backup script to Google Cloud).
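As an example of that kind of extension, here’s a minimal sketch of a backup upload to Google Cloud Storage using the official google-cloud-storage client. The bucket name is a placeholder and the dump path simply assumes Proxmox’s default dump directory; in practice you’d drive something like this from cron on the host.

```python
# Minimal sketch: push a local VM backup file to a Google Cloud Storage bucket.
# The bucket name and file path are placeholders, not real infrastructure.
from datetime import date
from google.cloud import storage   # pip install google-cloud-storage

BUCKET = "example-vm-backups"                            # placeholder bucket
LOCAL_DUMP = "/var/lib/vz/dump/vzdump-qemu-101.vma.zst"  # placeholder dump file

client = storage.Client()            # uses the default service account credentials
bucket = client.bucket(BUCKET)
blob = bucket.blob(f"{date.today():%Y-%m-%d}/{LOCAL_DUMP.rsplit('/', 1)[-1]}")
blob.upload_from_filename(LOCAL_DUMP)
print(f"Uploaded to gs://{BUCKET}/{blob.name}")
```

Retention can then be handled on the bucket itself with lifecycle rules, so old backups age out without any extra scripting.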
Once you’ve decided on your solution (any of which can be Googled for), it’s just a matter of installing the operating system and you’re away.
Why not just use the Cloud, if it’s the same?
Use the Cloud. Absolutely. That’s not a trick statement. A hybrid solution is the best solution: take advantage of every tool available. When you need 100% uptime, or want to sandbox certain services so they stay up when others are down, the Cloud is a great environment to do it in.
For example, our mail server is a Cloud hosted VPS: if our local bare metal environment goes down, our email still works and we can keep communicating with our customers while we bring it back online.
Additionally, our backups are sent to Google Cloud with a carefully planned retention policy, so in the event of an outage we can pull them back down and recover to a new instance. Bulk Cloud storage is cheap if you don’t need it to be fast or instantly accessible, which makes it perfect for backups.
But when you need heavy compute, that’s when Cloud costs quickly ramp up. If you’re renting a single 8 core VPS from Vultr, for example, with 32GB of RAM and 640GB of storage, that’s USD $160/month, or USD $1,920/year.
Meanwhile, you could buy an 18 core DL120 Gen9 with 64GB of RAM and 2x 800GB SAS SSDs (RAID1) from us at SMB Servers for $3,287. Since it offers more than twice the hardware, against an equivalent amount of Cloud compute it pays for itself within the first year; even against that single smaller VPS it breaks even in under two. You could even pick up two and cluster them so one can go down without any effect.
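The break-even maths on those example numbers (Cloud pricing obviously varies by provider and over time):

```python
# Break-even on the example figures above; adjust for your own quotes.
cloud_monthly = 160    # USD/month for the example 8 core / 32GB VPS
server_once = 3287     # USD one-off for the example on-prem server

print(f"vs one such VPS:     ~{server_once / cloud_monthly:.1f} months")        # ~20.5
print(f"vs ~2x of that VPS:  ~{server_once / (2 * cloud_monthly):.1f} months")  # ~10.3
```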
So definitely use the Cloud, but pick your battles to get the most value. Put your vital devices in the Cloud, particularly if they’re small, and keep the heavy hitting servers that can tolerate the occasional small outage on-premises.
Conclusion
Realistically, this entire article exists to point out that for normal workloads, virtualisation is simply better than a bare metal operating system, and hopefully you’re convinced enough to try it out and draw your own conclusions.
Virtualisation allows you to overprovision your machines and make better use of your hardware, both more efficiently and more effectively, keeping costs and management effort down, and it lets you segregate applications into their own instances so any issues they hit stay contained.
Finally, you don’t have to choose between on-prem OR the Cloud; we absolutely recommend you use both. A good, reliable solution takes advantage of every available tool.