PC Builds

2021 Server AND Gaming PC Build

March 16, 2021 (updated January 7th, 2023) · 6 Comments

Introduction

Back in 2019, I built my current gaming PC. Since then, the only modification I have performed was to double the RAM to 32GB. I did that to facilitate hosting a few services on this PC before I built my server.

Well, in the summer of 2020, I finally broke down and built a $500 budget server, and later upgraded it to Unraid.

Both of these PCs have performed fantastically. However, after installing Plex on my server and attempting to transcode a 4k movie, I quickly realized the quad-core $80 APU was not going to cut it.

The CPU gets completely hammered while failing to successfully transcode even a single stream.

Now, don't get me wrong: this CPU has served its purpose perfectly. It currently runs over 40 containers, covering home automation, network administration, git, game servers, backups, and more. It also hosts my Blue Iris NVR, recording from multiple 5MP cameras constantly. It does all of this while averaging only 30% CPU utilization.

However, I think its time has come…

If you don’t like to read, and just want to see the benchmarks, and final results… CLICK HERE

Disclaimer: Amazon affiliate links are used in this article. For this site, I choose not to pester my audience with annoying advertisements, and instead rely only on affiliate links to support this hobby. By using an affiliate link, you will pay the same price on Amazon as you would otherwise, but a small percentage will be given to me. To note: I DID buy all of the products shown with my own money, and did not receive any incentive to feature or utilize them.

To note on the above disclaimer: absolutely none of this content was sponsored. 100% of everything shown below was purchased out of my own back pocket. There is no outside influence involved in this build. As a matter of fact, as of the time of writing this introduction, I have no idea if this build will even pan out correctly.


The plan

Now- normally, you would toss in a better CPU, and call it a day, right?

I want to do something different: I want to consolidate my server with my gaming PC. To do this, I plan on leveraging Unraid, running my gaming PC as a VM hosted on Unraid, with the physical SSDs passed directly into it.

That way, if issues ever do crop up, I can remove the Unraid thumb drive and boot into my PC normally, without any issues at all.
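As a sketch of what passing a physical SSD into the VM can look like in the libvirt XML: the by-id path below is a placeholder, not my actual drive. Listing `/dev/disk/by-id/` shows the stable names to use, so the passthrough survives reboots and device reordering.

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <!-- placeholder path; list your own drives with: ls /dev/disk/by-id/ -->
  <source dev='/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_500GB_EXAMPLE'/>
  <target dev='sdb' bus='sata'/>
</disk>
```

Because the guest sees the raw block device, the same Windows install remains bootable on bare metal if the Unraid thumb drive is removed.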

My only concerns with this approach are:

  1. How much will it affect my gaming performance?
  2. How reliable will this solution be?

In this article, I hope to answer both of those questions for you.

The old server build will have all of the extra hard disks and expansion cards removed, will be upgraded with the Ryzen 5 3600, and will be a birthday gift for somebody near the end of this month. With a 6c/12t processor, 32GB of RAM, and a brand new SSD, it should excel at the tasks expected of its user (Photoshop, checking email, etc.). Well, it's a bit overkill.

Hardware Specs

Note: pieces in bold are new additions. The rest are carried over from my server build and my gaming PC.

  1. Motherboard: Gigabyte X570 AORUS MASTER
  2. CPU: AMD Ryzen 7 5800x – 8c / 16t
  3. CPU Cooler: Cooler Master Hyper 212 LED → Corsair H150i (replaced after publishing this article)
  4. RAM: Corsair Vengeance LPX 16GB (2 X 8GB) DDR4 3600  (Only two of the sticks will be kept.)
  5. RAM: Corsair VENGEANCE RGB PRO 32GB (2x16GB) DDR4 3600 (I couldn't find the above sticks at a reasonable price in 16GB DIMMs)
  6. GPU: GeForce RTX 2070
  7. SSD1 (Gaming VM OS): Samsung 970 EVO NVME M.2 500GB (Passed through to VM)
  8. SSD2 (Gaming VM Steam): Samsung 970 EVO NVME M.2 500GB (Passed through to VM)
  9. SSD3 (Via USB) (Staging): Samsung 970 EVO NVME M.2 500GB
    1. Connected via an SSK Aluminum M.2-to-USB Type-C enclosure (I ran out of PCIe lanes…). Still performs at 600MB/s.
  10. SSD4,5: (Server Cache): Samsung 970 EVO 1TB NVMe (BTRFS Mirrored Cache Pool)
  11. HDD1-8 = 8x 8TB Seagate 7200 RPM ST8000NM0105 (Added after publishing this article)
  12. HDD9 – Random 3TB (Used for NVR)
  13. NVMe PCIe: ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 (After-the-fact note: only enough PCIe lanes for a single NVMe here…)
  14. SATA HBA: LSI 9207-8i (Added after this article)
  15. Case: Fractal Design Define R6
  16. PSU: EVGA SuperNova 80+ GOLD 750w
  17. Keyboard: IKBC CD108 (This keyboard is a pleasure to use)
  18. Mouse: Logitech M150 Mouse (Don’t replace what isn’t broken!)

I would have loved a new Ryzen 9 5900x, but those are a bit hard to come by currently. So, I will make do with a 5800x.

Memory Comparison

Overall, my Unraid server uses 16GB of RAM for applications and the remaining 16GB for cache. My gaming PC uses around 9–12GB on average while playing games. With that said, 48GB of total RAM should allow everything to keep its existing allotment.

Compute Comparison

For compute, comparing core for core: my server had 4c/4t, while my gaming PC has 6c/12t. The new CPU has 8c/16t, matching the combined thread count of the two old machines.

Looking at CPUBenchmark's comparison, the 5800x is more powerful than BOTH of the old processors combined in nearly every area. So, there should be plenty of compute to go around.

CPUBenchmark.net Comparison of old processors, versus new processor.

With those comparisons out of the way, it is safe to assume this new build will have PLENTY of compute and memory to go around.

Storage Plan

For storage, here is the plan:

  1. SSD1: 500gb Gaming PC OS – Will be passed directly to the VM.
  2. SSD2: 500gb Gaming PC Steam – Will also be passed directly to my gaming VM.
  3. SSD3: 500gb Staging – Connected via USB. Used for “Staging” from Sabnzbd, BlueIris, etc. I expect this one to wear out first.
  4. SSD4: 1TB: Mirrored Cache Pool.
  5. SSD5: 1TB: Mirrored Cache Pool.
  6. HDD1: 3TB: NVR storage. Redundancy is not required for this. Mounted directly in Blue Iris VM
  7. HDD2: 8TB: FreeNAS
  8. HDD3: 8TB: FreeNAS
  9. HDD4: 8TB: FreeNAS
  10. HDD5: 8TB: FreeNAS
  11. HDD6: 8TB: FreeNAS
  12. HDD7: 8TB: FreeNAS
  13. HDD8: 8TB: FreeNAS
  14. HDD9: 8TB: FreeNAS

SSD1, SSD4, and SSD5 will be placed directly into the motherboard's NVMe slots.

SSD2 is on the PCIe/NVMe expansion card.

SSD3 is connected via USB. I ran out of PCIe lanes.

The 8x 8TB HDDs hang off of my LSI 9207-8i, which is passed directly into FreeNAS. The ZFS array is configured as striped mirrors (RAID 10).
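For anyone unfamiliar with the striped-mirror layout, here is a rough sketch of the pool creation. This is illustrative only: the pool name and `da*` device names are placeholders, and on real hardware you would want the `/dev/disk/by-id`-style names (or let the FreeNAS UI build the vdevs for you).

```shell
# Sketch only: 4 mirrored pairs striped together (the ZFS take on RAID 10).
# "tank" and the da* device names are placeholders.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
```

Each mirror can lose one disk without data loss, and writes stripe across all four pairs, which is why this layout trades half the raw capacity for much better IOPS than RAIDZ.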

The 3TB NVR drive is connected to the motherboard's SATA ports.

Network Planning

My existing server has a total of 4 active gigabit Ethernet ports, thanks to my quad-port HP/Intel gigabit NIC. It is also connected directly to the “closet” switch in the center of my house. This keeps my NVR footage from having to hop through the entire network: the PoE cameras plug into this switch, and there is a dedicated Ethernet port on the server for NVR traffic, keeping it away from the rest of my network.

Since my gaming PC will remain in the bedroom, this limits me to a maximum of two Ethernet ports total. Luckily, the motherboard DOES have two Ethernet ports.

Doing some napkin math on potential bandwidth usage:

  1. NVR cameras – 20Mbit/s each, 5 total = 100Mbit/s.
  2. Internet – 100Mbit/s max download.
  3. Streaming media – let's assume 50Mbit/s.
  4. IOT/server traffic – let's assume an unrealistic 50Mbit/s. The actual number is < 1Mbit/s.

This totals up to 30% of a single gigabit connection. So, assuming the server is busy syncing at full speed from the internet, consuming 100Mbit/s, while my living room TV is streaming a 4k movie at a SUPER high bitrate, and while all of my IOT clients are downloading complete firmware from my server, we are still likely to use less than a single gigabit connection. The only potential exception: if all of the cellular devices in my home decide to do a full Nextcloud sync of the previous 10 years of photos, 802.11ac could potentially saturate the link. However, the weak link would be from the access point to the switch, and it would not matter where the server connects to the network.
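The napkin math above can be written out explicitly. All figures come straight from the list; the only arithmetic is summing the worst-case concurrent consumers:

```shell
# Worst-case concurrent bandwidth, per the list above (all in Mbit/s)
nvr=$((20 * 5))        # 5 cameras at 20 Mbit/s each
internet=100           # max download speed
streaming=50           # one very high-bitrate stream
iot=50                 # deliberately overestimated
total=$((nvr + internet + streaming + iot))
echo "Total: ${total} Mbit/s (~$((total * 100 / 1000))% of a gigabit link)"
```

Even with every number padded upward, the sum sits at roughly a third of a single gigabit port.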

With that said, a single gigabit Ethernet port should fulfill my needs. However, I plan on dedicating the Realtek 2.5GbE port to my gaming PC, and the Intel NIC to Unraid. I have had MUCH better luck with Intel NICs in server applications.

The Build

I am not going to post a full step by step guide on how to build a PC. But- I will post a few select pictures.

As well, you can view THIS article for a few of the tweaks I had to perform inside of Unraid to get everything working properly.

AORUS X570 MASTER w/ 3 970 EVO NVMe disks, 48GB of Corsair RAM, and a Ryzen 5800x.
A view inside the case before I disabled all of the LEDs and blinky lights. Also taken before installing the H150i and the extra HDDs.
Hard drives EVERYWHERE. The case still has room for one or two more.

Benchmarks

How benchmarks will be performed

Remember: the VM is ONLY allocated 4 physical cores and their 4 hyperthreads, for a total of HALF of the physical CPU. The bare metal results WILL BE MUCH HIGHER as a result! As well, the VM is only allocated 16GB of RAM, instead of the 48GB available bare metal.

As well, the benchmarks were executed WHILE my server was running its full workload. This includes processing NVR traffic, AI detection, home automation, and even streaming video. This is NOT a clean-room test. This is a down-and-dirty REAL-WORLD test to see what actual results look like.
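For reference, allocating "4 cores + their 4 hyperthreads" is normally expressed in the VM's XML as CPU pinning. The pairings below are placeholders: which logical CPU is the SMT sibling of which core varies by system, so verify with `lscpu -e` (or Unraid's CPU pinning page) before copying anything like this.

```xml
<vcpu placement='static'>8</vcpu>
<cputune>
  <!-- Hypothetical pairings: cores 4-7 with assumed siblings 12-15.
       Verify your own core/sibling layout with: lscpu -e -->
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='13'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='14'/>
  <vcpupin vcpu='6' cpuset='7'/>
  <vcpupin vcpu='7' cpuset='15'/>
</cputune>
```

Pinning each vCPU next to its SMT sibling keeps the guest's scheduler assumptions about hyperthread pairs correct, and leaves the other half of the CPU free for the server workload.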

CPU

Tested with CPU-Z, Version 1.95.0 x64, with benchmark version 17.01.64

Link to bare metal test: https://valid.x86.fr/zaptm4

To note: there is a separate bare metal test using only 8 threads, to match the resources allocated inside of the VM.

Note: “VM 1” is as a VM before installing the H150i cooler. “VM 2” is AFTER installing it.

| CPU-Z | Bare Metal – 16 Threads | Bare Metal – 8 Threads | VM 1 | VM 2 | % Diff from bare metal |
| --- | --- | --- | --- | --- | --- |
| Single Thread | 630 | 630 | 569 | 591 | -6% |
| Multi Thread | 6280 | 4832 | 2954 | 2998 | -52% |

Comments

Summary: there is an expected amount of impact. The VM multi-thread performance is slightly less than half of the bare metal performance, which makes sense, because only half of the CPU is allocated to the VM. My guess is the 8-thread bare metal test ran against physical cores instead of hyperthreads. The single-threaded performance suffers more than I would like, but a penalty within 10 percent is acceptable.

Overall- this was a very successful test in my opinion.

Disk

Link to benchmarks performed when this PC was built. Using the same drive.

OS Drive Only – Bare Metal

Oddly enough, it's actually quite a bit faster than when I originally benchmarked it. I guess Samsung SSDs age like fine wine.

OS Drive Only – As a VM (Updated 3/23)

Original benchmarks as a VM, passing through the device as SATA.

I ended up manually editing the VM's XML file to pass through the NVMe controllers using hostdev tags. I will have a write-up in the next few days on THIS ARTICLE.
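For the curious, a hostdev entry for an NVMe controller looks roughly like the fragment below. The PCI address shown is a placeholder: find the real one for your controller with `lspci | grep -i nvme` (and note the controller usually needs to be bound to vfio-pci, or isolated in its own IOMMU group, before the VM will start).

```xml
<!-- Hypothetical PCI address 01:00.0 - substitute your own from lspci -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Passing the whole controller (rather than exposing the disk as a virtual SATA device) lets the guest load its native NVMe driver, which is where the sequential-speed recovery comes from.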

After performing these corrections, here is the updated benchmark. The random speeds suffer a tad, but the sequential speeds are as good as bare metal. I noticed one of the metrics was a bit weird, so I ran the benchmark again.

Comments

The performance penalty with the original SATA passthrough was pretty large: 64% slower for large sequential reads, and 60% slower for large sequential writes. Keep in mind, this is a bare metal drive passed into a VM. This is not a .VMDK or QCOW virtual disk; this is a physical disk.

I will be trying to work with unraid to discover how this can be resolved.

At the current time, I would mark this test as a failure, as these results are not very good.

Gaming

Far Cry New Dawn – Ultra Preset

Overall, extremely playable at 1440p, and playable at 4k. I am not quite sure why the average FPS inside of the VM is higher than on bare metal.

| | Bare Min | Bare Avg | Bare Max | VM Min | VM Avg | VM Max |
| --- | --- | --- | --- | --- | --- | --- |
| 1440p | 65 | 60 | 97 | 41 | 72 | 98 |
| 4k | 36 | 43 | 56 | 29 | 44 | 54 |

Summary

Basically zero difference from bare metal performance. Very successful test.

Rise of the Tomb Raider

To note- running these benchmarks multiple times gave somewhat inconsistent results.

I also don't understand how it could outperform the bare metal install from inside a VM. These benchmarks do not appear to be very consistent; take them with a grain of salt.

Also, I don't understand how Mountain Peak at 1440p actually did worse than the same benchmark at 4k.

1440p – Very High Preset

| FPS | Min | Avg | Max | VM Min | VM Avg | VM Max |
| --- | --- | --- | --- | --- | --- | --- |
| Mountain Peak | 22 | 107 | 153 | 20 | 53 | 89 |
| Syria | 46 | 81 | 99 | 9 | 32 | 60 |
| Geothermal Valley | 57 | 72 | 90 | 2 | 45 | 86 |
| Overall Score | | 87 | | | 44 | |

4k – Very High Preset

| FPS | Min | Avg | Max | VM Min | VM Avg | VM Max |
| --- | --- | --- | --- | --- | --- | --- |
| Mountain Peak | 30 | 52 | 75 | 16 | 53 | 98 |
| Syria | 26 | 40 | 47 | 2 | 33 | 62 |
| Geothermal Valley | 30 | 40 | 49 | 6 | 35 | 66 |
| Overall Score | | 44 | | | 41 | |

Summary

My benchmark results were too inconsistent to call a winner. Even running back-to-back benchmarks with no changes led to varying results on both bare metal and the VM.

User Benchmark

For the record, do NOT use UserBenchmark to determine which processor is better. Ever since the AMD fiasco, where they messed up their rankings so badly that an i3 was ranked above an i9 and an AMD Threadripper, I do not trust them one bit.

However, I do leverage the tool to determine how well my hardware compares to others with the same hardware.

Bare Metal Run ID: 41087369

As a VM Run ID: 41161578

Note: there are purposely no links to the results. I refuse to support them by driving traffic their way after the obvious nerfs to AMD benchmarks, which caused them to rank a quad-core i3 as faster than a Ryzen or an Intel i7/i9.

You can read about that story here.. if you are unfamiliar with it.

Summary

~4% impact to single-threaded CPU performance.

~10% impact to multi-threaded CPU performance on their 4- and 8-thread tests.

~50% impact on the 64-thread test, which is expected, since the VM is only allocated half of the cores.

The GPU benchmark actually did quite a bit better when virtualized.

HDD results are in line with the above HDD benchmarks.

Their RAM benchmark also performed better while virtualized.

Final Remarks

Overall, I would say this is a very successful project. I now have a consolidated VM and server. While physically using the computer, you cannot tell in any way that it is virtualized.

The two 32″ 4k & 1440p displays, work exactly the same.

The USB ports, work exactly the same. You plug it in, and it pops up like normal.

Even wifi and bluetooth, work exactly the same. There is also a dedicated NIC I passed in.

The ONLY noticeable difference: you don't want to click the power button… In the current state, it will turn off the server…

When the server does reboot, you will see the BIOS normally, followed by Unraid booting, then a blank screen, followed by the gaming VM a few seconds later.

I will note: during all of this testing, my CPU temps were hitting 90°C, which IMO is quite hot, though it is within AMD's rated temp range. The current cooler is a Cooler Master Hyper 212. However, I already have a new Corsair H150i on the way as of this writing, especially since, with Precision Boost Overdrive, more cooling usually translates to more performance. I expect this will actually help some of the benchmarks as well.

As an update: with the H150i, the CPU hardly ever goes above 40 or 50°C. Under full load, with all cores at 100%, the highest temp I have seen is 70°C. This is a drastic improvement from the 90°C I was seeing with the Hyper 212.

In the future, I will design and document an IOT project to make the computer's power button “smart”, in the sense that clicking it will power the VM up or down instead of the entire server. As well, I will connect the power LED to the VM's power state.

What I Would Change

Use Threadripper instead of Ryzen

While I technically still can return everything and do so: if I were to repeat this build, I would use a Threadripper instead of a Ryzen. I estimate it would cost around $400–500 more, HOWEVER, I would gain a few KEY features for this build.

  1. Quad-channel DDR4. This helps performance.
  2. More PCIe lanes.

Seriously, 20 PCIe lanes is not enough. Ideally, your GPU gets a dedicated 16 lanes, though 8 lanes is still unlikely to affect performance.

Each PCIe NVMe drive uses 4 lanes. Remember the ASUS Hyper M.2 used in this build? Well, due to PCIe restrictions, I can only put a single SSD in it. I had to order another adapter to squeeze my last NVMe into the bottom slot, which gets its lanes from the chipset instead of the CPU.
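The lane squeeze is easy to show with arithmetic. Using the 20 usable CPU lanes cited above as the budget (a simplification; chipset lanes are a separate, slower pool):

```shell
# Rough CPU lane accounting for this build (a simplification)
lanes=20    # usable CPU lanes, per the figure above
gpu=16      # GPU at full x16
nvme=4      # one NVMe SSD at x4
left=$((lanes - gpu - nvme))
echo "CPU lanes left after GPU + one NVMe: ${left}"
```

With the GPU at x16 and a single NVMe drive, the budget is already spent, which is exactly why the Hyper M.2 card can only host one SSD and everything else falls back to chipset lanes or USB.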

Want to add another GPU, for a second player to also use? Good luck, lanes are very limited on Ryzen.

However, if, say, you spent the same amount on a Threadripper 1920x, you would get a CPU with 12 cores and 24 threads (up from 8/16), and 64 PCIe lanes instead of 20. Granted, this processor performs quite a bit worse overall, and it only supports PCIe 3.0 instead of 4.0. But the additional PCIe lanes offer much greater room for expansion. I expect this is why Linus chose a server CPU/motherboard for his video.

The downside to using a Threadripper: it would cost quite a bit more. The motherboard is quite a bit more expensive, as is the CPU. In this case, it would likely have been the better decision, given the expansion constraints I have reached.

Better Cooling

Technically, I have already ordered a Corsair H150i, though it is not mentioned until the end of this article. Better cooling usually means more performance… and a good cooler also means less noise.

Note: the H150i is installed. It successfully brought temps down from 195°F to 130°F. It also improved the CPU benchmarks.

Install a mini-split into my office

This server/PC has been doing a very good job of heating my office. However, when summer comes, this is going to be an issue…

Q&A / FAQs

  1. What if the server stops working, and you need to access your PC?
    1. I remove the thumb drive and boot directly into windows, and everything works normally.
  2. Why didn’t you use a Ryzen 5950x?
    1. Because I am not made of money, and I cannot get my hands on one currently.

Join the discussion 6 Comments

  • Max says:

    Great read, thanks for writing it up! I have just built a super small form factor PC so doing this will have to wait a while but I am tempted… Have you thought about tucking the server away somewhere and using parsec to remote into the VM? Then you could play from anywhere. In your room (where you have your PC) you would just need a very thin client with two ports for your monitors. This would also allow you to host two VMs and allow you and a friend to play simultaneously.

    • XO says:

      Well, I actually turned it back into just a gaming PC. I actually do leverage steam-link for playing games elsewhere in the house.

      But, as far as VMs and containers, I have a rack full of servers now. Posts for those are all on here too.

  • Paul says:

    Thanks for the quick response, I will have to test from my side again.

    Lastly no concerns running your gaming PC with high end components 24/7?

    • XO says:

      The only concern, is having to price out mini-split units, because my office/bedroom is now a toasty 78 degrees, WITH the window open on a cool day.

      But- after installing the h150i (after publishing the initial article), the CPU stays nice and cool, with nothing being overly hot. With everything remaining cool, I have no concerns.

      The GPU is basically powered off when the gaming VM is offline.

  • Paul says:

    hi bud , interesting read thanks for your work on this.

    compared power usage from your closet server running 24/7 and now your consolidated running 24/7 can you give us some estimates on this?
    only reason why I have not consolidated mine was my power consumption would increase with 100-150w

    • XO says:

      It is not a fair apples to apples comparison, as I have two giant 32″ monitors, KVM switches, and my work laptop also running on this UPS-

      It is currently pulling 227w at the UPS. With that said- I am going to assume my overall power usage has actually dropped.

      Actually, after unplugging my work laptop, and powering down my KVM and monitors, overall power usage at my UPS is 130w, which is only ~20-30w higher than the closet server's UPS. To note, again, there are quite a few devices plugged in back here still: the router, a switch, an ONT, etc.

      Overall, just estimating, overall power usage will drop quite a bit.