
My 40GbE NAS Journey.

April 29, 2022 (updated January 7th, 2023)

Introduction

I have been trying to see exactly how fast I can push my existing hardware, without dumping ridiculous amounts of money on it.

I have tried many things to squeeze out more performance, including even temporarily buying a pair of 100GbE NICs a while back. (TL;DR: lots of driver issues….)

Well, after months and months of tinkering, I have finally reached my goal of being able to easily saturate 40GbE Ethernet.

If you wish to read more about the details that led me to this point, here are a few of the previous posts:

Backstory

The Original NAS – $500 Mini Build

I started my current NAS journey with a $500 build, after being disappointed by the high prices of Synology / Drobo / QNAP systems. I wanted to prove I could build a more capable system for less money. I did succeed.

This unit was eventually replaced due to the underpowered CPU.

Combined Gaming PC / Server

I ended up selling the above build, and for a few months I turned my gaming PC into a combined gaming PC and server. Overall, it worked great. The only downside was the amount of heat being put into my bedroom. Oh, and the limited number of PCIe lanes.

This was replaced by a Dell R720XD, which I picked up for $500 shipped with 128GB of RAM included.

Upgrading my home network to 10/40G backbone

In September of last year, I picked up a pair of Brocade ICX-6610 10/40G switches. This is also the first article where my Dell R720XD was mentioned. I never did a formal write-up on obtaining it; however, it replaced my combined gaming PC/server.

As part of this project, I also ran 10GbE to my office.

Adding a server rack

Self-explanatory; however, since having all of your hardware sitting on buckets and cardboard boxes is not the best thing to do, I invested in a rack… and made everything somewhat presentable.

Reducing power consumption – Retiring Brocade

Turns out, the Brocade ICX-6610 produces more heat and noise than I was happy with. In this post, I retired the old switches. This saved a big chunk of power, but it also meant I no longer had a 40GbE network core. Instead, I leveraged my OPNsense firewall to provide a 10G core, albeit a routed one.

Running 40Gbit Fiber

Since 10GbE just wasn’t cutting it for me, I finally got around to running 40GbE fiber from my server closet to my office. This also added a Chelsio 40GbE NIC to my gaming PC.

Since the 40GbE link was point-to-point (no switches, no routers, etc.), this also improved performance significantly over the routed 10GbE iSCSI from the previous steps.

Adding “Bifurcation”

Since my R720XD does not support bifurcation, I found a few PCIe cards with built-in switches for my NVMe drives. This allowed me to run many more NVMe drives, rather than being limited to a single NVMe per PCIe slot.
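
For a rough sense of the bandwidth involved, here is a quick back-of-envelope sketch assuming PCIe 3.0 (which is what the R720XD provides); the exact switch cards and slot widths aren’t detailed here, so treat the numbers as illustrative.

```python
# Back-of-envelope PCIe bandwidth math, assuming PCIe 3.0 (what the R720XD offers).
# An x16 slot has enough bandwidth for four x4 NVMe drives, but without
# bifurcation the slot only presents one device -- hence the PCIe switch cards.
PCIE3_GB_S_PER_LANE = 0.985  # 8 GT/s with 128b/130b encoding ~= 0.985 GB/s usable

def slot_bandwidth(lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe 3.0 link, in GB/s."""
    return lanes * PCIE3_GB_S_PER_LANE

print(f"x16 slot:       {slot_bandwidth(16):.1f} GB/s")  # ~15.8 GB/s total
print(f"x4 NVMe drive:  {slot_bandwidth(4):.1f} GB/s")   # ~3.9 GB/s each
print(f"4 drives need:  {4 * slot_bandwidth(4):.1f} GB/s -- fits within one x16 slot")
```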

Infiniband

One day a few months back, I experimented with using InfiniBand.

This involved custom-compiling drivers for TrueNAS and hacking up a lot of bits and pieces you generally do not want to touch.

As well, the community forums are quite toxic toward anyone doing anything that isn’t supported out of the box. But I did manage to successfully get InfiniBand up and running. However, I was unable to perform any reasonable tests, as I didn’t want to hack up the system any further.

Further Reducing Power Consumption

Given that electricity costs money and power usage means heat production, I continued down the path of reducing my home lab’s energy consumption.

In this post, I reduced power consumption by ADDING new servers. Overall, I was able to cut my R720XD’s power consumption by a pretty good chunk by moving services to the new SFF OptiPlex machines I had acquired.

Since the server was no longer handling containers/VMs, I also removed its second CPU, which actually helped performance a good bit.

On the downside, I now only have 96GB of RAM instead of 128GB.

Further Reducing Power Usage (Not yet published)

In the next post on reducing my power utilization, I enabled power management features within TrueNAS to allow for greater savings. This knocked off ~20W of consumption without allowing the disks to spin down. As well, I swapped in a low-power E5-2630L to further reduce power draw.

(Note: the processor is still in the mail. That article will be completed AFTER this one has been published.)
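
As a quick sanity check on what ~20W of savings is worth over a year, here is a back-of-envelope calculation; the electricity rate is an assumed placeholder, not my actual rate.

```python
# Rough annual value of shaving ~20 W of continuous draw.
# The electricity rate is an assumed placeholder -- substitute your local rate.
WATTS_SAVED = 20
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12  # assumed $/kWh

kwh_per_year = WATTS_SAVED * HOURS_PER_YEAR / 1000
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * RATE_PER_KWH:.0f}/year saved")
# ~175 kWh/year, ~$21/year at the assumed rate -- plus that much less heat to remove.
```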

Migrating from TrueNAS Scale to TrueNAS Core

The final step to achieve the performance numbers I have been hunting for: I installed TrueNAS Core and rebuilt my configuration from scratch.

Out of the box, with no tuning whatsoever, it had no trouble besting every performance metric from Scale.

The Benchmarks

I have split the benchmarks into two separate sections: those against my flash array, and those against my pool of spinning rust.

I did post both 1G and 32G test sizes where possible.

I started running the benchmarks at 32G to hush the trolls who blame the results on cache/RAM.
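
As a rough illustration of why the test size matters, here is a minimal Python sketch of a sequential write test against a file on a mapped iSCSI/SMB volume. This is not the tool used for the screenshots below, and the target path and sizes are placeholders.

```python
"""Minimal sequential-write throughput sketch. This is NOT the benchmark used
for the results below -- just an illustration of why a 32G test size gets past
client/server write caching in a way a 1G test size may not. TARGET is a
placeholder path on the mapped iSCSI or SMB volume."""
import os
import time

TARGET = r"X:\bench.tmp"        # placeholder: file on the network-backed drive
TEST_SIZE = 32 * 1024**3        # 32 GiB, matching the larger test size
BLOCK = 1024 * 1024             # 1 MiB sequential blocks

def sequential_write(path: str, size: int) -> float:
    """Write `size` bytes of incompressible data and return throughput in GB/s."""
    buf = os.urandom(BLOCK)     # random data so LZ4 compression can't inflate results
    start = time.monotonic()
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < size:
            written += f.write(buf)
        os.fsync(f.fileno())    # ensure the data has actually left the client
    return size / (time.monotonic() - start) / 1e9

print(f"sequential write: {sequential_write(TARGET, TEST_SIZE):.2f} GB/s")
os.remove(TARGET)
```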

Against My 8-Disk Z2 Array

The array is an 8-disk RAID-Z2. LZ4 compression is enabled, without dedup. The drives are 8x used 8TB Seagate Exos.

TrueNAS Scale

These benchmarks were performed before migrating to Core.

iSCSI – 1G Test Size
SMB – 1G Test Size

TrueNAS Core

iSCSI – 1G Test Size
iSCSI – 32G Test Size
SMB – 1G Test Size
SMB – 32G Test Size

It is rather interesting to see the differences between 1G and 32G. The sharp decline in write speed/IOPS is due to the cache being saturated. However, large sequential writes still work just fine.
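
A rough back-of-envelope shows why: the pool can absorb a burst at close to line rate until the in-RAM write buffer fills, after which throughput falls back to what the disks can commit. The buffer size and per-disk write speed below are assumptions for illustration, not measured values.

```python
# Why 32G writes fall off while 1G writes do not (illustrative numbers only).
LINK_GB_S = 40 / 8 * 0.95        # ~4.75 GB/s usable on 40GbE
DIRTY_BUFFER_GB = 10             # assumed in-RAM ZFS write buffer (dirty data limit)
DISK_WRITE_GB_S = 0.22           # assumed sequential write per 8TB Exos drive
DATA_DISKS = 6                   # 8-disk RAID-Z2 = 6 data + 2 parity

sustained = DATA_DISKS * DISK_WRITE_GB_S
burst_seconds = DIRTY_BUFFER_GB / (LINK_GB_S - sustained)

print(f"burst absorbed for ~{burst_seconds:.1f} s, then writes settle near {sustained:.2f} GB/s")
# A 1G test finishes inside the burst window; a 32G test runs long enough to
# expose the sustained rate of the spinning disks.
```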

Against My Flash Array

My flash array is a single mirrored vdev consisting of a pair of 1TB Samsung 970 EVO NVMe drives. While I did previously add an additional stripe, I didn’t notice dramatic performance differences and ended up removing the extra vdev (something OpenZFS allows you to do).
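
For context on why a single mirror vdev can still keep up, here is a rough ceiling check using approximate 970 EVO sequential figures; the drive numbers are assumptions rather than measurements.

```python
# Rough ceiling check: two-way NVMe mirror vs. a 40GbE link (approximate figures).
LINK_GB_S = 40 / 8 * 0.95      # ~4.75 GB/s usable over 40GbE
NVME_READ_GB_S = 3.4           # assumed per-drive sequential read (970 EVO class)
NVME_WRITE_GB_S = 2.5          # assumed per-drive sequential write (within SLC cache)

mirror_read = 2 * NVME_READ_GB_S   # mirror reads can be spread across both drives
mirror_write = NVME_WRITE_GB_S     # every write must land on both drives

print(f"link: {LINK_GB_S:.2f} GB/s | mirror read ceiling: {mirror_read:.1f} GB/s | "
      f"mirror write ceiling: {mirror_write:.1f} GB/s")
# Reads already out-run the link, which helps explain why the extra striped vdev
# didn't change much; writes are likely bounded elsewhere (as noted below, the CPU
# gets pegged during write operations).
```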

Since I did not take recent benchmarks of the flash array before migrating to Core, I do not have any “before” results to compare against.

As well, I did not take any benchmarks over SMB. However, it is safe to assume the results would be faster than the SMB results.

TrueNAS Core – After Results

iSCSI – 1G Test Size
iSCSI – 32G Test Size

I will note there is a bit more room for improvement, as my CPU is getting pegged pretty hard during write operations.

But next week, I will be replacing my E5-2695v2 with an E5-2630L in an attempt to further reduce my energy consumption. However, I do feel I should still be able to obtain very reasonable benchmark results.

Next Steps?

Well, other than replacing my CPU with a low-power model, which is going to hurt my performance slightly, I don’t really have any next steps.

I have been debating building out a Ceph cluster using a pile of SFF/MFF PCs; however, the budget for such a project is not in the cards this year.

As well, I have toyed with the idea of deploying a disk array, but this wouldn’t net me any performance improvements or reductions in power consumption. The array itself would use 30-60W without any disks. The only meaningful advantage would be making it much easier to swap to a different piece of hardware. I have decided that, since the number of spinning disks directly correlates with the wattage draw, I don’t want to deploy a 16+ disk array. When ZFS online expansion eventually arrives, I will likely grow my existing array to 12 disks total.

For now, it is nearing time to focus my efforts on the 1,000hp turbocharged 4×4 Suburban project! Because everyone needs a 1,000hp Suburban! (NOT an exaggeration on the horsepower numbers….)

2 Comments

  • Edwin Mak says:

    Hello Author,

    This is Edwin from Hong Kong; it is nice to see your article.

    I am working on a lab test project, trying to test out 40G with RDMA support on TrueNAS and Windows 11 and check out the performance.

    I hope we can share experiences, if you don't mind.

    My email is [email protected]

    Hope to hear from you.

    Regards,
    Edwin

    • XO says:

      To be honest, I never fully got RDMA working with my setup. I did mess around with InfiniBand for a bit; however, I never got iSCSI or SMB working with RDMA.