New update: my current setup is a Dell PowerEdge T310 with 6x4TB SAS drives, a Xeon CPU, and 12GB ECC RAM, all parts stock. No hardware RAID. 2.5GbE network card. Should I just replace the 6 drives with larger capacities? That will probably cost more than $10/TB… I didn't buy the 16 drives yet; they are used 4TB SAS drives that turn out to be about $40 each.
Current storage: 8TB used out of 14… and lots of cold drives waiting to be copied, probably 10TB+. Is it worth copying all the cold-storage drives to the redundant NAS?
Update: budget is $200–600. The reason for the build is that I found cheap 4TB drives for almost $10/terabyte, so I want to use as many of them as I can.
I am trying to build my final NAS build as a beginner.
I have a 6x4TB Dell server, but it's not enough.
I am currently trying to build the final boss of my NASes: 16x4TB with TrueNAS and RAID.
I am unsure of what parts to buy as I am a complete beginner.
I found a case that can hold all 14 drives.
I need a motherboard, CPU, ram, PSU
I am on a budget, kind of.
What motherboard do you recommend? One pulled from a workstation, with CPU and RAM included? A server board? A normal consumer board with a consumer CPU? The motherboard should have enough PCIe slots for 2 SATA cards and one 2.5GbE card.
What CPU to run all these drives?
What RAM, and how much? 16? 32? ECC or non-ECC? DDR4 or DDR3?
Power supply: 850w or more?
All parts should be able to support the 16 drives with headroom…
I would appreciate any help on this build, I want to build this as soon as possible.
Thanks


I wouldn't use more than 4 or 6 disks in a home environment. Especially with mechanical drives, 24/7 power consumption would worry me a lot.
I run 4x8TB SSDs. Not cheap, but solid, low power AND low heat (even more important).
Also consider heat dissipation: at home you most likely don't have constant temperature and humidity, so many spinning disks can suffer from heat, and that will kill them faster.
Longevity… with so much space I would expect to keep it running a decade or more, so factor in 10x365x24 hours of operation: energy consumed, heat dissipated, and failure rate.
On top of that, whatever CPU and RAM you throw at it is almost meaningless; anything will work, even an Intel N100 NUC. Having enough cables and ports, on the other hand… well.
20W per drive means 30x24x0.2 = 144 kWh each month for 10 drives. At €0.20/kWh, that's about €29/month, cheaper than a 20TB Hetzner box. That assumes all drives are always spinning; an idle drive uses more like 5W.
10x4TB = 40TB can also be achieved with 4x12TB drives (actually 36TB usable in RAID 5).
It's doubtful those 12TB drives use much more power each than the 4TB ones, so the €29/month probably drops to €14 at most.
Over 120 months (10 years) of uptime, you'd save enough to justify cutting down from 10 drives to 4.
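A quick sanity check of that math, as a minimal Python sketch (the 20W/drive, €0.20/kWh, and always-spinning figures are the assumptions from above, not measured values):

```python
def monthly_cost_eur(num_drives, watts_per_drive=20, eur_per_kwh=0.20):
    """Monthly electricity cost, assuming drives spin 24/7 for a 30-day month."""
    kwh_per_month = num_drives * watts_per_drive / 1000 * 24 * 30
    return kwh_per_month * eur_per_kwh

print(f"10 x 4TB: EUR {monthly_cost_eur(10):.2f}/month")  # EUR 28.80/month
print(f" 4 x 12TB: EUR {monthly_cost_eur(4):.2f}/month")  # EUR 11.52/month
```

Plug in your own tariff and the idle wattage from your drives' datasheets; the gap between the two layouts only narrows if the big drives idle much hotter.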
But going with more smaller drives gives you higher IO and the ability to have more concurrent failures before disaster. Losing a disk during resilvering is horrible when you’re only running with 1 redundant drive normally.
Yes, more redundancy is good and indeed worth having. Still, 5x12TB drives are probably more energy- and heat-efficient than 10x4TB ones.
Even if I got 10x4TB drives for free, I wouldn't use them. Maybe a couple for backups or cold storage, but not active 24/7 in a domestic RAID environment.
I actually have 4x6TB HDDs that I retired in favor of the 4x8TB SSDs; I use two of them for local backup and keep two as spares for when the others fail.
4x8TB in RAID 5 provide 24TB of total space, which is far more than I need, and the risk of a double failure is mitigated by a proper 3-2-1 backup strategy.
As for the higher I/O, frankly I never felt the need. A 1Gbps home network is always the bottleneck anyway, and if you require that kind of disk throughput on your network, you are probably doing something wrong.
Even many 4K video streams would saturate your LAN before saturating your disks, unless you store uncompressed video.