
7 watts idle – building a low powered server/NAS on Intel 12th/13th gen

MochaDen
25 replies
1d

Low-power is great, but running a big RAID long-term without ECC gives me the heebie-jeebies! Any good solutions for a similar system that's more robust over 5+ years?

j45
5 replies
23h34m

I would never run a self-hosted NAS when a Synology/QNAP is available as a dedicated appliance for around the same price.

The hardware is much more purpose-equipped to store files long term, not just the 2-3 years you get out of consumer SSDs.

It's not to say self-hosting storage can't or shouldn't be done, it's just a question of how many recoveries and transitions you've been through, because it's not an if, but a when.

dbeley
1 replies
19h34m

The hardware is basically the same as a self-hosted NAS; the motherboard could even be of lower quality. The software, though, is closed source, and most consumer NASes only get support for 4-5 years, which is outrageous.

dannyw
0 replies
13h7m

You're not buying from the right brand.

Synology supports their hardware for about 10 years after release. They are the "Apple" of NAS.

Jedd
1 replies
15h46m

I bought a QNAP about a decade ago under the same assumption, but my experience [0] there means I'm unlikely to buy a SOHO-level storage appliance ever again.

The tl;dr of my rant was around shortcomings in NFS permission configuration, and a failure of the iSCSI feature (the appliance crashed when you sent it data).

Further, these appliances invariably use vanilla RAM sticks, so you're exposed to gentle memory-based file corruption you probably won't notice for years.

So I'd dispute the claim that the hardware is 'better equipped', and I'd likewise dispute that the software as shipped matches the marketing promises accompanying it.

Things have doubtless changed - I'm sure those bugs are long gone now - but unless you're looking at an ECC appliance, I'd say you're better off building your own white box.

[0] https://jeddi.org/b/brief-rant-on-trying-to-use-iscsi-on-a-q...

justsomehnguy
0 replies
14h41m

> but unless you're looking at an ECC appliance, I'd say you're better off building your own white box.

Synology actually allows ECC DRAM, even sells it, and lists which models accept it.

But yeah, at the price of a full-featured model with an x86 CPU, SO-DIMM RAM and 4+ drives, you are in the territory of building your own, with a lot more control and without the DSM shenanigans (in Synology's case).

EDIT: the biggest problem here is actually finding a good case, because even ATX cases now usually don't have more than 2-3 3.5" bays by default and often don't have any 5.25" bays at all.

https://www.synology.com/en-us/products/DDR4

justinsaccount
0 replies
22h46m

> The hardware is much more purpose-equipped to store files long term

What hardware would that be, specifically? The low end embedded platforms that don't even support ECC?

> how many recoveries and transitions you've been through

3 or 4, at this point, using the same 2 disk zfs mirror upgraded from 1TB to 3TB to 10TB.

faeriechangling
5 replies
1d

Embedded SoCs like AMD's V2000, which are what Synology and the like use.

If you want to step up to serving an entire case or 4U of HDDs, you're going to need PCIe lanes, in which case a W680 board with an i5-12600K, a single ECC UDIMM, integrated Ethernet, and a SAS HBA in the PCIe slot is probably as low-wattage as you can get. Shame the W680 platform cost is so high; AM4/Zen 2 is cheaper to the point of still being viable.

You can also get Xeon, embedded Xeon, AM5, or AM4 (without an iGPU).

There's nothing inherently wrong with running a RAID without ECC for 5 years; people do it all the time and things go fine.

eisa01
4 replies
1d

Been thinking of just getting a Synology with ECC support, but what I find weird is that the CPUs they use are 5+ years old. Feels wrong to buy something like that “new”.

Same with the TrueNAS Mini.

hypercube33
0 replies
19h34m

I have a 3U NAS I built in 2012 or so with a two-core Sempron, running Windows and Storage Spaces, and it still holds up just fine.

faeriechangling
0 replies
23h24m

For the most part, these are computers which are meant to stick around through 2-4 upgrade cycles of your other computers, just doing various low-power 24/7 tasks like file serving.

You could say “well that's stupid, I'm going to make a balls-to-the-wall build server that also serves storage with recent components”, but the build server components will become obsolete faster than the storage components. Consolidating on one computer leads to incidental complexity when you try to run something like Windows games on a NAS operating system; being forced to use things like ECC compromises absolute performance; you'll want the computer by your desk but also in a closet, since it has loud storage; you're liable to run out of PCIe lanes and slots; and you want open cooling for the high-performance components but a closed case for the spinning rust. It's all a bit awkward.

Much simpler is to just treat the NAS as an appliance that serves files and maybe runs a Plex server, some surveillance, a weather station, rudimentary monitoring, and home automation - things for which something like a V2000 is overkill. Then put the bleeding-edge chips in things like cell phones and laptops, and let the two computers do different jobs. Longer product cycles between processors make support cheaper to maintain over long periods and keep prices low.

dannyw
0 replies
13h11m

Serving files is not compute intensive at all.

cpncrunch
0 replies
23h57m

It depends what your requirements are. I've been using a low-end Synology box for years as a home dev server and it is more than adequate.

ianai
4 replies
1d

Agree. Didn’t even see ECC discussed.

Apparently this board supports ECC with this chip: Supermicro X13SAE W680 LGA1700 ATX motherboard.

Costs about $550.

One option is building around that and adding some PCIe 4.0-to-NVMe adapter boards hosting as many NVMe drives as needed. Not cheap, but home-budget affordable.

ThatMedicIsASpy
3 replies
1d

You need workstation chipsets to get ECC on Intel desktop CPUs.

And yes, they start at around $500.

philjohn
2 replies
1d

If you go back a few generations, the C246 chipset can be had on boards costing 200, and if you pair it with an i3-9100T you get ECC as well as pretty damn low power usage.

ianai
1 replies
23h8m

You are limited to PCIe 3.0 speeds there, though. But good suggestion.

philjohn
0 replies
17h52m

That's true, but if your goal is low power, that's not necessarily going to be a bottleneck - even if you dedicate all 16 PCIe lanes to NVMe storage it's going to be more than fast enough for 99% of home server needs.

philjohn
2 replies
1d

That's why I went with an i3-9100T and an ASRock Rack workstation board - ECC support (although UDIMM rather than RDIMM).

a20eac1d
1 replies
23h34m

This sounds similar to a build I'm planning. I cannot find the workstation mainboards at a reasonable price though. They start at like 400€ in Europe.

philjohn
0 replies
21h9m

There's an ASUS one that's available as well, the ASUS C246 PRO - it's about 250 GBP.

I did build mine 2 years ago, so the C246 motherboards are less available now; the C252 is another option, which will take you up to 11th-gen Intel.

rpcope1
1 replies
23h35m

I think the trick is to go with a generation or two old Supermicro motherboard in whatever ATX case you can scrounge up, and then use either a low power Xeon or a Pentium/Celeron. Something like the X11SAE-F or X12SCA-F (or maybe even older) is plenty, though maybe not quite as low power. I still use an X9SCA+-F with some very old Xeon for a NAS and to run some LXC containers. It idles at maybe 20-30W instead of 5, but I've never had any issues with it, and I'm sure it's paid itself off many times over.

sgarland
0 replies
14h52m

Even better, Supermicro will pick up the phone/answer emails, even if you bought a years-old secondhand server. They have the manuals, and are more than happy to help you out.

Love my X9 and X11 boards.

jhot
1 replies
23h23m

I'm running TrueNAS on a used E3-1245 v5 ($30 on eBay) and an Asus workstation mobo with 32 GB ECC and 4 spinning drives. Not sure individually, but the NAS along with an i5-12400 compute machine, router, and switch uses 100W from the wall during baseline operation (~30 containers). I'd consider that hugely efficient compared to some older workstations I've used as home servers.

NorwegianDude
0 replies
20h58m

I've been running an E3-1230 v3 for over 10 years now. With 32GB ECC, 3 SSDs, 4 HDDs and a separate port for IPMI, I'm averaging 35 W from the wall with a light load. Just ordered a Ryzen 7900 yesterday, and I guess the power consumption will be slightly higher for that one.

tyingq
0 replies
1d

If you're on a budget, a used HP Z-series workstation supports ECC RAM. A bare-bones one is cheap, though the ECC memory can be expensive since it's not the (plentifully available) server-type RDIMMs. Not a low-power setup either :)

sandreas
22 replies
23h36m

There is a German forum thread with a Google Docs document listing different configurations below 30W [1]. Since there are very different requirements, this might be interesting for many homeserver / NAS builders.

For me personally I found my ideal price-performance config to be the following hardware:

  Board: Fujitsu D3417-B2
  CPU: Intel Xeon E3-1225 v5 (better: the also-compatible E3-1275 v6, but it's way more expensive)
  RAM: 64GB ECC RAM (4x16GB)
  SSD: WD SN850X 2TB (consumer SSD)
  Case: Fractal Design Define Mini C
  Cooling: Big no-name block, passively cooled by the case fan
  Power: PicoPSU 120W + 120W Leicke power supply
  Remote administration via Intel AMT + MeshCommander using a DP dummy plug
I bought this config used VERY CHEAP and I am running Proxmox - it draws 9.3W idle (without HDDs). There are 6 SATA ports and a PCIe slot if anyone would like to add more space or pass through a dedicated GPU.
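If you want to check how low your own build idles and which C-states it actually reaches, powertop is a reasonable starting point (generic sketch, not specific to this board):

    sudo powertop --auto-tune              # apply the suggested runtime power tunables
    sudo powertop --html=powerreport.html  # write a report of C-state residency and top power consumers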

It may be hard to get, but I paid €380 in total. It does not work very well for media encoding; for that you should go for a Core i3-8100 or above. Alternatively you could go for the following changes, but these might be even harder to get for a reasonable price:

  Boards: GIGABYTE C246N-WU2 (ITX), Gigabyte C246-WU4 (mATX), Fujitsu D3517-B (mATX), Fujitsu D3644 (mATX)
  Power: Corsair RM550x (2021 Version)

Cheap used workstations that make good servers are the Dell T30 or Fujitsu Celsius W550. The Fujitsu ones have D3417(-A!) boards (not -B) with proprietary power supplies using 16 power pins (no 24-pin ATX, but 16-pin). There are adapters on AliExpress for 24-pin to 16-pin (Bojiadafast), but this is a bit risky - I'm validating that at the moment.

Ryzen possibilities are pretty rare, but there are reports that the AMD Ryzen 5 PRO 4650G with an Asus PRIME B550M-A board draws about 16W idle.

Hope I could help :-)

[1]: https://goo.gl/z8nt3A

manmal
9 replies
21h54m

For anybody reading this - I think it's a great config, but I would be careful with pico PSUs if you want to run a bunch of good old spinning disks. HDDs have a sharp power peak when spinning up, and if you have a couple of them in a RAID, they might spin up synchronously, potentially exceeding the PSU's envelope.
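Rough numbers to illustrate (typical datasheet figures, not measurements from this build):

    # a 3.5" HDD commonly draws ~1.5-2 A on the 12 V rail while spinning up
    # 4 drives spinning up together: 4 x 2 A x 12 V = 96 W on the 12 V rail alone,
    # before the board, CPU and 5 V loads are counted - uncomfortably close to a 120 W pico PSU's ceiling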

agilob
4 replies
20h47m

To go deeper: depending on the file system, some FS won't let HDDs go to sleep, so they always consume power at max RPM.

scns
3 replies
17h39m

> some FS won't let HDDs go to sleep

Which ones?

agilob
2 replies
8h51m

btrfs and zfs at least - probably the most frequently used FSs for RAID

tomatocracy
0 replies
8h6m

You can do it with ZFS, at least. I have a ZFS setup which idles drives: if I don't access the datasets on the drives, they don't spin up. You do have to be careful with things like auto-snapshotting tools, though - those will typically wake drives up. I had to script mine to only run on the relevant datasets when drives are spun up, and also had to edit the tool (sanoid) to prevent it from enumerating the list of snapshots on the idle datasets not being snapshotted.
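Roughly the shape of the wrapper, heavily simplified (device name is a placeholder, and the real version filters per dataset; run as root):

    #!/bin/sh
    # only take snapshots when the backing disk is already spinning
    if hdparm -C /dev/sdb | grep -q 'active/idle'; then
        sanoid --cron
    fi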

PBondurant
0 replies
6h34m

I'm running a ZFS NAS with mirrored 2-HDD vdevs. I've set the HDDs to spin down after 20 minutes using hdparm -S and this works fine under ZFS. The drives (Toshiba N300s) take ~15s to spin up when something accesses them, and in my experience ZFS has always handled this gracefully.

This vdev only contains data; it's not running the OS / root.
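For reference, roughly the commands involved (a sketch; /dev/sdX is a placeholder):

    sudo hdparm -S 240 /dev/sdX   # 240 x 5 s = 20 minute idle timeout before spin-down
    sudo hdparm -C /dev/sdX       # check whether the drive is currently active or in standby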

tomatocracy
2 replies
8h15m

Even if you have no issues with spinup, insufficient power can result in silent data corruption, which can be quite frustrating to diagnose (even if ZFS will help you see it). The difference between a picoPSU and a decent "real" PSU in terms of power consumption at the wall is maybe a couple of watts in my experience - not worth the risk for most people.

sandreas
1 replies
6h52m

Is there a source for this?

tomatocracy
0 replies
13m

Not sure. I experienced this personally using a picoPSU with spinning drives on one machine - no issues with spinup but silent data corruption on writes (which then got picked up by ZFS). Switching to a traditional PSU cured the problem.

sandreas
0 replies
15h28m

You're right. 6 spinning disks and a discrete GPU won't work with 120W. I would recommend getting the Corsair RM550x 2021 (or similar) when using a NAS with more than 2 disks or a discrete GPU.

I personally don't need more than 10TB of space, so 2 10TB Seagate EXOS drives in a ZFS mirror work just fine with 120W, as long as you don't run Prime95 the whole time. You might go up to a PicoPSU 150 then.

ksjskskskkk
9 replies
21h6m

B550M with an AMD Ryzen 5 PRO from 2023 (will double-check the models and post in that forum).

I get 9W idle, and AMD PRO CPUs have ECC support, which is a requirement for me on any real computer. I disable most components on the board. It's bottom-tier consumer quality.

Best part: when I need to burn many more watts, the integrated GPU is pretty decent.

akvadrako
3 replies
16h13m

What type of ECC is a requirement? All DDR5 has on-die ECC, but it's hard to get real ECC with error reporting. Since errors are pretty rare (about 1 per year per 100GB) and you have to sacrifice something to get it, it seems like a hard choice.

amarshall
2 replies
15h51m

On-die ECC is not ECC memory. On-die ECC only corrects certain errors (e.g. not transport errors), and is absolutely necessary with DDR5 else such errors would be intolerably frequent.

True DDR5 ECC memory exists, and is not hard to find—just look for “server” RAM.

wtallis
0 replies
14h40m

I wish JEDEC and the memory manufacturers had not decided to present on-die ECC as being a "feature" of DDR5 memory, when it has far more to do with the generation of the fab process than the generation of the memory interface.

ksjskskskkk
0 replies
12h10m

Anecdotal: I run the exact same system and tasks on desktops with DDR4 ECC and on laptops with non-ECC DDR5 (they technically have AMD PRO APUs and SO-DIMM sockets, but I can't find the damn memory to buy).

The laptops see two unexplainable crashes per 6 months.

Jedd
2 replies
17h54m

The same way backups are most rigorously performed by people who've lost data, ECC is a non-negotiable requirement for people who've suffered slow data corruption via silent memory failures.

It surprises me that people are happy with 64GB+ non-ECC builds, especially for a NAS (i.e. very long-term storage, where corruption probably wouldn't be noticed for years).

Periodically I look at replacing my small fleet of HP micro Gen8's, which use Xeons, have 4 x 3.5" bays (with proper h/w RAID1), but max out at a frustratingly low 16GB. They're quite robust, but because of their age - and HP - a horse veterinarian approach to component failures is usually indicated.

A Ryzen + ECC whitebox build is massively appealing, but almost everyone's build-out includes caveats like 'check the datasheet of the mobo and CPU' (because series aren't consistent), and about half the time a terrifying disclaimer to the effect of 'ECC is present / enabled in the kernel, but I can't tell if it's actually functioning properly'.
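For what it's worth, two ways to sanity-check ECC on a Linux whitebox (assuming dmidecode is installed and the kernel's EDAC driver supports the memory controller):

    sudo dmidecode -t memory | grep -i 'error correction'   # should report e.g. "Multi-bit ECC"
    grep . /sys/devices/system/edac/mc/mc*/ce_count         # corrected-error counters appear only if EDAC recognises the controller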

sandreas
0 replies
15h34m

This. Additionally, there is the price point of modern AMD "workstation-ish" systems vs. used Intel "real server" systems. Efficiency together with more performance is great, but if I had to pay double the price for a system that offers lots of performance I won't need anyway, I'm not willing to pay it.

Used Intel systems are just way cheaper, because nobody seems to want them anymore... Everyone wants a >= 8-core Ryzen :-) I personally don't need this for a little Proxmox / NAS kind of system.

However, I'm hoping for frame.work to announce official ECC support [1] on their notebook boards (pretty likely this will never happen). I would love to just use the Cooler Master case with small, efficient, modern notebook hardware for 600 bucks, just because I don't need additional hard disks and I would buy one to support the company.

[1]: https://community.frame.work/t/ecc-error-correcting-code-def...

harshreality
0 replies
11h40m

Older AGESA versions (the part of the mobo firmware that handles system initialization) had a bug that prevented the chipset from recognizing and utilizing ECC RAM properly, even though the chipsets should support it. Check any motherboard in question for a firmware update that includes at least AGESA 1.0.0.5 patch C.

https://www.reddit.com/r/truenas/comments/10lqofy/

vardump
0 replies
17h40m

Indeed. ECC is non-negotiable, even for "consumer"-grade servers. Heck, also for workstations. Fewer (or no) mystery software malfunctions.

bb88
0 replies
18h50m

This is the setup I was looking for; ECC support is really a requirement for a NAS.

kogepathic
1 replies
17h3m

> The Fujitsu ones have D3417(-A!) boards (not -B) with proprietary power supplies using 16 power pins (no 24-pin ATX, but 16-pin). There are adapters on AliExpress for 24-pin to 16-pin (Bojiadafast), but this is a bit risky - I'm validating that at the moment.

They work just fine. The pinout is well known [1]. You can also adapt a normal ATX PSU if you boost 5VSB to 11V.

Fujitsu boards are great, and very inexpensive to purchase in the EU. Someone has even reverse engineered the license for the KVM features of their remote management (iRMC S4/S5) [2]

[1] https://web.archive.org/web/20200923042644/https://sector.bi...

[2] https://watchmysys.com/blog/2023/01/fujitsu-irmc-s4-license/

sandreas
0 replies
15h54m

Oh, this is pretty interesting, thank you very much. You mean that the Bojiadafast adapters work fine?

If so, I wonder if there is a hit in efficiency because of the required step-up / step-down converters in that adapter.

However, on my board there seems to be an unsoldered 24-pin connector that could be used as-is with a little soldering, but since it is on hold as my replacement system in case my ...-B variant dies, I'm not willing to risk too many experiments :-)

ThatMedicIsASpy
22 replies
1d

7950X3D, X670E Taichi, 96GB 6400MHz CL32, 2x4TB Lexar, 4x18TB Seagate Exos X18, RX570 8G, Proxmox.

Idle no VM ~60-70W.

Idle TrueNAS VM drives spinning ~90-100W.

Idle TrueNAS & Fedora Desktop with GPU passthrough ~150W

In a few weeks the 570 will be replaced by a 7900 XTX. The RAM adds a lot of watts: 3-5W per 8GB of RAM, depending on the frequency, is common for DDR5.

I was expecting around 50-100W for Proxmox+TrueNAS. I did not consider the power draw of the RAM when I went for 96GB.

hypercube33
9 replies
23h45m

I really want the Ryzen Embedded or Epyc 3000(?) series that has dual 10GbE on-package for something like a NAS, but both are super expensive or impossible to find.

ThatMedicIsASpy
4 replies
23h40m

ASRock Rack B650D4U-2L2T/BCM: 2x10G, 2x1G, IPMI.

For less power consumption, Ryzen 8000 is coming up (wait for Jan 8th, CES), and the APUs tend to be monolithic and draw a lot less power than the chiplet parts.

tw04
3 replies
23h26m

Even that uses Broadcom 10GbE, not the embedded AMD Ethernet. It's really strange; I can only assume there's something fatally wrong with the AMD Ethernet.

AdrianB1
2 replies
22h11m

Or it just tells you that customers of this kind of equipment want proven solutions instead of other (novelty) options, so the manufacturers build their products with that in mind. Stability and support are very important to most buyers.

tw04
1 replies
19h32m

If that were the case I'd expect to still see at least SOME products utilizing the AMD chipset, even if budget-focused. I have literally not seen a single board from any manufacturer that utilizes the built-in NIC. Heck, there are Intel Xeon-D chipsets that utilize both the onboard NIC and an external Broadcom to get 4x for cheap.

formerly_proven
0 replies
4h37m

That's because neither AM4 nor AM5 includes provisions for a NIC (MAC) on the SoC. AFAIK the chipsets don't, either. Intel chipsets generally have a gigabit MAC, but it is rarely used (apparently it specifically requires an Intel PHY, and that's more expensive than a complete NIC from, say, Realtek).

Only the embedded chips have MACs built in.

j45
3 replies
23h36m

It may be possible to install or add an external 2.5 or 10GbE device.

Either way, it's awful that there is not more 10GbE connectivity available by default. There's no reason it shouldn't be the next level up; we have been at 1 / 2.5 for far too long.

ThatMedicIsASpy
2 replies
23h7m

You can find what you desire but you always have to pay for it.

ASUS ProArt X670E-Creator WIFI, 10G & 2.5G at 460€

10G simply isn't that cheap. The cheapest 5 port switch is 220€. Upgrading my home net would be rather expensive.

vetinari
1 replies
21h40m

What makes it more expensive is insisting on 10GBase-T. 10G over SFP+ is not that expensive; the cheapest 4-port switch (MikroTik CRS305) is ~130 EUR.

selectodude
0 replies
16h11m

Optics aren’t free either.

MrFoof
7 replies
23h23m

You can go down to 50W idle, but it requires some very specific hardware choices where the ROI will never materialize, some of which aren’t available yet for Zen4.

I have…

* AMD Ryzen 7 PRO 5750GE

* 128GB ECC DDR4-3200

* Intel XL710-QDA2 (using QSFP+ to a quad-SFP+ passive DAC breakout)

* LSI 9500-16i

* Eight WD 16TB HDDs (shucked)

* Two 2TB SK Hynix P41 Platinum M.2 NVMe SSD

* Two Samsung 3.84TB PM9A3 U.2 NVMe SSD

* Two Samsung 960GB PM893 SATA SSD

So that’s the gist. Has a BMC, but dual 40GbE and can sustain about 55GbE over the network (in certain scenarios, 30-35GbE for almost all), running TrueNAS scale purely as a storage appliance for video editing, a Proxmox cluster (on 1L SFFs with 5750GEs and 10GbE idling at 10W each!) mostly running Apache Spark, a Pi4B 8Gb k3s cluster and lots more. Most of what talks to it is either 40GbE or 10GbE.

There is storage tiering set up so the disks are very rarely hit, so they’re asleep most of the time. It mostly is serving data to or from the U.2s, shuffling it around automatically later on. The SATA SSDs are just metadata. It actually boots off a SuperMicro SuperDOM.

——

The Zen 3 Ryzen PRO 5750GEs are unicorns, but super low power. Very tiny idle (they’re laptop cores), integrated GPU, ECC support, and the memory protection features of EPYC. 92% of the performance of a 5800X, but all 8C/16T flat out (at 3.95GHz because of an undervolt) caps at just under 39W package power.

The LSI 9500-16i gave me all the lanes I needed (8 PCIe, 16 SlimSAS) for the two enterprise U.2 and 8 HDDs, and was very low idle power by being a newer adapter.

The Intel dual QSFP+ NIC was deliberate as using passive DACs over copper saved 4-5W per port (8 at the aggregation switch) between the NIC and the switch. Yes, really. Plus lower latency (than even fiber) which matters at these transfer speeds.

The “pig” is honestly the ASRock X570D4U because the BMC is 3.2W on its own, and X570 is a bit power hungry itself. But all in all, the whole system idles at 50W, is usually 75-80W under most loads, but can theoretically peak probably around 180-190W if everything was going flat out. It uses EVERY single PCIe lane available from the chipset and CPU to its fullest! Very specific chassis fan choices and Noctua low profile cooler in a super short depth 2U chassis. I’ve never heard it make a peep, disks aside :)

ianai
4 replies
23h2m

Is the “E” for embedded? I.e., does it need to be bought in a package? I'm not seeing many market options.

MrFoof
3 replies
22h53m

Nope. The extra E was for "efficiency", because they were better binned than the normal Gs. Think of how much more efficient 5950Xs were than 5900Xs, despite more cores.

So the Ryzen PRO line is a "PRO" desktop CPU. So typical AM4 socket, typical PGA (not BGA), etc. However they were never sold directly to consumers, only OEMs. Typically they were put in USFF (1L) form factors, and some desktops. They were sold primarily to HP and Lenovo (note: Lenovo PSB fuse-locked them to the board -- HP didn't). For HP specifically, you're looking at the HP ProDesk and EliteDesk (dual M2.2280) 805 G8 Minis... which now have 10GbE upgrade cards (using the proprietary FlexIO V2 port) available straight from HP, plus AMD DASH for IPMI!

You could for a while get them a la carte from boutique places like QuietPC who did buy Zen 3 Ryzen PRO trays and half-trays, but they are long gone. They're also well out of production.

Now if you want one, they're mostly found from Taiwanese disassemblers and recyclers who part out off-lease 1L USFFs. The 5750GEs are the holy grail 8-cores, so they command a massive premium over the 6-core 5650GEs. I actually had a call with AMD sales and engineering on being able to source these directly about a year ago, and though they were willing, they couldn't help because they were no longer selling them into the channel themselves. Though the engineer sales folks were really thrilled to see someone who used every scrap of capability of these CPUs. They were impressed that I was using them to sustain 55GbE of actual data transfer (moving actual data, not just rando network traffic) in an extremely low power setup.

-- -----

Also, I actually just logged in to my metered PDU, and the system is idling right now at just 44.2W. So less than the 50W I said, but I wanted to be conservative in case I was wrong. :)

44.2W that has over 84TiB usable storage, with fully automagic ingest and cache that helps to serve 4.5GiB/sec to 6.5GiB/sec over the network ain't bad!

ianai
1 replies
22h6m

Nice! Wish they were easier to obtain!!

MrFoof
0 replies
19h31m

Agreed! Despite being PCIe 3.0, these were perfect home server CPUs because of the integrated GPU and ECC support. The idles were a bit higher than 12th gen Intels (especially the similarly tough to find "T" and especially "TE" processors) mostly because of X570s comparatively higher power draw, but if you ran DDR5 on the Intel platform it was kind of a wash, and under load the Zen 3 PRO GEs won by a real margin. Plus you really could use every scrap of bandwidth and compute these chips could muster. You use ALL the chip. :)

My HP ProDesk 405 G8 Minis with a 2.5GbE NIC (plus the built in 1GbE which supported AMD DASH IPMI) idled at around 8.5W, and with the 10GbE NICs that came out around June, are more around 9.5W -- with a 5750GE, 64GB of DDR4-3200 (non-ECC), WiFi 6E and BT 5.3, a 2TiB SK Hynix P31 Gold (lowest idle of any modern M.2 NVMe?), and modern ports including 10Gb USB-C. Without the WiFi/BT card it might actually get down to 9W.

The hilarious thing about those is they have an onboard SATA connector, but also another proprietary FlexIO connector that can take an NVIDIA GTX 1660 6GB! You want to talk a unicorn, try finding those GPUs in the wild! I've never seen one for sale separately! If you get the EliteDesk (over the ProDesk) you also get a 2nd M2.2280 socket for mirroring.

I have three of those beefy ProDesk 805 G8 Minis in a Proxmox 8 cluster, and it mostly runs Apache Spark jobs, sometimes with my PC participating (how I know the storage server can sustain 55GbE data transfer!), and it's hilarious that you have this computerized stack of napkins making no noise that's fully processing (reading, transforming, and then writing) 3.4GiB/sec of data -- closer to 6.3GiB/sec if my 5950X PC is also participating. I don't need the cloud, we have cloud at home!

-----

If you want a 5750GE, check eBay. That's where you'll find them, and rarely NewEgg. Just don't get Lenovo systems unless you want the whole thing, because the CPUs are PSB fuse-locked to the system they came in.

4750GEs are Zen 2s and cheaper (half the price), and pretty solid, but I think four fewer PCIe lanes. Nothing "wrong" with a 5750G per se, but they cap more around 67-68W instead of 39W.

Just if you see a 5750GE, grab it ASAP. People like me hunt those things like the unicorns they are. They go FAST! Some sellers will put up 20 at a time, and they'll all be gone within 48 hours.

-----

I really look forward to the Zen 4 versions of these chips, and the eventual possibility of putting 128GiB of memory into a 1L form factor, or 256GiB into a low power storage server. I won't need them (I'm good for a looooong time), but it's nice to know it'll be a thing.

Intel 15th gen may be great too, as it's such a massive architecture shift plus a new process node. Intel also tends to have really low board chipset power consumption, and really low idles.

Obscenely capable home servers that make no noise and idle in the 7-10W range are utterly fantastic.

justinclift
0 replies
13h37m

> using the proprietary FlexIO V2 port

They're also available cheaply on Aliexpress, and reportedly work ok.

Well, there are different models (with different numbers of PCIe lanes), so very much a case of "do your research first". :)

ThatMedicIsASpy
1 replies
22h59m

I'm looking at an LSI 9300-16i which is 100€ (refurbished) including the cables. I just have to flash it myself. Even a 9305 is triple the cost for around half the power draw.

My build is storage, gaming and a bunch of VMs.

Used Epyc 7000 was the other option for a ton more PCIe. I have no need for more network speed.

MrFoof
0 replies
22h36m

Yep. 9300s are very cheap now. 9400s are less cheap. 9500s are not cheap. 9600s are new and pricey.

As I said, you can't recoup the ROI from the reduced power consumption, even if you're paying California or Germany power prices. Though you can definitely get the number lower!

I had this system (and the 18U rack) in very close proximity in an older, non-air conditioned home for a while. So less heat meant less heat and noise. I also deliberately chased, "how low can I go (within reason)" while still chasing the goal of local NVMe performance over the network. Which makes the desire to upgrade non-existent, even 5+ years from now.

Not cheap, but a very fun project where I learned a lot and the setup is absolutely silent!

eurekin
1 replies
23h53m

What about networking? Did you go over 1gbit?

ThatMedicIsASpy
0 replies
23h43m

It has 2.5G. There are X670E boards with 10G if you desire more.

My home net is 1G with two MikroTik hAP ax3s, which are connected via the single 2.5G PoE port they each have (and one powers the other).

dist-epoch
1 replies
19h28m

> 3-5W per 8GB of RAM

I think that's wrong. It would mean 4 W x 6 = 24 W per DIMM.

I also have 48GB DDR5 DIMMs and HWiNFO shows 6W max per module.

ThatMedicIsASpy
0 replies
4h28m

Where are those 48GB DIMMs? Ryzen 7000 limits 4 DIMMs to 3600MHz; to get high clocks, a 2-DIMM config is the maximum for Ryzen 7000. Are they in use? I can cut off 10-15W by going to 4800MHz at 1.1V (from 6400 at 1.3V).

CommanderData
13 replies
19h17m

Great hardware but when the software is a job to administer I have a hard time justifying builds like these.

My Synology NAS, for example, has 8 GB RAM and a J4150 processor. It runs about 15 containers, WireGuard, and on top of that DSM (which is Synology's OS). It usually idles around 1-3%.

Software makes all the difference - DSM has been by far the biggest benefit and surprise to me, and with anything else I would be losing time. I'm running TrueNAS as a second backup server, but it in no way compares to DSM. Sometimes I don't want to trawl through logs and trial-and-error just to get a basic cron job set up to back up a file off another server; there are countless examples where DSM has just worked.

I really think Synology is missing a trick here: they clearly have software that is miles ahead of everything else and customizable should you need it. They should be more like the Microsoft of the NAS world, making DSM run on non-Synology platforms, or at least making it easier to do yourself. It's a great OS, it sells itself, and it can easily be a way to upsell things like Active Backup for Business.

ksec
3 replies
11h53m

My biggest problem with Synology continues to be their kernel version being very old. They were still shipping version 4.4 this year, and only this year's new products get version 5.10.

And you don't get kernel version upgrades in between DSM releases.

Dalewyn
2 replies
10h42m

Is that really a problem as long as Synology supports it (we're paying money, after all)?

ksec
1 replies
10h9m

So far it hasn't been, but it does mean we don't get an up-to-date BTRFS version, which is what worries me most.

buro9
0 replies
10h4m

Are you sure? Synology does a very bespoke BTRFS and backports a lot, even though it's incredibly complex to do so. Their BTRFS is not standard - or rather, it's not the standard version for a given kernel version.

walterbell
1 replies
15h29m

https://xpenology.org/

> Xpenology is a bootloader for Synology’s operating system, called DSM (Disk Station Manager), and is used on their NAS devices. DSM is running on a custom Linux version developed by Synology ... Xpenology creates the possibility to run the Synology DSM on any x86 device like any PC or self-built NAS. So, you can benefit from the powerful multimedia- and cloud features of DSM without buying the hardware NAS from Synology. Many people prefer this because they can pick out their own (more powerful) processor and RAM to handle things like transcoding video.

CommanderData
0 replies
10h37m

My point about Synology being the MS of NAS is actually about this: spend a little development time making DSM more portable, with the caveat of no support on non-Synology hardware.

I think we'd still arrive at a weird place where we see people buying Asustor or other NAS hardware just to run DSM, even if it's not supported - but at least it wouldn't be a complete hack.

People buy convenience, and DSM does everything NAS-related really well.

I suppose they have internal plans to take DSM the other way and lock out attempts like Xpenology. They aren't late either, as the other rivals are still miles behind.

agumonkey
1 replies
16h34m

DSM being DiskStation Manager?

CommanderData
0 replies
10h7m

Yes, it's the OS that ships with their hardware. DSM 7 being their latest.

Dalewyn
1 replies
15h22m

I deeply appreciate Synology taking care of all the Linux jank I would otherwise have to deal with myself. Easily worth the price tag of buying one of their NASes.

Specifically, I have a DS1520+ with five 16TB Seagate Iron Wolf Pro HDDs in a RAID6 config (have another, sixth identical HDD as a cold spare in the closet) and it has been running absolutely flawlessly for the now two years I've had it.

My track record with Linux installations otherwise is "I keep killing them by just breathing on them, god damn.". I'm a walking Linux genocide horror show.

CommanderData
0 replies
10h16m

Convenience is nice and something they do really well. Rock-solid for me for 3+ years. A recent issue with my failed cache drive: within 2 days the issue was triaged and investigated by their devs. The experience was great.

thelittleone
0 replies
16h34m

Agree, it's a super nice UI and UX. The first time I used them was around 2010 and it was already a joy to use. It would be kind of nice to have a Synology DSM front end to cloud IaaS.

matthewfcarlson
0 replies
17h44m

I suspect focusing on a set of hardware and making it work really well is part of what makes the OS so good. Hardware configs explode in a combinatorial fashion, making it impossible to test everything once you have more than a few options.

intrasight
0 replies
14h47m

I think of them as the Microsoft of NAS. Everyone I know with a NAS uses Synology. Most because I suggested it.

jnsaff2
9 replies
1d

I have a 5-node ceph cluster built out of Fujitsu desktops that I got for 50 euro a piece.

4 nodes have 8gb ram and one has 16gb.

CPU in each is i5-6500.

Each has an NVMe that is split for OS and journal and a spinning HDD.

The cluster idles at 75W and draws about 120W at full load. That is intense Ceph traffic, not other workloads.

Throw839
5 replies
1d

That Fujitsu part is important. Many mainstream brands do not implement power states correctly; Fujitsu seems quite focused on power consumption.

paulmd
2 replies
21h4m

fujitsu has always been underappreciated in the mainstream tbh. there has always been a thinkpad-style cult following (although much smaller) but japanese companies often do a pretty terrible job at marketing in the west (fujifilm being another fantastic example).

my university issued T4220 convertible laptops, with wacom digitizers in the screens. I rarely used it but the pivot in the screen made it indestructible, it survived numerous falls hitting the corner of the screen/etc because the screen simply flops out of the way and pivots to absorb the energy. I later got a ST6012 slate PC that my uni bookstore was clearing out (also with a wacom digitizer, and a Core2Solo ULV!). Both of them are extremely well-thought-out and competently designed/built hardware. Doesn't "feel" thinkpad grade, but it absolutely is underneath, and featured PCMCIA and bay batteries and other power-user features.

https://www.notebookcheck.net/Fujitsu-Siemens-Lifebook-T4220...

https://www.ruggedpcreview.com/3_slates_fujitsu_st6012.html

They also did a ton of HPC stuff for Riken and the other japanese research labs, they did a whole family of SPARC processors for mainframes and HPC stuff, and pivoted into ARM after that wound down. Very cool stuff that receives almost no attention from mainstream tech media, less than POWER even.

https://www.youtube.com/watch?v=m0GqCxMmyF4

Anyway back on topic but my personal cheat-code for power is Intel NUCs. Intel, too, paid far more attention to idle power and power-states than the average system-integrator. The NUCs are really really good at idle even considering they're using standalone bricks (my experience is laptop bricks are much less efficient and rarely meet 80+ cert etc). A ton of people use them as building blocks in other cases (like HDPlex H1 or Akasa cases), they don't have a ton of IO normally but they have a SATA and a M.2 and you can use a riser cable on the M.2 slot to attach any pcie card you want. People would do this with skull canyon f.ex (and HDPlex H1 explicitly supports this with the square ones). The "enthusiast" style NUCs often have multiple M.2s or even actual pcie slots and are nice for this.

https://www.amazon.com/ADT-Link-Extender-Graphics-Adapter-PC...

And don't forget that once you have engineered your way to pcie card formfactor, you can throw a Highpoint Rocket R1104 or a SAS controller card in there and run multiple SSDs (up to 8x NVMe) on a single pcie slot, without bifurcation. Or there are numerous other "cheat code" m.2 devices for breaking the intended limits of your system - GPUs (Innodisk EPV-1101/Asrock M2_GPU), SATA controllers, etc.

https://www.youtube.com/watch?v=M9TcL9aY004 (actually this makes the good point that CFExpress is a thing and is very optimized for power. No idea how durable they are in practice, and they are definitely very expensive, but they also might help in some extreme low power situations.)

Personally I never found AMD is that efficient at idle. Even with a monolithic apu you will want to dig up an X300TM-ITX from aliexpress, since this allows you to forgo the chipset. Sadly AMD does not allow X300 to be marketed directly as a standalone product, only as an integrated system like a nuc or laptop or industrial pc, despite the onboard/SOC IO being already quite adequate for beige box usage. Gotta sell those chipsets (hey, how about two chipsets per board!?). But OP article is completely right that AMD’s chipsets just are not very efficient.

Dalewyn
1 replies
15h0m

> Personally I never found AMD is that efficient at idle.

Since we're talking about Japan, Intel is affectionately called the IdleM@ster as a pun on the very popular IdolM@ster franchise.

https://twitter.com/neko1942/status/1261661579101143040

https://twitter.com/Fact_M_Q/status/1325604249623953408

paulmd
0 replies
10h42m

that's hilarious, thanks

but yeah, it's true, the paradox of intel is that they can get it so wrong in the big picture and execution and integration but sometimes the little details are so right. I225-V is a mess. Sapphire Rapids has 700W transients above average. But idle power and interactive scenarios on client processors are a dream. They still tend to be better than AMD on the driver side and general platform validation and stability (as much as that has been a doubtful thing and continues to be going forward, it's true).

skippyboxedhero
0 replies
1d

The NUC and OptiPlex aren't bad either. There are also very good ASRock boards (I can't remember what the modern ones are called, but the H110T is one; I used it for a bit and it idled at 6W, with laptop memory and a power brick). But Fujitsu is the S-tier.

In practice I found I needed a bit more power, but you can get some of the Fujitsu boards with a CPU for $30-40, which is hard to beat.

ThatMedicIsASpy
0 replies
1d

I have an HP ProDesk; powertop shows up to C10, which I never reach. Must be the SSD or NVMe I have. But yeah, the BIOS in those is super cut down and there are hardly any energy settings I can change.

eurekin
1 replies
23h52m

What effective client speeds are you getting?

jnsaff2
0 replies
23h36m

Currently only gigabit network and I can easily saturate that.

Thinking about chucking in 25gbit cards.

fomine3
0 replies
7h8m

Used Skylake/Kaby Lake PCs are going to be sold dirt cheap because Win11 doesn't support them. They are still good on idle power usage and offer fine performance for a Linux server.

chx
9 replies
1d

Why not the N100?

Even an N305 fits the purpose, the N100 would be even less https://www.reddit.com/r/MiniPCs/comments/12fv7fh/beelink_eq...

arp242
2 replies
1d

The N100 wasn't yet released when this was written in May (or was only just released).

Also, the N100 only supports 16GB RAM, and this guy has 64GB. The number of PCIe lanes (9 vs. 20) probably matters for their use case as well. And the i5 does seem quite a bit faster in general.

Comparison: https://ark.intel.com/content/www/us/en/ark/compare.html?pro...

s800
1 replies
23h55m

I'm running a home server on an N100 (ITX model) with a 32GB DIMM, works well.

adrian_b
0 replies
23h23m

It has been widely reported that Alder Lake N actually works with 32 GB, but for some reason Intel does not support this configuration officially.

The same happened with the previous generations of Intel Atom CPUs, they have always worked without any apparent problems with more memory than the maximum specified by Intel.

trescenzi
1 replies
20h24m

I bought a tiny, fits-in-the-palm-of-my-hand N100 box on Amazon for $150 [1]. It currently does basically everything I need and idles at 7.5W.

I've got Cloudflare set up for DNS management and a few simple sites hosted on it, like blogs and Gitea. It has an SD card slot that I use as extra storage.

Sure, it's not nearly as awesome as the setup detailed here, but I couldn't recommend it more if you just want a small, simple home server.

[1] https://a.co/d/cIzEFPk

SomeoneFromCA
0 replies
9h27m

7.5 W is a bit high for an N100 at idle, IMHO. The power supply is not very efficient, I bet.

imglorp
0 replies
1d

Because the author said they wanted a bunch of disks.

hrdwdmrbl
0 replies
1d

+1 for the Nx00 series of chips. I just bought myself a pre-built mini pc with an N100. Low power, good price, great performance.

I wonder if in a few years they might not eat the whole mini PC market. If the price can come down such that they’re competitive with the various kinds of Pis…

cjdell
0 replies
1d

I'm very impressed with my N100 mini PC (fits in your palm) that I bought from AliExpress. Takes between 2-8W and uses just a plain old 12V plug-style power supply with a DC barrel. Perfect for Home Assistant and light virtualisation.

Performance is actually better than my quad core i5-6500 mini PC. Definitely no slouch.

NelsonMinar
0 replies
15h48m

Another happy N100 mini PC user. Mine's about 7W when idle, including a couple of spinning USB drives. The most I could get it to pull was about 14W. Really remarkable little systems for a very good price.

ulnarkressty
6 replies
23h9m

As exciting as it is to design a low power system, it's kind of pointless in the case of a NAS that uses spinning rust as storage media - as the author later writes, the HDD power consumption dwarfs the other system components.

If one uses SSD or M.2 drives, there are some solutions on the market that provide high speed hardware RAID in a separate external enclosure. Coupled with a laptop board they could make for a decent low power system. Not sure how reliable USB or Thunderbolt is compared to internal SATA or PCIe connections though... would be interesting to find out.

V__
4 replies
22h17m

Don't they stop spinning when idle?

layer8
2 replies
21h55m

Not by default, but you can have the OS spin them down after a certain idle period. Doing that too frequently can affect the lifetime of the drive, though. You save maybe 4 watts per drive by spinning them down.

Dalewyn
1 replies
15h3m

Personally, I make sure my HDDs (regardless of use case) never spin down when idle. The cost in lifespan isn't worth the electricity bill savings.

Arn_Thor
0 replies
10h43m

Same. Also I’ve had some software error out as it waits for drives to spin up (in an NFS share context).

orthoxerox
0 replies
21h54m

They are never idle if the NAS is seeding torrents.

nabla9
0 replies
19h50m

You can shut down HDDs when you don't use them:

    sudo hdparm -Y /dev/sdX

uxp8u61q
4 replies
23h41m

I know nothing about building NASes, so maybe my question has an obvious answer. But my impression is that most x64 CPUs are thoroughly beaten by Arm or RISC-V CPUs when it comes to power consumption. Is there a specific need for the x64 architecture here? I couldn't find an answer in TFA.

pmontra
0 replies
21h25m

I'm using an Odroid HC4 as my home server. It has an ARM CPU and it's idling at 3.59 W right now with a 1 TB SATA 3 SSD and some web apps that are basically doing nothing, because I'm their only user. It's got a 1 Gbit network card, like my laptop. I can watch movies and listen to music from its disk on my phone and tablet.

There is no need for something faster. The SATA 3 bus would saturate a 2.5 Gbit card anyway. The home network is Cat 6A, so it could go up to 10 Gbit. We'll see what happens some years from now.
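Rough numbers behind that (approximate, after protocol overhead):

    # SATA 3: 6 Gbit/s line rate, roughly 550-600 MB/s usable
    # 2.5 GbE: 2.5 Gbit/s, roughly 280-300 MB/s usable
    # so a single SATA 3 SSD can already saturate a 2.5 GbE link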

hmottestad
0 replies
23h34m

You can very easily run Docker containers on it. That's why I went with a Ryzen chip in mine.

You could always use an RPi if you want to go with ARM; you'll want something with ARMv8.

arp242
0 replies
22h53m

> my impression is that most x64 CPUs are thoroughly beaten by Arm or RISC-V CPUs when it comes to power consumption

Not really.

ARM (and to a lesser degree, RISC-V) is often used and optimized for low power usage and/or low heat. x64 is more often optimized for maximum performance, at the expense of higher power usage and more heat. For many x64 CPUs you can drastically reduce the power usage if you underclock the CPU just a little bit (~10% slower), especially desktop CPUs but also laptops.

There are ARM and RISC-V CPUs that consume much less power, but they're also much slower and have a much more limited feature set. You do need to compare like with like, and when you do, the power usage differences are usually small to non-existent in modern CPUs. ARM today is no longer the ARM that Wilson et al. designed 40 years ago.

And for something connected to mains, even doubling the efficiency and going from 7W to 3.5W doesn't really make all that much difference. It's just not a big impact on your energy bill or climate change.

adrian_b
0 replies
23h15m

Most Arm or RISC-V CPUs (with the exception of a few server-oriented models that are much more expensive than x86) have very few PCIe lanes and SATA ports, so you cannot make a high throughput NAS with any of them.

There are some NAS models with Arm-based CPUs and multiple SSDs/HDDs, but those have a very low throughput due to using e.g. only one PCIe lane per socket, with at most PCIe 3 speed.

treprinum
3 replies
20h22m

My NAS has a 4-core Pentium J and is way under 7W idle, inside a small Fractal case with 6x20TB HDDs. Why would you need 12th/13th gen for file transfers?

dbeley
1 replies
19h29m

Interesting. I assume that's with all drives off - how many watts with some disk usage?

treprinum
0 replies
5h43m

Each drive adds around 7W non-idle.

wffurr
0 replies
19h32m

For encoding maybe? OP says “reasonable CPU performance for compression” and also it was a CPU they already had from a desktop build.

pomatic
3 replies
17h9m

7 watts idle is unimpressive out of context

If you have a NAS that is idle most of the time and want to minimise power consumption, how about an embedded-CPU-based WoL generator? Sniff packets destined for the fileserver, which is otherwise in deep sleep, and automagically wake it up with the relevant WoL packet when relevant traffic is detected. You'd get, say, <300mA consumption at 3.3V, and full performance on wake. It would be transparent to the machines attempting to access the server. If idle power measurement is your criterion, this might be a way forward?
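The wake side of that is just standard Wake-on-LAN; a minimal sketch (interface name and MAC are placeholders, and the packet-sniffing trigger logic is left out):

    sudo ethtool -s eth0 wol g      # on the server: enable magic-packet wake before it sleeps
    wakeonlan 00:11:22:33:44:55     # from the watcher device: send the magic packet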

getwiththeprog
2 replies
16h3m

Cool idea. I have no idea how to automagically do that.

graphe
1 replies
16h0m

PoE or power over Ethernet.

allset_
0 replies
13h24m

You don't need PoE to do WoL.

mattbillenstein
3 replies
18h49m

Everyone's needs are different, but over time, with the loss of a couple of disks, I started to hate running RAID5 or 6 with HDDs. It became an exercise in how fast I could replace a disk before the next one died, and whether the rebuild would actually work - although it always did. Also, the hot-swap case/cage I had, with all the SATA connectors and power connectors, seemed kind of flaky - it was very cheap.

So a couple of years ago I downsized to a single 2TB SSD in a smallish ATX case - and another in a completely different machine that gets rsync'd every 4 hours. My NAS is now just my last desktop's hardware with two SSDs - one boot, one larger storage - running Plex, smbd, backups of misc stuff on cron, duplicity backup to the cloud, etc. on Ubuntu. If I didn't have this extra hardware, I'd probably just run two medium-powered NUCs with an NVMe boot disk and a bigger SSD for storage.
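For illustration, the kind of cron entry that setup implies (paths and hostname are made up, not the actual config):

    # /etc/cron.d/storage-sync - mirror the storage SSD to the second machine every 4 hours
    0 */4 * * *  root  rsync -a --delete /srv/storage/ backupbox:/srv/storage-mirror/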

It's all very very simple and I have it setup so I can run some LXC containers should I want to do some dev work there, but I usually have other hardware for that anyway.

nine_k
0 replies
18h27m

Good thing that 2TB suffices for you! I think RAID NAS boxes seem to make sense at sizes several times this.

at_a_remove
0 replies
18h20m

I am imagining a NAS appliance designed for high availability and low-touch, like a RAID 10 box with a ton of hot spares, designed to automatically rebuild in case of failure. The hot spares are spun up once a month to prevent stiction.

CPLX
0 replies
18h24m

I have done a bunch of stuff in the past but most recently I just bought two synology arrays and followed the instructions to set them up in less than an hour and they just worked with literally zero hassle.

jepler
3 replies
19h31m

The author seems to have built 5 systems from 2016 to 2023, or around one every other year.

Some parts (e.g., RAM) are re-used across multiple builds

It's interesting to wonder: how much is the hardware cost vs. the lifetime energy cost? Is a more power-hungry machine that operates for 4 years better than one that operates for 2 years?

The motherboard + CPU is USD 322 right now on PCPartPicker. At USD 0.25/kWh (well above my local rate but below the highest rates in the US), 36W continuous over 4 years is also about $315. So a ~43W, 4-year system might well be cheaper to buy and operate than a 7W, 2-year system.
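Spelled out, using those same figures:

    # 36 W x 24 h x 365 d x 4 y = 1,261,440 Wh, i.e. about 1,261 kWh
    # 1,261 kWh x 0.25 USD/kWh is roughly 315 USD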

hrdwdmrbl
1 replies
19h22m

Fun tinkering like that is not about saving money. It's about the journey, not the destination.

1-6
0 replies
18h52m

At least the author documents their journey well. If many people who read this save energy by learning good fundamentals of system build, that alone goes a long way. CPU/mobo makers are also probably taking notes.

buro9
0 replies
10h1m

I average a fully loaded NAS every 8 years, apparently. I don't really expand their storage, as by that time the network and software are out of date, so I buy a new one, make the current one the backup to the new one, and dispose of the one that was the old backup.

It's interesting: the upgrade interval has held for 20 years already, so it becomes easy to understand the amortisation - 16 years of use, but really only 8 as the primary and 8 as the backup.

ggm
3 replies
1d5h

Sort of not surprising how much divergent chipsets vary in power states and other things.

How does he get raidz2 to spin down without busting the raidset? Putting drives into sleep states isn't usually good for in-CPU ZFS, is it? Is the L2ARC doing the heavy lifting here?

Good comments about ECC memory in the feedback discussion too.

phil21
2 replies
1d

I've found ZFS to be extremely forgiving of hard drives with random slow response times. So long as you are getting a response within the OS I/O timeout period, it's simply a matter of blocking I/O until the drives spin up. This can honestly cause a lot of issues on production systems with a drive that wants to half-fail rather than outright fail.

I believe this is on the order of 30-60s from memory.
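That timeout is visible (and tunable) per device on Linux; a quick check, with the device name as a placeholder:

    cat /sys/block/sda/device/timeout   # SCSI command timeout in seconds, 30 by default on most distros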

An L2ARC likely works quite well for a home NAS setup, allowing the drives to be kept spun down most of the time.

Strangely I also built (about 10 years ago now) a home NAS utilizing a bunch of 2.5" 1TB Seagate drives. I would not repeat the experiment as the downsides in performance was simply not worth the space/power savings.

Then again, I also built a ZFS pool out of daisy chained USB hubs and 256 (255?) free vendor schwag USB thumb drives. Take any advice with a grain of salt.

paulmd
0 replies
21h17m

yup. the problem is really with the SMR drives where they can (seemingly) hang for minutes at a time as they flush out the buffer track. ordinary spin-down isn't really a problem, as long as the drives spin up within a reasonable amount of time, ZFS won't drop the disk from the array.

ZFS is designed for HDD-based systems after all. actually it works notably kinda poorly for SSDs in general - a lot of the design+tuning decisions were made under the assumptions of HDD-level disk latency and aren't necessarily optimal when you can just go look at the SSD!

however, tons and tons of drive spin-up cycles are not good for HDDs. Aggressive idle timeout for power management was famously the problem with the WD Green series (wdidle3.exe lol). Best practice is leave the drives spinning all the time, it's better for the drives and doesn't consume all that much power overall. Or I would certainly think about, say, a 1-hour timeout at least.

https://www.truenas.com/community/threads/hacking-wd-greens-...

However, block-level striping like ZFS/BTRFS/Storage Spaces is not very good for spinning down anyway. Essentially all files will have to hit all disks, so you have to spin up the whole array. L2ARC with a SSD behind it might be able to serve a lot of these requests, but as soon as any block isn't in cache you will probably be spinning up all the disks very shortly (unless it's literally 1 block).

Unraid is better at this since it's a file-level striping - newer releases can even use ZFS as a backend but a file always lives on a single unraid volume, so with 1-disk ZFS pools underneath you will only be spinning up one disk. This can also be used with ZFS ARC/L2ARC or Unraid might have its own setup for tiering hot data on cache drives or hot-data drives.

(1-disk ZFS pools as Unraid volumes fits the consumer use-case very nicely imo, and that's going to be my advice for friends and family setting up NASs going forward. If ZFS loses any vdev from the pool the whole pool dies, so you want to add at least 2-disk mirrors if not 4-disk RAIDZ vdevs, but since Unraid works at a file-stripe level (with file mirroring) you just add extra disks and let it manage the file layout (and mirrors/balancing). Also, if you lose a disk, you only lose those files (or mirrors of files) but all the other files remain intact, you don't lose 1/8th of every file or whatever, and that's a failure mode that aligns a lot better with consumer expectations/needs and consumer-level janitoring. And you still retain all the benefits of ZFS in terms of ARC caching, file integrity, etc. It's not without flaws, in the naive case the performance will degrade to 1- or 2-disk read speeds (since 1 file is on 1 disk, with eg 1 mirror copy) and writes will probably be 1-disk speed, and a file or volume/image cannot exceed the size of a single disk and must have sufficient contiguous free space, and snapshots/versioning will consume more data than block-level versioning, etc. All the usual consequences of having 1 file backed by 1 disk will apply. But for "average" use-cases it seems pretty ideal and ZFS is an absolutely rock-stable backend for unraid to throw files into.)

anyway it's a little surprising that having a bunch of individual disks gave you problems with ZFS. I run 8x8TB shucked drives (looking to upgrade soon) in RAIDZ2 and I get basically 8x single-disk speed over 10gbe, ZFS amortizes out the performance very nicely. But there are definitely risks/downsides, and power costs, to having a ton of small drives, agreed. Definitely use raidz or mirrors for sure.

justsomehnguy
0 replies
20h10m

home NAS utilizing a bunch of 2.5" 1TB Seagate drives. I would not repeat the experiment, as the performance downsides were simply not worth the space/power savings.

5400 RPM drives? How many, and how bad was the performance?

Dachande663
3 replies
1d

Is there not an element of penny-wise, pound-foolish here, where you end up optimizing the CPU/mobo side of things but then run 6+ drives vs fewer, larger ones?

bluGill
2 replies
21h11m

You should buy drives in multiples of 5 or 6. Of course this is subject to much debate. Drives fail, so you need more than one extra for redundancy - I suggest 2, so when (not if!) one fails you still have production while replacing the bad drive. 3 drives in a raid-1 mirror for the price of one is spendy, so most start looking at raid-6 with dual parity. However, putting more than 6 drives in one of those starts to run into performance issues better handled by striping across two raids (and if you do try more than 6, your odds of 3 drives failing become reasonable, so start adding more parity stripes), which is why I say 6 drives at once is the sweet spot, but others come up with other answers that are not unreasonable.

Of course one input is how much data you have. For many, 1 modern disk is plenty of space, so you go raid-1, and redundancy is only so you don't need to wait for off-site backups to restore after a failure.
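
To make the tradeoff concrete, a quick back-of-the-envelope sketch (plain Python; the drive size and layouts are just illustrative, not a recommendation):

    # Rough usable-capacity comparison for a few common layouts.
    # Ignores filesystem overhead, hot spares, and the fact that you
    # should never fill a pool to 100%.

    DRIVE_TB = 16  # illustrative drive size

    layouts = {
        # name: (total drives, drives spent on redundancy, failures survived)
        "2-way mirror":        (2, 1, 1),
        "3-way mirror":        (3, 2, 2),
        "6-drive dual parity": (6, 2, 2),
        "8-drive dual parity": (8, 2, 2),
    }

    for name, (total, redundancy, survives) in layouts.items():
        usable = (total - redundancy) * DRIVE_TB
        print(f"{name:22s} {total} drives, {usable} TB usable, "
              f"survives {survives} failure(s)")

Going from 6 to 8 drives buys capacity but no extra redundancy, which is where the advice above about splitting into two raids or adding more parity comes from.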

dannyw
1 replies
13h5m

It also depends on your use case. If your NAS is write-once-heavy and just used for archiving data, and isn't being used to run a database or something like that, then you can even do a 20-drive raidz3 pool if you want.

That's my setup for nearly 1/3 PB.

bluGill
0 replies
3h17m

You should replace that system with off-site backups. A local NAS is useful for fast local access, which isn't what you're getting there.

jeffbee
2 replies
1d

I wonder if this system is connected with wired ethernet or wifi. I found that it makes a large difference on my NAS. With a wired link the SoC can't reach a deep sleep state because the ethernet peripheral demands low-latency wakeup from the PCIe root port. This is power management policy that is flowing from the link peer all the way to your CPU! I found that wifi doesn't have this problem, and gives better-than-gigabit performance, sometimes.
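
If anyone wants to check what their own box is doing, here's a rough sketch that reads the kernel's ASPM policy and each NIC's runtime power-management setting; the sysfs paths are standard on Linux, but which ones exist depends on your kernel and hardware:

    # Peek at PCIe power-management state that affects package C-states.
    from pathlib import Path

    # Global ASPM policy, printed with the active choice in brackets,
    # e.g. "default performance [powersave] powersupersave"
    policy = Path("/sys/module/pcie_aspm/parameters/policy")
    if policy.exists():
        print("ASPM policy:", policy.read_text().strip())

    # Runtime PM setting of each network interface's underlying PCI(e) device.
    for nic in Path("/sys/class/net").iterdir():
        control = nic / "device" / "power" / "control"
        if control.exists():
            # "auto" allows runtime suspend; "on" keeps the device awake.
            print(f"{nic.name}: runtime PM = {control.read_text().strip()}")

powertop and lspci -vv give a fuller picture of which link states are actually being reached.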

skippyboxedhero
1 replies
1d

If you have a network card over PCIe then there may be an issue with the card. I have never had an issue reaching a low sleep state, and you can modify WoL behaviour too. Wifi, again in my experience, uses significantly more power. I have seen 3-5W and usually switch it off.

jeffbee
0 replies
1d

I don't think it's an issue with the card. It's a combination of ethernet and PCIe features that make this happen. There is a standard called "energy efficient ethernet" that makes it not happen, but my switch doesn't do it.

Palomides
2 replies
1d

amusing to read this very detailed article and not have any idea what OP actually does with 72TB of online storage

1gbe seems a bit anemic for a NAS

dannyw
1 replies
12h29m

it's usually p0rn

jmnicolas
0 replies
3h23m

Try to be charitable, I'm sure there are a couple Linux ISOs in there.

1letterunixname
2 replies
1d

I feel like a petrochem refinery with my 44-spinning-rust-drive 847E16-RJBOD NAS, 48-port PoE+ 10 GbE switch, 2 lights-out and environmental monitoring UPSes, and DECISO OPNsense router, using a combined average of 1264W. ]: One UPS is at least reporting a power factor with an efficiency of 98%, while the other one isn't as great at 91%.

APM is disabled on all HDDs because it just leads to delay and wear for mythological power savings that aren't going to happen in this setup. Note that SMART rarely/never predicts failures, but one of the strongest signals of drive failure is slightly elevated temperature (usually as a result of bearing wear).

This creates enough waste heat such that one room never needs heating, but cooling isn't strictly needed either because there's no point to reducing datacenter ambient below 27 C.

syntheticnature
1 replies
23h44m

I was looking into water heaters that use heat pumps recently, and a lot of them function by sucking heat out of the room. While water and computers don't mix, it might be an even better use for all that waste heat...

sgarland
0 replies
14h47m

My plan for when I build a house (so far with two moves during my serious homelab period, the room layout hasn’t worked) is to build a small server room that vents out to the heat pump water heater.

newsclues
1 replies
1d

I had the same issue of picking a motherboard with limited SATA ports and then having to deal with extra expansion cards.

4 is not enough for homelab type servers.

fomine3
0 replies
7h0m

A Mini PCIe or M.2 (PCIe) to SATA board may help you: http://www.iocrest.com/index.php?id=2349

Paradigma11
1 replies
12h30m

How viable would it be to run a home server in a VM on your main PC that is occasionally also used for high-performance activities like gaming? I do have 12 cores and 64GB RAM, and I would not mind parting with 4C/16GB and maybe the onboard GPU (video decoding), but I don't know how the rest of the system might be affected if there is heavy IO or other stuff going on.

marcosscriven
0 replies
9h8m

Proxmox is good for this. You can have a Windows VM with GPU passthrough. Then you can add any other VMs or LXCs to your heart’s content.
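
As a rough sketch of what that might look like on the Proxmox side (the VM ID and PCI address are placeholders, and you should double-check the qm options against the Proxmox docs before running anything):

    # Sketch: give an existing Proxmox VM 4 cores / 16 GB and pass the iGPU through.
    import subprocess

    VMID = "100"       # hypothetical VM ID
    IGPU = "00:02.0"   # typical address of an Intel iGPU; verify with lspci

    subprocess.run(["qm", "set", VMID, "--cores", "4", "--memory", "16384"], check=True)
    subprocess.run(["qm", "set", VMID, "--hostpci0", IGPU], check=True)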

vermaden
0 replies
23h52m

vardump
0 replies
17h38m

What would be the cheapest and lowest-power option to get 100G networking (50-100G actual performance is good enough) and at least one 8-lane PCIe GPU?

Plus at least 2x M.2.

squarefoot
0 replies
1d

For those interested in repurposing a small mini-PC with no Mini PCIe ports available as a NAS: I recently purchased an ICY BOX IB-3780-C31 enclosure (USB 3.1 to 8x SATA), and although I still have to put it into operation (will order new disks soon), I tested it with a pair from my spares and can confirm it works out of the box with both Linux and XigmaNAS (FreeBSD). Just beware that although it can turn back on after the connected PC goes to sleep and then wakes up (the oddly named "sync" button on the front panel does that), it doesn't after an accidental loss of power or a power outage, even if the connected PC is set up to boot automatically, which to me is quite bad. Having recently moved to a place where power outages aren't uncommon and can last longer than a normal UPS could handle, I'll probably modify the enclosure by adding a switchable monostable circuit that emulates a short press of the power button after power is restored. That would mean a big goodbye to the warranty, so I'll have to think about it, but the problem can indeed be solved.

neilv
0 replies
18h53m

Great work, for lots of storage.

If you can fit your storage on an SSD (or RAID-mirrored pair), and don't need much compute, you can do a low-power server (like to run little services for yourself) using an SBC like a RasPi, or something like a NUC.

Personally, I currently have a couple 1U Atom servers that run fanless except for the Noctua fans that I swapped into the PSUs. Advantages over RasPi include SATA and ECC RAM, and were also easier to buy over Covid. (I also have a 4U GPU server, which is currently off when not in use, because I haven't invested in figuring out how to low-power idle it like the article writer has.)

louwrentius
0 replies
1d

I have the same amount of storage available in a ~9-year-old 24-bay NAS chassis that does 150 Watt idle (with drives spinning).

My NAS is powered down most of the time for this reason, only booted (IPMI) remotely when needed.

Although the actual idle power consumption in the article seems to be a tad higher than 7 watts, it's still so much lower that it's not such a big deal to run it 24/7 and enjoy the convenience.

Loved the write-up!

jmnicolas
0 replies
3h29m

I have a 4-core Intel i3-13100 and I can't reach these low numbers.

The machine idles at 16 watts with just 1 NVMe drive. I managed to get down to 12 watts with the powersave profile in tuned, but I get random SSH freezes and I didn't find any relevant doc to solve the problem.

But as soon as the machine is in light use, just a network transfer for example, the power usage goes through the roof: 30 watts.

Now that I have loaded it with 2 internal 3.5" hard drives, a VM running Docker, a torrent client, and Syncthing (so nothing terribly intensive), the thing never goes below 40 watts and is currently at 60 (I'm copying data onto the drives).

I would never have bought a desktop-class CPU if it weren't for these articles that promise unreachable low power draw.
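
If you want to chase where the watts go, it helps to measure CPU package power directly instead of relying only on a wall meter. A minimal sketch reading Intel's RAPL counter from sysfs (the path assumes the intel_rapl driver is loaded, and reading it usually needs root):

    # Sample the CPU package energy counter twice and report average watts.
    import time
    from pathlib import Path

    RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package 0
    energy = RAPL / "energy_uj"
    max_range = int((RAPL / "max_energy_range_uj").read_text())

    def read_uj():
        return int(energy.read_text())

    interval = 5.0
    start = read_uj()
    time.sleep(interval)
    end = read_uj()
    if end < start:            # the counter wraps around at max_energy_range_uj
        end += max_range
    print(f"package power: {(end - start) / 1e6 / interval:.2f} W")

This only covers the CPU package, not the drives, NIC, or PSU conversion losses, which is often where the gap between article numbers and wall numbers comes from.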

jauntywundrkind
0 replies
1d

I feel like I see a good number of NAS builds go by, but rarely are they anywhere near as technical. Nice.

homero
0 replies
21h1m

"Crucial Force GT" is supposed to say Corsair.

alphabettsy
0 replies
1d

Very cool write up. Good timing too as I find myself attempting to reduce the power consumption of my homelab.