I always admire the depth and effort people are willing to put in, especially when the chance of messing up means losing actual money, like tweaking guitars or hardware. I assume you have to gain some experience with soldering irons or woodworking tools elsewhere before taking the proverbial axe to a project like this. I wonder why there isn't much of a market for more hackable small form factor boxes where it's easier to tinker with the hardware or software. I'd love a consumer (price) level NAS where the OS can be ripped out and replaced with some vanilla OS/kernel.
Or maybe I'm just too chicken with modifying actual physical objects.
A NAS is just a computer that is only used for storage. Any old hardware with drives attached to it will work.
Just beware of hardware that's too old, which may end up costing more than expected because of its power draw. A NAS usually stays up 24/7, so power consumption and reliability become important (see the rough cost math below). I would also avoid anything with a fan, particularly if it's expected to run constantly. A mini PC with enough internal space to host a long M.2 SATA adapter (0) can become a quite compact solution; many used low-performance Celeron-based systems considered obsolete for desktop use would be more than enough for that task.
0: https://www.ebay.com/itm/256268302567
(No relation to the seller; I just found it by searching "6xsata m.2" and removing those with lower feedback ratings.) There are also similar 5xsata cards.
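A quick back-of-the-envelope on the power point (the $0.30/kWh rate is just an assumption; plug in your own):

    # Yearly electricity cost of a box that stays on 24/7, at different idle draws.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.30  # assumed rate, varies a lot by country

    def yearly_cost(idle_watts):
        kwh = idle_watts * HOURS_PER_YEAR / 1000
        return kwh * PRICE_PER_KWH

    for watts in (10, 30, 80):  # low-power mini PC / older desktop / old server
        print(f"{watts:>3} W idle -> ~${yearly_cost(watts):.0f}/year")
    # -> roughly $26, $79, and $210/year: the gap can pay for a used mini PC quickly.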
It seems odd to me that there's no ARM-based NAS with SSDs on the market, to focus on minimum power consumption rather than maximum read/write speeds.
The Kobol Helios64 was one, but they stopped development a few years ago. It is fully open source, and I still hope someone will take up the work, because it could become a really interesting product.
https://kobol.io/
From memory, it had a built-in backup battery too.
There is the CM3588 board [0] from FriendlyElec. LinusTechTips made a video about it. I just can't find a nice case for it. I've asked MyElectronics about it and they said they would consider making a case if they get enough requests.
[0] https://www.friendlyelec.com/index.php?route=product/product...
[1] https://myelectronics.nl/
When people talk about using these cards with a mini PC, how do they imagine setting them up? Shuck the mini PC out of its original case and zip-tie it (no clue if it's a standard mobo, probably not) into some case that can hold the drives and a PSU? Cut a hole in the mini PC's case and run the cabling out to a cage of drives powered by a separate PSU?
The board sizes aren't standard, but drilling holes to mount them in a bigger case isn't that hard; same for power supplies. As for the disks, there are backplanes with SATA connectors that can host, for example, 5x 3.5" drives in a 3x 5.25" bay (0), so the NAS can be pretty solid. Unfortunately they aren't cheap, but the overall cost would still be lower than buying a new NAS with comparable features and performance.
0: https://www.reichelt.de/de/en/mobile-rack-3x-5-25-for-5x-3-5...
That’s what’s a bit wild about this project.
It’s an amazing project. I’m floored that it all works and that the author figured it out.
But still, I’ve never fully understood the appeal of using some of these custom-designed NAS products when a standard ATX computer is so much more customizable, modular, expandable, repairable, etc.
And many of these NAS systems are such a ripoff on pricing.
I don’t think desktop power consumption is all that bad if you’re careful about component selection.
Another alternative is to have one more performant system that runs a bunch of different things in virtual machines so that the NAS functionality is only a small part of the power consumption.
I get what you are saying. But getting up and running is dead easy with these NAS boxes; they are wickedly plug-and-play. A DIY version is fun and interesting to do, but takes some configuring and fiddling and researching. Not necessarily hard, just something I did not really want to mess with. Spin on about 10 years and now I am considering DIY, but for other reasons (like you point out). Storage is no longer my sole primary use case, which means a bit more DIY on the next one.
For me it was just a matter of how much time/effort I wanted to sink into it. I just wanted a large storage system. With an off-the-shelf NAS I can be up and running in 2-3 hours. For DIY there is the research time for parts (including the drives), cases, and which software to run (there are 3-4 decent choices), then gluing it all together. Probably 2-3 days for me, between fiddling around with it and research time.
Take the one I went with, a Synology. The one I picked was in the $600 range for the enclosure. By the time you're done picking a decently powered board, case, memory, and CPU, you are in the same ballpark of cost for similar performance. The package was all done; I did not have to do much (notice a theme?). My most expensive part of the build? The drives themselves.
The downside is that now I want something like 2.5GbE (or better) and the particular one I picked has zero way to upgrade (other than ripping the thing apart and hoping the right drivers are in there). It is about trade-offs: did I really want the possibility of upgrading, or to be up and running quickly? Remember this is not something I do all the time; it is something I do every 3-4 years, so I have to refamiliarize myself with all the different possible parts, plus getting the OS in there 'just right'. Not impossible or even hard. I just chose a different trade-off.
I don't get it either. Isn't a NAS just a normal computer with a ton of SATA ports as well as physical space for all the disks? Even consumer PCs have hot swap bays.
The legit reason is that you don't want your storage tied in with compute. For a business, I get it. Home lab? I agree, I don't really get it.
Now that I live in a house it’s sorta a no brainer to just get an ATX case and build my own, but in my apartment days something smaller would’ve been more appealing
A regular PC will work fine if this is what you want to do. A lot of consumer motherboards these days have at least 2 x NVMe and 6 x SATA ports or more. There are rack-mount cases that allow you to front-mount the SATA hard drives to make it easy to swap them.
Or if you want something small, you can get a compact Mini-ITX case with 5.25" drive bays and put hotswap trays in them.
Like maybe this case https://www.bhphotovideo.com/c/product/1193826-REG/istarusa_...
And put in one of these for 4x 2.5" SSDs: https://www.amazon.com/gp/product/B00V5JHOXQ/
And one of these for USB/SD card reader galore: https://www.amazon.com/EZDIY-FAB-Internal-Reader-Support-Com...
The reason I often do NOT want a vanilla OS is that I don't want to have to maintain it or risk corruption of the OS partition, and don't want to ever have to wire up a keyboard, mouse, and monitor because some upgrade and reboot caused it to get stuck on some interactive prompt. Most commercial NAS (e.g. Synology) have web-based administration and read-only partitions for the OS fully figured out.
6x SATA ports on a motherboard is getting to be rare. You'll need to shop carefully, and make tradeoffs to get there.
I'm moving towards used, ancient, large desktop cases. You can find them at computer recyclers, if you have them in your area, or on local marketplaces. I got one with a hot-swap backplane and trays and everything in the 5.25" bays. Of course, the backplane didn't work for me, so I had to pull it out and just wire the drives like normal, but the trays are still nice.
I've got lots of space, so I just leave a keyboard and monitor next to my servers in case I break something. It's not elegant, but it's cost effective. A used $10 monitor and a keyboard I don't like will last forever, whereas buying boards with IPMI adds $100-$200 every time I upgrade.
Why do you need 6x SATA? 18 TB drives are to be had for around $300. Just do 4x of those in striped mirrors with ZFS for 36 TB of available storage. Resilver times are stupid fast.
And you have an easier upgrade path. If you buy one pair of ~40 TB drives in a couple of years, you can replace one of the mirrors just by swapping out the old disks one by one and waiting for resilver. Then ZFS can autoexpand your storage to 18 + 40 TB total. And you can repeat further down the line, going 40 + 80 TB or whatever is available, true Ship of Theseus style.
Yes, you lose a little resilience compared to RAIDZ2, but RAID is just an HA solution, not a backup solution, no matter how fancy you make it. And a gradual RAIDZ2 upgrade by replacing disk after disk is just horrible.
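To make the capacity math concrete, a rough sketch (each mirror contributes its smallest disk, and the stripes add up; sizes in TB are just the examples above):

    # Usable capacity of a ZFS pool built from striped mirrors.
    def pool_capacity(mirrors):
        return sum(min(pair) for pair in mirrors)

    print(pool_capacity([(18, 18), (18, 18)]))   # 4x 18 TB as two mirrors -> 36
    print(pool_capacity([(40, 40), (18, 18)]))   # replace one mirror's disks -> 58
    print(pool_capacity([(80, 80), (40, 40)]))   # next upgrade round -> 120

    # RAIDZ2 on the same 4x 18 TB is also (4 - 2) * 18 = 36 TB usable, but it
    # only grows after every disk in the vdev has been replaced.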
So 4x18TB for HA 36TB.
Then another 2x18TB for backup.
Surely you don't have the backup in the same machine as the NAS? It should at least be in another building entirely.
Home users don't usually have the money to rent another building.
What I do for my personal setup is RAID + nightly backup (3 drives total) in one machine, and then a 4th hard drive that normally lives airgapped in a rental storage unit that I bring home and sync every 6 months or so.
The nightly backup serves to always fetch yesterday's version of any file, and deals with 99% of scenarios that I'd need a real backup.
The 1% (hackers + fires/disasters) is taken care of by the storage unit, but I lose the last 6 months of changes.
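If anyone wants a concrete starting point, a nightly job can be as simple as a dated rsync snapshot; a minimal sketch (paths are illustrative, not my real layout, and it assumes rsync is installed):

    #!/usr/bin/env python3
    # Nightly hardlink-snapshot backup with rsync: unchanged files become
    # hardlinks into yesterday's snapshot, so yesterday's versions stay readable.
    import subprocess
    from datetime import date, timedelta

    SRC = "/tank/data/"               # illustrative source path
    DEST = "/backup/snapshots"        # illustrative destination

    today = date.today().isoformat()
    yesterday = (date.today() - timedelta(days=1)).isoformat()

    subprocess.run([
        "rsync", "-a", "--delete",
        f"--link-dest={DEST}/{yesterday}",
        SRC, f"{DEST}/{today}",
    ], check=True)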
Well, the person I was responding to asked for 6x SATA. So there's that.
Personally, I like 6x SATA for a couple of reasons. One is that my case has sleds for 6 drives :P. The other is that I run two servers with 2x mirrors as the main storage (they back up each other and are in different buildings, although on the same site, which gives me a little redundancy, but not much). I've done one upgrade, from 4 TB to 10 TB, so my case with 6 sleds now has the main 10 TB mirror array and a playspace for less important stuff that I can figure out how to use over time. Currently I've got it split up: 50% of each disk used individually for TV recordings, and the other 50% as a RAIDZ2 to hold DVDs I've ripped and fanedits.
When I upgrade/replace the main mirrors, I may update the other server to have more slots, and then it can have a 4x 10tb playspace.
I haven't done it, so I'm not disagreeing... I'm assuming the main objection is that it takes a long time, and the secondary objection is that the system has a lot of disk contention while it's in progress. I'd probably address the "takes a long time" part by just planning to do one disk every other week (rough schedule sketched below). That means it's a two-month process, but it also means that the drives in the system end up with a two-month spread in usage time; a lot of drive array failures are due to correlated failures, and offsetting the power-on time can help. If all of your disks are likely to fail when their power-on counter rolls over, it's nice if you have two weeks between failures --- that should hopefully be plenty of time to replace disks before you lose the data.
Ordering disks over a two month period may also help avoid getting all disks from the same batch; but sometimes it's better to order all at once for pricing, so that's mixed.
If you do have extra SATA ports, you might be able to do a replace while both the old and new disk are available... I'm not sure if that helps the process or not; it won't reduce the amount of writes to the new drive, but maybe it helps with disk contention during the process?
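For the staggered-replacement idea above, the schedule is trivial to lay out (disk count and dates are just an example):

    # One replacement every other week across a 4-disk pool: the whole swap
    # takes about two months, and power-on time ends up staggered per drive.
    from datetime import date, timedelta

    start = date(2024, 6, 1)          # arbitrary example start date
    for i in range(4):
        print(f"disk {i + 1}: replace during the week of {start + timedelta(weeks=2 * i)}")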
You might also consider a rack mount case. 4U rack cases are usually designed for standard ATX builds. For home use it cleans up a lot of wire mess to have router, switch, NAS, your main desktop, and maybe some other things all on one rack.
When you move you can also just move it all in one piece if you have a liftgate van.
I recently stood up a 140 TB Unraid instance; it was the first time I'd done any NAS stuff. You can get manufacturer-recertified drives for about $13 a TB. I went with a framework case and a normal AM4 processor/mobo with 8 SATA connections.
A 2 TB NVMe cache, plus 20 TB per drive plus parity, gives 140 usable TB, and Unraid makes the NAS stuff really easy: just install a bunch of "apps", which are basically Docker containers plus settings, and things are a breeze.
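The capacity math, roughly (the drive count is inferred from the 8 SATA ports, so treat it as an assumption):

    # Unraid-style array: parity doesn't add capacity and the cache sits outside it.
    drives = [20] * 8                              # eight 20 TB drives (assumed)
    parity_count = 1
    usable = sum(sorted(drives)[:-parity_count])   # parity must be the largest drive
    print(usable)                                  # -> 140 TB usable; the 2 TB NVMe cache is extra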
Where'd you get the drives? That's a quite reasonable price.
ServerPartDeals has a recertified Seagate Exos X20 (ST20000NM007D, 20 TB, 7.2K RPM, SATA 6Gb/s, 3.5") that I went with.
Beware that not all port combinations are supported. For example, the Z270 chipset supports 6 SATA, but only 4 SATA if NVMe is also used. The motherboard manual will tell you this.
I built my own NAS for a total cost of £1000: 24 TB of physical drives, 16 TB usable (RAIDZ2). It runs Ubuntu LTS server edition and I spin up any containers I need in Docker using Portainer. Is it a point-and-click GUI that is friendly to all? No ... I have to manually create SMB mounts and the like in the terminal, but it's easy enough to keep running.
Hardware-wise it's:
- ASRock Rack C246 WSI Mini-ITX motherboard
- 32GB ECC RAM (2x 16GB UDIMMs), a benefit of using the C246 chipset
- Intel i3-9100T
- 6x IronWolf 4TB NAS drives
- 2U short-depth rackmount chassis (so it fits in my network rack)
- 1TB Samsung 850 Evo boot drive
Idles at 23W, which is low enough for my tastes, and ramps up to 65W under load. I spin the disks down after 10 minutes of inactivity; even doing that, I'm only seeing about 10k load/unload cycles a year (the drives are rated for 600k).
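For anyone sanity-checking those numbers, the arithmetic is straightforward:

    # 6x 4 TB in RAIDZ2: two disks' worth of parity.
    drives, size_tb, parity = 6, 4, 2
    print((drives - parity) * size_tb)   # -> 16 TB usable out of 24 TB raw

    # Spin-down wear: ~10k load/unload cycles a year against a 600k rating.
    print(600_000 / 10_000)              # -> 60 years before hitting the rating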
For the lazy-but-not-so-lazy, you can just install Proxmox, as weird as that sounds at first. It's still Debian-based, still supports root on ZFS with the GUI installer, and then you get a virtualization/container host with a friendly GUI on top for free. You can even set up your NFS/SMB/whatever shares as containers if you want to compartmentalize.
There's also free distros like OpenMediaVault for making a NAS with a nice web UI.
Who do I have to kill to get one :)
These peeps seem to have some in store: https://www.senetic.de/product/C246WSI but (a) I didn't find a way to change the language on the site to English, and (b) while they ship to several countries, those are mostly in Europe. If you can work with these restrictions, you might be in luck! (I have never ordered from this shop and can't vouch for them, but they seem legit.)
Ever write a build log for that? Would love to read it.
Did you build this a while ago? 4TB drives are tiny nowadays. If so, the price isn’t really comparable is it?
Are there any good options today? I keep thinking of picking up a NAS, but I really do not want to use a proprietary vendor OS that might feel entitled to scan my data for useless/hostile cloud integrations. Or might have left some hardcoded password backdoors.
Ugreen NAS - currently on Kickstarter, due this summer - but it's a reputable company.
https://nas.ugreen.com/pages/ugreen-nas-storage-preheat
Their hardware is dramatically better at those price points than the alternatives - even if they do have some compromises.
Wow, those hardware specs are nice. What compromises do you refer to, specifically? I think the 8GB RAM and 128GB flash cache are on the small side, but both are user-replaceable, so not that big a deal.
The only thing I really miss here is ECC RAM, but that's entirely due to Intel's market segmentation and not something they can change, short of completely changing their platform vendor.
Interesting! Unfortunately no power draw numbers.
I ended up downgrading from my chunky Supermicro server to a cheap Chinese AOOSTAR with an Intel N100 for $199 [0], and slapped in an NVMe drive and extra RAM. Since I only need 8-9 TB of storage, a zpool mirror worked perfectly for me. The build quality is mediocre, but the product is more than adequate for my home NAS use case.
[0] https://aoostar.com/products/aoostar-r1-2bay-nas-intel-n100-...
From the same site https://codedbearder.com/posts/nixos-terramaster-f2-221/
This probably isn't normal consumer pricing, but Supermicro makes some very nice NAS-targeted hardware that you can populate with your own drives and choice of OS. I have one with 183 TB running TrueNAS (ZFS) and it's great.
They're just bog-standard x86 boxes with a lot of drive slots, a backplane, and a lot of SAS or SATA ports and controller capacity. There's a pretty healthy used market in these things too. But they are targeting people who want to put 10 or more drives in a single enclosure, which is why I say they might not be consumer.
SuperMicro stuff has gotten so expensive. I remember when they were first getting started and even years afterwards, and it's nothing like today.
Honestly, the best thing you can do now is just live with a tower server or buy off-lease from eBay. We've passed the hump where the off-lease eBay specials had the energy footprint of a small factory and used jet turbines to power the fans and cool things down.
I did once buy just a no-name (ATX?) 2U case/rails from eBay, but the manufacturing, accessibility, ergonomics, design, and everything else were terrible enough that I would not do it again.
NAS with dual 3.5" HDD bays, dual M.2 NVMe, dual 2.5GbE LAN. BIOS quality unknown. Barebones with no RAM, expandable to 32GB. BYO OS.
Intel Alder Lake N100, $189, https://aoostar.com/products/aoostar-r1-2bay-nas-intel-n100-...
Ryzen 5700, $299, https://aoostar.com/products/aoostar-r7-2-bay-nas-amd-ryzen-...
You can get a TrueNAS box, but it's a little underpowered for what it is, especially if you want to use it for more than just a NAS. This is the tradeoff for buying a turnkey product in a niche market.
That being said, you can just build your own NAS. There are NAS boards with multiple SATA ports already built in, as well as cases designed for these sorts of boards and multiple HDDs; Jonsbo cases are popular for this. At a certain point, though, it stops making sense to build one of these and starts making sense to rack-mount your storage array, for which the options are far more numerous.
There are actually more than a few options if you're willing to tinker. People use RPis as NAS boxes; they're slow, but they work as a cheap local backup. Older integrated NAS devices like the DNS-320 can be rooted to run a fairly recent Debian flavour that still gets security patches, but you have to be familiar with the CLI, given that the specs (CPU/RAM) are very low.
You could buy an old HP Proliant server (gen8?) and plug in 4 x 16TB HDDs and boot from the 5th drive.