Note that it doesn't look like it has ECC, so make sure to have backups. Fancy file systems like ZFS don't remove the need for ECC.
When people set up these NAS's, how are they accessing the files? NFS? SFTP?
And how are you accessing it when away from home? A VPN that you're permanently connected to? Is there a good way to do NAT hole-punching?
Syncthing kind of does what I want, in that it lets all my computers sync the same files no matter what network they're on, but it insists on always copying all the files ("syncing") whereas I just want them stored on the NAS but accessible everywhere.
Nextcloud kind of does what I want but when I tried it before it struck me as flaky and unreliable, and seemed to do a load of stuff I don't want or need.
Synology does all that. I run two, one at home and one at the office; my only complaint is that it's a bit "idiot proof"… but other than that the web-based GUI is great. It also has free software that punches through NAT and dynamic IPs and works great (quickconnect.to). I use SFTP and the media server, primarily.
Beefier models (I have a DS923+ with the RAM bumped up to 32GB) can run Docker containers, too. I have all kinds of things running on mine.
Is ram upgradeable on these machines?
Mine is. It ships with a 4GB DIMM and I swapped in two 16GB DIMMs. Not all models are.
I second that wholeheartedly, and I also run two 19" Synology NAS units, one at home and one at the office. All smooth sailing so far.
A colleague uses a QNAP instead, which he claims has a better price/storage ratio at the expense of lesser software usability. I'm okay paying a bit more of my own money (at home), as well as taxpayers' money (at work), for better usability, because it will likely pay off by saving time in the long run; I currently don't have a dedicated sysadmin in my team.
The only question mark to date was when installing non-Synology (enterprise SSD) drives: I got a warning that mine were not "vendor sourced" devices, and decided not to take any risk and replaced all drives with "original" Synology ones, just because I can. This may be disinformation from Synology to make their own customers nervous (it reminds me of the "only HP toner in HP laser printers" discussion), but it would have been a distraction to investigate further, and my time is more valuable than simply replacing all drives.
It seems a bit weird they’d disable the SMART fields just because the drive is not on their list. Those fields should work perfectly fine…?
Synology can even serve as a macOS Time Machine target.
Answering your questions in order:
- On mine I use NFS and SMB which covers most possible clients.
- I use an ssh bastion that I expose via Tailscale to connect to mine remotely. So a VPN, but it's WireGuard-based, so it's not too intrusive. I have gigabit upload, though; YMMV.
- My NAS has 28TB of space. I'm still working on backup strategy. So far it just has my Dropbox and some ephemera I don't care about losing on it.
- Regarding other services: I use Dropbox pretty extensively but these days 2TB just isn't very much. Plus it gets cranky because I have more than 500,000 files in it.
This is my personal setup but I think it's a bit different for everyone.
Wow! What kind of data are you generating that 2TB "just isn't very much"? (Video editing?) All my personal files take up around 10GB in my Google Drive.
A Google Takeout of my personal pictures from Google Photos runs 600GB+ alone, and I'm not an avid picture taker (that's the archive since the 2000s; I did upload a lot of my old DSLR photos to Google Photos when it was unlimited). I guess people who make more personal videos will use up space even faster.
I think we probably have different definitions of ‘not an avid picture taker’ :D
I'd say so. I take over 500GB of personal photos/videos per year, and I'm not a huge phone user.
One example: if you take pictures with a decent camera in raw format, your storage fills up ridiculously fast. A short trip with a mere 200 pictures can easily be 25MB × 200 = 5GB. Another example: if you're doing any kind of AI training (especially image-based), the training materials can easily amount to many terabytes.
Even a router can do that these days. GL.iNet routers have USB ports and SSH; you can set up such basic stuff on them.
Most mid-range routers allow SSH and have a decent CPU.
Seafile + Samba + OpenVPN is my stack. I use Seafile for a dropbox style file sync on my devices, and Samba for direct access. OpenVPN for remote access on all devices. Works quite well.
I’d replace OpenVPN with WireGuard at this point - WireGuard is a lot faster and the client software is pretty good. All of my Apple devices are set up to use VPN 100% of the time automatically if I’m not on home WiFi.
Could you please share how you went about configuring your Apple devices to automatically switch to VPN?
Thanks!
When you install the WireGuard client, there's an "On Demand" option you can enable. It has two additional settings: it can turn WireGuard on only for a particular list of SSIDs, or it can _not_ turn it on for a particular list of SSIDs. So you just add the SSID of your home WiFi to the list for which WireGuard will not be turned on. The macOS client has an identical option. This works really well.
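Underneath, it's just a normal WireGuard tunnel; for context, a minimal client-side config sketch (keys, addresses, and the endpoint are placeholders, not anything from this thread):

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/32
    DNS = 10.0.0.1

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = home.example.com:51820
    AllowedIPs = 192.168.1.0/24   # route only the home subnet through the tunnel
    PersistentKeepalive = 25      # keeps the NAT mapping alive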
Has anyone compared Seafile with Syncthing? I'm quite happy with Syncthing but always interested in trying out new setups.
Regarding the connectivity: Tailscale... So far I am happy with them, and the free plan hasn't been kneecapped (so far).
Even if it is, you can run Headscale on a server somewhere (or just pay).
IIRC they have improved the free plan over time, and even mailed users suggesting the relaxed limits might enable moving from paid to free tier [1].
I barely use my tailnet now, might have more of a case for it later, but they are near the top of my "wishing you success but please don't get acquired by a company that will ruin it" list.
I use Syncthing to synchronize my smaller datasets between my laptop, my phone, and my NAS. This covers all of my productive and creative scenarios.
On the LAN, I just use SMB. It is adequate for my needs.
For remotely accessing my collection of Linux ISOs, I use Plex.
Same here. I have a WireGuard VPN for the few times I need to tunnel my traffic through home or access larger files not synced with Syncthing.
My NAS is a Synology. The VPN is also used so that I can keep sending Time Machine backups back home when I'm traveling.
This is pretty much my setup as well!
Syncthing for a small collection of files I want available from all my machines - commonly used documents, photos, stuff I want quickly backed up or synced automatically.
Samba for my long term mostly-read rarely-write storage with larger files, ISOs, etc.
And how are you accessing it when away from home?
I usually just use zerotier for this, it's extremely lightweight
I use Tailscale, but I’m amazed that the size of the ZeroTier app is 2.6 MB versus 23MB for Tailscale.
How come ZeroTier is 10X smaller?
Tailscale uses Go https://tailscale.com/security#tailscale-is-written-in-go which might explain the larger sizes.
A cursory look through https://github.com/zerotier/ZeroTierOne shows more C++ and some Rust. Not sure how much static linking is involved here.
An easy solution for the VPN part would be Zerotier / Tailscale. IIRC Zerotier uses chacha20 for encryption which is faster than AES, especially for a power-strapped SBC.
I tried to build a setup like this with OpenVPN years ago and OMG.
Tailscale/Wireguard has been such a big leap forward.
I use sshfs. If you can login via ssh then you can mount the remote server through ssh as a local drive.
https://github.com/libfuse/sshfs
For added security I limit my home ssh access to a handful of trusted IPs including my cloud VM. Then I set up an ssh tunnel from my hotel through the cloud VM to home. The cloud VM never sees my password / key
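For anyone curious, day-to-day usage is pleasantly boring; a sketch (hosts and paths are placeholders, and the hop mirrors the bastion setup above, since sshfs passes unknown -o options through to ssh):

    # mount the NAS share as a local directory
    sshfs me@nas:/srv/data ~/nas

    # from outside, hop through the cloud VM first
    sshfs -o ProxyJump=me@vm.example.com me@nas:/srv/data ~/nas

    # unmount when done
    fusermount3 -u ~/nas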
It's worth keeping this (from their readme) in mind, though:
However, at present SSHFS does not have any active, regular contributors, and there are a number of known issues (see the bugtracker).
Not that it is unusable or anything (it is still in widespread use), but I'd guess many assume it to be part of OpenSSH and maintained with it, when it isn't.
An interesting alternative might be https://rclone.org/, which can speak SFTP and can mount all (of the many) protocols it speaks.
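A sketch of what that looks like for SFTP (remote name, host, and paths are placeholders):

    # one-time: define an SFTP remote called "nas"
    rclone config create nas sftp host nas.example.com user me key_file ~/.ssh/id_ed25519

    # mount it; the VFS write cache makes it behave more like a local disk
    rclone mount nas:/srv/data ~/nas --vfs-cache-mode writes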
I used Samba; it's supported everywhere. I also served files with an HTTP server, which might be a convenient option for some use cases. I even generated simple HTML pages with <video> tags, which let me easily view movies on my TV without all that nonsense.
My router has a public IP so I didn't have any problems reaching it from the outside, and any VPN could work. Another approach is to rent a cheap VPS and use it as a bastion VPN server, connecting both the home network and the road-warrior laptop.
No idea about any "integrated" solutions; I prefer simple ones, so I just used ordinary RHEL with ordinary Apache, etc.
SMB and Tailscale.
Tailscale works perfectly for remote access; I do "backups" with rsync over the VPN nightly to an offsite location.
Syncthing over Tailscale is running smoothly too. It doesn't matter where my machines move; they find each other at the same internal address every time.
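The nightly push in a setup like that can be a one-liner in cron; a sketch with a placeholder host and paths:

    # 03:00 nightly: mirror the NAS share to the offsite box over the tailnet
    0 3 * * * rsync -aH --delete /srv/data/ offsite-box:/srv/backup/data/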
I have a (completely overkill) Ubiquiti Dream Wall that lets me VPN in using WireGuard. I do have a Raspberry Pi that runs (among other stuff) a script to ping a service on a hosted server that keeps a DNS entry updated in case my IP address changes, although that's rare.
I built the service that keeps the DNS entry updated myself, so I'm sure it's not as secure as it could be, but it only accepts pings via HTTPS and it only works if the body of the POST contains a GUID that is mapped to the DNS entry I want it to update.
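Client-side, a scheme like that boils down to a cron'd curl; a sketch with an invented endpoint and GUID:

    # every 5 minutes, prove ownership with the GUID; the server records the source IP
    */5 * * * * curl -fsS -X POST https://dyndns.example.com/update -H 'Content-Type: application/json' -d '{"guid":"00000000-0000-0000-0000-000000000000"}'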
NFS + SMB.
Also, I use a SonicWall VPN to connect to my house and be on the network, which covers most needs. I also use Synology QuickConnect if I need to use the browser without the VPN, which covers the most urgent cases. It hasn't failed me in over a decade, and my NAS also syncs with Synology C2 cloud, which is another piece of peace of mind. I know it might sound a little unsafe having files stored in the cloud, but it is what it is.
I won't play with half-baked, library-dependent homebrew solutions that cost way more time and cause more headaches than commercial solutions. I won't open ports and forget about them later, either.
Mostly CIFS, I use tailscale to put my laptop inside of my home network wherever I go.
SMB + Tailscale and SyncThing for me. Both combos just work, although admittedly SMB over mobile connections _and_ a VPN can be iffy.
SFTP for my other Linux devices, SMB by Samba for the rest of the world (mainly Android.)
I just use NFS on the LAN. No remote access.
I use NFS over WireGuard. That way I can mount my resources wherever I go and it's encrypted whether I'm at home or out.
Depends on what you need. I have a NAS with syncthing, and it's a combination.
- I use a lot of different folders within syncthing, and different machines have different combinations to save space where they aren't needed; the NAS has all of them.
- on the LAN, sshfs is a resilient-but-slower alternative to NFS. If I reboot my NAS, sshfs doesn't care & reconnects without complaint...last time I tried to use it, NFS locked up the entire client.
- zerotier + sshfs is workable-but-slow in remote scenarios
Note I'm mostly trying to write code remotely. If you're trying to watch videos....uh, good luck.
Samba and tailscale.
Depends on your use case. I just use scp and access the NAS box through Tor when traveling, so I don’t have to open up any ports.
I usually just use SMB shares within my LAN. It serves my modest needs. I have used WebDAV or FTP in the past. Depends on the specific use. Away from home, VPN is essential. Too risky to just forward ports these days.
Depending on the make and model - I've got a Synology NAS box and can't recommend them enough.
RAID support, NFS/SFTP/Samba support, a nice Web UI to set up access and configure sharing, and even the ability to enable sharing outside your own NAT.
My secret protip: old Fujitsu desktop/nuc PCs. At least in Germany (Europe?) they are cheap on ebay since a lot of businesses use them and upgrade on a regular schedule.
If you care about power consumption like I do, you can Google "$model energy consumption white paper" which contains very accurate data about idle usage, for example https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-energy...
In one case I had a NUC where, on Linux, after enabling power-saving features for the SATA controller, idle usage fell to 5W when the PDF claimed 9.
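If you want to try the same, the knob in question is presumably SATA aggressive link power management; on reasonably recent kernels it's a sysfs policy per host:

    # allow the SATA links to drop into power-saving states
    for p in /sys/class/scsi_host/host*/link_power_management_policy; do
        echo med_power_with_dipm > "$p"   # or min_power; savings vary by chipset
    done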
Having an actual PC instead of a random SBC ensures the best connectivity, expandability, and software support forever. With almost all SBCs you're stuck with a random-ass kernel from when the damn thing was released, and you basically have to frankenstein together your own up-to-date distro with the old kernel, because the vendor certainly doesn't care about updating the random Armbian fork they created for that thing.
I have a NUC, but how do I connect 6 HDDs to it?
Add a disk shelf :). It basically supplies power and data cabling for just a rack of drives. The drives get interfaced with the NUC via a host bus adapter (HBA).
Can you elaborate? What’s the economic way to do this?
They make external HDD chassis that connect via USB. I don't have any experience with them so I can't comment on their reliability, but search for "ORICO 5 Bay USB to SATA Hard Drive Enclosure".
FWIW, I wouldn't recommend Orico. I don't live in the US so my options are somewhat limited but I found a local retailer that carries Orico. I've had five of them in the past five years and four died within 12-18 months.
If it was just one, I'd put it down to random bad luck. But with that many failures I assume they are doing something stupid/cheap.
Usually they would simply fail to power on but sometimes individual slots seemed to die (which RAID just loooooves).
And having an entire enclosure fail and waiting days/weeks for a replacement sucks as you lose access to all your data.
I eventually bought a Jonsbo N3 off AliExpress and a PCIe SATA card (to support more than the 2-4 drives most motherboards support), and that has been working well for months.
I've never tried Orico; that was just the first brand that came up when I searched. I suspect these things are fundamentally unreliable, especially because they are powered by external AC adaptors, meaning there is no real ground between the two switching power sources (one in your PC, the other in the HDD caddy). It's either due to that, or due to the very sensitive signaling along the line, but eventually you get USB disconnects (if you try to run it as an appliance) that wreak havoc on filesystems, particularly RAID.
The Jonsbo N3 is not comparable. I own one as well (how quiet is yours? I upgraded the rear fan but my CPU fan is noisy), but it's a complete PC case, not an external HDD array.
Highpoint has some decent-ish toaster-style drive docks. I have a couple of the older model with dual drives and dual USB-A ports (the 5422A), but the Highpoint RocketStor 3112D seems available for $70 with a single 10Gbit USB-C port and dual drives.
There is one deeply troubling flaw to them, though: they don't turn back on if the power goes out until you physically hit the button again. This is, alas, all too common for many of these enclosures!
Used PCIe HBA cards pulled from retired servers can be found on eBay for ~$50. They have external facing ports and/or internal facing ports. External is the way to go if you're using a small form factor PC like a business class Lenovo. These are almost all low profile cards, so they will fit in any SFF PC with a PCIe slot. There are special cables which will connect one port on the card to four SATA- or SAS-based disks.
The PC's PSU will need SATA power on its cables or else you'll need to scavenge a separate PSU and use the paper clip trick (or better yet, a purpose built connector) to get it to power things on without a motherboard connected.
Once you have all of that, then it's just a matter of housing the disks. People have done this with everything from threaded rod and plastidip to 3D printed disk racks to used enterprise JBOD enclosures (Just a Bunch Of Disks, no joke).
Total cost for this setup, excluding the disks, can easily be less than $200 if you're patient and look for local deals, like a Craigslist post for a bunch of old server hardware that says "free, just come haul it away".
Check out r/DataHoarder on Reddit or ServeTheHome's blog.
You can buy a DAS (Direct Attached Storage) enclosure[1]; some even support RAID. If your NUC is multipurpose, you could then run a virtualized TrueNAS guest (BSD- or Linux-based) in QEMU and give it control of the DAS block device for ZFS pools. Being able to run a virtual NAS that actually gets security updates on demand is pretty neat; TrueNAS has an excellent API you can use to start/stop services (SMB, SSH, iSCSI, etc.) as well as shut down the VM cleanly.
1. Newer DAS devices connect using USB-C, but USB type-A/e-SATA ones can be found.
Edit: figuring out how to run TrueNAS as a guest OS was a nightmare; the first 5+ pages of results will be about TrueNAS as a host.
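For anyone attempting the same, a rough sketch of the QEMU side (every path, size, and device ID here is invented, and TrueNAS-specific tuning is omitted):

    # boot TrueNAS from its own image and hand it the DAS disk raw, by stable ID
    qemu-system-x86_64 -machine q35,accel=kvm -cpu host -m 8G -smp 4 \
      -drive file=truenas.qcow2,if=virtio \
      -drive file=/dev/disk/by-id/usb-Example_DAS_0123-0:0,if=virtio,format=raw,cache=none \
      -nic bridge,br=br0,model=virtio-net-pci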
Isn't running a NAS on top of USB storage very strongly discouraged? TrueNAS cautions against it.
I also want to set up a NAS on a mini-PC with some sort of attached external storage, but I haven't been able to get past this blocker. USB is the only external interface these mini PCs typically support.
You aren’t wrong. USB is not a good way to do this.
There are issues with USB from a compatibility standpoint. I think it's mainly a factor of the ubiquity of it: there are SO many poor controller chips out there, even when you buy seemingly reputable hubs/drive cases. It's hard to find a good one sometimes. I did, however, stumble upon a gem early on, a 3.5" USB drive case from Best Buy which has since been discontinued (because it was good). Never in 15 years has any of the half dozen I got dropped from the system. That's more than I can say about a lot of pricey stuff on Amazon, sadly. The failure typically manifests as a random loss of connectivity to the system.
Similarly, here's a very low power writeup I did for using 2.5" drives with a dedicated power hub/splitter: http://www.jofla.net/?p=00000106#00000106 This will still have issues if the mains sags (a pole goes down somewhere), but you can fix it with a reboot remotely. Other than that it works great.
This was/is definitely a labor of love, primarily as I've come from a time when all you could get for a server were huge boxes idling at 50 watts, so I felt guilty about all the power I used to consume.
Thunderbolt/USB4 -> NVME enclosure -> M.2-to-SATA OR M.2-to PCIe to HBA-to-SATA.
I was under the impression that for most (popular) chip families, like Rockchip, Allwinner, Amlogic, and some assorted Broadcoms, mainline Linux kernel support has mostly been sorted out, and it's only the stragglers like HiSilicon, Huawei, most Broadcom, and Qualcomm where mainline support is not on the priority list?
Depends what you mean by “kernel support”. In general it does not really include decent idle power optimizations even on say raspberry pi.
Idle power optimizations are pretty low on my list when I look for an SBC/SoC, I must admit.
Do you have a list of remaining large deficits in Linux power management? I'm not finding many urgent open issues in that area.
Maybe, but regular distros on x86/x64 thin clients are even more sorted out. GPIOs are better handled through an Arduino clone over USB than with scripts running on inherently laggy desktop OS.
After configuring the vendor U-Boot to chainload into a newer U-Boot build with JustEnoughUEFI compiled in, you can just launch the standard Debian arm64/UEFI install ISO on many/most(?) popular SoCs.
W.r.t. GPIOs, I agree that delegating that to e.g. an Arduino connected via USB/UART, or to one of the available internal (often RTC) or external (HDMI/VGA) I2C connections as an I2C slave, is the preferred solution.
Does Raspberry Pi fit your definition/mental model of a straggler? It doesn’t have mainline support either!
ODROID H-series SBCs are standard Intel CPUs with (at least for the H2+) Linux-supported hardware for pretty much everything (haven't tried running X on them though :-P ).
They are my favorite 'home server' currently... cheap, standard, and expandable - oh! And SILENT! :-)
They also have this magic feature where they can use ECC with non-ECC RAM, at the cost of some capacity.
Where can I read more about this?
From Unraid forum thread[1] there's links to another forum[2] and a wiki page[3]. Not yet sure myself what all of this means, but it looks interesting.
[1] https://forums.unraid.net/topic/167669-odroid-h4-intel-n97-2...
[2] https://forum.odroid.com/viewtopic.php?p=384823#p384823
[3] https://wiki.odroid.com/odroid-h4/hardware/h4_bios_update#bi...
In general some of the Intel Atom CPUs intended for embedded applications support the so-called In-Band ECC, where a part of the memory is reserved for storing ECC codes.
Two of the three ODROID H4 variants use the "Intel Processor N97" CPU, which is intended for embedded applications and it appears to support In-Band ECC, even if this is not clearly advertised on Intel Ark (i.e. at other CPUs of the Alder Lake N family, like i3-N305, at ECC Support it says "No", but at N97 it says neither "Yes" nor "No", but it is mentioned that it is intended for embedded applications, not for consumer applications, and the embedded models normally support In-Band ECC).
The ODROID H4 BIOS allows enabling In-Band ECC on the ODROID H4 or ODROID H4+ (the latter is slightly more expensive at $139, but it has more I/O, including two 2.5 Gb/s Ethernet ports and four SATA ports; to the bare board you must add between $10 and $20 for the case, depending on its size, and a few more dollars for SATA cables, an RTC battery, and optionally a cooling fan; you must also buy one 16 GB or 32 GB DDR5-4800 SODIMM, so after adding shipping and taxes a NAS would cost a little more than $200, besides the SSDs or HDDs).
You can see test results with ECC enabled at:
https://www.cnx-software.com/2024/05/26/odroid-h4-plus-revie...
Which ones support ECC?
Those with "Intel Processor N97", i.e. ODROID H4 and ODROID H4+.
I'm looking for a machine like this (affordable, small, low power usage) with 64GB memory. If anyone has any recommendations I'm all ears.
Some Lenovo Tiny models, https://forums.servethehome.com/index.php?threads/lenovo-thi...
There was this article on lobste.rs yesterday:
https://michael.stapelberg.ch/posts/2024-07-02-ryzen-7-mini-...
Perhaps it can be configured to meet your requirement for "affordable"?
The cheapest computers with Alder Lake N CPUs, like ODROID H4 and many others, have only a single SODIMM socket and they are limited to 48 GB (which works, despite Ark advertising a limit of only 16 GB).
However there are many NUC-like small computers made by various Chinese companies, with AMD Ryzen CPUs and with 2 SODIMM sockets, in which you can use 64 GB of DRAM.
Those using older Ryzen models, like the 5000 series, may be found at prices between $220 and $350. Those using newer Ryzen models, up to Zen 4 based 7000 or 8000 series, are more expensive, i.e. between $400 and $600.
While there are dozens of very cheap computer models made by Chinese firms, the similar models made by ASUS or the like are significantly more expensive. After the Intel NUC line has been bought by ASUS, they have raised its prices a lot.
Even so, if a non-Chinese computer is desired and 64 GB is the only requirement, then Intel NUCs from older generations like NUC 13 or NUC 12, with Core i3 CPUs, can still be found at prices between $350 and $400 (the traditional prices of barebone Intel NUCs were $350 for Core i3, $500 for Core i5 and $650 for Core i7, but they have been raised a lot for the latest models).
EDIT: Looking now at Newegg, I see various variants of ASUS NUC 14 Pro with the Intel Core 3 100U CPU, which are under $400 (barebone).
It should be noted that this CPU is a Raptor Lake Refresh and not a Meteor Lake CPU, like in the more expensive models of NUC 14 Pro.
This is a small computer designed to work reliably for many years on a 24/7 schedule, so if reliability would be important, especially when using it as a server, I would choose this. It supports up to 96 GB of DRAM (with two 48 GB SODIMMs). If performance per dollar would be more important, then there are cheaper and faster computers with Ryzen CPUs, made by Chinese companies like Minisforum or Beelink.
Old business laptops work well
I’ve also heard the same advice with Dell/HP SFF PC’s. BTW, your link requires a username/password login.
That's... odd. I didn't click my own link yesterday after posting, but it came straight from Google and loaded just fine then. Now I get the login too. But for completeness' sake, another example: https://www.fujitsu.com/global/Images/wp_ESPRIMO_P758_E94.pd...
HN possibly sent too much traffic their way, and someone, somewhere decided to require credentials for access perhaps? :-)
That happens. I once foolishly linked the brochure page for a trash-found HSM on social media; apparently that logged too many referrers server-side, and the URL was mis-configured into something else less than a day later. It wasn't even a 404.
Have had great success with this myself. Oddly resellers on Amazon seem to be the best source I have found, just search for something like "HP ProDesk" or one of the generic corporate-focused lines from other manufacturers and find one that fits your budget. Maybe filter to 100-300 dollars to get rid of the new stuff. There's also a surprisingly vast selection of recycled commodity servers and similar on there, too.
Years ago there were auction houses specialising in selling off recycled business PCs and bankrupt stock. It was great fun to go and mooch around and see if there was a real bargain nobody had spotted, but they seemed to vanish under the onslaught of eBay, and frankly, for second-hand kit, I struggle to trust eBay.
I’ve had good luck recently (sample size: 2) buying small form factor PCs off of ebay. Way more powerful than even a new raspberry pi, and a 4 core/16G RAM/256 ssd machine can be had for less than $60 if you are patient.
Here in Australia there are thousands of ex-lease Dell 7060 SFF and 7060 Micro PCs on auction sites every week.
The Dells coming off lease now have modern features, including Intel 8th-gen CPUs with TPM, USB-C, etc...
In my experience those come from mining and other FIFO operations that are shutting down. Source: I used to de-com those, wipe 'em, and get them ready for bulk sale to a different group.
Don't share this information too far and wide, you might drive up the price for these on the second-hand market, which will hurt us dirt-cheap-PC gluttons.
I cannot say no to cheap compute for some reason.
I cannot say no to cheap compute for some reason.
I sympathise - for me I think it comes from growing up with early generation PCs that were expensive, hard to get, and not very performant; now you can buy something that's a supercomputer by comparison for almost nothing. Who can resist that?! I'll think of something to do with them all eventually...
These boxes are also great for running a k8s cluster if you want to experiment.
Correct. I have a 5-node Fujitsu Esprimo D756/D757 cluster with i5-6500 CPUs, 96GB RAM, and 5x NVMe + 5x HDD that usually sits around 80W total. Removing the HDDs and reducing the RAM would drop the power usage, but in my case it's not important to chase the last watt.
I bought them for 50 euros apiece, without RAM and disks.
I bought one of those Lenovo thin clients for about $200 and use it as a home server with Debian. It works great for pretty much everything and is a lot more bang for your buck than a Raspberry Pi or a brand new mini PC.
The only downside is that it doesn't have space for multiple 2.5"/3.5" disks, but that is just personal preference anyway.
My local electronics resale shop had the Dell versions of these, they make great hypervisors!
Parkytowers is a site about repurposing thin clients of various kinds; it's a goldmine for finding out about power consumption, Linux compatibility, possible hardware mods, etc.: https://www.parkytowers.me.uk/thin/hware/hardware.shtml
Everyone is an expert at storage as long as everything is working great. It's when stuff fails that you feel like an idiot and wished you had one extra hdd in your RAID array or a secondary NAS you were backing up to or one extra site you offloaded your data to.
I don't do cheap any more. But I can see the appeal.
These are all strategies, and the price point of the unit doesn't affect them.
Need extra drives? Buy extra drives. Need an extra NAS for backups? Buy an extra NAS. Need an offsite copy? Buy space, and get an offsite NAS and drives for an offsite copy.
The price point of the unit doesn't change anything here.
Synology sure provides an expensive but complete package for home office and enthusiasts.
Just buy it and be done with it. It's certainly more expensive than DIY it yourself using off the shelf components and things bought out of online classifieds. But for most people that have no interest in tinkering or don't know what to do, just paying the price of a complete solution might be worth it.
Totally. If you enjoy the config and have the time, by all means.
If you just want it to work, buy a Synology. Mine has been running strong for several years now and has Docker images for my UniFi controller, Pi-hole, and Plex. It took minimal time to set up and none since that day. Love it.
Edit: And my encrypted cloud backup to Backblaze B2 was equally easy to set up and costs a whopping $2 a month for every family pic, video, and doc.
I have triple backup, with mirrored RAID for one of those. No effort, maximum peace of mind.
Redundancy and backup are not the same thing. RAID gets you redundancy but if you get pwnd, you get redundantly pwnd!
So you backup to elsewhere. Now you have two copies of your data. Cool. Now you probably can recover from a silly mistake from up to a week ago or whatever your retention period is. However, if you don't monitor your backups, you'll never notice certain snags such as ransomware. OK, that might be low risk for your home backups.
It's quite hard to get the balance right but I think that you might not be quite as protected as you think you are. Why not buy a cheap hard disk and clone all your data to it every three or six months and stash it somewhere?
I have a similar argument with a colleague of mine, to the point that I will probably buy a LTO multi head unit myself and some tapes.
RAID is not a backup; it's a data integrity thing. It ensures that what you save now stays saved correctly into the future. It protects the present. Backups protect the past.
Think long and hard about what might go wrong and take suitable steps. For you I think a simple, regular off line back up will work out quite well with minimal cost, for disaster recovery.
Good points.
I didn’t actually specify it out, but my third backup is an offline SSD that I plug in every once in a while and store at my office. I only mentioned the RAID for local redundancy reasons.
You are right about the data being corrupted, either maliciously or bitrot. The NAS is not accessible outside my home network, so I think I am ok there.
Bitrot would require snapshots and full backups stored over time, which I could do fairly easily but I am currently not.
+1 for Synology. I bought mine off an old gig when they were moving from NAS to SAN, and it's still solid. It probably uses more power than I need, though.
I've got an rsync script that pushes backups to Dropbox, though just for "can't lose" docs and other things I want in the cloud.
DIY it yourself
Look, I'm sorry, but you literally asked for this:
Hello, I'm from the Department of Redundancy Department with a citation for a "Do It Yourself it yourself". Please pay the fine in internet chuckles, thank you.
He did not literally ask for this, actually.
if the I or the Y stop working or get corrupted, the redundancy allows you to recover them properly from the spelled out words.
I had the (dis)pleasure of running multiple Synology units in a business setting. They do die on you like everything else, if not more frequently; the QC is generally non-existent.
Synology's biggest reliability issue was when they used the Intel Atom C2000 CPUs. Designs with those CPUs have a 100% failure rate over the long term (not just Synology; everything with them, and Cisco was hit hard too). There's a workaround: soldering on some resistors to pull up a marginal line will fix dead units.
Just buy it and be done with it
Or buy a QNAP and watch it brick itself when you most need it.
++1 for synology. I've been running their 5-bay model for a few years without any issues. It just works. I have it paired with a synology router which also just works. They both do their jobs transparently enough that I can basically forget they exist.
I'm not so sure anymore. My Synology is now on its third backup application, so doing a recovery means you have to hunt down the old client first.
The latest one seems to be written in JavaScript. It's slower than the previous one, eats more RAM, and has a few strange bugs. Backing up my smartphone photos is not always reliable.
I used to be happy with them, but since my update to DSM 7 I don't trust my backups anymore. Maybe I have to bite the bullet and do some rsync-based scripting instead.
I don't like that Synology provides a ridiculously small amount of RAM - the one I had 7 years ago had 512MB and was chugging. Looking at their website, they still sell a version with 512MB. That's a total joke in this day and age.
That being said, there are plenty of companies providing NAS drives at lower price points, with various levels of quality. Generally, they are no worse than a random ARM SBC with a closed-source kernel, like the one the author here assembled.
I appreciate that not everyone wants a mini PC they have to manage. I had a nice setup, but now it has died and I can't find time to deal with it.
Kids today have no idea just how often drives failed back in the day.
IBM Deathstar[0] drives in a collection of RS/6000s are still too fresh in my memory :-)
Lost the first five years of my source code thanks to a Deathstar. Still miffed about it, would have enjoyed looking back at it from time to time.
Learned a valuable lesson in backups though...
Same, that was also my lesson that RAID is an availability mechanism, not a data safety/backup one. (Of course, Ransomware would also hammer that point home for many later on)
"you could actually hear it die! the 'click of death' "
"sure, grandpa, drives make noise [rolls eyes]"
My first 5MB Winchester failed every day. So the day began with reformatting it and restoring from the previous night's floppy-disk backup.
I think this depends on what you're storing though.
Business documents, accounting records, family photos - sure you probably want to keep them safe.
But if my NAS is just 20TB of pirated movies and TV shows (not that I'm saying it is...) then I'm much more comfortable buying the cheapest drives I can find and seeing how it goes.
For me it's the opposite, in a way: I need a proper remote backup of things like business documents and photos, because they have to survive an issue with the NAS or my house, not just a drive. So local copies can go on "whatever", and something like cloud backup makes more sense to meet the reliability mark. Generally it's not tons and tons of terabytes, which is great because the backup needs to actually be usable within a reasonable amount of time when I need to pull from it.
On the other hand, terabytes and terabytes of pirated content is a lot of work but not necessarily worth paying to back up over the internet. I can redownload it if I need to, but I'd rather not do that just because some crap drive or NAS I saved 20 bucks on died and now I need to spend a week rebuilding my entire collection. It doesn't need to be Fort Knox, but I'll spring for a proper NAS, drives, and pool redundancy for this content.
Yes. When it comes to data, don't cheap out.
Do 2x cheap then, you need backup anyway.
I sync a few single external drives every week or two over good old USB. In house sneaker net. Tools like Freefilesync make this easy and fast (and give me a manual check so accidental deletes are visible too).
Very cheap, and it has served me for more than a decade now. Highly recommended. I've dealt with data loss through drive failures, user error, and unintentional software bugs/errors. No problem.
The entire concept of network attached storage is kind of cargo-cult in the vast majority of personal use cases. Just put the drives in your computer. Fewer abstraction layers, fewer problems, cheaper, faster, less fragile, easier to debug and fix if problems do happen. It's just not as hip or cool sounding as "NAS".
> Just put the drives in your computer
NAS works with phones, tablets and laptops with egregiously expensive, non-expandable storage.
On iOS/iPadOS, use SSH/SFTP to workaround business-model-challenged "Files" client.
A NAS is not magical. It is just a rather limited computer. Anything a NAS can do, a normal desktop can do too, and it'll do it better. Sharing files over a network is one of those things. Managing the files is way, way better, since you just use your normal desktop file manager rather than some NAS web stuff that'll break due to CA/TLS issues in a few years.
Desktop computers are becoming increasingly uncommon. I'm a pretty technologically inclined person and I haven't owned a desktop in years. For most people, laptops have been more than capable enough for their needs for a long time. And for general-purpose computing, smartphones are pretty heavily used too.
Yes, people do think that. And then they end up with an octopus of fragile external drives prone to problems and breakage. It's a sad state of affairs for the non-tech inclined. Like everyone is buying mopeds and pulling around trailers rather than just buying a small car.
Among many features, ZFS offers storage snapshots, deduplication and file integrity.
Eh, I'm much more likely to accidentally reformat all my drives on my desktop than a NAS. Sure I can just restore from backup but it's just a security blanket to not have to worry about it
A NAS is nice for serving local media files to a smart TV. That’s my main use case.
The thing I don't get is, why do I need to drop $1000 on a Synology machine just to do this? Let's say I want to create a setup with 8x18TB drives or something, do I really need to spend $1000 just to make this accessible to a couple clients at once (say a smart TV + another machine in the house)?
Right now I have Plex running on a raspberry pi hooked up to an 8tb external HDD. Works fine, but I want to scale up to the 100-200TB range of storage, and it feels like the market is pushing me towards spending an inordinate amount of money on this. Don't understand why it's so expensive.
This is my rationale for my specific circumstances:
With my NAS, I pay for the box, install the drives, and it Just Works with basically no maintenance other than me clicking the upgrade button every few months when it emails me that an OS update is ready.
I could build a similar system myself, but the hardware isn't that much cheaper. Cases, PSUs, hot swappable drive mounts, and all that add up quickly. And when I'm done, I have to install the OS, read the docs to see how the recommended configuration has changed since last time I did this, and periodically log in to look at it because it doesn't have nearly as much monitoring set up out-of-the-box.
Given the choice between the small amount of money I'd save vs the amount of time I'd have to invest in it, I'd rather pay the money and outsource the hassle to the NAS maker.
As to why I don't just hang a bunch of drives off the computer I'm already using:
- Backups. If my Mac dies, I can restore it from the Time Machine on my NAS.
- Noise. The NAS has busier fans than my Mac. It's in a different room from where I spend most of my time.
- I run Docker stuff on it. I don't want those things running 24/7 on my desktop.
- Availability. I want to reboot my desktop occasionally without worrying if it'd interrupt my wife watching something on Plex.
2 bay Synology NAS' are less than $200, 4 bay are less than $400. Yes, if you need 144TB of storage the NAS unit is going to be more expensive, but the drives themselves are the majority of the cost.
Good idea to put your backup on the same machine! /s
Or just put drives in all your computers and back up between them, instead of buying a limited (potentially proprietary) NAS computer just for the purpose and having it be one central point of failure.
I just don't understand the reluctance of people to put storage in their actual computer(s).
Because one of the reasons I got my NAS is so I don't need to tinker.
HN readers' computers are typically macbooks with extremely expensive and non-upgradable drives.
My “NAS” is my late model Intel iMac. I have, like, 5 USB drives hanging off it.
I SyncThing my wife’s laptop to it. I serve a bunch of videos off of it to our AppleTV. All our photos are there.
I have a Time Machine backup of the main system drive (my wife uses TM as well). The whole thing is backed up to BackBlaze, which works great, but I have not had to recover anything yet.
I would like to run ZFS mostly to detect bit rot, but each time I’ve tried it, it pretty much immediately crushed my machine to the point of unresponsiveness, so I’ve given up on that. That was doing trivial stuff, have ZFS manage a single volume. Just terrible.
So now it’s just an ad hoc collection of mismatched USB drives with cables not well organized. My next TODO is to replace the two spinning drives with SSD. Not so much for raw performance, but simply that when those drives spin up, it hangs up the computer. So I’d like SSD for “instant” startup.
Not enough to just jettison the drives, but if opportunity knocks, I’ll swap them out.
The majority of users have laptops, not desktops.
Ignorant person question here: “why they make NAS servers without ECC memory?”
Because they barely do anything. It's not like there's 4TB of RAM in there churning away at multiple databases. It's debatable whether you even need it in enterprise servers.
You absolutely, positively, 100% need it on anything that carries data you care about. I personally consider it a hard requirement for a NAS. I don't want to lose data just because a cosmic ray flipped a bit somewhere.
I don't want to lose data just because a cosmic ray flipped a bit somewhere.
If you have disks set up in RAID 1 or RAID 6, would you still lose data though?
Consider all the RAM along a network transmission. Maybe you’re using authenticated encryption, maybe your transfer has an internal or out of band checksum. Maybe not.
Absolutely. Imagine you are saving a text file to NAS with a super-secret password to your Bitcoin wallet, for example "password". While it was in memory before it reached disk, one bit was flipped and the file contents became "pastword" which OS happily saved on your RAID. And now you've lost your Bitcoins forever.
In RAID 1, all you need is a bit flip in RAM between the writes to the two disks to cause a permanent error (one disk gets the flipped bit, the other does not).
RAID 6 will repair single-bit errors, assuming a bit wasn't flipped before/during the erasure-code calculation when writing the data.
There's a lot of places the file can get corrupted on its way to your drive. The memory of the NAS itself is only one of those. If you want any certainty you need to verify it after writing, so ECC RAM is not enough. And once you do set up that verification, you don't need the ECC RAM anymore.
Can you tell us HOW to setup a NAS so that it doesn't need ECC RAM?
That differs based on the program you're using to put files onto the NAS.
But I'll note that even with ECC you need to double check things in case there was corruption on the drive wires or in many other places. With the right filesystem you can find some of those locally during a scrub, but double checking end-to-end isn't much harder.
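A sketch of what end-to-end double checking can look like (host and paths are placeholders):

    # hash everything locally, re-hash on the NAS, compare the two lists
    (cd ~/photos && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/local.sum
    ssh nas 'cd /srv/photos && find . -type f -exec sha256sum {} + | sort -k2' > /tmp/remote.sum
    diff /tmp/local.sum /tmp/remote.sum && echo "copies match"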
I am not sure this is correct - there could be an error not only in the data but also in the instructions; a bit flip could cause data to be written to an incorrect location on the hard disk.
Storing error-correcting codes/checksums/signatures and sending them along with the data seems like a more cost-effective solution. Without those, you need to ensure that every single link in the chain supports ECC (and that all the hardware works perfectly).
ECC may still be needed for the actual processing, but I don't see the point of having it on a NAS (especially considering you need to send the data over a network to read or write it).
Non-ECC is cheaper. I can't think of any other possible reason; anything else would be a lie to cover up being cheap.
The cheapest NAS is usually just taking some old desktop PC and repurposing it headless. :-)
Or, even better, an old laptop.
Yep - optimised for power consumption, and it comes with a built-in UPS.
Just be careful leaving an old battery permanently on charge:
Or a Raspberry Pi you have kicking around:
Assuming power is free. Even small wattage differences add up quickly for a server running 24/7 and those older CPUs can be very inefficient.
5 year old CPUs today (i.e., 2019 era chips) are generally pretty efficient.
Is there any NAS software that just lets you add disks whenever you want, while using them for redundancy if they aren't full? I wish something was as easy as adding another disk and having the redundancy go up, then removing a disk and having the redundancy go down.
If you just want a mirror, that's easy (I like ZFS but any software RAID should let you add/remove mirrored disks easily). If you mean mirror until it's full then automatically convert to striped, I don't think anyone does that and I don't think anyone would want that, because people who care enough about protecting their data to use a mirror don't want it to automatically stop being a mirror.
Mergerfs plus SnapRAID comes close to your ask:
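Roughly, the shape of such a setup (paths and disk names are examples, not a recommendation):

    # /etc/snapraid.conf: one parity disk protecting two data disks
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2

    # /etc/fstab: mergerfs pools the data disks into a single mount
    /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,category.create=mfs  0 0

You then run "snapraid sync" periodically (cron it); parity is point-in-time rather than realtime, which is the trade-off versus classic RAID.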
Windows Storage Spaces kinda works like this.
Unraid is probably the closest to that.
The image builder for this board looks dodgy af:
"The distribution builder is a proprietary commercial offering as it involves a lot of customer IP and integrations so it cannot be public."
Seems like a supply side injector to me!
Yeah, reading through the linked https://hub.libre.computer/t/source-code-git-repository-for-... really sours my opinion of Libre Computer - shipping with UEFI so you can just use generic images is a huge advantage, but creating your default images (and firmware! which is worse, IMO) with a proprietary process is such a big red flag that it makes me question the whole thing. If the firmware is FOSS and you can build it yourself using only FOSS inputs (which isn't obvious to me from that discussion), then you could do that and any old image (again, UEFI to support generic images is a huge win) and it would be fine, but the fact that that's not the default really makes me question the values/culture of the company.
Too bad TALOS II is a $5K motherboard. It actually is open down to the brass firmware tacks!
Impossible, they even put “libre” in the name!
Although not quite as cheap, I bought a mini PC (Intel N95, 8GB, 256GB) for not a whole lot more. It has room for a 4TB SSD in a built-in enclosure, which I mirror to an external 4TB HDD nightly. Important stuff is cloud-synced and manually backed up monthly to an external HDD that lives at work. It also runs Jellyfin, MinimServer, Syncthing, etc.
One of the nice things is that it has a full sync of my cloud storage, so I don't have to think about backing up individual devices much any more: I create a file on my laptop, it syncs to cloud storage, then to the mini PC. From that point on it's part of the regular nightly/monthly backup routine.
If I hit the 4TB limit it might be a pain, as I'm not sure it'll support an 8TB SSD.
Cloud storage isn't a backup any more than RAID 1 is.
Of course it is. PC explodes -> download the cloud copy.
Not to be relied on by itself, but it absolutely qualifies as a backup.
Backup solutions usually need to cover more scenarios than "PC explodes". In fact solutions for that are usually called "disaster recovery" instead.
A real backup solution ought to cover the case where you deleted the wrong file, and you find out the next day. Or it got corrupted somehow (PCs and disks can explode slowly) and you find out the next time you open it, a week later. If the cloud service happily replicated that change, it can't be used to restore anything.
What convenient solutions exist for graceful shutdown of a NAS, without data loss or other drama, in case of power outage? It seems a more pressing concern than flipped bits or network failures.
A filesystem designed to survive sudden power loss? Which is most modern ones right?
My Helios64 has a built in backup battery. I'm not sure they are available anymore (and I wouldn't recommend anyway for other reasons) but I imagine other NASs have similar setups.
The common solution is a UPS and a NAS that supports graceful shutdown when the UPS detects power loss.
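With NUT (Network UPS Tools), for example, the monitoring side is only a couple of lines once the UPS itself is defined in ups.conf; a minimal upsmon.conf sketch (UPS name and credentials are placeholders):

    # /etc/nut/upsmon.conf
    MONITOR myups@localhost 1 monuser secret master
    SHUTDOWNCMD "/sbin/shutdown -h +0"

Most commercial NAS OSes wrap the same idea in a GUI checkbox for supported USB-connected UPSes.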
I found an ex-office HP computer with an i5-4670 on the side of the road and have been thinking about setting it up as a home server. Does anyone have a recommendation for how to set it up as a NAS, VPN, Home Assistant, and Plex server?
Home Assistant + plugins/extensions might be the easiest. Unsure about the VPN part. If you want to tinker and do some sysadmin work, then install Proxmox and have separate containers for each, as you wish.
Tailscale VPN addon for HomeAssistant is relatively simple to set up. You can use it to access your home network remotely ("site to site networking") or to proxy your network traffic through your home network when you're away ("exit node")
I would question all these Raspberry Pi-ish NAS attempts, especially when they involve extra power adapters and milling out cases. It all feels so fiddly and sluggish while still being "not that cheap". Storing my important personal data on a USB drive feels somewhat risky. It probably wouldn't burn the house down, but still...
The real benefit is the small form factor and the "low" power consumption. Paying 43 bucks for the whole thing - now I'm asking myself if it is worth saving a few bucks and living with 100Mbit network speed, instead of spending 150 bucks and having 2.5Gig.
There are so many (also "used") alternatives out there:
- Fujitsu Futro S920 (used < 75, ~10W)
- FriendlyElec NanoPI R6C (< 150, ~2W, https://www.friendlyelec.com/index.php?route=product/product...)
- FriendlyElec Nas Kit (< 150, ~5W, https://www.friendlyelec.com/index.php?route=product/product...)
- Dell T20 / T30 (used < 100, ~25W)
- Fujitsu Celsius W570 (used < 100, ~15W)
My personal NAS / Homeserver:
Fujitsu D3417-B
Intel Xeon 1225v5
64GB ECC RAM
WD SN850x 2TB NVMe
Pico PSU 120
More expensive, but reliable, powerful, and drawing <10W idle.
You are comparing apples and oranges: the SBC used by the author has a consumption of 0.75W idle and 4W full load.
No, I'm not. The FriendlyElec NanoPi R6C can be brought to 1W idle including an NVMe SSD and 2.5GbE (and REAL transfer rates of >200MB/s). It's more expensive, but totally worth it in my opinion. See https://www.hardwareluxx.de/community/threads/ultra-low-powe...
Since it has neither ECC nor support for common open-source NAS operating systems, I still would not buy it as my daily driver. I just don't think a difference of 5W idle power is worth the effort of milling out stuff, using USB storage, and the additional maintenance effort of keeping this system up to date.
100Mbit may seem like a joke nowadays, but the main purpose of such a toy NAS for me is to keep a copy of a directory with ~200K small files. Having 1Gbit would only marginally improve the syncing speed, even if the SBC supported USB 3.0.
He's wrong here. The most important thing with small files is latency, and a 1000Mbit network will have significantly lower latency than a 100Mbit network.
Anyone running Time Machine over the network knows what I mean - locally attached storage is blazing fast (particularly SSDs), wired network storage is noticeably worse performing, and WiFi is dog f...ing slow.
So I am a network dummy, but why would 100M vs 1000M make a difference in latency unless the pipe was saturated?
Even on an otherwise unsaturated link, a packet takes 1/10th of the time to be transmitted, as the transmission clock is 10x faster.
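Back-of-the-envelope: a full 1500-byte frame takes 1500 × 8 bits / 100 Mbit/s = 120 µs to serialize at 100Mbit, versus 12 µs at gigabit. If syncing ~200K small files costs at least one full frame each way per file, that alone is roughly 200,000 × 2 × 108 µs ≈ 43 seconds of extra wire time at 100Mbit, before counting any protocol round trips.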
My NAS is just my old gaming PC - I swapped out the GPU with a more basic one, and I add another hard drive or two every time storage gets low. It works great and costs me very little in both money and time.
I'm currently at 46TB of storage, and I recently threw in a 2.5Gbps NIC when I upgraded the rest of my home network.
(Mine certainly uses more electricity than the one in the article, but I pay $0.07/kwh, and run a few docker images that take advantage of the added performance, so I'm happy with it.)
That's basically my setup. I used to have a dedicated gaming PC and a desktop server but haven't used them in a few years. They both have something wrong with them so I'm just going to frankenstein them together into something pretty good. A 10 year old core i7, 16gb of ram, and an old Nvidia quadro is more than enough to run a bunch of apps and Plex transcoding on top of basic file serving.
That's pretty cool. Fits their use case for sure. I would probably opt to spend a little more for a gigabit port. From what I've seen watching Jeff Geerling, you can set up a pretty reasonably performing NAS on one of these small SBCs.
Any of the latest generation Arm SBCs is actually pretty adequate for NAS purposes, especially if that's all you want to run on it.
If you get a Pi 4 or Pi 5, or one of the Rockchip boards with RK3566 or RK3588 (the latter is much more pricey, but can get gigabit-plus speeds), you can either attach a USB hard drive or SSD, or with most of them now you could add on an M.2 drive or an adapter for SATA hard drives/SSDs, and even do RAID over 1 Gbps or sometimes 2.5 Gbps with no issue.
Some people choose to run OpenMediaVault (which is fine), though I have my NASes set up using Ansible + ZFS running on bare Debian, as it's simpler for me to manage that way: https://github.com/geerlingguy/arm-nas
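For scale, the ZFS part of a setup like that can be tiny; a sketch with example device names (illustrative, not taken from those playbooks):

    # mirrored pool plus a compressed dataset to share over SMB/NFS
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    zfs create -o compression=lz4 tank/share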
I would go with Radxa or maybe Libre Computer if you're not going the Raspberry Pi route, they both have images for their latest boards that are decent, though I almost always have issues with HDMI output, so be prepared to set things up over SSH or serial console.
Personally I like the Dell/HP/Lenovo micro PCs. For ~200€ you can get one with an i5-10500T, 16GB DDR4, and a 256GB NVMe SSD, and it can be upgraded to 64GB RAM with a lot of storage (1x NVMe + 1x 2.5").
I mean you can get one for 50$ on ebay with similar ram and hd, just with a 6700 or 8700 which is more than enough for a NAS lol
As other comments in this thread, I want to echo the value for money that is to be had in refurbished SFF office PCs that come available on the second-hand market.
I picked up an HP ultradesk something-or-other for dirt cheap a while back. When I got it, it turned out to be surplus stock, so not even second-hand: it was brand new, for maybe 20% of the retail price. Dead quiet, and super power efficient. It's not the most powerful CPU, but it's 10th or 11th generation, which is perfect for hardware encoding for my media server use case.
It does not have all the hardware for RAID and multiple hard drives and all that, but one NVME boot disk, and one 16TB spinning rust disk is more than enough for my needs. It's media, so I'm not worried about losing any of it.
These boxes are cheap enough that you can get multiple ones responsible for multiple different things in a single "deployment". At one point I had a box for NAS, a box for media server, a box for my CCTV IP cameras and a box running homeassistant. All humming along nicely. Thankfully I was never masochistic enough to try some kubernetes thing orchestrating all the machines together.
This is all obviously for the homelab/personal use case. Would not recommend this for anything more serious. But these machines just work, and they are bog standard X86 PCs, which removes a lot of the hardware config and incompatibility bullshit associated with more niche platforms.
I'd rather have 2 devices with a single drive, each having its own copy of the data than one with a RAID 1.
Odroid HC-4 is only $73... and lightyears ahead https://www.hardkernel.com/shop/odroid-hc4/
It's triple the price so I would hope it's lightyears ahead.
But if you're open to paying that much, well, I was considering that specific board along with some others, then I found an entire 12th gen Intel mini-PC for only 50% more and immediately changed my mind.
I'd say the cheapest NAS would be an external HDD plugged into your Wi-Fi router - most of them have at least one USB port nowadays, and some offer advanced features like a photo gallery, etc.
Old PC, or a TerraMaster enclosure from AliExpress.
I'm happy with XigmaNAS (BSD) on a used mini PC and a multi-bay USB 3.1 HDD enclosure. Speed is excellent, as is stability. Having some memory and CPU cycles to spare, I'm also playing with Home Assistant Supervised running as a VirtualBox VM inside it.
Regarding that LaFrite board, I mailed a while ago LoverPi, which appears to be the only one selling it, to ask them if they accept PayPal, but got no reply. Does anyone know of a distributor in the EU or a different worldwide seller?
total 43
So, the price of a used Sandy Bridge or newer laptop (optionally with a cracked screen) with 1Gbit Ethernet, USB3, a couple of SATA ports, a couple of PCIe lanes (ExpressCard and mPCIe slots), and a built-in UPS.
If you need to add storage to just one computer (at a time), consider just getting a hard drive enclosure. It's much simpler, cheaper, more secure, and faster than a NAS.
You can turn it into a NAS at any time by adding a mini pc or similar.
My "NAS" and homeserver is an old Lenovo ThinkCentre mini PC with a large SSD inside. My "RAID" is an rclone cloud backup. Might be a bit scuffed but it works really well, at least for me.
Cost and power are legitimate motives. I'd probably have started with an RPi Zero 2, though: storage and Ethernet over USB is not going to win any races, though it'll compete with what he ended up with...
You can find ex-corporate Dell OptiPlexes or HP ProDesks with an i5-7xxx/8xxx for like $40 on eBay in the US. They're fantastic.
Same reason I'm not buying another Raspberry Pi; I'd rather have solar panels and an old reliable PC running somewhere.
It's ironic that they call themselves Libre Computer but don't release the tools to let users create their own images.
A 100Mbps LAN port is unqualified for a NAS…
I had an old Raspberry Pi 2 lying around, installed OpenMediaVault, added a couple of USB HDDs, and off I went [1]. Amazing what you can do with old hardware!
up-to-date Debian
OP has a perverse sense of humor :)
----
But, not to waste space on this mindless joke, here's my (or, more precisely, my wife's) success story.
So, I'd had it with laptops and built myself a PC. Money wasn't really a problem; I just wanted to make sure I'd have enough of everything, and spares if necessary. So I got a be quiet! case with eight caddies and a place for a SATA SSD. It's expensive... but it doubles as my main workstation, so I don't have any regrets about spending more on it! It has a ton of room for installing fans - like ten of them at this point, plus liquid cooling. The Wi-Fi module built into the mobo I bought doesn't have a good Linux driver... but the case has a ton of space, so I could stick in a PCIe Wi-Fi card. And I still have plenty of room left.
Anyways. My wife was given some space for her research at the institute she works for. They get this space through some obscure company with vanishing IT support, where, in the end, all the company does is put a fancy HTML front-end on Azure cloud services, occasionally mismanaging the underlying infrastructure. While the usage was uncomfortable, it was palatable, so she kept using it. Then the bill came, and oh dear! And then she needed a piece of software that really, absolutely, unquestionably needs to be able to create symlinks. The obscure company with vanishing IT had put together their research environment in such a way that the NAS is connected via SMB, and... no symlinks.
So... I bought a bunch of 4TB hard drives, and now she has all the space she needs for her research. She doesn't even pay for electricity :(
I mean, wtf, why wouldn't you just buy a G1 Elite Slice, or any of the various NUCs you can get for $50, and have a full Intel computer with a 6700 or 8700 CPU, 4-8GB of RAM, a full drive slot, and usually extra space for an M.2 and a gigabit NIC lol
The remedy is to turn UAS off by adding `usb-storage.quirks=152d:0578:u` to the kernel cmdline.
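On a Raspberry Pi that looks something like the following; a sketch, assuming the cmdline lives at /boot/firmware/cmdline.txt (older images use /boot/cmdline.txt):

    # append the quirk to the single-line kernel cmdline, then reboot
    sudo sed -i '1s/$/ usb-storage.quirks=152d:0578:u/' /boot/firmware/cmdline.txt
    sudo reboot
    # afterwards, check the enclosure is claimed by usb-storage rather than uas
    lsusb -t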
This is the point where I'd have thrown it in the trash and given up. I simply don't know how people have the patience to debug past stuff like this: I get that the point of the project is to be cheap and simple, but this is expensive in time and decidedly not simple.
Cheap, but not necessarily good, expandable, or resilient.
Although I can tell you what not to do: a 45-drive SAS-2 or SAS-3 4U JBOD case takes way too much power to run all the time, and it uses screaming 1U fans and PSUs by default.
I do have 45 drives in various XFS-on-md RAID 10 arrays. Don't even mention ZoL (ZFS on Linux), because that was a fucking disaster of undocumented, under-tested, SoL "support" for something that should only be used on a proper Sun box like a Thumper. XFS and md are fairly bulletproof.
Perhaps one can Ceph their way into significant storage with a crap-ton of tiny DIY boxes, but it's going to be a pain to deploy and manage, take up lots of space, and probably be damn expensive to physically install, network, and provide UPS for.
Performance-wise this looks similar to https://gist.github.com/SvenKortekaas/60387b0428b1592e5c9ec0... , where the aluminum case and baseboard were available for about 11 USD for a while (excluding shipping, of course).
The matching SBC was about the same price; the most expensive parts were the high-efficiency (GaN) wall wart and the 2.5" disk.
I know this, because I ordered this eons ago :-)
Still running somewhere, that thing - 24/7 since then, with some reboots because of updates...
Runs Armbian if you like, or anything else if you're willing to mess around more.
Seems to be still on sale, according to https://www.friendlyelec.com/index.php?route=product/product...
Personally I don't want to put too much stuff on my router/firewall. The reason I run OpenBSD on it is that I want it to be as secure as possible: it has the final word in my network, and it's the outermost device of my network.
For storage I've been using Synology for a long time, first ds411+slim and now a ds620slim. I love the slim form factor, only 2.5" drives. It just works™
Interesting cheap, low-power system. My brain connects "NAS" with "data I value", and there isn't anything in the author's system focused on enhancing data integrity. Not saying that's bad, just that it's the thing that often makes commercial NAS systems both more expensive and more power-hungry.

I've got an 8-drive RAID-6 setup as a ZFS pool[1]. More power - last I checked about 50W when idle, close to 150W when "working" - but at this point I think I've had failures in nearly every piece and ended up replacing something, yet never lost any data, never corrupted any files. I've replaced the power supply, the motherboard, and two drives. I haven't had to replace any memory, but I do keep a spare stick, because that's the kind of thing that ages out and gets hard to replace (it's DDR3 memory, but I've had systems with DDR2, so it's really hard to get DIMMs for those).
That said, I do see a lot of value in low-power systems like the author's, and I run a couple. The way I do the energy calculation, though, is that I boot them off internal storage (MMC/SD) and then mount the root filesystem from the NAS. That way they carry no storage power cost directly, they're easy to replace, and the power consumed by my NAS is amortized over a number of systems, giving it some less obvious economics.
[1] It is an iXSystems FreeNAS based system.
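In case the "boot locally, root from the NAS" trick isn't familiar: a minimal sketch of the kernel command line for an NFS root, with a made-up export path and server IP (your kernel/initramfs needs NFS-root support):

    root=/dev/nfs nfsroot=192.168.1.10:/tank/roots/node1,vers=3,tcp rw ip=dhcp rootwait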
Can you (or someone) suggest a backup scheme? I have a 28TB NAS. Almost everything I've looked into is expensive or aimed more at the enterprise tier.
Are there options for backup in the "hobbyist" price range?
Check out Storj distributed storage. A fraction of the cost of AWS storage:
$0.004 per GB/month
For 28TB, that's $1,146.88 per month.
It's a tenth of that.
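For reference, the arithmetic: 28 TB ≈ 28,672 GB, and 28,672 GB × $0.004/GB ≈ $115/month. The $1,146.88 figure looks like it was computed at $0.04/GB rather than $0.004/GB.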
Hetzner gives you 20TB for around $50/month. Isn’t really redundant, but it’s certainly offsite backup.
AWS, GCP, and Azure all offer cold storage for about $1 per TB per month. If you want any cheaper you need to build a second NAS.
You could also take the awkward route and add one or two large drives to your desktop, mirror there, and back that up to Backblaze's personal backup product (not B2).
The other suggestions you got for hot storage strike me as the wrong way to handle this; if you're considering $80 per year per TB for backups, then just build another NAS.
For the OP - be careful with AWS: the closest pricing to one dollar per terabyte is S3 Glacier Deep Archive, and you'd be surprised how expensive a full restore can be if you ever need one, between retrieval pricing, egress cost, etc.
Another NAS isn't really a good solution (unless you can place it in a different house) - the goal of a cloud backup is that it's offsite.
I use Glacier alongside RAID with 4 drives so that I can recover from any single drive failure (which will happen) just by swapping in a new drive.
Had this setup ~10 years and have had to replace a drive on two occasions but never needed to restore from Glacier.
At this point even if I do need to do a Glacier restore one day it’s still going to work out to be pretty economical.
While it's true that's something to be wary of, if you restored the entire backup at full cost every two years, it would still be cheaper than B2.
The egress only looks super high when you compare it to how cheap $1 per TB per month is.
And I bet you can find somewhere offsite for a NAS for free or a tiny fraction of $150/month.
`zfs send --raw` of encrypted datasets to https://www.rsync.net/products/zfsintro.html.
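For anyone who hasn't seen it, the whole scheme is about two commands; a sketch with made-up dataset and host names - with --raw, the ciphertext is what travels, so the key never leaves home:

    zfs snapshot tank/docs@2024-06-01
    zfs send --raw tank/docs@2024-06-01 | ssh user@host.rsync.net zfs receive -u remote/backup/docs
    # subsequent runs send only the delta:
    # zfs send --raw -i @2024-06-01 tank/docs@2024-07-01 | ssh ... zfs receive -u remote/backup/docs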
That would cost at least 336 € per month.
Yeah, rsync.net is pricey, but it's reliable.
I've been using Interserver[1] + borg[2] for the last 3 years. With the 10TB plan it comes out to $25/mo, but if you prepay a year there are discounts.
For the OP's use case, they have a 40TB plan at $84/mo. Still pricey, but cheap compared to most other cloud storage. If you have data you care about, off-site backups are required.
[1] https://www.interserver.net/storage/
[2] https://borgbackup.readthedocs.io/en/stable/
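If it helps, the borg workflow is only a few commands; a sketch with a made-up repo URL and paths (see the docs above for the real details):

    # one-time: create an encrypted repository on the remote storage box
    borg init --encryption=repokey-blake2 ssh://user@storage.example.com/./nas-repo
    # nightly: archive the important data, compressed and deduplicated
    borg create --stats --compression zstd \
        ssh://user@storage.example.com/./nas-repo::backup-{now} /mnt/nas/important
    # keep bounded history so the repo doesn't grow forever
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12 \
        ssh://user@storage.example.com/./nas-repo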
The low-tech and not-super-resilient method is to buy a second 28TB NAS, put it in a different location, and sync them periodically when you know your primary is in good shape.
Back in the days of DVDs, I used to back up my 20GB drive onto DVDs. I wonder if you could do something similar today, but instead of a bunch of 4GB optical discs, use 4 x 8TB drives?
There is 'amanda'. It will split your data up, and you can rotate a bunch of disks.
Used it years ago; we rotated disks every week or so and would periodically take one out of commission and add a new one.
I believe you can mix and match storage mediums - like have your monthly snapshot write to tape.
I have a NAS with 18 TB effective storage, 36 TB mirrored. It all gets backed up to Backblaze B2, which is about six dollars per terabyte - but I'm currently only using about 8 TB, so it's only about 50 bucks a month.
So this might be at the higher end of the price range if you're using all 28 TB uncompressed, since that would be about $168 per month...
You pretty much need a second, similar system, hopefully not physically nearby. Tape doesn't scale down to home use, and optical is too small.
My home NAS has several roles, so I don't have to back up the full capacity. The family shared drive definitely needs to be backed up. The Windows desktop backups probably don't, although if I had a better plan for offsite backups I'd probably include them in it. TV recordings and ripped optical discs don't need to be backed up, for me: I could re-rip them, and they're typically commercially available if I had a total loss; not worth the expense of hosting an offsite copy of those too, IMHO.
You might do something like mirrored disks on the NAS and single copy on the backup as a cost saver, but that comes with risks too.
A cheap backup scheme:
Buy the hardware to make a lightweight backup server. Make backups work with it. Take it to your friend's place along with a bottle of scotch, plug it in, and then: Use it.
Disaster recovery is easy: Just drive over there.
Redundancy is easy: Build two, and leave them in different places.
None of this needs to happen on rented hardware in The Clown.
https://www.interserver.net/storage/
If you're talking cloud backup, Wasabi (which is S3-compatible) is the cheapest I could find; it's pay-as-you-go, and they don't charge for upload/download. Pay-as-you-go is $6.99 per TB, which would be pretty pricey at 28 TB, but it's super cheap for my 4TB NAS.
I've had non-ECC NAS systems for over 20 years and I've had exactly zero cases where memory corruption was an issue.
It's OK for corporate systems, but complete overkill for personal setups.
My personal files are ultimately a lot more important to me and much more irreplaceable than any files at work.
I'd never run a NAS without ZFS and ECC.
I don’t think I’ve ever had ECC, and have never had any issues. What kind of problems would you expect to see?
Well, part of it is that you likely won't see the problem until long after it happens. But on servers with ECC and reporting, I saw several distinct patterns:
a) 99%+ (or thereabouts) of the servers had zero reported errors over their lifetime. Memory usually works; no problems experienced.
b) Some of the servers reported one error, one time, and then all was well for the rest of their life. If this happens without ECC and you're lucky, it hits some memory that doesn't really matter and it's no big deal. Or maybe it crashes your kernel because it landed in the right spot (flip a bit in a pointer or return address and it's easy to crash). Or maybe it flipped a bit in a pending write, so your file is stored wrong and the ZFS checksum is calculated over the wrong data. If you're really unlucky, you could write bad metadata and have a real hard time mounting the filesystem later.
c) Some servers reported a correctable error at the same address once a day: probably one stuck bit, so any time data transits that address, that bit is still stuck, and that will likely cause trouble. If it's used for kernel memory, you'll probably end up crashing sooner or later.
d) Some servers had a lot more RAM errors: sometimes a slow ramp to a hundred a day; once or twice a rapid ramp to so many that the system spent most of its time handling machine-check exceptions - it managed to stay online but developed a queue it could never drain. Once you're at these levels, you'll probably get crashes, but you might write some bad data first.
RAM testing helps on systems without ECC, but you won't really know if or when the RAM has gone from working 100% of the time to working almost 100% of the time. I have a desktop that ran fine for a while and tests fine, but crashes in ways that have to be bad RAM, and removing one stick seems to have fixed it.
Unshielded RAM is subject to bit flips. Non-ECC RAM always carries this risk in general computing, but the problem is compounded when you run a filesystem like ZFS, which uses memory as a significant staging area for its write cache.
If it would hugely impact your life if a bit were flipped from a 0 to a 1 in your stored data - say you make videos, or store your bitcoin wallet key on your NAS - you are running a risk by not using ECC.
You may never have had issues with non-ECC, and may never have any. Your car may never be stolen if you leave the keys in it, either - but it's not a good risk proposition for most people.
The "personal files" I really want to keep safe amount to around 500MB of truly important data (pdfs, scanned documents, text files etc) and ~200GB of images and videos.
Both of which are 3-2-1 backed up.
Adding an ECC motherboard to my NAS would cost more than a quarter century of cloud storage for that amount of data.
The rest of my terabytes of data would be inconvenient to lose, but not fatal. In the worst case I'd need to track down a few rare DVDs and re-rip them.
Keep in mind that backups are a solution to a different problem than ECC.
If a file is silently corrupted, you could be backing up the corrupt file for years, and by the time you discover the problem it has spread to every available backup.
As for my anecdote: I had a computer with 3 HDDs in a RAID 5 holding some of my very early programming projects and various other things I wish I still had. I don't have them any longer, because something - I'm assuming memory - was silently failing, and over 40% of the files were turned into gibberish and random binary bytes.
I now use ECC EVERYWHERE. My laptop, my desktop, my little home server - all ECC. Because ECC is cheap and provides a lot of protection for very little effort on my part.
Which laptops support ECC?
If you just want to buy something that'll "just work", the Lenovo P16[1] is ECC-capable from the factory. Basically anything AMD "should" support ECC, though it may need to be turned on in the BIOS. The problem with "should" is the trial-and-error you'll have to do to find a working combination; personally, though, I've never had much trouble getting ECC working.
[1] https://www.lenovo.com/us/en/p/laptops/thinkpad/thinkpadp/th...
Note that AMD APUs prior to the 5000 series only supported ECC on the "PRO" models. For example, the Ryzen 3 PRO 3200G supports ECC, but the Ryzen 3 3200G doesn't.
You might have been leaving fire pits to burn out overnight for 20 years, but it only takes one errant spark to show that the "overkill" of wetting the ashes isn't overkill.
There are various trade-offs you can make, depending on your filesystem, OS tooling, and hardware, which can mitigate the risks in different ways. But non-ECC invites a lot of risk. How often are you checksumming your backups to validate integrity? It seems unlikely you've had zero memory corruption over 20 years; more likely you didn't notice it, or you run a filesystem whose tooling handles it.
I've been using ZFS on my home NAS without ECC for well over a decade and never had any problems. I've seen people making this claim since before I started using ZFS, and it seems so unnecessary for some random home project.
Unless you've verified hashes of your files over time, you may be having problems without realizing it.
They did mention ZFS, so they have verified hashes of each file block. I hope they're scrubbing, and have at least one snapshot.
Why would one snapshot help?
One snapshot would help because, if EVERYTHING collapses and you need data recovery, the snapshot provides a base point for the recovery. This should allow better recovery of the metadata. Not that this should EVER happen - it's just a good idea. I use Jim Salter's sanoid/syncoid to make snapshots, age them out, and send data to another pool.
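The replication half is a one-liner once sanoid is taking the snapshots; a sketch with made-up pool/dataset names:

    # replicate (incrementally) to a second pool in the same box
    syncoid tank/data backuppool/data
    # or push to another machine over ssh
    syncoid tank/data root@backuphost:backuppool/data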
I agree that ECC is a damn good idea - I use it on my home server. But, my lappy (i5 thinkpad) doesn't have it.
ZFS does nothing to protect you against RAM corrupting your data before ZFS sees it. All you'll end up with is a valid checksum of the now bad data.
You can Google for more, but I'll just leave this, from the first page of the OpenZFS manual:
[1] https://openzfs.readthedocs.io/en/latest/introduction.html

I've heard people say this, like I said, since before I started using ZFS, and I've never had an issue with a corrupted file. There are a few things that could be happening: I'm the luckiest person who has ever lived; these bit-flip events don't happen nearly as often as people like to pretend; or, when they do happen, they aren't likely to be a big deal.
If all you have on your NAS is pirated movies, then yes.
But with more sensitive data it might matter to you. RAM can go bad like HDDs can, and without ECC you have no way of telling. ZFS won't help you here if the bit flip happens in the page cache: the file gets corrupted in RAM, and ZFS will happily calculate a checksum over that corrupted data and store it alongside the file.
I have some JPEGs with bit flips. I could tell because they display ugly artifacts at the point of the bit flip. (You can see the kind of artifacts I'm talking about here: https://xn--andreasvlker-cjb.de/2024/02/28/image-formats-bit...)
I happened to have archived the files to CD-Rs at some point. I was able to compare those archived copies to the ones that remained on my file server; there were bit flips scattered randomly through some of the files.
After that happened, I started hashing all of my files and comparing the hashes whenever I migrate files during server upgrades. Before using ZFS, I also periodically verified the file hashes with a cheapo Perl script.
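The same idea works as a couple of lines of shell rather than Perl; a sketch with made-up paths:

    # write a dated manifest of hashes for everything on the share
    find /mnt/nas -type f -print0 | xargs -0 sha256sum > /root/hashes-$(date +%F).sha256
    # months later, re-verify everything against an older manifest
    sha256sum --quiet -c /root/hashes-2024-01-01.sha256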
I believe ZFS does periodic checksumming (scrubbing).
Strictly speaking I don't think ZFS itself does, but it is very common for distros to ship a cronjob that runs `zpool scrub` on a schedule (often but not always default enabled).
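For example, Debian's zfsutils-linux ships a cron job along these lines (scrubbing monthly, if I remember right); a hand-rolled equivalent for a weekly scrub might be:

    # /etc/cron.d/zfs-scrub -- kick off a scrub of 'tank' early Sunday morning
    # (zpool scrub returns immediately; the scrub itself runs in the background)
    0 3 * * 0   root   /usr/sbin/zpool scrub tank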
If a single byte flips in a 4-10GB video file, nobody will ever notice it.
There aren't that many cases where it actually matters.
I know ECC is a special type of RAM, but how does it help a NAS/RAID setup?
If you're unlucky enough to hit a memory error in one of the intermediate buffers a file passes through while being copied from one computer to another, an incorrect copy of the file may get written to disk.
When running software RAID, memory errors could also cause data to be replicated erroneously, raising an error the next time it's read. That said, if the memory is flaky enough for these errors to be common, the operating system will very likely crash frequently, and the user will know something is seriously wrong.
If you want to make sure files have been copied correctly, you can flush all kernel buffers and run `diff -r` between the source and destination directories to make sure everything matches.
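A sketch of that check, with made-up paths - dropping the page cache forces the re-read to come from disk rather than from RAM:

    sync                                         # flush dirty pages to disk
    echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop page/dentry/inode caches
    diff -r /data/photos /mnt/nas/photos         # byte-compare source vs. copy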
You're probably far more likely to experience data loss from human error or external factors such as a power surge than from bad RAM. I personally test memory thoroughly before a computer goes into service and assume it's okay until something fails or it gets replaced. The only machine I've ever seen corrupt random data on disk was heavily and carelessly overclocked (teenage me cared about getting moar fps in games, not about having a reliable workstation lol)
I wonder whether something like Syncthing would notice a hash difference with data corruption caused by such a memory error? And whether it’d correct it or propagate the issue…
Data that's about to be written to disk often sits in RAM for some period of time; bit flips in non-ECC RAM can silently corrupt it before it's written out. ZFS doesn't prevent this, though it might detect it with checksumming.
https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-y...
ECC for nerds is like gear heads arguing about motor oil.
Only if one set of gear heads was arguing that you don't really need it.
This is only one disk, so you're way more likely to lose all your data to an ordinary single-disk failure than to some RAM errors.
Hate to see this downvoted, because I have personally lost files to ZFS on failing non-ECC memory. It was all my most-used stuff, too, because those were the files I was actually accessing: ZFS would compare the checksum in memory to the checksum on disk and decide the disk copy was the one that was wrong. I noticed via audible blips appearing in my FLACs, and verified with `flac -t <bad_flac>`.