
Why I self host my servers and what I've recently learned

apitman
37 replies
2d19h

I think we'll see some stratification in the self hosting community over the next few years. The current community, centered around /r/selfhosted and /r/homelab, is all about articles like this. The complexity and learning are sources of fun and an end in themselves. That's awesome.

But I think there's a large untapped market for people who would love the benefits of self hosting, without needing to learn much if any of it.

I think of it as similar to kit car builders vs someone who just wants to buy a car to use. Right now, self hosting is dominated by kit cars.

If self hosting is ever going to be as turnkey as driving a car, I think we're going to need a new term. I've been leaning towards "indie hosting" personally.

burningChrome
10 replies
2d17h

I've wanted to do this for years, but trying to secure a server is the stuff of nightmares for me.

Are there resources out there covering what I need to know to make sure my setup is secure enough and I'm not just leaving it wide open for people to hack? I've always been interested in hosting my own email server, but the security parts have kept me from doing it.

Any resources you can point me to would be much appreciated.

don-code
5 replies
2d16h

I do self-host my mail, and I've done so since about 2005! It gets harder and harder to do with every passing year. Some notable things that have changed since then:

Many ISPs now block inbound port 25, required to receive mail via SMTP. It's quite hard to get an ISP to unblock this. My university wouldn't at all, and I left a laptop under my parents' couch for four years to do it instead. Some time later, Comcast began blocking it as well, and the only way to get it unblocked was to call support, work your way up the phone tree to someone who realized you were talking about inbound rather than outbound (no, this is _not_ a misconfiguration in Outlook), and get them to push a special config to your cable modem, which would be reset whenever another config was auto-pushed or your modem lost power. You may notice that implies extended downtime when Comcast, my electric service, or my physical operations (read: I unplugged it by accident) suffer a failure.
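If you want to check whether your own connection is affected, a quick sketch of a check from a machine outside your network (the IP is a placeholder):

    # from a host outside your network, see whether inbound TCP port 25 is reachable
    # (203.0.113.7 stands in for your public IP)
    nc -vz -w 5 203.0.113.7 25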

Many mail servers (e.g. Gmail) require you to have reverse DNS that matches your forward DNS. Getting your ISP to understand what they're asking you to do is... difficult. The last time I changed ISPs, it took about a week to get this done. Comcast batches these updates weekly, and support wanted to double, triple, and quadruple-check that what I was asking for was, indeed, what I was asking for.

There are a bunch of anti-spam measures in effect that use DNS: SPF and DMARC are table stakes for most mail servers (again, e.g. Gmail) to speak to you. I've so far managed to get by without setting up DKIM, but I suspect that's next.
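For reference, SPF and DMARC are just DNS TXT records; a rough sketch (the domain and policy values are placeholders, not my real setup):

    # hypothetical records for example.com:
    #   example.com.         TXT  "v=spf1 a:mail.example.com -all"
    #   _dmarc.example.com.  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
    # check what's actually published:
    dig +short TXT example.com
    dig +short TXT _dmarc.example.com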

The worst part, by far, is spam blacklists. Many blacklists will already have your IP address listed by policy - you're a _residential IP_, not to be trusted. The Spamhaus PBL, for instance, automatically blocks all Comcast residential IPs. There is nothing you can do about this, and many mail servers will refuse to speak to you if you're on a blacklist.
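You can at least check where a given IP stands; a sketch of a DNSBL lookup (placeholder IP, and note Spamhaus may not answer queries coming from large public resolvers):

    # to check 203.0.113.7 against Spamhaus ZEN, reverse the octets and query the list zone;
    # any A-record answer means the IP is listed
    dig +short 7.113.0.203.zen.spamhaus.org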

These days I am paying Comcast an arm and a leg for business-class service, which both gives me unbridled inbound port 25, and also a (luckily!) clean IP on block lists.

apitman
1 replies
2d16h

Thanks for the writeup. Very interesting.

> Many mail servers (e.g. Gmail) require you to have reverse DNS that matches your forward DNS

What does this look like on a technical level, ie records and whatnot? I'm not super familiar with reverse DNS.

rstupek
0 replies
2d13h

For reverse DNS you'll need the help of your ISP (the owner of the IP address) to delegate naming of the IP, or for them to set it up on their end.
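Concretely, the check receiving servers do looks roughly like this (names and IP are placeholders):

    # forward: the name your mail server announces resolves to your IP
    dig +short A mail.example.com        # -> 203.0.113.7
    # reverse: the PTR record for that IP, published by whoever owns the IP (usually the ISP),
    # should point back to the same name
    dig +short -x 203.0.113.7            # -> mail.example.com.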

BobbyTables2
1 replies
2d14h

Why pay for business service?

$6/month gets you a cloud VM that can be used to proxy incoming connections to your home…

supertrope
0 replies
2d13h

Many cloud provider IP ranges are on spam ban lists.

citizenpaul
0 replies
2d11h

> I left a laptop under my parents' couch for four years

My coworker's house burnt down because of doing exactly this. Though I don't think it was hosting anything, just being put out of the way when not in use.

layer8
1 replies
2d17h

A Linux server (e.g. stock Debian) on a well-reputed VPS is pretty secure by default, in my experience. Use software packages from the Linux distribution whenever possible (certainly for email software) and configure unattended security updates.
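On Debian/Ubuntu that's roughly (a sketch, assuming an apt-based install):

    sudo apt install unattended-upgrades
    # enables the periodic apt timer that applies security updates automatically
    sudo dpkg-reconfigure -plow unattended-upgrades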

Note that you generally can’t host email from a residential IP, so you’ll probably want to use a VPS. Making services on your home network publicly accessible (i.e. not just via VPN) obviously comes with more risks; personally I wouldn’t do that.

transpute
0 replies
2d17h

> Making services on your home network publicly accessible

Tailscale's private-overlay-on-public-internet has made it feasible to provide services to a few trusted clients, even behind NAT.

Tailscale app on Apple TV can be an exit node, e.g. travelers can access geo-restricted content via their residential broadband connection.

apitman
1 replies
2d16h

I would echo others here and just use a cheap VPS to experiment with. Then you have much less to worry about.

How technical are you?

burningChrome
0 replies
2d2h

I'm pretty adept technically. I've been a front-end developer for about ten years, so using Wordpress and Drupal and setting up sites either manually or via an ISP is pretty familiar. In that regard, using a VPS is also pretty familiar, so I will most likely start there.

telgareith
9 replies
2d19h

The term is "managed VPS" and/or some variation of "marketplace image". I think it's Linode that has a particularly... vibrant (not in an entirely positive way) selection. AWS' is pretty good, but not as diverse, I assume due to the increased technical aptitude of the average customer and the learning curve.

apitman
8 replies
2d19h

One thing I strongly agree with you on here is being open to the cloud. Self hosting strongly favors running on your own hardware, but indie hosting focuses more on the tangible benefits, i.e. data ownership, mobility which breeds competition, etc.

That said, I think the VPS marketplace is still too complicated. What about updates, backups, TLS certs, domains, etc?

layer8
3 replies
2d17h

> What about updates, backups, TLS certs, domains, etc?

You are right that one has to take care of those individually. For domains, however, I would say that it’s important that you manage them separately from the VPS provider, because this lets you switch VPSs easily at any time. For TLS certs you use something like certbot, or a web server like Caddy that has it built-in. It’s generally straightforward. VPS providers usually also offer backup solutions. If you use software from a Linux distribution like Debian or Ubuntu, automated security updates are easy.
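A minimal certbot run looks roughly like this (a sketch, assuming Debian/Ubuntu with nginx; plugins for other web servers exist):

    sudo apt install certbot python3-certbot-nginx
    # obtains a Let's Encrypt certificate for the domain and sets up automatic renewal
    sudo certbot --nginx -d example.com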

apitman
2 replies
2d16h

> It’s generally straightforward

For me, but what about my grandma? I want her to be able to live in a world where she can use her old smartphone to run an Immich server by simply installing an app like any other, then going through a simple OAuth flow to create a tunnel to the net so her friends can access her photos from a link she gives them. That's the level of UX I'm pursuing.

fragmede
1 replies
2d16h

Why does grandma need that level of UX and to self host it? Why doesn't any of her loving grandchildren run an unRAID server at their house and help her out?

apitman
0 replies
2d16h

What if she doesn't want any of us to have unfettered access to all her data?

transpute
1 replies
2d17h

> Self hosting strongly favors running on your own hardware

In comparison, tenant (storage, colocation, cloud, VPS) hosting contracts often encompass Terms of Service, metered quotas/billing, acceptable use definitions, and regulatory compliance.

> data ownership, mobility which breeds competition

Historically, the buyers of commodity "web hosting" and IaaS have benefited from many competing vendors. Turnkey vertical SaaS often have price premiums and vendor lock-in. If "indie hosting" gains traction with easy to deploy and manage software, there may be upward pressure on pricing and downward pressure on mobility.

apitman
0 replies
2d16h

Great points.

This is one reason I think it's important to build on protocols. It's an attempt to "lock things open" and foster competition from the beginning. For example, my product TakingNames[0] builds on a simple, open OAuth2 protocol for delegating authority over domains. Anyone could implement a competing service in a matter of days, forcing me to compete on quality/price/etc.

Another project I have is focused on bringing tunneling a la ngrok or Cloudflare Tunnel to the masses. There are many[1] tunneling services. This will be the first one built on a simple, open protocol for both auth and transit.

[0]: https://takingnames.io

[1]: https://github.com/anderspitman/awesome-tunneling

indigodaddy
1 replies
2d16h

As someone mentioned, regarding TLS, Caddy makes that REAL easy, as in pretty much touchless and the most dead simple config file you’ve ever seen
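For example, something like this is the whole config for a site reverse-proxied to a local app, with certificates handled automatically (a sketch; the domain, backend port, and file location are assumptions based on the Debian/Ubuntu package):

    cat > /etc/caddy/Caddyfile <<'EOF'
    example.com {
        reverse_proxy localhost:8080
    }
    EOF
    sudo systemctl reload caddy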

apitman
0 replies
2d16h

I use Caddy every day. My grandma, not so much.

transpute
4 replies
2d19h

If Apple ships a Home Intelligence competitor to $15K Tinybox, it could be called "lux hosting".

[1] https://tinygrad.org/#tinybox

ajcp
2 replies
2d13h

Hard to take this product/company seriously when every piece of copy on their site feels toxic, condescending, or dismissive.

transpute
1 replies
2d13h

What's an alternative server for self-hosted AI with comparable price/performance?

ajcp
0 replies
1d17h

Oh, I truly have no idea, but that wasn't my point. Their product could be best (or the only) in class for all I know.

PLG88
4 replies
2d9h

I love this idea. Personally, I would love to self-host, but don't due to not being technical enough to use a command line.

I am from a non-technical background but have learnt loads of technical stuff over the years, to the extent that I can describe many complex topics, present, or write stuff for technical people. But my non-computer-science background means I am not familiar with the command line. I have used it and understand it, but have not 'learnt its language'.

Does any 'turnkey' self-hosting solution exist which provides an abstraction, so that I can just deal with GUIs and not the command line to start (and learn on the way)?

In fact, that would be a great way to learn.

pxc
0 replies
1d14h

> and learn on the way

In case you're interested in resources for this: I think _Learn Enough Developer Tools to Be Dangerous_ is a great start. I've been guiding my roommate through it as he studies, a chapter a week. If you do pick it up, just skim the chapters on editors— their examples are overly specific to a choice of editor with few outstanding strengths, and if you prefer a different editor you may struggle to find equivalents for some of the hokey examples the book uses.

https://www.oreilly.com/library/view/learn-enough-developer/...

Special editions/compilations of Linux magazines can also be a very good source of high-quality tutorials, including for CLI stuff. These are nice because while they include general introductions, they're mostly comprised of bite-sized tutorials that you can pick and choose according to your interest. I also like them because they're available in print, colorful, and shiny, and thoughtfully laid out, plus there are no ads— very pleasant compared to the web in many ways. Linux Pro Magazine did one on shell topics this year, back in February: https://www.linuxpromagazine.com/Resources/Special-Editions/...

Such magazines also include step-by-step guides for setting up services that I was able to follow (with some trepidation!) when I was just a kid who was new to Linux and still honestly a bit scared of the command line. Linux Format is really good for this because it's targeted at desktop Linux and computer hobbyists broadly rather than programmers or IT professionals. Their guides assume little to no familiarity with the command line, so they often include reminders of little bits of command line basics rather than just assuming you share that context with the authors: https://linuxformat.com/

Besides web-based management interfaces for servers, like Proxmox, you might consider getting started by just running some services on a spare desktop computer. openSUSE has a long history of emphasizing GUI administration tools, so many relatively 'advanced' tasks for it do not require the command line, which is somewhat different from other distros. (If you give it a try, its GUI configurator, YaST2, will strike you at first glance as having a dated look. This is intentional— continuity is a priority for YaST, so GUI-based tutorials from many years ago will still be accurate.) It's also a distro with good guts and nice CLI tools, so you won't necessarily outgrow it after you get your feet wet with the command line.

nakkaya
0 replies
2d6h

Yes, there are. I would suggest going through the subreddit mentioned above, /r/selfhosted. There are GUI tools and NAS products that will let you host Docker images. As for the CLI, ask an LLM; for the simple, common commands you'll be dealing with, they're pretty good at it.

gsck
0 replies
2d2h

Definitely worth just sitting down and learning how the command line works. It's not as scary as it looks.

No need to have any fancy comp-sci background, hell I have an arts degree!

77owen
0 replies
17h47m

As far as turnkey solutions go, coolify.io is the one I’ve seen floating around recently.

austin-cheney
3 replies
2d19h

Absolutely. I got my wife hooked on self hosting too.

I am currently writing a new web server to solve for this space. It is ridiculously simple to configure for dummies like me, has proxying and TLS built in, serves HTTP over WebSockets, and can scale to support any number of servers, each supporting any number of domains, provided ports are available. The goal is maximum socket concurrency.

I am doing this just for the love of self hosting demands in my own household. Apache felt archaic to configure and my home grown solution is already doing things Apache struggles with. I tried nginx but the proxy configuration wasn’t simple enough for me. I just want to specify ports and expect magic to happen. The best self hosted solutions ship as docker compose files that anybody can install within 2 minutes.

fragmede
2 replies
2d19h

Fascinating! What didn't you like about caddy?

austin-cheney
1 replies
2d17h

I have not tried caddy. I will look that up.

mrinfinitiesx
0 replies
2d17h

You're going to be like 'Oh' once you do try it. It's worth it.

mr_toad
0 replies
2d12h

> But I think there's a large untapped market for people who would love the benefits of self hosting, without needing to learn much if any of it.

Isn’t that what devices like My Cloud are aimed at?

fyi626367
0 replies
1d22h

FreedomBox is (was?) a pretty good system for making self-hosting things accessible and easy. A couple of clicks was usually all it took.

https://freedombox.org/

kkfx
20 replies
4d12h

A small suggestion about resources: try using NixOS/Guix System instead of containers to deploy home services; you'll discover that with a fraction of the resources you get much more, stability, documentation, and easy replication included.

Containers now, like full-stack virtualization on x86 before them, are advertisement-driven: they're pushed because proprietary software vendors and cloud providers need them. Others do not need them at all, and devs who work for themselves, as well as generic users, should learn that. If you sell VPSes et al., you obviously need them; if you build your own infra from bare metal, adding them just wastes resources and adds dependencies instead of simplifying life.

snowpalmer
10 replies
3d19h

I agree that removing the container would be better on resources.

However, most self-hosted software is already "pre-packaged" in Docker containers. It's much easier to grab that "off-the-shelf" than have to build out something custom.

kkfx
7 replies
2d22h

In NixOS/Guix System there is no need for such a package; the configuration language/package manager takes care of everything, configuration included.

Let's say you want Jellyfin?

    jellyfin = {
      enable = true;
      user="whatyouwant";
    }; # jellyfin
under services and you get it. You want a more complex thing, let's say Paperless?

    paperless = {
      enable = true;
      address = "0.0.0.0"; 
      port = 58080;
      mediaDir = "/var/lib/paperless/media";
      dataDir = "/var/lib/paperless/data";
      consumptionDir = "/var/lib/paperless/importdir";
      consumptionDirIsPublic = true;
      settings = {
        PAPERLESS_AUTO_LOGIN_USERNAME = "admin";
        PAPERLESS_OCR_LANGUAGE = "ita+eng+fra";
        PAPERLESS_OCR_SKIP_ARCHIVE_FILE = "with_text";
        PAPERLESS_OCR_USER_ARGS = {
          optimize = 1;
          pdfa_image_compression = "auto";
          continue_on_soft_render_error = true;
          invalidate_digital_signatures = true;
        }; # PAPERLESS_OCR_USER_ARGS
      }; # settings
    }; # services.paperless
Chromium with extensions etc?

    chromium = {
      enable = true;
      # see Chrome Web Store ext. URL
      extensions = [
        "cjpalhdlnbpafiamejdnhcphjbkeiagm" # ublock origin
        "pkehgijcmpdhfbdbbnkijodmdjhbjlgp" # privacy badger
        "edibdbjcniadpccecjdfdjjppcpchdlm" # I still don't care about cookies
        "ekhagklcjbdpajgpjgmbionohlpdbjgc" # Zotero Connector
        # ...
      ]; # extensions
     
      # see https://chromeenterprise.google/policies/
      extraOpts = {
        "BrowserSignin" = 0;
        "SyncDisabled" = true;
        "AllowSystemNotifications" = true;
        "ExtensionManifestV2Availability" = 3; # sino a 06/25
        "AutoplayAllowed" = false;
        "BackgroundModeEnabled" = false;
        "HideWebStorePromo" = false;
        "ClickToCallEnabled" = false;
        "BookmarkBarEnabled" = true;
        "SafeSitesFilterBehavior" = 0;
        "SpellcheckEnabled" = true;
        "SpellcheckLanguage" = [
                           "it"
                           "fr"
                           "en-US"
                         ];
      }; # extraOpts
    }; # chromium
Etc., etc. You configure the entire deployment and it gets generated. A custom live image, with auto-partitioning and auto-install? Same thing. A set of similar hosts on a network (NixOps/Disnix), and so on. The configuration language does it all: fetching sources and building if a pre-built binary isn't there, setting up a DB, setting up NGINX plus Let's Encrypt SSL certs. There are per-derivation (package) options you can set, some you MUST set, defaults, etc. It's MUCH easier than anything else. The only issue is how many ready-made derivations exist: in packaging terms Guix is very well placed, and NixOS has more than Arch, even if something will always be missing or incomplete until devs learn the system on their own and start using Nix/Guix to develop as well, so deps are really tested in dedicated environments and so on. And users always get a clean system, and can switch back and boot into a previous version and so on.

Arelius
2 replies
2d12h

I mean, I don't know what that is, but looking at the documentation, it should be something like:

  config = {
    services.uptime-kuma = {
      enable = true;
      settings = {
        PORT = 3001;
      };
    };
  };

kkfx
1 replies
2d3h

It's a Node.js web app, so it might need some NGINX (or other web server) settings, possibly ACME/Let's Encrypt, etc. A few SLoC to customize, and that's the very point: being able to customize anything without a two-stage process, partly by hand in a terminal or with separate tools like Ansible.

Arelius
0 replies
2d

Yeah, none of those seem to be covered in the parent post, so the Nix example is equally simplistic.

zarzavat
0 replies
2d14h

Nix/NixOS and Docker work in fundamentally different ways so you have to decide which method of operation is most suitable for your use case.

If you use Docker, then each container requires its own OS image, which is fine if you’re running on a server and running other people’s software, but if you want to develop your own containers on a laptop then that convenience becomes an inconvenience. Nix is more lightweight.

pxc
0 replies
1d21h

Support for uptime-kuma is built into NixOS and documented here: https://search.nixos.org/options?channel=24.05&from=0&size=5...

As to your question: on NixOS there's no need to fuck around with volume mounts or port forwards or specifying a source image or explicitly setting a restart policy, so the majority of what is defined in that file, which is pure boilerplate, disappears. And that docker-compose file doesn't configure any non-default settings or install any optional dependencies, so all you're left with for the NixOS equivalent is:

  services.uptime-kuma.enable = true;
so about 0.1x the amount of configuration as appears in that docker-compose.yml file.

But imagining for a moment that NixOS didn't already have built-in support for running uptime-kuma as a service, or even include uptime-kuma in its package repositories, I would still prefer to deploy it on NixOS, if I were deploying it for self-hosting purposes. Chasing the docker-compose.yml files of others is like distrohopping. The distrohopper bounces aimlessly from one operating system to another, his choice of fundamental operating system tools dictated to him by a string of out-of-the-box experiences which are determined by a complex enough confluence of factors that they are significantly random. When his distrohopping solves a problem, it causes another one, and his mode of engagement guarantees that neither problem is a thing he'll ever understand or have real control over.

Unlike the distrohopper, the distrochooser is patient. She is willing, when needed, to RTFM. For the price of learning how the software she runs is operated, and how the separate programs and libraries included in her OS fit together to form a stack, she is free. The distrochooser can run any Linux distro she wants, because she knows how to bring over whatever tools she needs. As a result, she has an opportunity that the distrohopper does not: the opportunity to choose basic tools— a package manager, an operating system, a deployment method, a configuration management system— based primarily on technical merits.

In so much of our lives, and perhaps especially our professional lives, we don't use the tech we use because it's good. We don't use it because it's fun, or pleasant, or even reliable. We use it because it's bundled; the company already paid for it. Or we use it because of network effects: some application expects it, or it's the only platform some compliance-mandated tool supports, or its file format is widely abused for interchange because the vendor is a monopolist. Or we use it because it's the thing we already know and we're on a tight deadline.

Hobby computing is special. It's a context where there is actually room to choose our software for virtues that are intrinsic to it, or for reasons that are personal to us. Where we can choose tech just because it is beautiful, or because it was made by a friend, or because it feels good to use, or because it respects our freedom. To shrink our horizons when the code is transparent and freely available, when mastery is right there for the taking by anyone who wants it, is to erase one of the vital differences between the kind of computing that personal self-hosting is and corporate computing. It's a terrible waste.

I want to use NixOS because the iteration loop for working with it is fast, safe, predictable, and ultimately very satisfying. I want to use NixOS because it at least attempts to solve fundamental problems of building and distributing software that container-based solutions just punt on. I want to use NixOS because the interfaces it offers for managing services are outstandingly flexible and feature-complete. Why would I just surrender to worse-is-better, to network effects, to incidental bullshit— and give all of that up— when I can just roll up my sleeves and write the frickin' NixOS module? I can run whatever applications I like *and* run it on an OS I like, managed by tools I like. So why the hell wouldn't I?

kkfx
0 replies
2d13h

    services.uptime-kuma = {
      enable = true;
      appriseSupport = true;
      settings = {
        ...
      }; # settings
    };
These are examples; you have to write down the system configuration yourself, meaning picking uptime-kuma, choosing which deps you want, and so on. There's no need to "run a oneliner" on a home server and then SSH in to tweak things by hand, with no reproducibility, or to hope someone else has already made those tweaks in something ready-made.

Doing the pick-a-oneliner thing is a classic developer-in-Silicon-Valley-mode move, and it causes a classic set of disasters: wasting gazillions of resources for nothing, not noticing an image that ships with someone else's SSH authorized key, keeping vulnerable software running, and so on, because he or she just needs a oneliner. No matter if it pulls down 10 images, consumes 1 TB of storage, or has stellar CPU requirements: in SV mode you do not care, and anyway you do not pay for the iron, because it's the company's, or it's a cloud service where you never even see how much load you generate and the AWS bill goes to someone in accounting. On the other side, of course, the IT business likes this, because they can sell you a ready-made image; a few lines of code can't be priced very high, while a service that provides a big file set can be. Selling a VPS with "hey devs, you literally just need to click here and we do the rest" also pays very well. For those who sell, of course. Besides, that mindset creates an enormous number of layers which, in the long run, produce infrastructure so heavy and complex that no one understands it.

Just look at what Home Assistant does. By default its devs suggest running it on a dedicated machine (!), so they imagine you buying a single-board computer just to run one damn app, ending up with a model where a gazillion single-board computers are scattered around. The second option is containers. So to run HA you need, let's say, 1-3 GB of stuff, with nearly no mention of the simplest pip install. And zero mention that, unlike pip install, NixOS does not need to download a gazillion of basic binary Python deps: if you already have them in your /nix/store, you just link them into as many packages as need them, keeping just one copy.

In the end you craft your system as a single application, a single configuration, which demands 1/10 of the resources used otherwise, has far fewer things that can break, is easy to know entirely, etc.

You might say that's the classic operations-vs-devs divide.

transpute
0 replies
3d16h

NixOS improves the reproducibility of both self-hosted software and configuration state.

pxc
0 replies
1d22h

> However, most self-hosted software is already "pre-packaged" in Docker containers. It's much easier to grab that "off-the-shelf" than have to build out something custom.

Imo, the quality and documentation level of an application's build system is a valuable signal in determining its overall health and competency. It usually (though not always) ends up being that well-maintained software written for modern runtimes is very easy to build from source and run. Even if I do end up using the developer's container image, I generally want to check out their manual deployment documentation.

apitman
3 replies
2d16h

I've heard good things about Nix's dependency management. Does it seamlessly handle the case where you have 3 different apps that all require 3 different versions of Python with different Python dependencies?

Also, does it offer any isolation between apps and between apps and the OS, especially the filesystem?

kkfx
0 replies
2d5h

Yes, by design: you do not have an FHS structure where "installing" means copying this into /usr/share, that into /usr/lib... All packages just see a small "virtual" root tree, meaning a network of symlinks.

You can choose for package A to see packages B and C, but not D. Different versions simply mean different packages, because you have the package on its own, say the current stable version, but also package_major_minor for the various supported major or minor versions. You can also choose to fetch sources from a specific commit of a public repo, build them, and link them into the system.

Packages are not "isolated" in the container sense, like small userlands on a common kernel, but in the sense of having different views of the common system, so you avoid wasting gazillions of storage, RAM, and CPU. Plus, everything still comes from upstream or from yourself; there are no outdated, forgotten deps left around.

Plus, with everything configured in the config (well, not mandatory, but that's the typical way), you have a fully reproducible system. Maybe not binary-reproducible, meaning "packageA is now version x.y" when you rebuild the system, but anything can be rebuilt in the current state of nixpkgs/Guix from a few kB of text, typically versioned in a repo. Updates do not overwrite anything: they put the new versions in a special tree, /nix/store or /gnu/store, and update symlinks accordingly; if something does not work, you restore the previous state, at any point before garbage collection.

Practically, it does the same thing as illumos with IPS (the package manager) integrated with ZFS (and the bootloader); here, instead of ZFS clones and snapshots, you have a poor man's version with symlinks.

There is no isolation by default, meaning it's a single system, but you can "partition" the system in various ways. Of course, on Linux there are no illumos zones or FreeBSD jails; the state of paravirtualization is well behind those Unices.
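As a rough illustration of the first point with throwaway shells (attribute names assumed from a recent nixpkgs):

    # two independent environments with different Python versions and different deps,
    # coexisting on the same machine without touching each other
    nix-shell -p python311 python311Packages.requests --run "python3 --version"
    nix-shell -p python312 python312Packages.flask    --run "python3 --version"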

granra
0 replies
2d11h

The first, yes.

The second, you can do this by setting certain options in the systemd service for your apps, but this is true for any systemd distro.

Arelius
0 replies
2d12h

> Does it seamlessly handle the case where you have 3 different apps that all require 3 different versions of Python with different Python dependencies?

Ohh yes! That is exactly what it does, in a repeatable and documented way.

But it also does a lot of work to do this, and it doesn't hide the machinery of that so much. That means you're likely to have to care about how it does all that work. And if you start doing something weird, you might have to do a lot of work yourself.

VTimofeenko
3 replies
2d17h

Containers allow running software that does not have a nix package available and one could not be bothered to write. My lab is fully on NixOS, but a couple of services are happily chugging along as containers in podman.

kkfx
2 replies
21h34m

So you prefer manually handling such containers, which you probably don't really know well enough, perhaps even for serious usage, instead of writing a derivation? No reproducibility, etc.

For quickly testing something, nothing to say, but for real usage, albeit in a personal homelab...

pxc
1 replies
13h42m

An escape hatch like that can be really nice for trying software out on a temporary basis, before you know if you care about it enough to write a package and a service module.

Given time limitations, I can imagine living with some applications like that for quite some time, their packaging sitting at the bottom of a long todo list. :)

kkfx
0 replies
11h21m

Now dream (it's a dream, but perfectly feasible technically) of a world where most devs have found and understood the NixOS/Guix System way to develop and manage whole deployments. We would have various distros with different configuration languages like Nix and Guix, and some less diverse; since all devs would simply "package", meaning describe, their code for their own distro, how hard could it be to port the derivation/description?

Also try to compute how hardware requirements for whole infrastructures would plunge, thanks to an immense amount of overhead avoided at scale. The only unhappy ones? Those who sell services, because they do need containers for that. This would push another evolution: built-in, easy-to-tune isolation from a config, so instead of a VPS they could sell a system where you just upload your config, they import it, and produce an isolated FHS for you (something we already have, i.e. FHSUserEnv, but with tunable system isolation). You would get your VPS configured with just a few SLoC instead of manually or in wrapped ways via SSH, and they could sell much more on the same iron, with much better performance as well.

All of the above is perfectly possible TODAY; it's only a matter of mass knowledge, or the lack thereof.

otter-in-a-suit
0 replies
2d18h

We use nix at work. I'm not a huge fan - I find it too opinionated. Appreciate it for what it is, though, and understand its fans.

Since at $work, we run K8s + containers in some shape or form (as well as in... basically all previous jobs), using the tech that I use in the "real world" at home is somewhat in line with my reasoning about why the time investment for self hosting is worth it as a learning exercise.

ProllyInfamous
13 replies
2d17h

This is tangentially related, but I feel it is very wrong that so many smaller governments (e.g. smaller US cities) host "public information" on private servers (e.g. links to PDFs from a Google Drive)... or even worse, inside some walled garden (e.g. Facebook).

My own personal DNS does not resolve to any Google/Facebook products, reducing profiling; but by denying their ad-revenue, I also deny myself access to information which IMHO should be truly available to the public (without using a private company's infrastructure).

I absolutely understand that many people will just say "don't block them, then." My argument is that governments should not host public items on private servers.

throwaway8481
3 replies
2d14h

Tangentially, I really dislike walking into the DMV and seeing ads from private companies. I heavily dislike that ID checking and document verification is done by ID.me and others for what is a public service I pay for through my taxes.

Maybe for a while I can avoid submitting my documents and information to partners-of-the-DMV, but just like airport security it's a convenience tax. They do not value your time, and they will demonstrate it by putting you through extra hoops to coerce you into giving them everything.

anonexpat
2 replies
2d13h

Unfortunately, you can’t avoid having your information sold by the DMV in California. Likely others too.

How this is legal is beyond my comprehension.

throwaway8481
0 replies
1d14h

Yep. California DMV. 1 in 10 Americans live in California and these companies have our data.

ProllyInfamous
0 replies
1d6h

Honest-to-god, I do not have a permanent email address (I use burners, when necessary). It has been years since I received any SPAM (cause there's nowhere to receive).

Just over a year ago, I had a civil court action where the court REQUIRED I list my email address; when I wrote "none," the clerk was upset; eventually a judge required me to sign an attestation that I do not use email.

Just seemed ridiculous that Tennessee's court systems even ask for this, let alone assume everybody has/uses email. Plus, it's all public information...

transpute
2 replies
2d17h

> governments should not host public items on private servers

Some works of the US federal government are not subject to copyright and can be mirrored freely.

What licenses do city governments use to release public information?

stackskipton
1 replies
2d17h

Depends on the city/state laws, but the vast majority of the time it's all public domain under FOIA laws. The city/state won't say that because it's assumed.

transpute
0 replies
2d17h

Local newspapers could mirror public domain citygov content, providing a public service and growing their online traffic.

mr_toad
2 replies
2d12h

Getting a PDF published on a government website is a six month long process involving approval from a dozen managers, editors, and assorted hangers on. It’s little wonder people use things like Google drive and Dropbox.

treyd
1 replies
2d12h

So that should be an indicator that the government offices should just change their internal policies if people are going to go around them anyways.

bingo-bongo
0 replies
2d12h

I agree, but the keyword here is “just” - if you think 6 months is a long time for publishing a PDF, imagine trying to make actual changes to the same system :/

https://www.dailymotion.com/video/x36574i

kevin_thibedeau
1 replies
2d17h

I had body camera video sent to me over a "private" YouTube link. I would have welcomed GDrive over that. On the plus side, I took advantage of the automatic transcript generation to review the obnoxious things the officer said without having to watch it all.

ProllyInfamous
0 replies
1d6h

> automatic transcript generation

There is a neat application called "Whisper" which will translate/transcribe just about any media format, locally on your computer.

But I guess that's only necessary if bodycam footage isn't uploaded to YouTube (wow!).

> obnoxious things the officer said

My hope is for your swift and proper resolution, whatever the charges/incident. They're allowed to lie/bait/entrap; glad you got the footage.

wannacboatmovie
0 replies
2d13h

> My own personal DNS does not resolve to any Google/Facebook products, reducing profiling

This is incredibly silly.

If you smashed your computer with a sledgehammer you would also be unable to access those documents.

Do you stop there? What if they host their site on GCP? Amazon? Azure? They're all in the ad business. It's a slippery slope to a whitelist-only Internet.

rented_mule
6 replies
4d15h

I self-host a lot of things myself. There is one scary downside I've learned in a painful way.

A friend and I figured all this out together since we met in college in the 1980s. He hosted his stuff and I hosted mine. For example, starting in 1994, we had our own domain names and hosted our own email. Sometimes we used each other for backup (e.g., when we used to host our own DNS for our domains at home as well as for SMTP relays). We also hosted for family and some friends at the same time.

Four years ago he was diagnosed with cancer and a year later we lost him. It was hard enough to lose one of the closest friends I ever had. In his last weeks, he asked if I could figure out how to support his family and friends in migrating off the servers in his home rack and onto providers that made more sense for his family's level of technical understanding. This was not simple because I had moved 150 miles away, but of course I said yes.

Years later, that migration is close to complete, but it has been far more difficult than any of us imagined. Not because of anything technical, but because every step of it is a reminder of the loss of a dear friend. And that takes me out of the rational mindset I need to be in to migrate things smoothly and safely.

But, he did have me as a succession plan. With him gone, I don't have someone who thinks enough like me to be the same for my extended family. I'm used to thinking about things like succession plans at work, but it's an entirely new level to do it at home.

So, I still host a lot, but the requirements are much more thoroughly thought through. For example, we use Paperless-ngx to manage our documents. Now there's a cron job that rsync's the collection of PDFs to my wife's laptop every hour so that she will have our important papers if something happens to me.
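Roughly the shape of it (hypothetical paths and hostname, not my actual setup):

    # crontab -e on the server: hourly one-way copy of the Paperless document archive
    # to her laptop over SSH
    0 * * * * rsync -a /var/lib/paperless/media/documents/ wife-laptop:paperless-archive/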

Thinking carefully enough to come up with reliable backups like this makes things noticeably harder because not all solutions are as obvious and simple. And it's not something that ever occurred to us in our 20s and 30s, but our families were one tragedy away from not knowing how to access things that are important soon after we were gone (as soon as the server had trouble). There is more responsibility to this than we previously realized.

zeagle
3 replies
4d15h

I've given this some thought too and am doing some documenting for friends. Hard to know the answer.

I have Paperless, photos, Seafile, and a few other things copying nightly to a USB drive, unencrypted, that my spouse may remember to grab. I'm tempted to throw a 2 TB SSD in her laptop to just mirror it too. But accessing my NAS, let alone setting it up somewhere else after a move or with new network equipment, plus email hosting for our domain and domain registration, are all going to be voodoo to my spouse without some guidance. I'm tempted to switch to Bitwarden proper instead of self-hosted too.

justsomehnguy
1 replies
2d18h

> are all going to be voodoo to my spouse

That's why you really need to rethink your 'if you are hearing this I musta croaked' procedure.

Thing is, 99% of the files on your NAS and whatever never would be accessed after your death. And anything of importance should be accessible even if you are alive but incapacitated but your NAS is dead.

So the best thing to do is to make a list of Very Important Documents and have it in printed form in two locations, e.g. your house for immediate access and someone's parents who are close enough. And update it every year, with a calendar reminder in both of your calendars. You can throw a flash drive[0] in there too, with the files which can't be printed but which you think have sentimental value.

[0] personally I don't believe SSDs are for the task of the long term storage, but flash drives I've seen to survive at least 5 years

zeagle
0 replies
21h56m

For sure. Good advice. It's a fool's dream to think almost anything has value beyond pennies on the dollar, or will be accessed after one passes, as many of us have learned cleaning out elderly parents' homes and estates.

For what it's worth: my solution is having an external USB drive plugged into the NAS that gets nightly rsync'd copies of photos, phone backups, Paperless' PDF archive, and Seafile's contents in a regular folder. A few people know to grab it. The second part is that our laptops keep a copy of Seafile's contents (all our documents and another flat-file Paperless backup in it). A few of my friends and a txt file on that drive have a list of stuff that will break in the midterm, namely: email hosting, domain renewal.

A few things on my todo list are: probably stop self hosting calendar/contacts one day, put a large SSD in her laptop so it syncs the photo share from the NAS, switch to paid bitwarden instead of self hosted.

Other things are gravy. My accountant and lawyer can figure out business stuff, corporate liquidation, and life insurance. Funny you say that about SSDs, just in the last day my <1 year old 990 is having issues.

transpute
0 replies
4d14h

Data recovery instructions can be documented on paper in the same physical location used for financial accounts, e.g. fireproof safe, trusted off-site records, estate attorney. These recovery instructions are also required for data hosted by third parties.

transpute
0 replies
4d14h

Continuity and Recovery are required by all infrastructure plans, since the number of 3rd-party suppliers is never zero, even with "self" hosted infrastructure.

dmvdoug
0 replies
2d18h

I have nothing to say about the technical stuff, just that I’m sorry for your loss, and that from my perspective you were a true friend by taking that task on after they were gone.

bovem
5 replies
4d4h

Just today I had to sign up for a service and went to the Bitwarden app on my phone to generate a password (linked to a self-hosted Vaultwarden server), but the new password entry couldn’t be saved in the app because the server was unreachable.

Then I had to go restart my VM and reconnect my VPN. I am now thinking about switching to Bitwarden premium and opting out of self-hosting for password managers.

transpute
0 replies
3d16h

Virtualization platform tooling can monitor VM operational status and restart when needed to maintain availability.

sethammons
0 replies
2d9h

This is one reason why I moved from self-hosted to their paid offering. The other is that I trust their security more than mine.

otter-in-a-suit
0 replies
2d18h

Author here. Bitwarden (as much as I appreciate them!) isn't something I self host, since it’s too critical an application for me (similar to email). I pay for 1Password.

hypeatei
0 replies
2d17h

Exporting your vault every so often to offline storage (like an encrypted hard drive) is a good happy medium IMO.

greenavocado
0 replies
2d20h

KeepassXC on Syncthing is so easy to use even my girlfriend uses it without problems

2OEH8eoCRo0
5 replies
2d19h

I love this but I'd like to know more about the hardware.

As an aside, I find it amusing that commenters here say that they "self host" in the cloud. It ain't self hosting unless the server is under the same roof as the family!

sethammons
1 replies
2d8h

Would you consider a VPS like a droplet on digital ocean to be self-hosted?

2OEH8eoCRo0
0 replies
1d22h

No

otter-in-a-suit
1 replies
2d18h

That I can help with! :-)

This is more or less still accurate hardware wise, although it predates me using proxmox: https://chollinger.com/blog/2019/04/building-a-home-server/

This is my SAS HBA setup for the zfs drives: https://chollinger.com/blog/2023/10/moving-a-proxmox-host-wi...

The last node (besides the Pi 5 I mention) is a 2019 System76 Gazelle: https://chollinger.com/blog/2023/04/migrating-a-home-server-...

All re-used / commodity hardware. Drives are mostly WD Reds with mixed capacity zfs arrays (mirrored), as well as Costco sourced external drives for the laptop's storage (Seagate? I'd have to check) connected via USB (not ideal, but beats $1k+ for a rack mounted server w/ SATA or SAS drives).

transpute
0 replies
2d17h

Thanks for the additional detail.

Your blog post inspired a discussion on "Coding on iPad using self-hosted VSCode, Caddy, and code-server" (80+ comments), https://news.ycombinator.com/item?id=41448412

pxc
0 replies
1d15h

I don't think that's a conventional requirement. And having done both, I frankly don't find the hardware piece of the picture very interesting. It was never where most of the action was.

rr808
4 replies
2d17h

Self hosting makes you realize how insanely expensive cloud providers are. AWS charging for IP4 addresses was the last straw for me.

apitman
1 replies
2d16h

Depends how much you value your time. I think most self hosters do it for reasons other than money.

pxc
0 replies
1d23h

It's really not that hard to run a server with a handful of applications on it, especially for personal/home use. It seems to me that when developers talk about this kind of thing they often massively overstate how much work is involved in running a server and keeping it up to date while massively understating the inherent complexities of working with cloud services.

I haven't really witnessed the cloud serving as a big time saver in my career. Cloud-centric ops teams seem to be consistently larger than those running applications on regular, shmegular servers (whether rented or their own), if anything.

For self-hosting purposes, I'd expect serverless deployments of open-source apps to take more time for most people to figure out and get right than just running the same apps on a VPS, unless you had spent years deploying to the cloud at work and also never learned Linux usage basics. And if you deploy to only a single cloud VM, you're just using an extremely overpriced VPS at that point.

Gigachad
1 replies
2d16h

I’ve found the opposite. Google drive costs me less than a VPS, and far less than a physical server to do the same job.

shprd
0 replies
2d14h

> I’ve found the opposite. Google drive costs me less than a VPS, and far less than a physical server to do the same job.

Can you share the web applications and databases you serve from your Google Drive, since you can "do the same job" and it costs less? You must be onto something here.

Ignoring any privacy concerns for a moment: neither the comment you're replying to nor the submission is talking about just storing documents somewhere. They're hosting applications, databases, and services, which you need a "server" for and can't do in Google Drive.

You might not have the same use case, you might not need to host websites, databases, virtual machines, DNS, and other services, but then you're not doing "the same job". You just have a different use case. Just like you don't need an IDE if you just want to take notes.

m463
4 replies
2d18h

I self host too.

a couple points

- proxmox hits an SSD pretty hard, continuously. I think with zfs, it probably hits even harder. A lot of it is every second keeping state for a cluster, even if you have only one machine.

- I bought MikroTik routers for OpenWrt. I tried out RouterOS, but it seemed to phone home. So I got OpenWrt going and didn't look back. I am switching to Zyxel since you can have an OpenWrt switch with up to 48 ports.

- I used to run small things on a pi, but after getting proficient at proxmox, they've been moved to a vm or container.

- the most wonderful milestone in self-hosting was when I got vlans set up. Having vlans that stayed 100% in the house was huge.

- next good milestone was setting up privoxy. Basically a proxy with a whitelist (rough sketch after this list). All the internal vlan machines could update, but no nonsense.

- it is also nice to browse the web with a browser pointing at privoxy. You'd be surprised at the connections your browser will make. Firefox internally phones home all. the. time.
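Roughly what that whitelist looks like in Privoxy's user.action, a sketch from memory (the allowed domains are just examples):

    # deny everything by default...
    { +block{default deny} }
    /

    # ...then unblock only the hosts the internal machines are allowed to reach
    { -block }
    .debian.org
    .ubuntu.com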

dmateos
2 replies
2d16h

I noticed that with Proxmox, my SSD wear was going up about 1-3% a month.

There are things you can do to minimize this, even with ZFS, in terms of the ARC cache and setting syslog to log to memory instead of disk, etc.

Now i get about 1-2% every 6 months.
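One common tweak for the syslog part is keeping the journal in RAM (a sketch, assuming systemd-journald; journal contents are lost on reboot):

    # sets Storage=volatile in /etc/systemd/journald.conf
    sudo sed -i 's/^#\?Storage=.*/Storage=volatile/' /etc/systemd/journald.conf
    sudo systemctl restart systemd-journald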

timc3
1 replies
2d12h

Do you have any more details such as a blog post on what you did?

zxexz
0 replies
2d13h

I've been curious about running something Proxmox-like for a while, but I really want something a little more "hackable" without learning an entirely new system for managing configurations, yet one that still has an intuitive interface that people who don't understand all layers of the stack very well can use without having to feel reliant on the others. I'm curious if you or others have any thoughts on that. It's probably too specific and complicated to exist.

justsomehnguy
4 replies
2d18h

One^W two things that make self-hosting a bit more attractive:

a) besides some bootstrapping nuances, you are not forced to have a working phone number to be able to use some resource. It's usually not a problem until... well, until it becomes a problem. Just like for me yesterday, when whatever I tried, I couldn't register a new Google account. There is just no option other than SMS confirmation.

b) there are way fewer things changing 'for your own convenience', like the quiet removal of any option to pre-pay for Fastmail.

PS oh and Dynadot (which I was happy using for more than 10 years) decided (for my convenience, of course) to change the security mechanism they used for years. Of course I don't remember the answer to the security question, and now I'm forced to never, ever migrate away from them, because I literally can't.

floating-io
3 replies
2d17h

> quiet removal of any option to pre-pay for Fastmail

Eh?

I just purchased a new 12 month Fastmail plan for my business with no issue a few weeks ago.

Up to 36 months is still listed on their pricing page...

justsomehnguy
2 replies
2d17h

Refer to https://news.ycombinator.com/item?id=41242945

For years I just uploaded a lump sum and it was drawn down as I used the service each year. This way I didn't need to worry about being without email when the paid period was over, and I didn't need to overpay much in case I needed to cancel early.

Now I need to be sure I'll be around with a working CC at the time the purchased period is over... and what if I won't be? Do I need to jump through cancel-and-reorder hoops every couple of years? More importantly, am I sure that a couple of years later these hoops will still work, or that through some very unlucky coincidence they won't wipe my decade-plus of emails? Sure, the Fastmail folks would be so sorry, but... that wouldn't help me.

floating-io
1 replies
2d16h

Okay, I misunderstood what you meant.

That said, I feel like you're asking for something that is fairly non-standard in the industry in general, though I could be wrong; I've never tried to do things that way myself.

If I were fastmail, those tax issues they mention would definitely take precedence over other issues in any event. Writing your own sales tax/VAT/whatever software strikes me as a special kind of hell given all the tax laws that have to be supported (and kept updated!) for every different jurisdiction out there.

For me, being able to pay three years in advance and only have to renew once every three years is fine, and they did offer a workaround for you, but to each their own I suppose.

justsomehnguy
0 replies
2d15h

> non-standard in the industry in general

Except Fastmail had it for years, and every hoster on WHMCS has always had it, as it's built in. AWS is piloting it for a small subset of American-only customers. Every domain registrar has an option to add prepayment, because that is how this business works.

> those tax issues they mention would definitely take precedence over other issues in any event

Sure, but then we are back to the point I made in the first place.

> once every three years is fine

For me too (though I prefer two years), but I definitely don't want to do bizarre dances because a 3rd-party billing system, which praises itself as a solution for SaaS and digital goods, doesn't have a basic feature that has been a thing for decades. I've seen enough shit happen (even at Fastmail) to make me extremely 'dance-averse'.

And again, 'we made this change for your convenience', except it's no longer convenient for me. And thanks, I can do the very hard math problem of multiplying the cost by the current exchange rate.

johnklos
3 replies
2d19h

I agree with Christian about pretty much everything here. We self-host for multiple reasons, and we don't necessarily need others to understand our rationale, although that'd be nice.

For me, one thing that stands out as driving the desire to self-host everything is that large corporations, given enough time, invariably let us down. Christian's experience with Contabo illustrates the one game that I will do any amount of work to avoid: people who pretend to know what they're talking about but who really only waste our time in hopes of putting off dealing with an issue until someone else actually fixes it.

The one place where I can't avoid this truly stupid game is with getting and maintaining Internet for my clients. You're not paying for "enterprise", with "enterprise" pricing of $750 a month for 200 Mbps? Then tough cookies - you'll get the same junk we force on our residential customers, and you'll never, ever be able to talk to a human who has any clue what you're talking about, but you'll be able to talk to plenty who'll pretend to know and will waste hours of your time.

The more time they waste of mine, the more energy I'll expend looking for ways to subvert or replace them, until I eventually rely on corporations for the absolute minimum possible.

transpute
1 replies
2d17h

> you'll get the same junk we force on our residential customers

In locations with few competing providers for wired broadband, 5G "broadband internet" has brought some competition to entrenched telcos. While mobile data latency is not competitive with cable/fiber, it can serve as a backup for wired connections lacking an SLA.

vel0city
0 replies
2d16h

The 5G T-Mobile stuff in my area has often had latency at least as competitive as most cable providers in the area. I've had friends do cloud gaming on it no problem.

xarope
0 replies
2d15h

"Contabo down for 4 days"... looks nervously at my contabo instance with 16GB of RAM and 600GB SSD storage.

Just putting it out there, is contabo really that bad? I've had mine for just over a year, had a roughly 6 hour outage recently which did make me a bit nervous.

renewiltord
2 replies
2d20h

I used to self-host a lot of things:

1. My blog

2. My friends' blogs

3. BIND for all this

4. A mail-server on this

5. A MySQL database on this

All this was on a Hetzner server that was nominally set up to be correct on restart. But I was always scared of that, because I built this up from when I was a teenager onwards, didn't trust my younger self, and couldn't find the time to audit it. 10 years afterwards, with 10 years of uptime and no consequences of data loss or theft (it might have occurred, just that nothing affected me or my friends), Hetzner actually warned me they were going to decomm the underlying instance and no longer supported that VPS.

I backed everything up, copied it, and for the last 8 years have faithfully moved from home to home carefully transporting these hard-drives and doing nothing with them.

When I finally set up everything again, I did it much more manageably this time, with backups to Cloudflare R2 for the database and resources, and Dockerfiles for the code. I restarted the machine and brought everything up.
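The backup side is nothing fancy; a rough sketch of the shape of it (assuming an rclone remote named "r2" pointed at the bucket; the names are placeholders, not my actual setup):

    # dump the database and push it to R2 (S3-compatible) with rclone
    mysqldump --all-databases | gzip > /tmp/db-$(date +%F).sql.gz
    rclone copy /tmp/db-$(date +%F).sql.gz r2:backups/mysql/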

And now I use GSuite instead of my own mail. I use Cloudflare instead of my own DNS. There's a lot I outsource despite "self-hosting". It's just far more convenient.

So the answer is that I had no BCDR on the old thing. Maybe I'll train my kids and have them be my BCDR for the new thing.

wahern
1 replies
2d18h

The nice thing about OpenBSD is that HTTP, SMTP, DNS, and many other common services are bundled into and developed with the OS. Every 6 months a new release comes out, and each release comes with an upgrade FAQ with step-by-step instructions for any service that might require configuration changes. Sometimes major changes in popular ports/packages, like PHP or PostgreSQL, are mentioned as well. See, for example, https://www.openbsd.org/faq/upgrade74.html. Note that it's a single page, and for a release with quite a few changes, fairly easy to read and understand--major changes within and across subsystems are planned to minimize the total amount of changes per release.

This upgrade FAQ is priceless, and has accompanied each release for the past 20 years (since 3.5 in 2004). OpenBSD is deliberately developed for self-hosting all the classic network services.

The trick to minimizing complexity is to keep your system as close to stock as possible, and to keep up with the 6-month cadence. Over the long haul this is easier with OpenBSD--most of the time your self-hosting headaches are some OpenBSD developer's self-hosting headaches.
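The release-to-release hop itself is short; roughly this (check each release's upgrade FAQ for the specifics, since details can change between releases):

    doas sysupgrade    # fetch the next release's sets and reboot into the upgrade
    doas sysmerge      # after reboot: merge any outstanding configuration file changes
    doas pkg_add -u    # update installed packages for the new release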

sgarland
0 replies
2d17h

Or you build golden images with Packer + Ansible or the like, and swap images out every N months. The work to get everything automated and configured can be a bit daunting, but I much prefer the end result over doing continual upgrades.

akira2501
2 replies
2d20h

Home labs are great. They are a good learning tool to understand systems in _isolation_.

They're terrible for understanding emergent properties of production systems and how to defend yourself against active and passive attacks. Critically you also need to know how to unwind an attack after you have been bitten by one. These are the most important parts of "self hosting."

Otherwise, you might be getting in the habit of building big rube goldberg machines that are never going to be possible to deploy in any real production scenario.

Make it real once in a while.

sgarland
1 replies
2d17h

> Otherwise, you might be getting in the habit of building big rube goldberg machines that are never going to be possible to deploy in any real production scenario.

"Not with that attitude"

  – People I have worked with.

zxexz
0 replies
2d13h

If the feedback loop is tight enough, anything is possible! Every crash becomes a dopamine rush!

ipaddr
1 replies
4d15h

I went back and read some previous blog posts. He was part of the great 2023 layoff. I'm curious where such a talented guy landed. Did he find a position?

bionsystem
1 replies
2d13h

We used GCS (like S3 for GCP) for storing prometheus backups (via thanos) at $previous_job. That thing cost an arm and a leg for what it did (presumably, thanos was partially at fault but I didn't dig that much).

I moved to minio (in a GCP VM) and reduced our overall GCP bill by 70%. Yes, using cloud storage was 2/3 of the cost of our cloud infrastructure.

But overall, going too far in the self-hosted route has its costs. Hardware depreciation is one (and the author mentions UPS which seems huge in addition to being critical), cooling/powering, and of course the time for maintenance. If you are going this route you are doing this because you want to learn that stuff, not because you want to save the subscription. Otherwise, just use less services and keep price comparison lists updated.

vidarh
0 replies
2d13h

The sweet spot is renting dedicated servers. Most of the maintenance is someone else's job, and you still just get a single bill. Compared to the big cloud providers, cost reductions of 50% to 90% are common.

And devops costs often go down (I do contracting in this area, and those of my customers who opt for dedicated servers invariably need less of my time).

linsomniac
0 replies
2d18h

Has anyone tried those Lithium Ion UPSes? ~5 years ago we removed the UPS from our dev/stg stack because in the previous 5 years we had more outages caused by the UPS than issues with the utility power. A better battery technology sounds compelling.

For production, of course, it's all dual feed, generator, UPS with 10 year batteries, N+1.

iwontberude
0 replies
2d8h

No mention of how many terrible routes there are to residential ISPs? The packet loss you get as a home enthusiast service provider to points of presence around the country is abysmal. VPNs become a requirement for accessing these non-cloud services. Always a troll toll.

arcastroe
0 replies
1d14h

The author mentions they use Komga for comics and calibre-web for ebooks. However, I personally find that Komga is a much better ebook reader than calibre-web. I use it for all my books and it works incredibly well on mobile. The only thing I don't like about Komga is the logo, which I simply replaced on my private instance.

akho
0 replies
1d10h

I do not understand this. Why use a three-node Proxmox cluster to self-host services that would (generously) need one N100 (with straightforward LiIon battery backup)? Why all this complexity for a relatively barebone setup?

They aren’t even self-hosting their own files, relying on Wasabi instead, and I do not understand why. Surely there is an HDD somewhere between those three nodes in the Proxmox cluster?

FloatArtifact
0 replies
2d14h

Very few people or projects talk about actually backing up and restoring application data.

This is especially true of open source systems like TrueNAS Scale. Any turnkey self-hosting software that doesn't implement a robust backup/restore system is essentially holding your data hostage.