For someone who's not a web developer, I found Kanboard to be the easiest to set up, and it has all the basic features you'd expect. It's a traditional PHP app: you copy the files to your web server, set a few configuration options, and you're good. If you want to use it locally, you download it, run php -S localhost:8080, and start using it.
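As a sketch of just how minimal that local setup is (the version number and archive name here are illustrative, check the releases page for the current one):

    # grab a release, unpack it, and serve it with PHP's built-in server
    wget https://github.com/kanboard/kanboard/archive/refs/tags/v1.2.36.zip
    unzip v1.2.36.zip && cd kanboard-1.2.36
    php -S localhost:8080
    # then open http://localhost:8080 in a browser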
Note: The project is in maintenance mode, but it hasn't shut down or been abandoned.
Also available as a Docker image, for example:
https://docs.kanboard.org/v1/admin/docker/#running-the-conta... (that page has info about persistent storage, configuration and so on)

Honestly one of the fastest and least "bloated" pieces of software in recent memory, way more responsive than something like OpenProject (which I use as a self-hosted Jira replacement for my personal needs), as long as the feature set is enough for you. I did rather enjoy the cost reports of OpenProject, as well as having all of my usual epics and whatnot, but kanban works better for smaller projects than scrum.
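For reference, a minimal sketch of the Docker route (the image name matches the linked docs; the volume path is from memory, so verify it against that page):

    docker run -d \
      --name kanboard \
      -p 8080:80 \
      -v kanboard_data:/var/www/app/data \
      kanboard/kanboard:latest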
What's the point of using Docker for PHP apps?
The main appeal of PHP for me has always been its ability to work as a “serverless” execution environment, long before this marketing concept even existed, so hosting your own PHP on a cloud machine with Docker sounds really backward to me.
Same reason you'd use Docker for anything; why would it matter if it's Python, PHP, or Rust?
Is there something specific about the language that makes Python (or other language) more suitable with Docker for you, compared to PHP?
(Personally I only use Docker when I start to deal with multiple hosts/distributed architecture, which doesn't happen a lot tbh)
I would ask the same question if you were using docker for a Rust project btw.
IMHO Docker mostly makes sense when you have projects that require globally installed dependencies (like C or Python).
A globally installed dependency such as, say, PHP?
PHP is somewhat different, because there are many providers that can host it for you in a “serverless” fashion, and the deployment process is as easy as it can ever be: just drop the files on the server and call it a day.
As such, it makes little sense to deploy a PHP app on a dedicated server where you're responsible for the PHP server yourself. Or when it does make sense, it's because you have many such apps, but then deploying them all side by side, each with its own Apache/Nginx, is going to be very wasteful.
Python has notoriously awful dependency management. One of the biggest appeals of Docker is that it lets you build the equivalent of a "fat jar" so that you get at least somewhat reproducible versions of your dependencies at runtime. For a language with decent dependency management the value proposition is much weaker.
I can't think of anything that Docker would give Python that pyenv and venv wouldn't already.
Not that I wouldn't just use Docker for this, but those two help a lot when dealing with multiple clients who have different version requirements.
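For instance, the non-Docker version of per-client pinning might look like this (a minimal sketch; the interpreter version and file names are illustrative):

    # one interpreter version per client, via pyenv
    pyenv install 3.10.14
    pyenv local 3.10.14        # writes .python-version in the project dir
    # one isolated dependency set per project, via venv
    python -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt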
Programmers these days like to overcomplicate things for some reason. I’m as puzzled as you are.
This comment shows a remarkable lack of curiosity. You're not the least bit interested to know why so many people find tools like Docker to be valuable?
It does show empathy, though.
Docker has its advantages, but the approach also has a lot of disadvantages which are not so obvious to junior developers.
Isolation seems fun, but the interfaces (Unix sockets where anything goes) are extremely brittle. Version management seems simple at first, but it will become horrible once old containers offer no upgrade path, or when the free hubs of today become tomorrow's subscription model.
I'm not advocating for PHP, but it sure made deployment of several websites on one machine extremely simple. Eventually version management destroyed some of the fun, which will probably happen with Docker containers as well, given enough time.
Java's application servers were initially also hailed with similar enthusiasm as Docker containers, and look at the complicated mess that has become.
To some, all that is old is new again.
because raccoons like shiny things
One person's over-complication is another person's simplification, it's only "hacked together" if someone else wrote it, etc, etc
For me, I have a cheap cloud server that handles multiple low-traffic personal websites, side projects, etc. Each project has a different tech stack and it can be months or years before I circle back to one to bring it up to date. I don't want to wrestle with making sure that I have the right versions of php and apache for my ubuntu. Having them all as docker containers makes it a lot easier, and a lot easier to move to new servers, too.
To add to this, for me it really helps to look at any piece of software that I want to run pretty much the same way, as a self-contained bundle, not unlike an app installed on a phone.
I can give them resource limits the same way (CPU/memory limits, except easier than cgroups), as well as set restart policies and have a clear look at what's executing where. With something like Docker Swarm it becomes like systemd across multiple nodes, and scaling up/down becomes easy, especially with load balancing for network calls. Software like Portainer also has pretty nice discoverability.
Speaking of networking, I don't have to worry about tunnels or firewall configuration myself, can just expose a web server that acts as a reverse proxy and give everything else custom private networks that span across nodes (with something like Docker Swarm again, though Consul and Kubernetes have the same functionality, details aside).
I can have custom port mappings (regardless of what the software uses, I might not even care about digging in some configuration file to change it), which is especially useful when running multiple separate instances on the same machine (like different versions of PostgreSQL, or separate instances for different projects), or hostnames in case I don't want to expose ports.
I can easily have custom persistent/transient storage paths or even in-memory storage (tmpfs). When I have persistent storage, backups suddenly become easy to do, and I can be very clear about all other directories being wiped and in a known state upon startup/restart. It's also immensely useful for me to escape the sometimes weird ways software on *nix uses the file system: I can just mount my persistent files in /app/my-app/database/var/lib/postgresql/data or /app/my-app/web-server/etc/apache2/sites-enabled and know that I don't care about anything outside of /app.
I can also treat Docker as lightweight VMs, except a bit more stateless, in that I can have container images that I base on a version of Debian/Ubuntu/Alpine or whatever, ship them, and then don't have to worry about a host OS update breaking something, because only Docker or another runtime like Podman is the actual dependency and most of the other software on the node doesn't come in contact with what I'm running. With rootless containers, that also improves the separation and security there a little bit.
With all of that in place, suddenly I can even move apps and all of their data across nodes as necessary, load balance software across multiple nodes, be able to easily tell people how to run what I have locally and store and later use these images very easily. Are there pieces of software or alternatives (e.g. jails) that do a lot of the same? Sure, but Docker essentially won in ease of use.
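As a concrete sketch of several of the points above (service names, paths, and limits are all made up; the deploy section applies under Swarm, where the overlay network can span nodes):

    # docker-compose.yml
    version: "3.8"
    services:
      db:
        image: postgres:15
        ports:
          - "5433:5432"    # custom host port, no digging in config files
        volumes:
          - /app/my-app/database/var/lib/postgresql/data:/var/lib/postgresql/data
        networks:
          - private
        deploy:
          resources:
            limits:
              cpus: "0.50"
              memory: 512M
          restart_policy:
            condition: on-failure
    networks:
      private:
        driver: overlay    # private network spanning Swarm nodes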
For me, it's the simplicity. I don't have to care whether a project is super basic, or a thorny hairball from hell. Whatever it is, "docker run" is how I spin it up. It doesn't infect my local. I can have three differently hobbled versions of it side by side. Virtualization makes it simple, conceptually - and for me that's more precious than it being actually technically simple.
That's the biggest problem I see with Docker: nobody has an incentive to make well-structured software with a lean dependency chain and a straightforward installation process… These used to be good proxies for the overall quality of a project, but now Rube Goldberg projects that just happen to work by luck are routinely distributed, and the user has no idea how big of a mess it is internally.
With Docker you can self-host on your dev machine, and for a solo developer it's the end of all problems.
For me, the point of using Docker is that it unifies configuration and backups, and makes installation easier.
I can easily see which directories or files to back up, and it's fairly explicit which knobs I've tweaked or config files I've changed, regardless of what stack the app relies on.
It also makes it much easier to roll back a version: just take ZFS snapshots of the relevant directories before pulling a new image, and if it goes south, roll back the snapshots and use the old image.
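A minimal sketch of that rollback flow (the dataset name is made up):

    # snapshot the app's data before upgrading
    zfs snapshot tank/apps/myapp@pre-upgrade
    docker compose pull && docker compose up -d
    # if the new version misbehaves, stop it and roll back
    docker compose down
    zfs rollback tank/apps/myapp@pre-upgrade
    docker compose up -d    # with the old image tag pinned again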
I self-host a dozen or so different web apps locally on an old PC, and containers are what makes that feasible to do in my very limited spare time.
If I tried to run all of these directly on the hardware with whatever minimal non-Docker setup each uses, I'd have a dozen update processes, a dozen different ways to start the server, and a dozen log files following a dozen different conventions for storage. I'd also have to be sure that each app I add either uses a different database and language runtime than the ones I've installed already or is compatible with the versions of those that I already installed.
Instead, with Docker/Podman, I can use the same tool (compose files stored in a git repo) to manage all of the apps and their dependencies with zero risk of weird dependency issues across app boundaries.
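In practice that can be as simple as one directory per app in the repo, each with its own compose file (the layout here is illustrative):

    apps/
      wiki/docker-compose.yml
      rss/docker-compose.yml
      photos/docker-compose.yml

    # updating any one of them is the same two commands
    cd apps/wiki
    docker compose pull && docker compose up -d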
I always keep the host clean of any language, interpreter, or tool except for Docker, and everything I run runs within Docker. I have multiple clients with multiple levels of support and PHP versions needed; each project lives in its own container.
What about installing the right PHP libs that are expected?
What about keeping the machine up to date with new distro releases that inevitably come with a new version of PHP that isn't compatible with the app?
Don't get me started on properly setting up php-fpm and whatever reverse proxy sits in front of it.
All of those issues are gone with Docker. You always run the right version of everything, as intended by the developer (if they're the ones maintaining the image).
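A sketch of that per-project pinning (image tags, paths, and the nginx config are illustrative assumptions):

    # docker-compose.yml for one client project
    services:
      php:
        image: php:8.1-fpm    # pinned per project; another client can stay on 7.4
        volumes:
          - ./src:/var/www/html
      web:
        image: nginx:1.25
        ports:
          - "8080:80"
        volumes:
          - ./src:/var/www/html
          - ./nginx.conf:/etc/nginx/conf.d/default.conf    # fastcgi_pass php:9000
        depends_on:
          - php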
Tangent - is there a Docker warehouse where I can find images to DL and run that HN would suggest, with some use cases like "pull this image from here to do X - super cool"?
Docker Hub has been one of the primary registries. Each of the cloud providers typically has its own concept of Docker or image repositories, and you can build images locally if you have a Dockerfile in the source code.
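e.g., pulling a prebuilt image versus building your own (the image/tag names are illustrative):

    docker pull kanboard/kanboard    # fetch a prebuilt image from Docker Hub
    docker build -t myapp .          # or build from a Dockerfile in your repo
    docker run -d -p 8080:80 myapp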
Thank you!
I was more looking for use cases, as opposed to a barf of every image out there...
"I want to do TASK, so here are all the dependencies for you to do TASK and how they will link."
---
And yeah: how do you think a 3-year-old in 2050 is going to be able to set up his dev env? Do you want him to learn binary?
Dunno about 2050, but it wasn’t particularly difficult in the 1980s.
I guarantee it will be harder in 2050 than it was in 1980.
This feels like a good longbets.org :-)
The awesome-selfhosted* list is a pretty good resource. While it does mention whether there's a Docker container, I've found that a few of the services without one listed do actually offer one; you just have to search for it.
* https://github.com/awesome-selfhosted/awesome-selfhosted
Its plug-in system is quite comprehensive. I just finished writing a note-taking plug-in, and the source code itself was a great reference for developing a plug-in.
Mind sharing a link if it is public?
I'm also interested in the link!
I didn't realise it'd moved into maintenance mode; whereabouts is that detailed?
https://github.com/kanboard/kanboard
Really great project, I just wish nested tasks or sub-tasks were easier to interact with.
I also use Kanboard, it's pretty decent.
Does it offer Agile-like integration?