This looks good, and Dokku has been very solid for me, but removing the Docker dependency means that now I'm beholden to my OS's choices. For apps that might run for years without maintenance, that's not ideal, as you'll quickly find you need a specific version of the OS for them.
I love piku. I wrote a webapp tutorial for piku which got turned into a repo as part of the official GitHub piku org. You can find that here:
https://github.com/piku/webapp-tutorial?tab=readme-ov-file#b...
It explains how piku works under the hood, as well as showing a minimalistic Python web app example from a user standpoint.
Thanks for the explanation; the official repo doesn't make it clear enough for me.
So, did I understand correctly that piku installs both an agent on the remote machine and a commit hook on the local machine? Why didn't they minimize the overhead by just making the remote machine a git remote and doing all the work there when you push a specific branch to it?
You’re confusing things, there is only the remote, the local machine doesn’t need anything. We do have a simple CLI you can run locally, but all it does is ssh remote <command> to scale up/down workers, change settings, etc.
Thanks for clarifying!
piku installs an agent on the remote machine (piku.py) which itself also provides the support for making that machine a git remote.
There is no commit hook on the local machine. On the local machine, you simply have a shim named "piku" which is essentially running "ssh remote /path/to/piku.py $@" to control the remote machine.
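To make that concrete, here is a sketch of what such a shim boils down to (the hostname is a placeholder, and the `piku_cmd` helper and dry-run echo are purely illustrative; the real shim comes from the piku docs):

```shell
#!/bin/sh
# Sketch of the local "piku" shim (hostname is a placeholder). Everything
# is forwarded to the remote over ssh, where an authorized_keys forced
# command runs piku.py with the given arguments.

piku_cmd() {
    # Build the command the shim would execute; echoed here for illustration.
    echo "ssh ${PIKU_SERVER:-piku@my-server.example.com} $*"
}

# The real shim is essentially just: exec ssh piku@my-server.example.com "$@"
piku_cmd logs myapp   # -> ssh piku@my-server.example.com logs myapp
```

In other words, `piku logs myapp` on your laptop becomes `ssh remote logs myapp`, and the remote agent interprets the rest.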
Thanks for clarifying!
"What is a Heroku-style deploy?"
Thanks for that. I have no idea what Heroku is or does.
Sure thing! Bit of cloud computing history. Covered a bit here:
Basically it was the first PaaS to improve the developer experience of working with server infrastructure. It had git integration and let you scale your apps easily from a CLI.
The new piku docs are pretty but, as a potential new user very interested in trying piku, the new docs are completely useless to me. I gave up on piku because the docs essentially assume I already know everything I need to know to run and use piku. Your doc fixes that, but I never found your doc even after spending quite a bit of time trying to figure out how and whether I could use piku. I never would have known it existed without your comment here.
At a minimum, your doc should be prominently linked to from both the piku repo and the piku docs (or more prominently linked, if it's already linked somewhere), if not pulled completely into the docs home page.
That said, if you're interested in a suggestion: take a look at an end-to-end Coolify tutorial that shows how to go from a new bare-metal server to a publicly accessible custom domain with an SSL cert, and add those extra steps to your doc. Yes, they have nothing directly to do with piku, but they have everything to do with what a potential new user actually wants to accomplish, and that user doesn't yet know how to do those steps even though you do.
Your doc is already hundreds of times more useful than the main piku docs page. Extending it to cover an example of getting all the way to a publicly accessible custom domain with an SSL cert would make it hundreds of times more useful than it is now. Yes, I know there are a ton of ways to get from where your doc ends to that point. Pick one; it doesn't matter which. The person who cares which approach you use already knows how to do the one they prefer. You're adding these steps for the person who doesn't know any of the approaches and just wants their site hosted on a $5 droplet or whatever.
Again, your page is a huge help, this suggestion is just about making your page a huger help.
For reference, here's a sample Coolify end-to-end example showing how they go from bare metal to a publicly accessible custom domain with SSL:
https://billyle.dev/posts/self-hosting-your-website-with-coo...
The goal of all this isn't about making it possible to do things, it's about massively increasing the number of people who adopt piku by making it easier for more people to do so.
Acknowledged. The tutorial is linked someplace deeper in the docs, but I am adding a direct link to it in the docs home page. Should be up in a little while.
This is now linked from the docs home page.
I think a more common use case than doing deploys by pushing to a different remote is to have the git host send webhooks on PR merges to main to an API that has a deploy key and can fetch the repo itself.
This afaik is missing from most PaaS tools (CapRover excluded, but it has been illegally relicensed to non-open-source). Perhaps watchtower or something could replace the functionality?
Actually, this is how I deploy my static websites: piku in lazy mode handles GitHub hooks, pulls the source and renders them out to cloud storage, then kills all workers and idles again.
Does it support deploy keys, or are your website source repos public?
The piku micro-app that does the deployment is just a 10-line Bottle app that validates the GitHub hook and does a git pull with a private SSH key, so yes.
are there docs for this setup?
It’s just a 10-line script, I’ll see if I can sanitize it and add to the docs (one of the samples already does something similar, you can peek at the repos to get ideas)
Didn't know ...
"CapRover has built in anonymous usage analytics starting v1.11"
https://github.com/caprover/caprover/blob/master/TERMS_AND_C...
https://github.com/caprover/caprover/issues/1852
Was looking at CapRover to see if it has a REST API.
Looks shady.
You scared me for a moment, as I've just set up a new VPS with CapRover and migrated all my projects from Heroku. It doesn't look too shady to me; there's a one-liner to disable analytics, which seems enough for me.
You still have to agree to the terms and conditions of use of the nonfree application which can of course change at any time without notice. It’s a time bomb.
I’m thinking of forking it and adding all the dumb-but-easy table-stakes features (2FA etc.) that he is trying to gate as subscriptionware.
If you want to contribute any of that to piku, we’ll welcome it. Might take a bit to review and merge, but we’re always looking for non-breaking improvements
It’s not even open core. The solo maintainer simply relicensed the entire repo to a nonfree license without consent of the copyright holders to all the external contributions.
With PHP, 1-line (no new tools):
sftp user@host remoteFile localFile
Joking aside, I’m a bit surprised such a tool would be developed in Python, given its dependencies and runtime (which are not easy on the user).

To be honest, Python made it stupendously simpler than anything else because it has a great standard library. The only dependency (click) is rock solid and made it a lot simpler to handle commands independently, but we could probably do without it and just use the built-in argparse, at the expense of a few more lines of code I didn’t want to maintain.
Also, Python is everywhere, on every OS and Linux system, so it was a natural choice. I also wanted it to be easily hackable and extensible, and few languages would make it simpler to understand or extend.
That’s pretty funny. You may want to look a little farther afield to discover that the machines with Python are far from “all the machines” out there. Particularly production servers, which, if run responsibly, are hardened with every extraneous bit of software removed.
I developed security software in Python that ran on 100k+ production nodes covering dozens of operating systems. They all had Python.
Counter-anecdote: none of my Linux PCs have python.
Counter-counter-anecdote: my toaster has python.
Debian comes prepackaged with Python. If there are distros that are good enough for a server almost out of the box, surely Debian stable is one.
It's actually worth taking your joke seriously to compare and contrast:
- piku deploys via git rather than scp/sftp, but authenticates via ssh like those tools
- piku supports a number of runtimes, including Python, Ruby, Node, Go, Clojure. The runtimes are implemented rather simply, you can add your own rather easily, see examples here in the code: https://github.com/piku/piku/blob/8777cc093a062c67d3bead9a5d...
- For each runtime, a mechanism is used to install and isolate package dependencies (requirements.txt in Python, Gemfile in Ruby, package.json in Node, etc.)
- a Procfile and ENV file are used to declare your application entrypoints and envvars akin to Heroku / 12 Factor App ideas
- a CLI (ssh shim on dev client machine) is provided for checking status and logs from the client (as well as stop/start/restart)
- since all applications are managed via uwsgi on the remote, there is also support for worker/sidecar processes and cronjob-style scheduled tasks
- HTTPS via Let's Encrypt (acme.sh) is handled automagically for web apps
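As a concrete (hypothetical) example of the Procfile/ENV convention for a Python app: the entry names below follow my reading of the piku docs, but the module names, file names, and schedule are made up for illustration.

```
# Procfile
wsgi: app:application               # WSGI entrypoint, served via uwsgi behind nginx
worker: python background.py        # long-running sidecar process
cron: 0 * * * * python cleanup.py   # cron-style scheduled task

# ENV
NGINX_SERVER_NAME=myapp.example.com
DEBUG=false
```

Push the repo containing these two files, and piku wires up the processes and the nginx virtual host accordingly.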
I describe more about how piku works in this tutorial:
https://github.com/piku/webapp-tutorial?tab=readme-ov-file#b...
You're right that PHP apps have a simple deployment story, and in a way piku brings something akin to this level of simplicity to other web programming runtimes.
Isn't it better to create a local Docker registry and then use Podman Quadlet with auto-pulled images to run apps?
Better in what way?
1. Fewer dependencies (only Podman and a registry are needed)
2. Rock-solid rootless systemd service management
3. Easy integration with systemd-socket-proxyd
4. Easy management of dependencies between containers (with healthchecks)
5. Rollbacks
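For context, a Quadlet unit is just a small ini file; here's a hedged sketch (image name, port, and file name are placeholders), dropped into `~/.config/containers/systemd/` for rootless use:

```ini
# myapp.container -- Quadlet generates a systemd service from this file
[Unit]
Description=My app container

[Container]
Image=registry.example.com/myapp:latest
PublishPort=8080:8080
# Opt in to podman-auto-update pulling newer images from the registry
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, start it with `systemctl --user start myapp`; `podman auto-update` (or its timer) handles the autopull.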
Sounds interesting! Is there any support for multi-node systems? Let's say I want to have an ingress Caddy proxy on one node, which reverse proxies to several backed APIs on other nodes - can this be done simply with Podman Quadlet?
Also, what is the localdev UX like? With Docker Swarm I can easily run very similar setups in dev, test and prod, and it works with multi-node setups, has support for secrets etc. But the lack of work going into Docker Swarm becomes more concerning as the years pass by.
Also, had no idea systemd-socket-proxyd was a thing - is there anything systemd doesn't have its tentacles into? :)
If your VPS is wired with another one using VPC or any other internal network it'll just work. Just point Caddy to specified internal IPs of your other servers.
It's not designed to work on local envs. When I wanted to debug infra I used to run it on Vagrant though
Does this all fit in 256MB of server RAM?
You can’t do that on tiny systems very easily.
First time I read about piku. I have no idea why, but the feeling of `git push` to initiate a deployment like piku does always felt magical to me. There's nothing simpler than that.
This is timely for me as well as I just open sourced (yesterday!) a project that is in the same space, but for Kubernetes (https://github.com/pier-oliviert/sequencer).
All of this to say, congrats! It looks great.
It works like magic, but it's also extremely simple to DIY if you wanna learn.
If you set up a server, you can create a git repo by doing `git init` (the repo needs a worktree for this to work), then add the setting `git config receive.denyCurrentBranch updateInstead`.
After that you can use git hooks (more specifically push-to-checkout hook), to receive uploads, compile and launch. The hook will just be a simple shell script, the most basic version could be a variant of `compile && install && systemctl restart service`.
From there you can clone the repo locally, and pushing your changes will trigger the hook you've set up:
git clone root@yourserver.com:/path/to/git/folder
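For illustration, here's a hedged sketch of installing such a hook; the build command and service name are placeholders to adapt to your stack, and the `git read-tree` line follows the sample from the githooks documentation (`push-to-checkout` is responsible for updating the worktree itself).

```shell
#!/bin/sh
# Install a push-to-checkout hook. Run this from inside the repository on
# the server; git invokes the hook whenever a push targets the checked-out
# branch, passing the pushed commit as $1.
mkdir -p .git/hooks
cat > .git/hooks/push-to-checkout <<'EOF'
#!/bin/sh
set -e
commit="$1"
# Bring the worktree up to the pushed commit (what updateInstead does on
# its own), then run the deploy steps.
git read-tree -u -m HEAD "$commit"
make build
sudo systemctl restart myapp.service
EOF
chmod +x .git/hooks/push-to-checkout
```

A failing hook (non-zero exit) rejects the push, so a broken build never replaces the running version.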
You just described Piku, except that it’s a Python script that also sets up nginx and a process supervisor for your code :)
Yeah I love the simplicity of Piku, being able to actually understand what is happening behind the scenes is a great quality. :)
I've been doing almost exactly this. Have set up Ansible to automate it.
Why would I want to use Piku? Would it give me some benefits I currently don't have?
Maybe I'm missing something obvious, but how does sequencer use git to do deploys, if it's similar to Heroku/dokku/piku? Seems like you're dealing with kubernetes templates and kubectl rather than `git push` to deploy, which would put the project in a completely different space.
Very happy to see this here - check out our freshly revamped docs at https://piku.github.io/
Is this the successor to Dokku? I didn't know you had a second project.
Nope, just took inspiration from it because I couldn’t run Docker on some of my targets.
The new docs look great!
Great to see the updated docs.
pikku means tiny or little in Finnish. Is it where the name came from?
I don't know but my first association was "pico-dokku"
My guess has been they both originate from heroku; docker heroku to dokku, pico heroku to piku
Cute. In the sibling language Estonian, it means “big” or “tall”.
Can it be a good replacement for Capistrano (for deploying rails applications)?
Love the focus on being lightweight
Recently I wanted to create a super basic website, and discovered it’s actually pretty hard to create something simple
And then, even if you manage to create something actually simple, you usually end up having to manage some not so simple deployment process together with hopefully some sort of version control
Ended up settling for putting plain html/css/js files in a git repo, then configuring auto deploy to GitHub Pages on merge to master (via Actions)
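That setup can be sketched as one small workflow file; the action names below are the standard GitHub Pages ones, but treat the versions, branch name, and path as illustrative:

```yaml
# .github/workflows/pages.yml -- publish plain static files to GitHub Pages
name: deploy-pages
on:
  push:
    branches: [master]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      - uses: actions/upload-pages-artifact@v3
        with:
          path: .   # the repo root holds the html/css/js
      - uses: actions/deploy-pages@v4
```

Every merge to master then redeploys the site with no manual step.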
Also an option, if it's just for you and with not too many updates, you can upload the new files to ftp as a manual step.
Does GitHub pages support ftp? Or are you talking about some other potential hosting options?
Yes, ftp is pretty easy for static sites. However, given I want to have version control, it’s nice to have automated deploys happen after a git push
Use Podman Quadlet; I use it as a replacement.
Does someone know how it handles (if any) zero downtime deployments? Like, if your Python service is running in one machine on port 8080 behind nginx, how does piku switch to a fresh instance running in the same port?
Slightly off-topic, but you can do zero downtime deployments using systemd and socket activation.
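A hedged sketch of that approach with two unit files (names, port, and paths are placeholders): systemd owns the listening socket, so the bound port survives service restarts and new connections queue instead of being refused while the fresh instance starts.

```ini
# myapp.socket -- systemd holds the listening fd
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# myapp.service -- started on the first connection; the process receives
# the already-bound socket via sd_listen_fds() instead of binding itself
[Service]
ExecStart=/srv/myapp/run
```

Restarting `myapp.service` leaves `myapp.socket` (and the port) in place, which is what makes the deploy look like zero downtime from the client side.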
That gives me a couple of ideas...But picking a shorter name than "piku" is going to be hard... Maybe I can whip up a proof of concept and call it "syd".
Currently it will only kill running processes after it finishes deploying the new git push. Socket and session handling will vary depending on your code and whether you use uwsgi or run your own HTTP daemon.
One thing it already does (optionally) is to kill off unused instances and idle, lazily starting them when a new connection comes in.
That is brilliant. Something complex, but not complicated. A project distilled down to its UNIX essence: hackable, lean, and magic.
That said I want to give this a go but don't immediately see how I can migrate my overengineered 8-10 container spaghetti of a docker-compose file to a state where I can use piku instead of manual 'git pull && docker compose up' on the remote
That kind of situation was what drove me to go simpler :)
Yes it's me, not you ;)
Currently hyping myself up to drastically simplify everything, which will be a joy unto itself
The initial commit was eight years ago??
I wish I had known about this project ~18 months ago. I was specifically looking for a way to have a Heroku-like dev experience deploying to my Raspberry Pi, and this looks like it's trying to be exactly that.
Exactly. There's a visibility problem. I've just set up a new VPS with CapRover and never found any mention of piku in the hour I spent checking for comparisons between the "Heroku-style self-hosted PaaS" options dokku, CapRover, coolify, and dokploy.
We’ve been using it for a long time, yes, but doing Marketing for a 1500 LOC Python script felt a little overblown :)
Still, Chris did a public presentation on it near the beginning (video’s in the docs) and other folk did similar things, so…
15 years ago it was common to deploy web applications as live SVN repositories with a hidden path executing 'svn update' on manual http request.
Not quite the 'push deploy', but that was the way apps were developed back in the days, and for some reason I still prefer that approach. Commit, test, and at one point manually nominate and deploy stable version.
Yes, when we didn’t want a build machine, we’d just build in production. Isolating production with no unauthorized binary (like Alpine) was a long path away…
Has anybody used this for Ruby on Rails?
Yes. Not any of the maintainers, though.
Nice work. But why isn't Docker supported as a runtime? Or is it?
The FAQ explains it: https://piku.github.io/FAQ.html
You can use docker run commands, but that’s not the main goal.
Eventually, we'll need something more secure than effectively `sudo curl INSTALLER | sh` as a way to install stuff. I can see why package managers aren't always the answer, but still.
piku itself is neat and I like it.
Actually, we had manual install steps as the only way to go for a while. You'd be surprised at how many people asked for a one-liner... I'm going to add a note to the docs about that, since I've seen a couple of comments here of people who were put off by it and didn't even read the rest.
I actually only install piku via cloud-init, but there are plenty more options: https://piku.github.io/install/index.html
Is there support for secrets?
You have to bring your own. I have some trivial deployments that fetch secrets from Azure keyvaults using either release hooks or app startup code.
These self-hosted open source paas alternatives are really cool.
Off the top of my head I know of
coolify, dokku, kamal
and now piku
Don't forget CapRover. I'm just trying it on a new VPS and it just works as expected. I would have tried piku first if I knew about it, because it's even more minimal.
Is go support planned?
It works with Godeps. Module support was always a bit in flux when we added that, but it should be an easy first contribution…
Maintainer and co-author here. If you like simple, minimalist deployment tools, check out https://github.com/rcarmo/ground-init for a very much down to earth take on cloud-init…
Cool project, but I’ll stick with Dokku, which is a wonder for managing single server deploys via Docker/Git.
What is a PaaS?
You can use docker with it - I have a couple of things with "docker run" statements in the procfile, but of course it’s not designed for that.
Most of the deployments I got wind of are on extremely stable distros - typically LTS versions where you will not need to upgrade your runtime every six months (and my website has been running in it for at least two Ubuntu LTS releases…)
But you can trivially use pyenv/nvenv/etc. by just setting environment variables. My home automation system now needs two different Node versions, and I have one set per app.
Oh yes, I definitely use LTS distros, but my longest-running apps are from 2008, so even LTS won't cover that.
That depends on your tech stack. I have Perl CGI and Java apps that have been running unchanged for two decades. And the only thing I ever had to change on Debian over that time was adding HTTPS (Let's Encrypt) and SPF/DMARC for email.
Yeah, but my point is that you have to upgrade your OS. If you never change anything, obviously you don't need to worry.
My point is that OS upgrades don’t have to break tech stacks, and don’t tend to with runtimes that care a lot about backwards compatibility like Perl and Java. I did regularly upgrade Debian across those two decades.
IMO that quality should be the default, and I would choose my OS and tech stacks accordingly.
Don't they link against static libraries? How do they do that?
The runtimes are part of the Linux distribution and get upgraded along with it (and receive continuous security updates along with it), while maintaining backwards compatibility for the application code (Perl scripts or Java bytecode). Tools like needrestart will notify when a process needs to be restarted to take advantage of the update.
Ah, all your dependencies are in the language you're using? Some of mine use dependencies that are written in compiled languages.
Not necessarily, but they are part of the Linux distribution.
Well, I don't know about you, but my dependencies have often been built against a static library from a different version of the OS, so they wouldn't work on mine.
OS updates are important sometimes. Security and all...
At -some- point you actually need to update things. If you're using a 2008 docker container you have all manner of bugs and security issues.
But at least the attack vectors are limited
yes, limited to those that work 100%!
Question - how can dependency hell be solved when using such a tool?
It seems so elegant and I love the "it just works" attitude, and I do understand that docker can't be used everywhere due to its technical (and mental) overhead, but I love it because it allows to isolate everything, freeze everything in time so running a container 5 years for now "just works".
In my humble workflow, I'm using lazydocker to manage the containers, gitlab workflow (action?) for deployment on push and a small VPS to build and push the containers to gitlab registry and to run it, on the same VPS. It's a little bit overkill - I could use a combination of a Dockerfile and a compose.yml with docker compose build. Also, I didn't figure out scaling yet. Good thing I don't need it! Otherwise I would swap docker for k8s and lazydocker for k9s.
(I'm open to suggestions. I just got into devops, and I love it!)
Personally I use the same approach as piku, but instead rebuild my NixOS config on push. My projects use nix flakes, so I get something that I know will run both on my server and on my local machine with the full development environment. No containers needed technically, but I use systemd-nspawn to run the software in its own sandboxed namespace.
My entire server is then managed declaratively, so if I want to add a new project, it’s like 3-5 lines of Nginx config and push, that’s all. Something goes wrong? Just revert the commit.
This sounds super interesting! Do you have an example of such a config somewhere, that you can share?
I use nix via jetify devbox. Maybe something like that could help here.
Nix would actually be fantastic for this, but I've never been able to get it to work (including with Devbox and a few other such solutions). I might try again, thank you.
A different niche than Piku but I will give Dokku another vote.
I've upgraded my dokku install over 3-4 Ubuntu LTS so far and it's been problem free for my use case of hosting little side projects on a VPS.
Sometimes docker is overkill and I'm so glad something exists that doesn't require it.