
The demise of the mildly dynamic website (2022)

jimbokun
44 replies
5d23h

I sometimes wonder what the hell AWS Lambda is and whether or not I should care. Now I have a succinct answer:

> What captured people's imaginations about AWS Lambda is that it lets you a) give any piece of code an URL, and b) that code doesn't consume resources when it's not being used. Yet these are also exactly the attributes possessed by PHP or CGI scripts.

From now on, when anyone mentions "AWS Lambda" I'm going to replace it with "CGI" in my head.
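
For anyone who never wrote one: a CGI script is just a program the web server spawns once per request; it writes an HTTP response to stdout and exits. A minimal sketch in Python (the cgi-bin path and server setup are assumed):

    #!/usr/bin/env python3
    # hypothetical cgi-bin/hello.py -- the server runs this once per request
    import os

    print("Content-Type: text/plain")  # header block goes to stdout
    print()                            # blank line ends the headers
    print("query was:", os.environ.get("QUERY_STRING", ""))
    # the process exits here; nothing runs (or bills) between requests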

afavour
14 replies
5d23h

The author’s assertion isn’t correct though.

Yes, technically speaking a PHP script that isn’t being executed isn’t itself costing you money. But it’s stored on a server that is running 24/7 and is costing you money.

Set up in the traditional way, CGI/PHP is priced by server uptime, be that per hour, per day, per month, whatever. The server runs all the time, it waits for requests, it processes them.

By contrast Lambda only costs money while your code runs. There’s a big difference there. Pricing of the two is different enough that a Lambda isn’t automatically cheaper but it’s misleading to suggest it’s just a CGI/PHP script. And we haven’t even started on the differences around scaling etc.
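
To put rough numbers on that, here's a back-of-envelope sketch; the rates are AWS list prices as I recall them, and they vary by region and change over time:

    # illustrative only -- check current Lambda pricing before relying on it
    requests = 1_000_000    # invocations per month
    seconds = 0.1           # average duration per invocation
    memory_gb = 0.125       # a 128 MB function
    per_gb_second = 0.0000166667
    per_million_requests = 0.20

    compute = requests * seconds * memory_gb * per_gb_second
    total = compute + (requests / 1_000_000) * per_million_requests
    print(f"~${total:.2f}/month")   # ~$0.41, before the monthly free tier
                                    # (which would cover all of this)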

chubot
6 replies
5d22h

There's nothing stopping anyone from implementing a cloud that runs CGI or FastCGI that scales down to zero (with per second billing), and scales up to infinity.

It's just that nobody has chosen to do so

Though I suppose not without reason. Google App Engine was one of the first "PaaS", billed on a fine-grained level, and it WAS initially based on CGI. Later they changed it to in-process WSGI, probably because working around CGI startup time is difficult / fiddly, and FastCGI has its own flaws and complexity.

I think it would have been better if they had developed some of the open standards like SCGI and FastCGI though. I think it could have made App Engine a more appealing product

Comments on Scripting, CGI, and FastCGI - https://www.oilshell.org/blog/2024/06/cgi.html
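
To give a sense of how small these standards are: an SCGI request is just the CGI environment serialized as a netstring. A client sketch in Python, written from memory of the spec, so double-check it against the real thing:

    import socket

    def scgi_request(host, port, environ, body=b""):
        # SCGI framing: the CGI environment, NUL-separated, wrapped in a
        # netstring; CONTENT_LENGTH must come first and SCGI must be "1"
        env = {"CONTENT_LENGTH": str(len(body)), "SCGI": "1", **environ}
        headers = b"".join(k.encode() + b"\0" + v.encode() + b"\0"
                           for k, v in env.items())
        payload = str(len(headers)).encode() + b":" + headers + b"," + body
        with socket.create_connection((host, port)) as sock:
            sock.sendall(payload)
            sock.shutdown(socket.SHUT_WR)
            chunks = []
            while chunk := sock.recv(4096):
                chunks.append(chunk)
        return b"".join(chunks)

    # e.g. scgi_request("localhost", 4000,
    #                   {"REQUEST_METHOD": "GET", "PATH_INFO": "/"})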

fernandopj
3 replies
5d22h

You have to consider that AWS Lambda does have "cold start" - if your code hasn't run for about 10 minutes it isn't "hot" anymore, and the first request after that pays a time penalty. That penalty isn't billed, but it is latency, as explained here [1]

[1] https://docs.aws.amazon.com/lambda/latest/operatorguide/exec...

chubot
0 replies
5d22h

Yes it's exactly like FastCGI ... if you make enough requests, then you have a warm process.

If you don't, then you may need to warm one up, and wait.
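
Concretely, a FastCGI app is a long-lived process that loops over requests instead of exiting after one. A minimal sketch using the third-party flup package (flup6 on Python 3); the port is arbitrary:

    from flup.server.fcgi import WSGIServer  # pip install flup6

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"served by an already-warm process\n"]

    # stays resident between requests; nginx/Apache forwards to this socket
    WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()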

So yeah I think AWS Lambda and all "serverless" clouds should have been based on an open standard.

But TBH FastCGI is not perfect, as I note in my blog post.

The real problem is that doing standards is harder than not doing them. It's easier to write a proprietary system.

And people aren't incentivized to do that work anymore. (Or really they never were -- the Internet was funded by the US government, and the web came out of CERN ... not out of tech companies)

The best we can get is something like a big tightly-coupled Docker thing, and then Red Hat re-implements it with podman.

EGreg
0 replies
5d21h

I think that Cloudflare Workers have been optimized to avoid the cold start problem far better than Amazon Lambda.

Do you know about them?

notatoad
0 replies
5d17h

> There's nothing stopping anyone from implementing a cloud that runs CGI or FastCGI that scales down to zero (with per second billing), and scales up to infinity.

> It's just that nobody has chosen to do so

i suspect they have. i'm sure there's a ton of different in-house implementations out there at various enterprises, that are a minimal wrapper on aws lambda to turn an ALB request into a CGI request

WorldMaker
0 replies
5d22h

> scales down to zero (with per second billing)

> It's just that nobody has chosen to do so

If we're having fun with "everything old is new again", then I do remember some classic hosts for things like "Guestbook" Perl CGI scripts would charge in those days per-"impression" (view/action). It's not quite CPU time, but a close approximation, you'd hope, and the associated costs would scale down to zero whether or not their hosting tech actually did. (Also, some of them certainly didn't scale to "infinity", though they tried.)

magicalhippo
3 replies
5d23h

So it's like code-only shared hosting, billed by CPU-time rather than wall-time, and with a load balancer in front.

afavour
1 replies
5d23h

And it scales to multiple servers to accommodate traffic without your input. But essentially, yes.

Not to sound uncharitable but I’m confused by statements like the OP’s, professing not to understand what Lambda is… it’s not actually that complex!

jimbokun
0 replies
5d21h

Didn't know enough about it to know whether I should care enough about it to investigate more.

salawat
0 replies
5d21h

And mind the opportunity cost: you never learn how to set such a thing up yourself, and AWS runs off to the bank. You could have had an asset (frequently a good idea) that you can do something with.

Remember, the innovation of cloud was to add a payment, IAM, and config layer on top of basic compute tasks.

gumby
2 replies
5d21h

> By contrast Lambda only costs money while your code runs.

There is no fee for uploading it, only a fee for executing? In that case could I upload a function that only disgorges a static payload? Then I could store a multi-GB file for free and just invoke it in order to download the data.

icedchai
0 replies
5d21h

You still get charged for data / bandwidth egress, of course. Either way, that multi-gig response wouldn't work. You'd discover lambda can only generate a 6 megabyte response!

Lambda has all sorts of weird limitations: https://repost.aws/questions/QUjXEef9ezTTKpWqKRGk7FSg/limita...

untech
8 replies
5d23h

I understand the sentiment, but to be fair, CGI doesn’t give you infinite scalability. An AWS Lambda-based solution is far less susceptible to something like the HN hug-of-death.

loloquwowndueo
1 replies
5d23h

But it’s entirely susceptible to your wallet being hugged to death if someone decides to ddos you.

afavour
0 replies
5d23h

That’s a trade off many businesses are very happy to make! Spike in traffic = spike in sales. For an actual DDOS there’s always WAF firewalling.

I have to admit I don’t get the lambda hate. You don’t have to use them but there is a valid use case for them.

chubot
1 replies
5d23h

CGI can definitely scale infinitely!! It's stateless.

PHP can scale to all of Facebook's front ends, and so can CGI. Because they are stateless.

Same with FastCGI. FastCGI just lets you reuse the process, so you don't have the "cold start" every time

> CGI had the original "cold start problem"

It's just that nobody has implemented a cloud that uses the CGI or FastCGI interfaces.

I quoted this same article in a blog post yesterday:

Comments on Scripting, CGI, and FastCGI - https://www.oilshell.org/blog/2024/06/cgi.html

afavour
0 replies
5d22h

This maybe gets to the core of why the author’s assertion is incorrect: they’re comparing a programming language to an execution environment.

Yeah, PHP can scale infinitely… if you provide it with the necessary server resources to do so. PHP is not going to scale infinitely on a Raspberry Pi.

This dynamically scaling server resource is Lambda. Hell, you can even run PHP on Lambda these days, so the comparison really doesn’t work.

troupo
0 replies
5d21h

> but to be fair, CGI doesn’t give you infinite scalability.

That your startup with three users will never need.

Your startup with a million users will not need it either

hooverd
0 replies
5d21h

Lambda doesn't either. Unless you ask AWS support nicely.

coryrc
0 replies
5d23h

Instead your wallet gets ddos'd.

derefr
7 replies
5d22h

Lambda is also distributed, though — while CGI scripts just live on the webserver itself. So if you're old like me and want a crisper mental model, the actual '90s equivalent of Lambda would be:

• a cluster of "diskless" web-server (let's say Apache) machines with a round-robin load-balancer in front of them;

• where these web servers are all using mod_userdir, pointing /~*/ at, let's say, "/wwwhome/*/www/" — and with `Options +ExecCGI` enabled for all UserDirs;

• where /wwwhome on all these web-server machines is a read-only NFS mount from the /home dir of a shared disk server;

• and where each user of this service has an FTP-enabled user and home directory on the disk server.

Given this setup, if a user bob FTPs a script to /home/bob/www/foo.pl on the disk server, then anyone can trigger that script to execute on some arbitrary web-server in the cluster by hitting http://lb.oldschool-lambda.example.com/~bob/foo.pl.

(I'm actually curious whether this exact architecture was ever implemented in the 1990s in practice, maybe by a university or ISP. It seems totally feasible.)
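
(For the curious, the Apache side of that sketch is only a handful of directives. Untested and from memory of the mod_userdir/mod_cgi docs, so treat it as a sketch:)

    # httpd.conf fragment for each diskless web server
    LoadModule userdir_module modules/mod_userdir.so
    LoadModule cgi_module modules/mod_cgi.so

    # map /~bob/ to /wwwhome/bob/www/ (the read-only NFS mount)
    UserDir /wwwhome/*/www

    <Directory "/wwwhome/*/www">
        Options +ExecCGI
        AddHandler cgi-script .pl .cgi
        Require all granted
    </Directory>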

---

One other major difference I should point out, is that Lambda functions are versioned — you don't overwrite a function, you just deploy a new content-hash-named copy of it and then update some "symbolic names" to point at it, but can always call a fixed version by its content-hash.

Implementing enforced versioning wouldn't require any changes to the above architecture, though, so it doesn't matter here. (It'd "just" be a fork of ftpd that would transparently map between a versioned backend view and an unversioned frontend view. Very much like what Amazon S3 does for versioned buckets, actually.)
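
(In today's Lambda terms, those "symbolic names" are aliases; roughly, with "my-fn" as a stand-in name:)

    # freeze the current code as an immutable numbered version
    aws lambda publish-version --function-name my-fn

    # repoint the "prod" alias; callers pinning an old version are unaffected
    aws lambda update-alias --function-name my-fn --name prod --function-version 7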

troupo
6 replies
5d21h

> Lambda is also distributed, though — while CGI scripts just live on the webserver itself.

Most of the stuff people use lambdas for they could run from the cheapest Digital Ocean droplet or cheapest Hetzner server.

You need "omg distributed" perhaps after your millionth user, and even then it's highly debatable.

derefr
4 replies
5d20h

You're thinking about the (rather few) advantages distribution confers for single-tenant scaling. But the distribution here is mainly for other reasons:

• operational fault-tolerance: arbitrary servers in the cluster hosting the function can fail or be taken down for repair without any workloads going down. This allows the ops team that owns the cluster to freely keep the cluster's machines up to date with OS package updates, reboot them as needed, swap them out for newer-generation hardware, move them to new racks in new DC buildings, etc. (This is also something you get from a VM on most larger clouds — but only because the VM itself is a workload running on a hypervisor cluster, where the cluster's control-plane can live-migrate workloads to drain host nodes for maintenance.)

• horizontally scaling the cluster as a whole, to keep up with more and more users sticking more and more workloads onto it, and more and more requests coming in for those workloads. As load increases, just add more diskless web servers. (This also means that in the rare case where your workload is too popular to fit on a cheap VPS, the ops team is doing your personal infrastructure scale-out for you for free, as part of scaling the capacity of the service as a whole.)

• maintaining your multi-tenant Quality-of-Service in the face of other users who have hugely-expensive workloads. On that 90s cgi-bin webserver — or on a modern VPS — if someone else deploys some workload that pins all the CPU cores for five seconds each time anyone calls it, then that impacts how long it takes you to serve calls to your workload, because your request has landed on the same machine and is waiting in some kernel queue behind that other workload's CPU bottleneck. VMs solve this by reserving capacity per workload that's "always on" even when the workload is idle. Lambda solves this by collecting resource-accounting statistics for each function and resource-utilization metrics for each backend, and putting that info together at the load-balancer level to plan a hybrid of least-conn and resource-packing routing to backend nodes.

---

But even ignoring all that, the point of the "distributed-ness" of the Lambda architecture on the low-usage end, is the "scales to zero" part that it shares with CGI.

Unlike VMs, you aren't paying anything per month for a function nobody is calling. And for a workload that does get called, but only runs for N aggregate CPU-seconds each minute, you're only paying N/60 of what you'd pay for a VM (which runs every CPU-second of every minute).

If you have one script and you have $5/mo burning a hole in your pocket, then sure, I guess you could put it on a DO droplet? (But you could just as well deploy a function that will cost you less than $5/mo.)

But if you have ten scripts, where some of them might be high-utilization (you're not sure yet, depends on how many people hit the web pages that call them) while others are definitely extremely low-utilization — then your options are either "becoming a hobbyist DevOps engineer, utilization planning a packing arrangement of VMs or hardware to optimize cost, installing metrics to ensure things aren't falling over, getting emails when they do, re-planning things out, and migrating them as necessary"... or just deploying 10 functions and getting what will probably be a bill each month for <$1.

troupo
3 replies
5d20h

You write a long wall of text that does nothing to dispute what I wrote.

The cheapest DO droplet is 4 dollars a month. It is more than capable of running "10 scripts", and will last your startup with three users indefinitely long.

If you're concerned about that cost, lambdas will not save you

derefr
2 replies
5d20h

Maybe we write very different kinds of scripts. For my startup, such a "script" might be:

1. An hourly itemized invoicing batch job for incremental usage-based billing, that pulls billable users from an ERP DB, grabs their usage with a complex Snowflake query (= where our structured access logs get piped), then does a CQRS reduction over the credit-and-spend state from each user's existing invoice line-items for the month, to turn the discovered usage into new line-items to insert back into the same ERP DB.

2. A Twilio customer-service IVR webhook backend, that uses local speech-processing models to recognize keywords so you don't have to punch numbers.

3. An image/SVG/video thumbnailer that gets called when a new source asset (from a web-scraping agent) is dropped into one bucket; and which writes the thumbnail for said asset out into another bucket. (For the SVG use-case especially, this requires spinning up an entire headless Chrome context per image, mostly in order to get the fonts looking right. We actually handle that part of the pipeline with serverless containers, not serverless functions, because it needs a custom runtime environment.)

#1 is an example of a low-utilization "script" where we just don't want to pay for the infra required to run it when it's not running, since when it is running it "wants" better-than-cheapest resourcing to finish in a reasonable time (which, if you were deploying it to a VM, would mean paying for a more expensive VM, to sit mostly idle.) We do have a k8s cluster — and this was originally a CronJob resource on that, which made sense at first — but this is an example of a "grows over time" workload, and we're trying to avoid using k8s for those sort of workloads, because k8s expects workloads to have fixed resource quotas per pod, and can't cope well with growing workloads without a lot of node-pool finessing.

#2 and #3 are high-utilization (one in CPU and RAM, one in requiring a GPU with VRAM) "scripts", where only one or two concurrent executions of these would fit on a $4/mo DO droplet; and where ten or twenty of these on a more vertically-scaled VM or machine would start to strain the network throughput of even a 2.5Gbps link. Many cheap-ish machines, each with their resourcing (including their own network bandwidth), all instantaneously reserved and released at one machine per request is a perfect match for the demand profile of these use-cases.
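
The shape of #3, for anyone who hasn't seen an S3-triggered function; a skeleton with the thumbnailing and output bucket name as stand-ins:

    import boto3

    s3 = boto3.client("s3")  # created at cold start, reused while warm

    def make_thumbnail(data: bytes) -> bytes:
        # stand-in: the real thing rasterizes/resizes (Pillow, headless
        # Chrome for SVG, etc. as described above)
        return data[:1024]

    def handler(event, context):
        # S3 "object created" notifications arrive as records in the event
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            source = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            s3.put_object(Bucket="thumbnails-bucket",  # stand-in name
                          Key=key, Body=make_thumbnail(source))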

troupo
1 replies
5d11h

I still don't see a need for lambdas or how a cost to run them on a DO droplet/Hetzner server would be so prohibitive that you'd be concerned about saving a few dollars a month.

Note: There's a reason I keep saying "You need 'omg distributed' perhaps after your millionth user, and even then it's highly debatable."

derefr
0 replies
4d22h

I think you misunderstood what I said above, because I wasn't talking about what you as "a person who wants to run scripts" needs; I was rather talking about what the team managing the infrastructure for these people needs. Serverless functions are great for the people managing a multitenant cloud-function compute cluster, because the architecture is nearly-stateless and can be easily scaled.

But these properties matter not one bit to most users deploying serverless functions. Most users don't need a "distributed" function. The advantages that users see come from the set of key service objectives that the IaaS's DevOps team can deliver on because they're able to scale and maintain the service so easily/cheaply/well.

Think of a customer-employer-employee relationship. A plumbing company doesn't buy you-their-employee a company car because they want to depend directly on you having a car (e.g. by adding "chauffeuring" to your job duties.) They buy you a company car because it enables you to get to job sites faster and more reliably; to keep all your equipment securely in the truck ready to go rather than having to load/unload it from your regular family SUV when you get a call; to bring along more, heavier equipment that would be impossible to load up on short notice; etc. In short, it enables you to do your job better — which in turn enables the company to deliver the service they market to customers better, and probably cheaper.

Choosing a "CGI-bin server host" because you know it's built on a distributed substrate, is like picking a plumbing company because you know all their employees roll out with nice, well-equipped company vans. The plumbing companies without those vans could still do the job... but the van, with a bevy of equipment all well-organized on wall-hooks and shelving units, makes the person who comes to your call-out more well-equipped to help you. Serverless function (= distributed CGI-bin) hosts, are more well-equipped to host functions.

---

My key assumption — that I maybe left too implicit here? — is that in general, all else being equal, people who "just want to deploy something" (= don't need to glue a whole lot of components together into a web of architecture), should prefer using "managed shared-multitenant infrastructure" (i.e. paying for usage on "someone else's server", without any capacity-based resource reservations) over paying to reserve capacity in the form of a bare-metal machine, VM, or PaaS workload.

(Specifically, people should prefer to use standard FOSS frameworks that ship adapters for different IaaS solutions — e.g. https://www.serverless.com/ in the FaaS case — to enable the use of any arbitrary "managed shared-multitenant infrastructure", without vendor lock-in.)

Due to many simultaneous economies of scale involved — in hardware, in automation, in architecture, in labor, etc — "managed shared-multitenant infrastructure" almost always has these benefits for the user:

1. more reliable / lower maintenance

2. cheaper (often free for hobbyist-level usage!)

3. higher potential performance for the same price

For example, managing a few MBs of files through an Amazon S3 bucket, is going to be more reliable, lower maintenance, free(!), and more performant than managing those same files using a two-core, 2GB RAM, single 100GB HDD deployment of Minio.

In this case, deploying a cloud function is going to be more reliable, lower maintenance, cheaper (probably free), and more performant than deploying the same script to a tiny VM with a tiny webserver running on it.

---

I should especially emphasize the "free" part.

There's a big mental barrier that hobbyists have where they're basically unwilling to pay any kind of monthly fee to run a project — because they have so many nascent projects, and they're unsure whether those projects will ever amount to anything useful, or will just sit there rotting while costing them money, like a membership to a gym you never go to.

Being able to deploy a pre-production thing, and have it cost nothing in the months you don't touch it, creates a dramatic worldview shift, where suddenly it's okay to have O(N) projects "on the go" (but idle) at once.

(If you don't understand this, try putting yourself in the position of asking: "do I want to start paying monthly for a VM to host my projects, or is it higher-value to use the same 'fun money' to pay for an additional streaming service?")

---

But also, from another perspective: these "trivial costs" of a few dollars a month, add up, if for whatever reason you need to isolate workloads from one-another.

For example, if you're a teenager trying to do web-dev gig work on Fiverr, charging a flat fee (i.e. no pass-through OpEx billing) to deliver a complete solution to clients. Each client wants to be able to deploy updates to their thing, and wants that to be secure. How do you deliver that for them? Read Linux sysadmin books until you can set up a secure multitenant shell and web server, effectively becoming your own little professional SRE team of one? Or just build each of their sites using something like Vercel or Netlify, and give them the keys?

For another example, if you have personal projects that you don't want associated with your professional identity, then that'd be another VM, so another $4/mo to host those. If you have personal projects that are a bit "outre" that you don't even want associated with your other personal projects, then that'd be another $4/mo. If you do collaboration projects that you want to have a different security barrier, because you don't trust your collaborators on "your" VMs — another $4/mo per collab.

Why do this, when all these projects would collectively still be free-tier as functions?

MOARDONGZPLZ
0 replies
5d20h

I’m guilty of this sort of “kids these days” thinking as well. But really the AWS lambda free tier is very generous and I don’t get any sort of bill unless there’s a very large spike in usage. With DO or Hetzner I pay for the server to be up and get a monthly bill no matter what.

I would even somewhat reverse your assertion: use Lambda until you have a need for real server infra, and then upgrade to things like Hetzner.

pram
4 replies
5d23h

Lambda has its places, but I've noticed devs consistently using it for things where it doesn't make any goddamn sense. Like processing messages from a Kafka consumer, where the function is running 24/7. People think it's a container or something.

morkalork
0 replies
5d22h

I can definitely see lambdas being abused when the process for requesting dedicated resources is too arduous.

liveoneggs
0 replies
5d22h

Yes, where I work there is an alert for when a lambda is not running. I have proposed wrapping the lambda code in a while() loop on Fargate many, many times to save $thousands of dollars, but I lack the political power to actually do it.

Sharlin
0 replies
5d23h

News at eleven: devs use tech for utterly inappropriate purposes just because it happens to be cool at the moment.

EGreg
3 replies
5d21h

Lambda is like CloudFlare workers.

They are deployed AT THE EDGE, ON THE CDN, and therefore can handle requests without hitting your network and database.

For example I recommend your servers sign all the session IDs they give out, so they can easily be discarded if the signature doesn’t match or if they’ve been blacklisted. Such decisions can be made without doing any I/O, or by checking a local cache that was built up when eg a blacklisted session ID was tried just recently.
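
A sketch of that signing scheme in Python; the same HMAC check is what you'd run at the edge, and the secret below is a stand-in:

    import hmac, hashlib, secrets

    SECRET = b"stand-in; share securely with the edge"

    def issue_session_id():
        sid = secrets.token_hex(16)
        sig = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
        return f"{sid}.{sig}"

    def check(session_id):
        # no I/O: forged or garbage IDs are rejected right here
        try:
            sid, sig = session_id.split(".")
        except ValueError:
            return False
        good = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, good)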

They can also spin up huge numbers of instances and fan out requests to MANY different servers, not just yours and your domain names. They allow client-first development, where you might not build ANY back end at all, just use some sort of JAM stack.

And now, CloudFlare Workers supports a key-value store and a SQL database so it is sort of an environment that will autoscale all your deployments in CDNs around the world and take care of the eventual consistency too.

troupo
2 replies
5d21h

> where you might not build ANY back end at all, just use some sort of JAM stack.

So, a backend

> And now, CloudFlare Workers supports a key-value store and a SQL database

So, a backend.

EGreg
1 replies
5d20h

A back end with autoscaling on CDN.

Not a single server, like you'd set up.

troupo
0 replies
5d11h

The autoscaling 99% of businesses don't need, and won't need even when they hit their 1 million DAU

throwaway22032
2 replies
5d23h

Basically everything in AWS is something we already had, but more expensive, because someone else does the easiest bit for you.

tracerbulletx
0 replies
5d23h

I don't know how anyone who experienced how Ops worked at companies before AWS can say things like this. Is AWS right for everyone economically? No. Is it an impressive automation of software operations that has driven the industry forward by leaps and bounds? Yes.

jjk166
0 replies
5d22h

Some people don't want to create a universe every time they want an omelet.

decasia
37 replies
6d

I think the spirit of this article is correct, although some of the digs at modern web tech and SPAs seem to be beside the point.

I used to have a "mildly dynamic website." It was a $5 digital ocean box. It ran nginx with php-fpm, mostly so it could have a Wordpress install in a subdirectory, and it had a unicorn setup for an experimental Rails app somewhere in there.

Given that environment, the "mildly dynamic website" experience that TFA talks about was absolutely true. If I wanted a simple script to accept form input, or some little tiny dynamic experimental website, I could trivially deploy it. I could write PHP (ugh) or whatever other backend service I felt like writing. I ported the Rails app to golang after a while. It was fun. It made for a low cost of entry for experimental, hackish things. It's a nice workshop if you have it.

The thing is — if you are running this setup on your own linux virtual machine — it requires endless system maintenance. Otherwise all the PHP stuff becomes vulnerable to random hacks. And the base OS needs endless security updates. And maybe you want backups, because you got lazy about maintaining your ansible scripts for system setup. And the price of the $5 virtual linux box tends to go up over the years. And the "personal website" model of the web has kind of declined (not that it's altogether dead, just marginalized by twitter/facebook).

So I got exhausted by having to maintain the environment (I already do enough system maintenance at work) and decided to switch to static HTML sites on S3. You can't hack it anymore. But so far — I can live with it.

edflsafoiewq
12 replies
6d

Shared hosting is probably better, and AFAIK more common, for the mildly dynamic website. The host handles a lot of the admin tasks like OS updates that you have to handle yourself with a VPS.

massysett
8 replies
5d23h

NearlyFreeSpeech.NET is good for this. Their main tier - "production" sites - are very inexpensive and the admins take care of OS and server-software updates. They have another tier - "non-production" sites - that are even cheaper and can be perfectly sufficient for a personal homepage. The admins maintain these servers as well but they might do beta testing on them.

The environment is fully hackable and has PHP, SSH, SFTP, MariaDB, dominant languages like Perl and Python, obscure languages like Haskell and Lisp, etc etc.

svieira
3 replies
5d20h

And even more obscure languages like Forth and Octave. The only thing to watch out for is that they run FreeBSD (instead of a more "normal" distribution) so if you're used to Linux-as-seen-on-Debian-or-Ubuntu there are a few things that are different (but so cozy).

kbolino
2 replies
5d5h

For what it's worth, FreeBSD is actually its own thing and not Linux at all. It's descended from Berkeley Unix and has no code in common with Linux or GNU (though it can still run software that's cross-compatible).

svieira
1 replies
5d5h

Correct - but if you're normally interacting with Linux boxes on the command line you're probably not going to be too far from home since the _programs_ behave mostly-the-same.

kbolino
0 replies
5d4h

Yes, though I guess this means we're in the era where "Linux" is more widely understandable than "Unix".

mk12
1 replies
5d20h

+1 for NearlyFreeSpeech! I’ve used them for years paying only 40-45 cents a month. I love that I can just ssh in and mess around. My site is mostly static but recently I wanted to add a private section for specific family and friends. So I implemented OAuth 2.0 login with 2 php files and an .htaccess rule.

Something1234
0 replies
5d18h

Love nearly free speech.

Although I wonder how often your family actually uses it.

mapmap
0 replies
5d10h

Does it support dot net?

bachmeier
0 replies
5d16h

For some reason I've never heard of it before. Looks like a good value as long as you avoid storage. Even shows D as a supported language!

endofreach
0 replies
5d14h

As much as I love every P standing for PHP, "PaaS" usually means Platform as a Service

wwweston
0 replies
5d22h

This. For people looking for hosting as a service that don’t need scale (and even for some people who think they do) shared hosting is often the best low-admin + low-cost solution. Buuut you can’t brag about your cloud setup.

dreadnip
9 replies
5d22h

> it requires endless system maintenance. Otherwise all the PHP stuff becomes vulnerable to random hacks

How so? I've seen PHP websites & apps run for 10+ years in production without updates. Even longer with a simple "sudo apt update" every few months and a "composer update" every year or so. The maintenance rate is actually very very low.

crazygringo
4 replies
5d19h

Years ago a Digital Ocean virtual server of mine stopped working because I had never upgraded Ubuntu to the newest major version. After a few years, the version of Ubuntu was no longer supported by the Digital Ocean hypervisor and couldn't mount or boot at all.

In my experience, yes you absolutely need maintenance. In the past I've had to upgrade from HTTP to HTTPS, upgrade the OS, upgrade to newer versions of external API and embedded components because the old ones were deprecated, handle a domain registrar shutting down, and then yes absolutely PHP updates and upgrades for security that then start giving you warnings because less secure versions of functions are being deprecated...

And frequently updating the one thing that's broken necessitates upgrading a bunch of other things that breaks other things.

I literally cannot imagine how you would keep a PHP site running on a virtual server for 10 years without any maintenance. I need to address an issue probably roughly once a year.

Publius_Enigma
3 replies
5d15h

These are all problems that shouldn’t exist. You have succinctly described the problems with modern IT. Software doesn’t need to have an expiration date. It doesn’t decay or expire. But because of our endless need to change things, rather than just fix bugs, we end up with this precarious tower of cards.

If, as an industry, we focussed on correctness and reliability over features, a lot of these problems would disappear.

quest88
0 replies
5d15h

I agree there's some truth in what you say. I do think these upgrades are part of a path towards correctness and reliability (bug fixes, security vulnerabilities, etc).

kbolino
0 replies
5d5h

But the hardware does expire. Computers aren't just magically "faster" than they were decades ago; they're altogether different under the hood. An immense number of abstractions have held up the image of stability but the reality is that systems with hundreds of cores, deep and wide caches, massively parallel SSDs and NICs, etc. require specialized software compared to their much simpler predecessors to be used effectively. Feature bloat is a major annoyance, and running the old software on new hardware can give the appearance of being much faster, until it locks everything up, or takes forever to download a file, or can't share access to a resource, or thinks it has run out of RAM, or chews up a whole CPU core doing nothing, etc.

ArneBab
0 replies
5d12h

Yes, these problems shouldn’t exist. But they do.

One of the big strength of the Web is the commitment of Mozilla „do not break the web“.

But this is hitting its limits, because the scope of Javascript is being expanded more and more (including stuff like filesystem APIs for almost arbitrary file access — as if we had not learned from Java WebStart that that’s a rabbit hole that never stops gifting vulnerabilities), so to keep new features safe despite their much higher security needs, old features are neutered.

I lost a decentralized comment system to similar changes.

imabotbeep2937
3 replies
5d17h

"I have run production websites where I didn't patch security for months or years on end." Linux users wondering why nobody takes them seriously.

citizen_friend
1 replies
5d17h

Security people on high alert for every possible scenario with no sense of relative risk or attack surface wonder why their concerns aren’t taken seriously.

Publius_Enigma
0 replies
5d15h

This. Furthermore, this posture has percolated down to home computing environments (because it is all Windows or Linux), so even my home computer has to receive constant updates as if it's controlling a lunar lander.

remram
0 replies
5d17h

I have a box with nearly 5 years of uptime; the one it replaced had at least that much. My experience matches GP's. unattended-upgrades gives you 99% of the patches, and a manual upgrade every few months will get you the rest.

If you see a problem with this, why not point it out directly, instead of this snark?

notatoad
3 replies
5d17h

> and decided to switch to static HTML sites on S3

it's really hard to overstate how great s3 sites are with cloudfront in front of them. mine costs me <$1/month, i have essentially no concerns about security, ddos, maintenance, anything. if i want to update it, i push some different files to s3. the backups are the other copy of the data on my local machine that i pushed to it.

the added complexity for any level of dynamic-ness is really just not worth it, unless i'm going to go to the trouble of making a full-on app that needs a revenue model.

selcuka
1 replies
5d11h

CloudFlare Pages are another great product for zero maintenance static sites, and they can be made mildly (or heavily, if you want) dynamic with CloudFlare Workers.

mmarian
0 replies
5d8h

Second Cloudflare. And it's super easy to hook up to domains registered via Cloudflare too.

jmathai
0 replies
5d14h

Similarly, Github Pages are great. You get rudimentary dynamic support by using a static site generator if you need it.

fpoling
2 replies
5d20h

I am puzzled that your site required constant maintenance. I run a similar setup that I harden using systemd service restrictions, with nothing running as root. I also subscribed to the Debian security list and a couple more mailing lists with security announcements. It turned out I needed to spend about 20 minutes per month to maintain it.

I also find that PHP works much better than Go regarding maintenance effort. With Debian I have automatic updates of all the PHP dependencies that I need, so the security announcements are a nice single source of truth. But with Go I would need to set up monitoring of dependencies for updates myself and recompile/deploy the code as necessary.

bitslayer
1 replies
5d18h

They didn't say constant maintenance, they said endless maintenance. 20 minutes a month is a never ending commitment of time. They have better things to think about.

oopsallmagic
0 replies
5d16h

Yeah, like developing a résumé.

pixl97
1 replies
6d

And then you forgot to add "I had to stick it behind Cloudflare because someone decided to DDoS it and send me 500GB/s of traffic for no apparent reason at all."

The early web was the wild west. The modern web has turned into small fry trying to hide in the shadows of megalodons so they don't get eaten by a goliath.

ozim
0 replies
5d7h

You don't have to go as far as DDOS.

The blog post mentions a comment section like it's something you can "just do" - no one wants to do comment sections in 2024 on their own. The amount of crap you'll get once you have a mildly popular post is insane.

mtalhaashraf
0 replies
5d4h

I tried S3 for my static HTML site, but the problem is my personal website gets infrequent visitors, and for each visit the first load is always slow, probably because my website gets moved from memory to disk. The thing is, my website is less than 20KB and I don't want it to be slow for users just because traffic is infrequent. Managing my own server allows me to lock my website into memory.

layer8
0 replies
5d4h

> The thing is — if you are running this setup on your own linux virtual machine — it requires endless system maintenance. Otherwise all the PHP stuff becomes vulnerable to random hacks. And the base OS needs endless security updates.

Just use Debian with unattended-upgrades. Done. Only rarely do you have to do anything manually.

Setting up daily backups with verification is also a one-time thing.
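
For reference, the one-time setup is roughly this (from memory; verify against the Debian wiki):

    # Debian/Ubuntu
    apt install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades

    # the reconfigure step writes /etc/apt/apt.conf.d/20auto-upgrades:
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";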

jimmaswell
0 replies
5d1h

A daily apt update/upgrade in crontab has been working fine for me. What is the maintenance beyond that?

chubot
0 replies
5d22h

> The thing is — if you are running this setup on your own linux virtual machine — it requires endless system maintenance.

Yeah, this is exactly what I talk about in this post -- why do I use shared hosting?

Comments on Scripting, CGI, and FastCGI - https://www.oilshell.org/blog/2024/06/cgi.html

BenjiWiebe
0 replies
5d16h

Linode's base tier VPS is still $5/month.

politelemon
29 replies
6d1h

> What captured people's imaginations about AWS Lambda is that it lets you a) give any piece of code an URL, and b) that code doesn't consume resources when it's not being used. Yet these are also exactly the attributes possessed by PHP or CGI scripts. In fact, it's far easier for me to write a PHP script and rsync it to a web server of mine than for me to figure out the extensive and complex tooling for creating, maintaining and deploying AWS Lambda functions — and it comes without the lock-in to boot. Moreover, the former allows me to give an URL to a piece of code instantly, whereas with the latter I have to figure out how to setup AWS API Gateway plumbing correctly. I'm genuinely curious how many people find AWS Lambda interesting because they've never encountered, or never properly looked at, CGI.

Well, assuming you are genuinely curious and not just using an expression!

The difference is that the 'web server' is still consuming resources when the code is not in use. They aren't equivalent at all. The web server is hosted on an OS and both require ongoing maintenance.

Further, the appeal of Lambda is in its ease of onboarding for newcomers; I can run a piece of .NET or JS or Python locally and directly without a lambda 'layer' to host it, just invoke the handler method.

I'm not sure what complex tooling the author is referring to, though; it's a zip and push.
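
For scale, the minimal artifact really is one handler plus a zip; a sketch, with the function name as a stand-in for one you've already created:

    # lambda_function.py -- the entire "app"; lambda_handler is the default name
    def lambda_handler(event, context):
        return {"statusCode": 200, "body": "hello"}

    # deploy:
    #   zip fn.zip lambda_function.py
    #   aws lambda update-function-code --function-name my-fn \
    #       --zip-file fileb://fn.zip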

icedchai
10 replies
6d1h

Once you add API gateway, IAM roles/permissions, VPC, security groups, it gets a lot more complicated. Then you want to host a static web site, reverse proxying to API gateway, add CloudFront, WAF, etc. You'll go crazy setting this up manually, so you'll also want Terraform or CloudFormation to make it repeatable.

For anything complex, you'll run into "cold start" issues and have to look at provisioned concurrency.

Lambda also sucks for dependencies. Zips and "layers" can only be so large. Eventually you'll hit the limit and have to move to containers. There are also other limitations, like payload sizes. Eventually you might run into that, too.

Also, it would be nice if Lambda just reused the CGI interface instead of inventing its own thing.

owenversteeg
8 replies
5d19h

Yeah, it's an absolute explosion of complexity, and with it comes the risk that you miss something and are faced with a security issue or giant bill or both.

What I would kill for is something in between all of this and FTPing PHP around like it's 1999. I've hunted for years for middle-ground solutions and haven't found anything. Security, cost, performance etc are important, sure, but what I really yearn for is a simple, easy, bulletproof solution like the days of yore when a handful of simple scripts could chug on for decades. Two decades ago you could set up a simple site that needed no attention for twenty years. Today's technologies require constant attention and updates and if you blink then everything requires an update that's not compatible with your code. Meanwhile, PHP from 2004 can be trivially run in 2024. What's the PHP-and-FTP of today? Does it exist?

llm_trw
2 replies
5d17h

It's still php-and-ftp. Sometimes I get fancy and use python and ssh.

icedchai
1 replies
5d17h

I was going to say the same thing. I generally use Laravel instead of "raw" PHP these days, but still...

ArneBab
0 replies
5d12h

I nowadays mostly use a simple static website plus minimal Javascript, but I still have PHP sites.

It still works. And PHP today is at least a factor of 3 faster than PHP was back then — on the same hardware.

So maybe just use PHP and ftps or sftp?

zeroq
1 replies
5d13h

Actually they don't, at least most of the time.

At my last shop the entry cost for hosting a single static file was around $2500 for infra, because that was the agreed-upon template for a project that was pre-approved by all necessary committees. I tried to fix it for quite some time, presented a solution to the C-level, got some initial funding, but eventually, after many pushbacks, I died on that hill.

People like to play with toys, CV-driven development is a real thing, and when it comes to security and compliance everyone wants to be more saintly than the pope, just in case.

icedchai
0 replies
5d1h

Is that $2500/month?

konfusinomicon
0 replies
5d16h

There are far better ways, but for the sake of simplicity I suppose you could ssh into the box, cd /some/dir, and git pull the latest branch. That way at least you have version control and can roll back quickly if you bork your site.

denton-scratch
0 replies
5d4h

> Two decades ago you could set up a simple site that needed no attention for twenty years.

...but as of yesterday, you can't?

inopinatus
0 replies
5d16h

> nice if Lambda just reused the CGI interface

Lambda is an event-driven system; synchronous, stateful request/reply for HTTP is shoehorned onto that. At heart it's a scale-out queue processor, not a web server. This whole "Lambda gives your code a URL" folks speak of is actually an API Gateway thing. This is also why Lambda has at-least-once semantics rather than at-most-once, which definitely surprises the casual web developer from time to time ("why did that happen twice??").

That said, I don't disagree with the sentiment; it would be quite nice if API Gateway had a simple option to map incoming request parameters to CGI-like attribute names in the event payload.

chasd00
8 replies
6d1h

In a lot of these discussions the point gets raised about the work to maintain a self-hosted server. When I've done it, I install the OS (usually Ubuntu Server), turn off unused services, set up the firewall to only allow required ports, and then it just sort of sits there and does its thing. Uptimes have been measured in years in some cases, and the server just sits there happily serving whatever HTML and connecting to whatever DB forever.

pixl97
3 replies
6d

I mean, typically, hosting a static site is fine, but with the number of exploits these days and the ability of people to chain them together, maybe you don't even realize your box has been exploited?

Uptimes in years mean your box, especially without updates, has something it can be targeted with.

graemep
2 replies
6d

Most of those that are fine for a static site but not a dynamic one are those that are not fixed by just applying updates from the distro.

You still need to update your app code even if you are using someone else's servers, so it's the same either way.

pixl97
1 replies
5d20h

Note that the parent said "uptimes measured in years", so they are either using a more complex system with multiple servers, or they are not doing security updates.

graemep
0 replies
5d11h

I agree I would not endorse uptimes measured in years, which means not doing kernel updates - but OP does qualify that with "in some cases".

josephcsible
3 replies
5d21h

Do you enable unattended upgrades and kernel live patching? If not, then that doesn't seem secure.

cutler
2 replies
5d20h

`dnf upgrade` or `apt update && apt upgrade` once a month isn't so much work. If either includes a kernel upgrade then it's `reboot now` and I'm done.

josephcsible
1 replies
5d19h

But if you're doing that, it's not just sitting there, and you'd never get anywhere close to years of uptime.

poincaredisk
0 replies
5d10h

In most cases people stress uptime too much. Developers worry about downtime during restarts and overcomplicate everything because of it. Meanwhile, my national railway reservation system has an hour of planned downtime daily and life carries on.

jacobgkau
3 replies
6d

> They aren’t equivalent at all. The web server is hosted on an OS and both require ongoing maintenance.

From a technical standpoint, surely the servers running Lambda have an OS and require maintenance internally by Amazon at some level (even if it's dev-ops-abstracted away for their operators). It's just their responsibility instead of yours, and it's also their responsibility to find other work for those servers when your code isn't running (or bear the cost of an idle server). It's like if they were letting you push PHP functions to their web server, using the quoted comparison.

Useful from a business standpoint, yes. Revolutionary, maybe not as much as it seems at first glance. That's the takeaway I got.

Joker_vD
1 replies
6d

And, of course, they don't just "bear the cost of an idle server". Just like any other fixed costs in any other industry, those costs get smeared over the charged prices. Unless there is some serious economy of scale (i.e. ops at Amazon managing to handle the upkeep of 1,000 of their servers for less than 1/1000 of what it costs you to manage one of yours), you end up paying somewhat more. But then again, you don't have to spend time managing your own server, which is probably a positive trade-off.

Uehreka
0 replies
5d23h

> Useful from a business standpoint, yes.

This business standpoint is the whole thing, I don’t see many people arguing otherwise. And the business standpoint is in fact quite revolutionary.

Amazon can have a team of like 5 engineers maintaining VM images and as a result 500,000 other people don’t have to. And so you can host a “server” that’s only “on” when it’s in use, and usually end up paying less than $1/mo.

In fact, you could likely run all of your side projects in lambda and if they’re “conventional web server” type things you could still end up paying less than $1/mo across all of them.

Compare that to a $5/mo droplet for every side project (you probably don’t want to bunk multiple services into a $5 VM) and it definitely adds up before even considering updating the OS on those droplets.

Sharlin
3 replies
5d22h

Running PHP on a VM in a data center doesn’t cost you anything either while the VM isn’t doing stuff, and you still have something to ssh into and manage and investigate without layers of abstraction bolted on top. And PHP-on-httpd is pretty much one of the most newcomer-friendly straight-to-coding environments ever developed.

sophacles
2 replies
5d21h

Where can I get a VM in a data center that doesn't cost me anything?

theendisney
1 replies
5d16h

cost and convenience: Time is money.

Whatever works for you is more important than 5 euro.

rty32
0 replies
5d14h

Lambda is pretty straightforward as well for those who know how to use it.

You probably could do something marginally useful with lambda within 30 seconds as well.

ArneBab
0 replies
5d12h

Don’t you have a virtual server? I’m pretty sure that my managed webserver runs on some virtual machine somewhere on a bigger server, and when it is not doing stuff, other tasks are running there and using the CPU and bandwidth.

There is likely some fixed memory and disk space cost, but those are negligible with today's capacities.

solardev
8 replies
6d

This perspective isn't really making an apples-to-apples comparison. The author is comparing modern framework bloat to the simplicity of a standalone PHP script, but disregarding the underlying stack that it takes to serve those scripts (i.e., the Linux, Apache/Nginx, MySQL/Postgres in LAMP).

Back in those days, it was never really as simple as "sftp my .php file into a folder and call it a day". If you were on a shared host, you may or may not have access to any of the PHP config, needed for things such as adjusting memory limits (or your page might not render), which particular PHP version was available (limiting your available std lib functions), which modules were installed (and which version of them, and whether they were made for fastcgi or not). Scaling was in its infancy those days and shared hosts were extremely slow, especially those without caching, and would frequently crash whenever one tenant on that machine got significant traffic. If you were hosting your own in a VM or bare-metal, things were even worse, since then you had to manage the database on your own, the firewall, the SSH daemon, Apache config files in every directory or Nginx rules and restarts, OS package updates, and of course hardware/VM resource constraints.

Yes, the resulting 100-line PHP script sitting on top of it all might be very simple, but maintaining that stack never was (and still isn't). Web work back then was like 25% coding the PHP and 75% sys-admining the stack beneath it. And it was really hard to do that in a way that didn't result in customer-facing downtime, with no easy way to containerize, scale, hot-standby, rollover, rollback, etc.

=====================

I'd probably break down this comparison (of LAMP vs modern JS frameworks) into questions like this, instead:

1) "What do I have to maintain? What do I WANT to maintain?"

IMHO this is the crux of it. Teams (and individual devs) are choosing JS frameworks + heavy frontends because even though there are still servers and configurations (of course), they're managed by someone else. That abstraction and separation of concerns is what makes it so much easier to work on a web app these days than in the PHP days, IMO.

Any modern framework now is a one-command `create whatever app` in the terminal, and there, you have a functioning app waiting for your content and business logic. That's even easier than spinning up a local PHP stack with MAMP or XAMPP, especially when you have more than one app on the same disk/computer. And when it comes time to deploy, a single `git push` will get you a highly-available website automagically deployed in a couple minutes, with a preconfigured global CDN, HTTPS, asset caching, etc. If something went wrong, it's a one-click rollback to the previous version. And it's probably going to be free, or under $20/mo, on Vercel, Cloudflare Pages, Netlify, etc. Maybe AWS Amplify Hosting too, but like Lambda, that's a lot more setup (AWS tends to be lower-level and offers nitty-gritty enterprise-y configs that simpler sites don't need or want).

By contrast, to actually set up something like that in the PHP world (where most of the stack is managed by someone else), you'd either have to find a similar PHP-script-hosting-as-a-service like Google App Engine (there's not many similar services that I know of; it's different from a regular shared host because it's a higher level of abstraction) or else use something like Docker or Lando or Forge or GridPane to manage your own VM fleet. In the latter cases you would often still have to manage much of the underlying stack and deal with various configs and updates all the time. It's very different from the hosted JS world.

The benefit of going with a managed approach is that you're really only needing to touch your own application code. The framework code is updated by someone else (not that different from using Laravel or Symfony or Wordpress or Drupal). The rest of the stack is entirely out of your sphere of responsibility. For "jamming" as an individual or producing small sites as a team, this is a good thing. It frees up your devs to focus on business needs rather than infrastructure management.

Of course, some teams want entirely in-house control of everything. In that case they can still manage to their own low-level VMs (an EC2 or similar) and maintain the whole LEMP or Node stack. That's a lot more work, but also more power and control.

A serverless func, whether in JS (anywhere) or PHP (like via Google Cloud Run), is just a continuation of this same abstraction. It's not necessarily just about high availability, but low maintenance. You and your team (and the one after them, and the one after that) only ever have to touch the function code itself, freeing you from the rest of the stack. It's useful the same way that being able to upload a video to YouTube is: You can focus on the content instead of the delivery mechanism.

2) Serverside resource consumption

It's not really true that "PHP scripts don't consume any resources (persistent processes, etc.) when they're not being used", any more than a JS site or serverless func isn't consuming resources when they're not being used. Both still require an active server on the backend (or some server-like technology, like a Varnish or Redis cache or similar).

Neither is really an app author's concern, since they are both hosting concerns. But the advantage of the JS stuff is that it's easier and cheaper for hosts to containerize and run independently, like in a V8 isolate (for Cloudflare Workers). It's harder to do that with a PHP script and still ensure safety across shared tenants. Most shared PHP environments I know of end up virtualizing/dockerizing much of the LAMP stack.

3) Serverside rendering vs static builds vs clientside rendering

As for serverside rendering vs static builds, the article doesn't really do a fair comparison of that either. This is a tradeoff between delivery speed and dynamicness, not between PHP and JS.

Even in the PHP world, the PHP processor itself offered caching, then frameworks like Wordpress would offer their own caching on top of that, then you would cache even the result of that in Varnish or similar. That essentially turns a serverside rendered page into a static build that can then be served over a CDN. This is how big PHP hosts like Pantheon or Acquia work. No medium or big site would make every request hit the PHP process directly for write-rarely, read-often content.

In the JS world, you can also do serverside rendering, static builds, clientside renders, and (realistically) some combination of all of those. The difference is that it's a lot more deliberate and explicit (but also confusing at first). But this is by design. It makes use of the strength of each part of that stack, as intended. If you're writing a blog post, chances are you're not going to edit that more than once every few weeks/months (if ever again). That part of it can be statically built and served as flat HTML and easily cached on the CDN. But the comments might trickle in every few minutes. That part can be serverside rendered in real time and then cached, either at the HTTP level with invalidations, or incrementally regenerated at will. And some things need to be even faster than that, like maybe being able to preview the image upload in your WYSIWYG editor, in which case you'd optimistically update the clientside editor with a skeleton and then verify upload/insertion success via AJAX. The server can do what it does best (query/collate data from multiple sources and render a single page out of it for all users to see), the cache can do what it does best (quickly copy and serve static content across the world), and the client can do what it does best (ensure freshness for an individual user, where needed).

It is of course possible (and often too easy) to mis-use the different parts of that stack, but you can say the same thing about the PHP world, with misconfigured caches and invalidations causing staleness issues or security lapses like accidentally shared secrets between users' cached versions.

solardev
7 replies
6d

(too long... here's part 2):

4) Serverless as "CGI but it's trendy, [with vendor lock-in and a more complex deployment process]"

What vendor lock-in? Most of the code is just vanilla JS. There might be a different deployment procedure if you're using Cloudflare vs Lambda vs Vercel vs Serverless Framework, but those are typically still simpler than having to set up an SFTP connection or git repo in a remote folder. With a framework like Next, a serverless function is just another file in the API folder, managed in the same repo as the rest of your app. Even without a framework, you can edit and deploy a serverless function in a Cloudflare Sandbox with a few clicks and no special tooling. If you later want to move that to another serverless host (what an ironic term), you can copy and paste the code and modify maybe 10-15% of it to get it running again. And the industry is trying to standardize that part of it too.

And I think this directly relates to #1: it's not so much that serverless is high availability (which is nice), but more that it is, well... server-less. Meaning maintenance-less for the end user. You don't have to manage a whole LAMP stack just to transform one object shape into another. If you already have a working app setup, yes, you can just add another script into your cgi-bin folder. But you can do the same in any JS framework's API folder.
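In Next.js, for example, that API-folder script is literally one file (a sketch using the classic pages/api convention; the object-shape transform is made up):

    // pages/api/transform.js: deployed as its own serverless function on
    // Vercel/Netlify/etc., or served by `next start` on any Node host.
    export default function handler(req, res) {
      // Transform one object shape into another, per the example above.
      const { firstName, lastName } = req.body ?? {};
      res.status(200).json({ fullName: `${firstName} ${lastName}` });
    }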

5) Framework bloat

I feel like what this author really doesn't like is heavy frameworks. That's fine, they're not for everyone. But in either the PHP or JS world, frameworks are optional.

I guarantee you Drupal is heavier and more bloated than any popular JS framework (it's also a lot more powerful). Just like the PHP world has everything from Drupal to Wordpress to Symfony to Laravel, JS has Next, Remix, Astro, Svelte, Vue, etc. HTMX has Alpine. Ruby has Rails. Etc.

On the contrary, you can certainly write a few paragraphs of HTML as a string and render it in any of those frameworks, either as a template literal (the PHP heredoc equivalent) or using JSX-like syntax.
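For instance (a minimal sketch, assuming a page with an `#app` element; a framework would express the same markup as JSX instead):

    // Plain JS, no framework: a template literal is the JS analogue of a
    // PHP heredoc. Assumes an element with id="app" exists on the page.
    const page = `
      <h1>Hello</h1>
      <p>Just a few paragraphs of content, rendered as-is.</p>
    `;
    document.getElementById("app").innerHTML = page;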

That's not really what the frameworks try to solve. They are there for addressing certain business needs. In the case of the heaviest framework of them all, Next, it goes back to #3 and #4, about optimally separating work between the server, cache, and client. If your app is simple enough that you don't need that complexity, then either don't use that framework, use its older "pages" mode, or use another framework or none at all. If you don't need deterministic component rendering based on state, don't use React. If you don't need clientside state, don't use Javascript at all.

Similarly, you can write a dead-simple PHP page with a few server-side includes and heredocs, or maintain a labyrinthine enterprise Drupal installation for a few blog posts and marketing pages (not recommended... no, really, don't do that to yourself... ask me how I know).

In either case, it's again a question of "what do I want or need to maintain it". Choosing the right level of power vs simplicity, or abstraction vs transparency perhaps, is an architectural question about your app and business needs, not the language or ecosystem underneath it.

6) Vendor lock-in

You can host PHP anywhere. You can also host JS anywhere these days. In fact I'd argue there are more high-quality, low-cost JS hosts now than there ever were similar PHP hosts. Shared PHP hosts were a nightmare, because PHP was not easy to containerize for shared tenancy. JS hosting is cheap in comparison.

Most of the frameworks in either world are open-source. Not many are not-for-profit (Drupal is, but Laravel/Forge and Next/Vercel have similar business models of open-source frameworks coupled with for-profit hosting).

In either case, though, it's really your application logic that's valuable (and even then, questionably so, since it'll likely end up completely rewritten in a few years anyway).

Ultimately we're all at the mercy of the browser developers. Google singlehandedly made shared hosting very difficult for everyone when it forced the move to HTTPS a few years back. It singlehandedly made the heavy JS frontend possible with its performant Javascript engine. WASM is still recent. WebGPU is on the horizon. New technologies will give rise to new practices, and new frameworks will soon feel old.

But JS is here to stay, because it's the only language that can natively interact with the DOM clientside. If your core business logic is written in almost-vanilla JS (or even JSX, by now), portability between JS frameworks isn't as hard as porting between different languages (like PHP to JS, or PHP to Ruby). Using it for both the client and server and in between just means fewer languages to keep track of, a shared typing system, etc. In that sense there's probably less vendor lock-in with JS than there is with PHP, which fewer and fewer companies and hosts support over time. PHP is overwhelmingly just Wordpress these days, which itself has moved to more dynamic React-based elements too (like in the Gutenberg editor).

I think the problem with the JS ecosystem is actually the opposite: not lock-in, but too many choices. Between the start and end of this post, probably five new frameworks were released =/ It's keeping up that's hard, not portability. You can copy and paste most of the same code and modify it slightly to make it work in another framework, but there is rarely any obvious gain from doing so. For a while there Next seemed like it was on track to becoming the standard JS framework, but then the app router confused a lot of people and now simpler alternatives are popping up again. For that much, at least, I can agree with the article: everything old is new again.

troupo
3 replies
5d21h

What vendor lock-in? Most of the code is just vanilla JS.

That runs in a specific environment with vendor-specific IAM configurations, vendor-specific DNS configurations, vendor-specific network configurations, vendor-specific service integrations, vendor-specific runtimes and restrictions, vendor-specific...

solardev
2 replies
5d20h

That sounds like an AWS thing? There's a lot of frameworks that can deploy straight to Vercel, Cloudflare Pages, Netlify, etc. without all that.

And if you really want to manage all that, it would apply to both PHP sites and JS and anything else. That's really more of a discussion of fully vs partially managed cloud solutions, not PHP or JS or any framework in particular.

troupo
1 replies
5d11h

There's a lot of frameworks that can deploy straight to Vercel, Cloudflare Pages, Netlify, etc. without all that.

All of them need all that. And those frameworks exist for a reason: they sweep a lot of these things under the rug, and after a certain complexity you will have vendor lock-in, simply because you will end up depending on certain policies that other vendors don't provide. Or on certain services that other vendors don't provide. Or guarantees that other vendors don't provide. Or pricing that... Or...

solardev
0 replies
5d5h

Can you provide some examples? This hasn't been my experience, as a web dev who switched from PHP to such frameworks.

There's so much less complexity and maintenance overall, IMHO, but maybe we're looking at different facets of it?

svth
1 replies
5d19h

Thank you for that lengthy diatribe, which I heartily agree with. It's really all about separation of concerns.

solardev
0 replies
5d14h

Lol, yeah, sorry, I really gotta work on brevity...

It's really all about separation of concerns.

Exactly!

SomeCallMeTim
0 replies
5d1h

Agreed with pretty much everything.

I couldn't agree with the article on almost any point.

One additional point: You can get "mildly dynamic" websites by using services. I have a completely static web site that's 100% on a CDN and that I've written zero lines of code for...but it has a full dynamic comment section due to Disqus integration. My "how many people have visited my page" is handled by Google Analytics. Other similar embedded services can provide many of the most common "mildly dynamic features".
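The Disqus integration, for instance, is roughly this much markup (a sketch from memory of their standard embed pattern; the shortname is a placeholder for your site's):

    <div id="disqus_thread"></div>
    <script>
      // Disqus injects the whole comment UI into the div above.
      (function () {
        var s = document.createElement('script');
        s.src = 'https://YOUR-SHORTNAME.disqus.com/embed.js';
        s.setAttribute('data-timestamp', +new Date());
        (document.head || document.body).appendChild(s);
      })();
    </script>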

I'm using Astro on a newer project, which lets you statically generate pages however you like, but also lets you run just one component as JavaScript if you want, without the inherent danger of running code on a server every time someone hits your web site. For fully dynamic pages, you can render on the server as well. It's a nice compromise IMO.

That and I never want to use PHP again. Especially Drupal. I liked Drupal at first, but I never want to see it again.

spacebuffer
6 replies
5d20h

Semi-related: what's the best place to learn the old-school style of working with PHP? I already know Laravel, but it feels so far removed from normal PHP that I am not confident working with it on its own.

hu3
2 replies
5d20h

I would try some "old style" php frameworks. Like CodeIgniter or Yii.

If you want even simpler, then maybe try Slim framework. You'll have to add your own data access lib.

If you want Grug brain PHP then there's https://github.com/bcosca/fatfree

You can also use the Composer package manager to build your own framework out of libs for routing, SQL query building, etc.

wild_egg
1 replies
5d19h

Fast and clean template engine

Is PHP not already a template language?

The whole idea of PHP frameworks always confused me. Why do I need a router when I can simply add new files at the routes I want?

hu3
0 replies
5d6h

You're right, it is.

The advantage of a router is basically: please always execute this code (auth, db connection, logging, session, rate limiting, csrf protection) before running the actual code for pages that will be rendered.

Or you could also just keep using normal page.php files and possibly include a boot.php at the top of the pages to achieve the same. But then you don't get pretty URLs unless you're doing some server URL rewriting.

Come to think of it, routers require URL rewriting too.

oopsallmagic
1 replies
5d16h

Why not read the PHP docs and any tutorials they might have? The docs are pretty comprehensive.

dventimi
0 replies
5d15h

Because they're mostly a vast array of reference docs with just one simple tutorial that barely scratches the surface?

https://www.php.net/manual/en/index.php

simonbw
0 replies
5d13h

0. Forget everything you know about php and programming in general, really.
1. Make a simple web page in an html file.
2. Rename it to .php.
3. Throw in some <?php ?> tags.
4. Look up [include](https://www.php.net/manual/en/function.include.php) and not much more.
5. PROFIT

At least that's how I remember writing php in high school.

snovymgodym
6 replies
6d

I like this article except for the part about Lambda. The author doesn't seem to get that for some use cases there are serious benefits to having bits of code that run only when you need them, and only getting billed for those runs instead of for a VM or container runtime that's always present.

Obviously if your application involves processing a predictably high volume of requests, then you're probably better off running it on your own server/container, but depending on your use case there are times when Functions-as-a-Service are the perfect solution.

The part about "why use lambda when cgi-bin exists" reminds me of the HN comment on the Dropbox announcement from 2007, where the guy says something like "this is cool, but why would anyone use it when you can just whip together ftp and svn on a Debian box and have the same thing?"

Stratoscope
3 replies
6d

Here's the full quote:

For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.

https://news.ycombinator.com/item?id=8863

jimbokun
1 replies
5d23h

Which is 100% true. And it's also true Dropbox was a great product because it provided this functionality for the vast majority of the population who don't have the time or interest to learn all those tools.

rty32
0 replies
5d14h

You missed the point. It's usability rather than functionality. I can 100% do all of this but there is no chance I would choose this over OneDrive/Dropbox etc.

Although Dropbox did add new functionality -- (easy) online access, apps, previews, on-demand access, (presumably) redundancy on the server, etc.

ikari_pl
0 replies
5d23h

i just remembered how an FTP server was built into Windows and easy to set up, and IPs were public, so yeah, it was easy to share a local folder directly (but you had to know what you were doing, and so did the other person).

good times.

decasia
1 replies
6d

I think it kind of goes both ways. There are times when you absolutely want lambda functions instead of cgi-bin scripts. But conversely - there are times when you absolutely want cgi-bin instead of lambda. (For interfacing with other linux services or packages, for example). The two tools don't always substitute for each other.

pixl97
0 replies
6d

I mean, that is the difference between using something because it's the flavor of the day and using something because it's the best tool for the job.

superkuh
5 replies
6d1h

Server side includes are still the perfect amount of power if you want to do templating stuff, like including comments.html, footer.html, or right_menu.html across all site pages. And the attack surface is so minimal, and the code so stable, that there's basically no increased risk using SSI over plain HTML with nginx and similar webservers.
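With nginx, for example, it's one directive plus an HTML comment per include (a sketch; the location and fragment paths are placeholders):

    # In nginx.conf: enable SSI processing for served pages
    location / {
        ssi on;
    }

    <!-- In any page: nginx splices the shared fragments in at serve time -->
    <!--#include virtual="/header.html" -->
    <p>Actual page content here.</p>
    <!--#include virtual="/footer.html" -->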

bandrami
2 replies
6d

Ah but! The problem is SSI includes the bang directive, which outputs the results of a shell command.

Once that's available, people will demand and abuse it, and we're back at cgi-bin.

superkuh
1 replies
6d

SSI includes the bang directive,

Not in ngx_http_ssi_module or any modern webserver I've used? As for "people": what people? I guess your implicit assumption is that this is a group or commercial project? I was thinking more of a website made by a single human.

RedShift1
1 replies
6d

But can you write Doom with SSI?

doublerabbit
0 replies
5d23h

I suppose in theory.

Pass the button press from a clicked-button iframe to a headless version of the game.

Capture the output and transfer it back as a transparent PNG via an HTTP meta refresh in another iframe.

jak2k
3 replies
5d22h

I like the idea of just renaming an `html` file to `php` and adding a bit of dynamic stuff.

A webserver that could do this with JavaScript/TypeScript would be cool! (Or maybe I should learn php…)

leobg
2 replies
5d21h

Yeah, I was shocked when, coming from PHP, I realized that in Python, in order to serve a website, you have to start and maintain an extra server process.

notatoad
0 replies
5d17h

you don't have to - apache will execute your python cgi scripts just as happily as it will execute your php cgi scripts.

it's just that python developers figured out that having options other than cgi, and being freed from rigid directory structures, is really nice.

motogpjimbo
0 replies
5d19h

Don't forget deploying updates by either (a) stopping your Python application and then restarting it, hoping your users are not too badly inconvenienced in the interim, or (b) rigging together some kind of blue-green deployment setup. Meanwhile, the PHP developer runs rsync and the deployment is done.

ss64
2 replies
5d22h

This is missing any discussion of the dynamic functionality you can add with a bit of vanilla JavaScript.
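For example (a sketch; the /comments.json endpoint and element id are hypothetical):

    // Fetch comments from a hypothetical endpoint and append them to a
    // placeholder <ul id="comments"> on an otherwise static page.
    fetch("/comments.json")
      .then((r) => r.json())
      .then((comments) => {
        const list = document.getElementById("comments");
        for (const c of comments) {
          const item = document.createElement("li");
          item.textContent = `${c.author}: ${c.text}`;
          list.appendChild(item);
        }
      });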

krapp
0 replies
5d18h

The article's insistence on differentiating between "web sites" and "web applications", and a further article linked in the footnotes about the "HTML5 coup"[0], suggest the author doesn't consider functionality added by JavaScript to be valid for web sites (which they define as purely documents), only for web applications. This seems to be a common viewpoint on HN, from what I've observed.

[0]https://www.devever.net/~hl/xhtml2

PKop
0 replies
5d22h

No it isn't. Did you read the article?

simonbw
2 replies
5d18h

I think some of this article resonates with me, but I also think a big part of it rubs me the wrong way. It seems to assume that everyone has a webserver running a LAMP stack. If you have a static site on a webserver running PHP then of course PHP is going to require the least amount of effort to make your site mildly dynamic.

On the other hand, if you have nothing, I don't think that the fastest/easiest way to a mildly dynamic website is to use PHP.

I got curious about the most minimalist setup I could get for running a static site that would also easily transition to becoming dynamic piece-by-piece. Here's what I came up with using NextJS:

1. Create a `package.json` containing:

    {
      "scripts": {
        "dev": "next dev",
        "build": "next build",
        "start": "next start"
      },
      "dependencies": {
        "react": "^18",
        "react-dom": "^18",
        "next": "14.2.4"
      }
    }
2. Create `app/layout.js` containing

    export default ({ children }) => children;

3. Create your home page at `app/page.js`. This is pretty much just what your index.html would have been before except it's wrapped with `export default () => (/.../)`:

    export default () => (
      <html>
        <head>
          <title>Home</title>
        </head>
        <body>
          <h1>Home</h1>
          <p>Welcome to our website!</p>
        </body>
      </html>
    );

4. Put the rest of your static files in `/public`

Now, when you want to make one of your html files dynamic:

1. Move it from `public/your-page.html` to `app/your-page/page.js`

2. Wrap it with `export default () => (/.../)`

And there you go, you can start using JavaScript and JSX in your page.

Here's a summary of the differences between this approach and creating a PHP-based site:

1. You have to create 2 more files containing 13 more lines of code than you would with PHP

2. You need to find a NextJS web host rather than a PHP web host

3. Your dynamic pages are html-inside-javascript rather than php-inside-html. You are also technically writing JSX and not HTML, which has slightly different syntax.

4. You have a much easier-to-set-up local dev server (install `node`, then `npm install` and `npm run dev`)

5. This framework can scale pretty seamlessly to "highly dynamic", whereas with PHP you'd probably need to introduce a frontend framework if you wanted to create something actually "highly dynamic".

6. You only need to learn one programming language (JavaScript) instead of two (JavaScript and PHP).

nucleardog
1 replies
5d16h

It seems to assume that everyone has a webserver running a LAMP stack.

Shared hosts are pretty near a dime a dozen. Or just `apt-get install apache2 libapache2-mod-php` and you’re ready to go with no project changes.

You have a much easier to set up local dev server (Install `node`, then `npm install` and `npm dev`)

Which version of node? Which version of npm? When I come back to this in six months am I going to find out some dependency relies on a dependency that relies on a dependency that only works on node 3.14 and end up installing nvm and stepping through versions trying to get this running?

For PHP I will need to install PHP. For this kind of stuff basically any version from the past decade will work. It’s in every package manager everywhere, has a windows installer, etc. (There’s a reason every PHP dev isn’t using a PHP equivalent to “PHP version manager”.)

Once that’s installed, just `php -S 127.0.0.1:8000` and open it in your browser. Serves all your static files and stuff too without any changes to any of them—an existing jumble of HTML and CSS is a 100% valid PHP project.

It’s usually substantially easier for me to get a ten year old PHP project running than a six month old JS project.

SomeCallMeTim
0 replies
5d1h

Which version of node?

Latest LTS. Why wouldn't you?

Which version of npm?

The one that comes with Node? Duh. Or just `npm update -g npm` to bring it up to the latest.

...a dependency that only works on node 3.14

So I know you're making crap up now because there was no such version. Node jumped right from 0.12.x to 4.x, as a result of a fork and associated project politics. [1]

And I've been working with Node projects for nearly a decade. I don't see "only works on older-verion-of-Node x.y" almost ever. You're thinking of Python and Ruby.

`nvm` exists because, yes, sometimes you want to run an older project in its exact environment. And it's super easy to install new versions or test under different versions Just In Case there's an issue. Sometimes newer releases only work on the latest Node, so you need to upgrade Node to update to the latest-and-greatest.

But frankly it's worth using Node (and TypeScript) just to never have to touch PHP again. It was and is a nightmare fractal of bad design. I'm never going back.

[1] https://nodejs.org/en/about/previous-releases

Legion
2 replies
5d19h

PHP deployment was indeed easy.

But it turns out "dump everything in docroot and let mod_php interpret and execute whatever it finds there" had security implications...

oopsallmagic
0 replies
5d16h

You always had to configure your web server properly. "Don't let programs execute arbitrary code" was a solved problem even then.

cies
0 replies
5d18h

indeed.

the gap with PHP and alternative stacks has mostly closed.

PHP apps now also get deployed by container or VM... so why not go with something like Kotlin + kotlinx.html (HTML eDSL for server-side templating and HTMX), Ktor or http4k (web libs), jOOQ (SQL eDSL with some type safety on queries), and Postgres?

the PHP, MySQL (MyISAM), mod_php, Apache days are over. and it's not only for security reasons: there are alternatives that score better in every dimension AND run/deploy well on cheap hosting

flobosg
1 replies
6d1h

(2022)

dang
0 replies
6d1h

Added. Thanks!

Reason077
1 replies
5d18h

I dunno. Isn't HN an example of a "mildly dynamic website"?

resoluteteeth
0 replies
5d17h

The article is talking about things like pages that are primarily static but with a comment box added as a dynamic part.

In this sense, I think the entirety of HN would count as "dynamic" by the standard in the article, so it would not be "mildly dynamic".

Are you thinking about something like "progressive enhancement" with JavaScript? Because that is not what the article is describing as "mildly dynamic".

syrusakbary
0 replies
5d21h

I'm amazed at how well this article fits with a new product we have been working on at Wasmer. AWS Lambda is great, but it doesn't really solve the cold-start problem of dynamic languages. Nor does FastCGI.

We are very close to launching Instaboot, a new feature of Wasmer Edge that, thanks to WebAssembly, brings incredibly fast cold starts to dynamic languages: 90ms cold-start times for WordPress (compared to >1s on state-of-the-art cloud providers).

rkozik1989
0 replies
6d1h

Did everyone just forget about Varnish HTTP Cache or something? Or are we just using the new shiny ball because it's new?

lwhi
0 replies
5d12h

Once upon a time there was a technology called dHTML.

Waaaay before Web 2.0 and full interactivity on the client.

lovasoa
0 replies
5d19h

This article resonates with me. I do love "mildly dynamic websites", and have fond memories of my days hacking together PHP websites 15 years ago.

And what I am working on today might be called a bridge for the "dynamicity gap". I'm making an open source server to write web apps entirely in SQL ( https://sql.ophir.dev/ ). It has the "one file per page" logic of PHP, and makes it easy to add bits of dynamic behavior with just a normal INSERT statement and a SELECT over dynamic data.

leobg
0 replies
5d21h

Makes me think of Pieter Levels who used to run a 60k/mo SaaS with hundreds of paying users from a single index.php on a bare metal server.

(I don’t know if he still does it that way.)

kolme
0 replies
5d21h

Fun fact: nextjs was inspired by PHP and can generate static pages that you can throw into a web server.
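e.g., one line of config turns the whole project into a static build (a sketch; `output: "export"` is the newer replacement for the old `next export` command):

    // next.config.js: with output set to "export", `next build` writes
    // plain HTML/CSS/JS into ./out, servable by any dumb web server or CDN.
    module.exports = { output: "export" };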

Another technology that kind of covers the use cases of the article (the mildly dynamic pages) would be htmx and friends.

ggm
0 replies
5d13h

Looking stuff up in an SQL DB and showing it to the masses (wordpress) is not very dynamic.

But, neither is deploying neko the cat to sleep on your cursor, or the dancing turtle of Kame-IPv6.

Intruding PHP into the actual FQDN of the web, and making 3x4.ARITHMETIC.EXAMPLE.COM work as a calculator, now that's dynamic.

REST is pretty dynamic. If I GET it and then POST it back with changes, I feel like I've had a good day.

I personally hate the cursor implicit in tabular-data websites. There should be a normative "no cursor" mode to just get all the damn data, not the page-management requirement.

Riddle me this: how does "The Independent" newspaper create a high-visual-content page which on my tablet appears to render INSTANTLY, and yet has 100 points of image and text? It loads faster than almost any other site. It also kills my CPU and is a terrible design, but my goodness it's fast.

amluto
0 replies
5d23h

I think what really got lost is the ability to easily mix and match multiple logically separate things. Once upon a time, if you had some slightly dynamic material and also wanted to add some PDFs from the technical writers, the webmaster would do it in five minutes. Want a video? Just add a file. Need a form? Fire up some PHP or whatever. Want a support contact or FAQ? No big deal.

Now even big companies outsource their PDF hosting and viewing to a third party, their FAQ and support contact to a different fancy startup, surveys and similar forms to yet another company, and the list goes on. The all-in-one website seems to be dead.

alexanderscott
0 replies
5d9h

I feel like every admin panel I've built over the years, including recently, has been "mildly dynamic": only enough jQuery to be usable by other staff.

Terretta
0 replies
5d18h

Why should we "not speak of" Allaire's ColdFusion, also released in 1995?

PHP made web development accessible with its simplicity, but ColdFusion was arguably more influential in the '90s. It led the way with features like built-in database connectivity and templating, setting the stage for how dynamic web pages would work as implemented by Microsoft and others, and even shaping what PHP itself became.

Separately, I think projects like Caddy carry on the spirit of the blog post.

Dwedit
0 replies
5d19h

I think CDNs led to the demise of "mildly dynamic" websites. They made static websites get served super fast, so you got the most benefit from completely static sites.

You could still include a small amount of JS to make a mostly-static site partially dynamic if you were logged in. Limited mostly to things like including the user's username on the page when logged in.
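Something like this sketch, say (the /api/me endpoint and the element id are hypothetical):

    // Personalize an otherwise CDN-served static page: ask the origin who
    // we are, and fill in the username only if a session exists.
    fetch("/api/me", { credentials: "include" })
      .then((r) => (r.ok ? r.json() : null))
      .then((user) => {
        if (user) {
          document.getElementById("username").textContent = user.name;
        }
      });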

ArneBab
0 replies
5d13h

I know why I no longer have a mildly dynamic website: the security risk of PHP is not linear.

That’s where Javascript shines: if you avoid comments (those are actually a hard problem: a social one) and server-side data (security risk), it actually has this linear increase in effort without the jump in security risk.

And this risk has increased a lot since the early mildly dynamic websites.

1vuio0pswjnm7
0 replies
5d20h

"Or, suppose a company makes a webpage for looking up products by their model number. If this page were made in 2005, it would probably be a single PHP page. It doesn't need a framework - it's one SELECT query, that's it. If this page were made in 2022, a conundrum will be faced: the company probably chose to use a statically generated website. The total number of products isn't too large, so instead their developers stuff a gigantic JSON file of model numbers for every product made by the company on the website and add some client-side JavaScript to download and query it.... This example is fictitious but I believe it to be representative."

As an end user, I have seen this perplexing design pattern quite often. As soon as I see it, I just get the URL for the JSON file and I never look at the web page again. It is like there is so much bandwidth, memory and CPU available, but the developer is not letting the user take advantage of it. Instead the developer is usurping it for themselves. Maybe a user wants to download data. But the developer wants to run Javascript and keep _all_ users staring at a web page.

Why not just provide a hyperlink on the rendered search results page pointing to the JSON file, as an alternative to (not a replacement for) running Javascript? What are the reasons for not providing it?

On some US government websites, for example, a hyperlink to the CSV/JSON file is provided on the rendered search results page.^1

That is why a non-commercial www is so useful, IMHO: the best non-commercial websites do not try to "hide the ball".

Perhaps they have no incentive to try to force users to enable Javascript, which is a practical prerequisite for advertising and tracking.

1. ecfr.gov and federalregister.gov are two examples

101008
0 replies
5d23h

I feel identified with the first part of the article. I remember the problem about navigations and headers (and right sidebars!).

The first solution was of course framesets, but they were kind of ugly. Then iframes came, and they were an almost perfect solution (at least for me). With no borders, they looked like an include. The only problem (depending on your website) was that the height was fixed (for headers that may not be a problem, but it was for left and right sidebars).

Of course, with PHP and includes everything became trivial. I kind of miss the old `index.php?cont=page1`...