
Python Cloudflare Workers

jatins
14 replies
1d3h

Cloudflare has a lot of great stuff for hosting and databases, but I think they haven't done a great job marketing themselves as a developer platform, which has led to platforms like Vercel and Netlify taking significant mindshare.

Tangential: does Cloudflare provide a container hosting service agnostic of language -- something like Google Cloud Run?

Difwif
7 replies
1d3h

I agree something is wrong with their marketing. I was also initially drawn to Vercel and Netlify, but after extended use I wasn't happy with either; I eventually tried Cloudflare and discovered I love it. The pricing and the product are fantastic.

nstart
5 replies
1d2h

I think it’s because the experience of familiarizing oneself with the platform and getting to a hello-world-level CRUD app or basic static site is done a lot better by Vercel and Netlify than by Cloudflare. Cloudflare’s site and docs are not built with the approach of getting an app from 0 to 1 ASAP.

derefr
2 replies
1d

I get the sense that Cloudflare Workers is targeted almost exclusively at existing customers of Cloudflare, who have a “legacy” app proxied through Cloudflare DNS, who use Page Rules and Firewall Rules and the like.

For these customers, Workers are an incremental optimization of an existing app — shifting some work to the edge, or allowing some systems to have a previously-internal backend stripped out, leaving them as e.g. Workers in front of an object-storage bucket. And that’s exactly how Cloudflare advertises them.

It looks like Cloudflare’s outreach advertising, meanwhile, is all about CF Pages and CF Sites. You can find SEOed landing pages for these; whereas Workers is mentioned ~never in external media as a “solution” — even though it totally can be.

kentonv
1 replies
23h43m

Workers started with that use case, but these days we're definitely aiming to be a general-purpose platform for app hosting, especially for new apps. It sounds like we're not getting that message across very well.

derefr
0 replies
2h46m

Not just the message; the docs, and even the internal structure of the Workers section of the CF dashboard, are also lacking in describing how to approach building a greenfield app using Workers.

I recently tried to build a Worker-based app for a personal project, after a few months of not having touched Workers, having only previously used them at $work for the "decorator of legacy backend" use-case. I knew what capabilities Workers had, and knew exactly what I wanted to do... but getting it up and running was still confusing!

One specific example: trying to figure out how to get a production-quality workflow for maintaining and deploying a worker.

- It's actually hard to notice the "Edit Code" button on a Worker's overview. I flipped through all the other tabs twice before noticing it.

- Then, after getting in there, I remembered how useless the web IDE is for testing, when the domain is fronting a bucket named after a custom domain associated with a Workers Route rather than with the canonical name of the worker. So I wanted to set up the worker to deploy from a Github Action.

- But how do I do that? Can I go into "Integrations" and select "Github Repo"? (No.)

- I figure that the Wrangler CLI will set me up for doing on-push deploy. So I download it. (It doesn't.)

- Also, the CF Workers docs tell me to install the wrangler NPM package and run `npx wrangler init`. But once I do, that command itself tells me that it's deprecated, and that I'm supposed to run `npm create cloudflare@2.5.0`.

- I read through the Wrangler docs and figure out that it's a "build a local slug from your worktree and push it" kind of deployer, rather than a "build a deploy from a git ref and push it" kind of deployer. So I figure, to enable GitOps deploys, I'll need an action that runs Wrangler itself on Github Actions. I search the Workers docs, and the Wrangler CLI docs (separate site) for this. Neither one mentions this possibility.

- I end up just googling "wrangler github action" and finding https://github.com/cloudflare/wrangler-action. Great! I add it to the new Github repo that Wrangler created, and add the relevant secret.

You know what this could have been instead? A button on the Workers dashboard — or even on a CF marketing landing page! — that 1. SSOs the user into Github; 2. automates the creation of a new repo that gets pre-populated with a Worker skeleton project + this Wrangler workflow already committed to it + the correct relevant secret already bound into the repo; and then 3. drops me into either the Workers IDE (modified to connect to the repo by creating PR branches + committing to them for "drafts", and merging those PRs for "publish"), or alternately, into Github Codespaces with the Workers IDE stuff reimplemented as a VSCode preview plugin. (And same for Gitlab, etc.)

rvnx
1 replies
1d2h

Pricing also. 0.40 USD / month for a globally deployed site @ CF versus 199 USD / month for a static site with limited traffic usage @ the supposedly premium other hosts

vorador
0 replies
23h42m

I've heard rumors that lots of these premium hosts are running on top of AWS/GCP which means they have much worse unit economics than Cloudflare.

spxneo
0 replies
1d3h

Same experience. It's fascinating how much marketing impacts developer minds.

dandaka
1 replies
1d1h

It is not only about marketing. Initially, I was optimistic about Cloudflare's offerings. However, I encountered significant issues with compatibility, especially with website generators such as Next.js and Astro. Some features didn't work at all, while others were only partially supported. Faced with the prospect of dedicating valuable development time to troubleshooting these issues, I found it more efficient to use alternative platforms. Services like Vercel, Netlify, and Deno Deploy offer a smoother experience for our team's needs, minimizing the overhead and enabling us to focus on development rather than infrastructure challenges.

VHRanger
0 replies
1d

Anecdata: I've just switched over to cloudflare pages for an 11ty site and it works really well.

thangngoc89
0 replies
1d3h

> does Cloudflare provide a container hosting service agnostic of language -- something like Google Cloud Run?

Nope. Their Workers are V8-based, so JS or Wasm.

skybrian
0 replies
1d2h

I believe the Cloudflare free tier was pretty limited until recently. D1 (their SQLite implementation) became generally available yesterday, and read replicas have been announced.

pier25
0 replies
14h16m

> something like Google Cloud Run?

No, but it would be awesome.

I've been using Workers for about 4 years in production and love them but containers are still where I run most of my apps.

impulser_
0 replies
19h25m

I think Vercel and Netlify aren't aimed at developers, because if you are a developer using Vercel or Netlify you are literally getting robbed.

Bandwidth costs are 40x-50x higher on Vercel and Netlify than on the vast majority of cloud providers. Cloudflare bandwidth is barely a cost.

Edge function calls are 6x more expensive on Vercel and Netlify than on Cloudflare. That's not even including compute time costs, which are free on Cloudflare.

I think the only reason Vercel is even popular is that it's by far the best place to host NextJS, and that might be why they make it hard to deploy NextJS elsewhere.

syrusakbary
13 replies
1d2h

This is awesome. I'm happy that Cloudflare is putting more attention into running Python via WebAssembly at the edge.

I'll try to summarize how they got it running and the drawbacks of their current approach (note: I have deep context on running Python with WebAssembly at the edge as part of my work on Wasmer).

Cloudflare Workers are enabling Python at the Edge by using Pyodide [1] (Python compiled to WebAssembly via Emscripten). They bundled Pyodide into Workerd [2], and then use V8 snapshots [3] to try to accelerate startup times.
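
For context, a hello-world Python Worker looks roughly like this (adapted from the examples in their announcement; `js.Response` is the JavaScript Response class proxied into Python by Pyodide):

```python
# Minimal Python Worker, per Cloudflare's announcement.
from js import Response  # JS Response class, proxied in by Pyodide

async def on_fetch(request, env):
    return Response.new("Hello world!")
```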

In the best case, cold starts of Python in Cloudflare Workers are about 1 second.

While this release is great as it allows them to measure the interest of running Python at the Edge, it has some drawbacks. So, what are those?

  * Being tied to a single version of Python/Pyodide (the one that Workerd embeds)
  * Package resolution is quite hacky and tied to Workerd. Only precompiled "native packages" will be usable at runtime (e.g. using a specific version of numpy will turn out to be challenging)
  * Being architecturally tied to the JS/V8 world, which may pose challenges as they aim to reduce cold start times (in my opinion, it will be quite hard for them to achieve <100ms startup time with their current architecture).

In any case, I welcome this initiative with open arms and look forward to all the cool apps that people will now build with this!

[1] https://pyodide.org/

[2] https://github.com/cloudflare/workerd/blob/main/docs/pyodide...

[3] https://github.com/cloudflare/workerd/pull/1875

Edit: updated wording from "proof of concept" to "release" to reflect the clarification from the Cloudflare team

kflansburg
8 replies
1d2h

I believe your summary misunderstands how we will handle versioning. The Pyodide/package versions will be controlled by the compatibility date, and we will be able to support multiple in production at once. For packages like langchain (or numpy, as you mentioned) the plan is to update quite frequently.

Could you expand on why you believe V8 will be a limiting factor? It is quite a powerful Wasm runtime, and most of the optimizations we have planned don’t really depend on the underlying engine.

Edit: Also just want to clarify that this is not a POC, it is a Beta that we will continue improving on and eventually GA.

syrusakbary
7 replies
1d2h

> The Pyodide/package versions will be controlled by the compatibility date

That's exactly the issue I'm mentioning. Ideally you should be able to pin any Python version you want for your app: 2.7, 3.8 or 3.9, regardless of a Workerd compatibility date. Some packages might work in Python 3.11 but not in 3.12, for example.

Unfortunately, Python doesn't have the full transpiler architecture that the JS ecosystem has, so "packaging" Python applications into different "compatibility" bundles will prove much more challenging (the webpack factor).

> Could you expand on why you believe V8 will be a limiting factor?

Sure thing! I think we can all agree that V8 is a fantastic runtime. However, the tradeoffs that make V8 great for the browser use case make the runtime more challenging for edge environments (where servers can run more specialized workloads in trusted environments).

Namely, those are:

  * Cold starts: V8 Isolates are a bit heavy to initialize. In its current form, initializing an Isolate alone can add ~2-5ms of startup time
  * Snapshots can be quite heavy to save and restore
  * Not architected with the edge use case in mind: there are many tricks you can do if you skip the JS middleware and go all in on a Wasm runtime that are hard to do with the current V8/Workerd architecture.

In any case, I would love to be proven wrong in the long term, and I cheer for <100ms cold starts when running Python in Cloudflare Workers. Keep up the good work!

kflansburg
3 replies
1d1h

We discussed a separate configuration field for the Python version. It's not technically challenging; this was a design choice we made to simplify configuration for users and to encourage efficiency through shared dependencies.

Your concerns about V8 would impact JavaScript Workers as well and do not match what we see in production. It is also definitely possible to invoke C++ host functions directly from Wasm with V8.

syrusakbary
2 replies
1d1h

> Your concerns about V8 would impact JavaScript Workers as well and do not match what we see in production

Interesting! I thought V8 snapshots were mainly used in the Pyodide context, as I could not find any other usage in Workerd (other than promise tagging and jsg::MemoryTracker).

Are you using V8 snapshots as well for improving cold starts in JS applications?

kflansburg
1 replies
1d1h

I was responding to your point about isolates and cold starts. Snapshots are unique to Python, but V8 does not seem relevant here, all this is doing is initializing the linear buffer that backs Wasm memory for a particular instance. We have a lot of ideas here, some of which are mentioned in the blog post.

syrusakbary
0 replies
1d1h

Awesome. Eager to see how the product evolves :)

kentonv
1 replies
1d1h

(Cloudflare Workers tech lead here.)

I disagree about V8 not being optimized for edge environments. The needs of a browser are actually very much aligned with the needs of the edge, namely secure sandboxing, extremely fast startup, and an extreme commitment to backwards compatibility (important so that all apps can always run on a single runtime version).

Additionally, V8 is just much better at running JavaScript than you can hope to achieve in a Wasm-based JS implementation. And JavaScript is the most popular web development language (even server-side).

> In its current form, initializing an Isolate alone can add ~2-5ms of startup time

So, you and I seemingly have a disagreement on what "cold start" means. Wasmer advertises its own "cold start" time to be 50ns. This is only remotely possible if the application is already loaded in memory and ready to go before the request arrives. In my mind, this is not a "cold start". If the application is already loaded, then it's a "warm start". I haven't spent the time to benchmark our warm start time (TBH I'm a little unclear on what, exactly, is counted in this measurement), but if the app is already loaded, we can complete whole requests in a matter of microseconds, so the 5ms number isn't the correct comparison.

To me, "cold start" time is the time to load an application, without prior knowledge of what application will be needed. That means it includes the time to fetch the application code from storage. For a small application, we get around 5ms.

Note that the time to initialize an isolate isn't actually on the critical path to cold start, since we can pre-initialize isolates and have them ready to go before knowing what application they will run. That said, we haven't implemented this optimization historically, since the benefit would be relatively small.

However, with Pyodide this changes a bit. We can pre-initialize Pyodide isolates, before we know which Python app needs to run. Again, this isn't implemented yet, but we expect the benefits to be much larger than with plain JS isolates, so we plan to do so.

> Ideally you should be able to pin any Python version you want for your app:

Minimizing application size is really essential to making edge compute inexpensive -- to run every one of two million developers' applications in every one of our hundreds of locations at a reasonable price, we need to be able to run thousands of apps simultaneously on each machine. If each one bundles its entire language runtime, that's not gonna fit. That does mean that many applications have to agree to use the same versions of common runtime libraries, so that they can share the same copies of that code. The goal is to keep most updates to Pyodide backwards-compatible so that we can just keep everyone on the latest version. When incompatible changes must be made, we'll have to load multiple versions per machine, but that's still better than one copy per app.

syrusakbary
0 replies
1d

Hey Kenton, great to see you chiming in here as well!

> Additionally, V8 is just much better at running JavaScript than you can hope to achieve in a Wasm-based JS implementation. And JavaScript is the most popular web development language (even server-side).

I agree with this statement as of today. Stay tuned, because very cool things are coming in Wasm land (SpiderMonkey will soon support JITted workloads inside of Wasm, bringing the speed much closer to V8!)

> Note that the time to initialize an isolate isn't actually on the critical path to cold start, since we can pre-initialize isolates and have them ready to go before knowing what application they will run

That's a good point. Although you are now optimizing the critical path to cold start by knowing something about what the app runs (if it's Python, restore it from a snapshot). So even though isolate initialization is not on the critical path, I would assume there are other things on the critical path that account for the extra second of cold start latency for Python.

> Minimizing application size is really essential to making edge compute inexpensive

By leveraging properly defined dependencies, you only need to compile and load the dependency module (let's say Python) into memory once, and then you have "infinite" capacity for initializing instances of it. Basically, if you take Python out of the picture and consider it a dependency of an app, you can suddenly scale apps as much as you want!

For example: having 10 Python versions (running thousands of apps) would have an overhead of 5 MB (average Python binary size) * 10 versions (plus custom memory for each initialization of an app, which is required in either strategy) ~= 50 MB, so the overhead of pinning a specific Python version should be truly minimal on the server (at least when fully leveraging a Wasm runtime).

hoodchatham
0 replies
1d

Are people maintaining WASI ports of Python 2.7 and 3.8?

panqueca
2 replies
1d2h

Does this architecture support uvloop?

syrusakbary
0 replies
1d2h

As far as I know, uvloop is not supported in Pyodide, mainly because it requires compiling libuv to WebAssembly (which is possible but not trivial).

In any case, it should be possible to run uvloop fully inside of WebAssembly. However, doing so will prove challenging with their current architecture.

hoodchatham
0 replies
1d

Pyodide uses its own event loop which just subscribes to the JavaScript event loop. My suspicion is that this will be more efficient than using uvloop, since V8's event loop is quite well optimized. It also allows us to await JavaScript thenables from Python and Python awaitables from JavaScript, whereas I would be worried about how that would behave with separate event loops. Also, porting uvloop would probably be hard.
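
To make the bridging concrete, in plain Pyodide (nothing Workers-specific) awaiting a JS promise from Python looks like this sketch:

```python
# Illustrative Pyodide sketch: JS thenables can be awaited directly from
# Python, because Pyodide wraps them as asyncio-compatible futures on its
# event loop (which, as described above, subscribes to the JS event loop).
from js import fetch  # the JS global fetch, proxied into Python

async def fetch_length(url):
    resp = await fetch(url)   # awaiting a JavaScript Promise from Python
    text = await resp.text()  # another Promise; converts to a Python str
    return len(text)
```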

deanCommie
0 replies
1d2h

> (in my opinion, it will be quite hard for them to achieve <100ms startup time with their current architecture).

Who's running Python workloads with sub-100ms latency requirements?

ssijak
9 replies
1d3h

Can we just get a full Node runtime? Cloudflare is amazing, but without a full Node runtime, we (and most of the usual apps) can't switch from things like Vercel/Netlify to Cloudflare.

kentonv
8 replies
1d2h

The unique architecture of our runtime is what enables most of our competitive advantages. It's what lets us run your application in hundreds of locations around the world while also charging less than competing serverless platforms. If we used a full Node runtime, we would need to charge a lot more money, or only run your app in a couple central locations, or both.

So, no, we can't just offer full Node.

However, we are always expanding our Node API compatibility: https://developers.cloudflare.com/workers/runtime-apis/nodej...

(I'm the tech lead for Cloudflare Workers.)

switch007
2 replies
1d1h

Your comment would have been great without:

> So, no, we can't just offer full Node.

(Sounds a bit snotty)

cstrahan
1 replies
17h8m

How else would you write that? Would this be better?

"Thus, no, we can't just offer full Node."

Is it the use of "so" that is off limits?

Or is simply providing a concise conclusion inappropriate?

I don't see what part of that quote is "snotty".

switch007
0 replies
11h3m

Omitting it

No PR person would have included that line

ssijak
2 replies
1d

I understand what you said. I did not expect full-Node-runtime workers to keep all the other benefits you listed that current workers have (global distribution, no cold starts, cost...). But it would be great to have a choice. I feel it would benefit both users and Cloudflare to support both (with different tradeoffs). For example, on selecting "I want the Node runtime for this app's workers," I would need to pick the region they will run in, and that's it: those functions/workers would not be globally distributed and would probably cost more, but when you need a full Node runtime, you need it, and that is fine.

kentonv
1 replies
23h41m

Yeah, in theory we could build a parallel service that's more Lambda-like and hosts apps in a more centralized way. It's certainly something we've thought about.

The challenge is, can we actually build that in a way that is significantly better than the existing competition? It's a crowded space. If we just build the same thing everyone else is doing, will it attract enough use to be worth the investment?

It sounds like you would be interested. What in your mind would potentially make our product more attractive than competitors here?

ssijak
0 replies
22h47m

Even if it just replicated the capability Vercel, for example, has for lambdas/server functions, I would still use it, because I could drop one vendor from my apps and stay with Cloudflare for everything. I've noticed the same sentiment in random internet conversations. Obviously it will not work in every case and for everybody, but it is not hard to imagine such functionality, even if merely on par with the current offerings, being enticing to a lot of people, taking into account Cloudflare's complete offering and having it all on the same platform. Most of those apps would need images, storage, a firewall... all things Cloudflare offers.

ddorian43
0 replies
11h44m

I'd pick the second option. Few apps need to run at the edge.

amirhirsch
0 replies
23h45m

I like that you answered this, but the request wasn't about workers; rather, it was about easing migration to Cloudflare. People want to migrate their entire business to Cloudflare. Provide dedicated servers and a container service; then the applications can migrate to workers. You could even produce an AI to do it for them.

jasoncartwright
8 replies
1d3h

I've played with JS workers on a Cloudflare-fronted site and found them to be easy to use and very quick. Would love to port the whole Django app behind the site over, using their D1 database too.

manishsharan
4 replies
1d3h

>Would love to port the whole Django app behind the site over, using their D1 database too.

Is that wise? One DDoS attack could break your budget.

piperswe
3 replies
1d3h

Only if the DDoS isn't blocked by the Cloudflare DDoS protection

spxneo
2 replies
1d3h

What are the advantages of using Cloudflare's DB over Supabase? So far I'm loving Supabase, but I wasn't aware CF's product lineup had grown so drastically.

For reference, D1's pricing (free tier limit, then what the paid tier includes plus overage):

- Rows read: 5 million/day free; first 25 billion/month included, then $0.001 per million rows
- Rows written: 100,000/day free; first 50 million/month included, then $1.00 per million rows
- Storage: 5 GB total free; first 5 GB included, then $0.75 per GB-month

piperswe
0 replies
1d2h

Personally my favorite part of using D1 is that the database is managed the same way as everything else, and you just access it through a Workers binding rather than needing any authentication or connection strings or anything. I'm excited to see how the new session API and read replicas work too, since they might be able to reduce DB read latency to being within the same datacenter in many instances. But I only know about as much about the D1 session API and read replicas as anyone else that read the blog post about it.
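
To illustrate, a query through the binding looks roughly like this from a Python Worker (a sketch only, not official docs: `DB` is a made-up binding name, and I'm assuming the JS-style D1 API reaches Python through Pyodide's FFI the same way the announcement shows for KV):

```python
# Hypothetical sketch: querying a D1 binding from a Python Worker.
# Assumes a D1 database bound as "DB"; no connection string or auth
# needed, the binding is simply available on `env`.
from js import Response

async def on_fetch(request, env):
    result = await env.DB.prepare(
        "SELECT id, title FROM posts WHERE author = ?"
    ).bind("alice").all()
    return Response.json(result.results)
```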

Disclaimer: I work for Cloudflare, but not on Workers (my team just is a heavy user of Workers). I'm just speaking as a Workers user/enthusiast here.

pdyc
0 replies
1d1h

When D1 moved from alpha to beta they removed the backup feature. Granted, it was a beta, but their support and tooling is quite fragile. Even now that they have declared general availability for D1, if you try to take a backup using their proposed wrangler commands it does not work (there are bugs in the handling of dates). You end up wasting a lot of time due to this.

D1 is SQLite and Supabase is PostgreSQL, so they are not exactly comparable, but the pros/cons of SQLite vs Postgres apply here -- except that SQLite's advantage of running in-process doesn't apply, since now both DBs have to be reached over the wire.

infamia
1 replies
1d3h

Agreed, this looks really cool. While there is no Django/DRF support at the moment, it does say that they'll be increasing the number of supported packages in the future.

woutr_be
0 replies
1d3h

Same here, I have a couple of mobile apps that use Cloudflare workers + KV/D1, and it’s been great. I’m low traffic enough to be on the free tier, but would happily pay given how easy it’s been to build on.

noman-land
4 replies
1d3h

This is kind of a game changer for running AI stuff on Cloudflare. Been hoping for this for a while now.

spxneo
1 replies
1d3h

What is the maximum duration a worker can run for? Curious how this compares to AWS Lambda -- or is it something completely different?

dabber
1 replies
1d3h

> This is kind of a game changer for running AI stuff on Cloudflare.

That certainly appears to be the intention.

> Been hoping for this for a while now.

You should check out the other two announcements from today as well if you haven't yet:

"Leveling up Workers AI: General Availability and more new capabilities"

https://blog.cloudflare.com/workers-ai-ga-huggingface-loras-...

"Running fine-tuned models on Workers AI with LoRAs"

https://blog.cloudflare.com/fine-tuned-inference-with-loras

tyingq
3 replies
1d4h

A performance comparison to a JS worker would be helpful. This sounds interesting, but also potentially slow, given all the layers involved.

Not that I'm expecting parity, but knowing the rough tradeoff would be helpful.

brendanib
1 replies
1d2h

Three aspects of performance:

1. Cold start perf
2. Post-cold start perf:
   - The cost of bridging between JS and WebAssembly
   - The speed of the Python interpreter running in WebAssembly

Today, Python cold starts are slower than cold starts for a JavaScript Worker of equivalent size. A basic "Hello World" Worker written in JavaScript has a near zero cold start time, while a Python Worker has a cold start under 1 second.

That's because we still need to load Pyodide into your Worker on-demand when a request comes in. The blog post describes what we're working on to reduce this — making Pyodide already available upfront.

Once a Python Worker has gone through a cold start, though, the differences are more on the margins — maybe a handful of milliseconds, depending on what happens during the request.

- There is a slight cost to crossing the "bridge" between JavaScript and WebAssembly — for example, when performing I/O or async operations. This difference tends to be minimal — generally measured in microseconds, not milliseconds. People with performance-sensitive Workers already write them in Rust https://github.com/cloudflare/workers-rs, which also relies on bridging between JavaScript and WebAssembly.

- The Python interpreter that Pyodide provides, running in WebAssembly, hasn't had the years and years of optimization that have gone into making JavaScript fast in V8. But it's still relatively early days for Pyodide compared to the JS engine in V8 — there are parts of its code where we think there are big perf gains to be had. We're looking forward to upstreaming performance improvements, and there are WebAssembly proposals that help here too.

riazrizvi
0 replies
1d

Very helpful - thanks

deadbabe
0 replies
1d4h

Anecdotally it seems very fast.

neonsunset
3 replies
1d3h

I wish they'd add Azure Functions-style workers using C# too, or AWS-style lambdas using NativeAOT. Way lower runtime overhead and time-to-first-response latency.

But C# is an underdog language in those lands, so it's understandable.

mdasen
2 replies
1d2h

At the moment, it seems like they're concentrating on using V8 isolates. In the article, there's a good diagram of why: an isolate is able to share so much between different applications. Even with NativeAOT, you're still launching an entire program that has to load everything into memory and execute.

In some ways, they're using V8 isolates the way that mod_php was used back in the day. One reason PHP became so dominant was because PHP was cheap and easy to deploy for small websites. Because the PHP runtime contained 90% of what a person wanted to do with PHP, your PHP code might be a small amount of code that mostly just called standard library functions like `mysql_query()`. If you were running a shared hosting service, you could have huge numbers of people running on the same box because every PHP script would be sharing a single instance of the PHP standard library - and that standard library was fast and written in C. If you wanted to offer Python hosting, each Python app would be duplicating the standard library they were using in memory and also needing lots of web packages that aren't part of the standard library (like a database package). So a minimal Python application was using tons more RAM because it wasn't sharing most of the code with everyone else on the box.

Even with NativeAOT, you're still duplicating a lot when running many different C# projects - as is the case with Go, Java, Ruby, etc. V8 isolates are this case where they tend to be lighter weight because so much can be shared between different users in the system.

In fact, the reason they're supporting Python is because Pyodide (Python interpreter in WASM) allows for dynamic linking. It means they can have a single Pyodide interpreter in memory that's shared by all the Python workers on the same box. Likewise, they can also share Python libraries that two different people on the same box might be using. They note that most languages that target WASM don't support dynamic linking and that the only way they can provide Cloudflare Workers at the price point they offer is because those Workers can share so much rather than duplicating and using more memory for each user.

If you really want C# on Workers, C# does support WASM.

tredre3
0 replies
21h17m

Indeed, isolates are similar to mod_php in that a small pool of shared processes can handle thousands of different applications simultaneously. PHP is truly great in that way!

But just to clarify, mod_php isn't thread safe so parallel requests do not share memory, they each have their own process (prefork mpm). And for untrusted tenants you also need to combine with mod_setuid or mod_suexec for proper isolation, as PHP doesn't do any isolation of its own (they tried for a while but gave up, remember open_basedir?).

In other words a server with 16GB of RAM could handle maybe 250-350 simultaneous PHP requests using mod_php, whereas I'm sure they can fit thousands of isolates in that footprint.

neonsunset
0 replies
1d1h

WASM performance and overhead currently make it a poor fit for the edge serverless scenario for anything that is compiled (as it is in many other languages, really -- stop adding the overhead of yet another runtime, undoing decades of optimization work).

"Consumption plan" azure functions as they call it are much more in line with V8 isolates where your function is just an isolated assembly run on a common runtime alongside many other functions. It has limitations and I assume the implementation of this is not open source (I don't know what it runs on exactly as azure functions implementation details never really interested me much).

pjmlp
2 replies
1d2h

Why anyone would want to slow down their requests with a fully interpreted implementation is beyond me. Don't be surprised by scalability issues.

atomicnumber3
1 replies
1d2h

There's a lot of value in "just write a python thing in 5 minutes".

People tend to hem and haw about performance and "doing it right." But it's often a misplaced argument: it's not "python vs [the 'right' thing]", it's "python vs not having anything." Often, a shitty python thing is worth a ton and fixes the problem, and then, if performance becomes enough of an issue, you can evaluate whether you want to prioritize fixing it. And I find that once something that works is in place, quietly doing its job, people suddenly find it a lot less objectionable to their sensibilities.

And even if we do replace it, having the python thing take the heat off the "right" way's timetable lets you actually do it right, because you can take your time.

pjmlp
0 replies
1d2h

We all know the outcome of that temporary script that was written in 5 minutes.

pelletier
2 replies
1d3h

I'm curious to see how the limitation of using only Pyodide packages will play out for non-trivial builds. Thinking of all the non-pure-Python code out there that needs to be manually rebuilt to support a non-trivial production app.

Maybe Cloudflare's adoption will help bring more packages into the fold, and if the 80/20 rule holds here, that would be good enough.

hoodchatham
1 replies
1d3h

I certainly think there's an 80/20 rule here. Most packages are not very hard to port; generally, the ones that are hard to build use features like threads and multiprocessing, graphics cards, raw sockets, green threads, or other capabilities that have no obvious analogue in a WebAssembly runtime.

As we mention in the blog post, the biggest issues are around supporting server and request packages, since they are clearly useful in Cloudflare Workers but are difficult to port because they frequently use raw sockets and some form of concurrency.

dom96
0 replies
1d2h

As we build out support for some of these features in the Workers Runtime, we should be able to port Python modules to use them. Some features like raw sockets are already available, so we should be able to make some quick headway here.

(Myself and Hood above are the folks who implemented Python Workers)

jarpineh
2 replies
1d1h

More development and more users for Pyodide is great news, especially the better serverless story for Python server frameworks.

I wonder if Jupyter can work in this stack? It is essentially JavaScript, but built for the browser environment. Just the Python kernel might be worker-compatible. It essentially has to do code evaluation, which might be a limitation as well. Should it work, you could offload compute from the browser or other HTTP clients to the vaster resources of the worker environment. Direct access to databases would be better as well.

jarpineh
0 replies
22h39m

Yes, Jupyter could work. I just fear there are things that expect the browser's JavaScript APIs. Some, I think, aren't even available to web workers. V8 workers are a different thing (I believe) and not at all familiar to me. But it should be easy enough to test…

https://github.com/jupyterlite/jupyterlite?tab=readme-ov-fil... <- Currently kernel supports at least web workers.

garrettgu10
1 replies
1d1h

Haha, we included it just because it's part of the standard library. Total coincidence in terms of timing but it's nice that using Wasm gives us isolation guarantees :-)

dom96
0 replies
1d1h

Yeah, pure coincidence. I picked it before the xz news broke.

alfor
2 replies
1d2h

I don't see how people will start using a completely new way of running Python.

If you are just experimenting and having fun, sure. But would you bet your company or many, many months of development on this? What happens if you get random bugs?

The advantage needs to be extremely high to make it worth it. Maybe for specialized work that needs to happen at the edge -- and even then, why not use JS instead, which is the bedrock of this implementation?

victorbjorklund
0 replies
23h49m

I can totally see it being used for small services where Python has better libraries.

jppope
0 replies
1d2h

Cloudflare's engineering is top notch. I wouldn't expect anything different than what you would get from any of the other major cloud providers.

Workers are also extremely fast/performant and inexpensive... if you are building a company, those two aspects can be fairly important to its success.

Terretta
2 replies
23h59m

"This is about more than just making bindings to resources on Cloudflare more Pythonic though — it’s about compatibility with the ecosystem."

Someone might be getting editorial help from GPT-4.

// Or a human might be getting fine-tuned interacting with LLMs, which I've noticed happening to me.

Terretta
1 replies
16h52m

To the dead comment beside me, asking why...

GPT-4 has a variety of tells. One of them is:

"This is not just about good thing, it's about another good thing." That "it's not just A, it's also B", or "it's more than A, it's B too" show up most any time you ask it for "persuasive" copy, like marketing, sales, or a rewrite of anything to make the reader a buyer.

It's disproportionately common in GPT-4 copywriting or copy-editing, relative to human copy.

Similar to seeing the word "Overall, ..." for a concluding paragraph, another tell.

corinroyal
0 replies
16h41m

My personal favorite is, "It's important to note..." I asked it to stop using that phrase or variations and that lasted one prompt. I'm tempted to put the phrase on a T-shirt.

gregorymichael
1 replies
1d4h

I’ve used CF Pages for static sites with great results and am intrigued by all their open-source-LLM-as-a-service offerings. Main issue preventing me from building more on CF is lack of Python support. Excited to try this out.

johnmaguire
0 replies
1d4h

Yes! I'm also using CF Pages, and a couple Worker functions, and really love the CF ecosystem. Very easy to get something running quickly, and not have to worry much about infrastructure.

Very happy to see the Python addition. I'd like to see first-class Go support as well.

zinclozenge
0 replies
1d3h

I'd be curious to see a direct performance comparison between their python and JS workers. Based on my own experience with pyodide, I'd wager there might be up to a 2x performance penalty.

rodolphoarruda
0 replies
1d1h

Interesting. htmx -> Python -> SQLite, all in Cloudflare. I was kind of waiting for this day.

paddy_m
0 replies
1d2h

Glad to see it includes numpy (and presumably pandas). Getting those to work in constrained serverless environments can be a huge pain.
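
If it works the way the announcement suggests, usage should be an ordinary import; a rough sketch (assuming numpy is declared as a dependency the way the docs describe):

```python
# Rough sketch: using a bundled native package (numpy) in a Python Worker.
import numpy as np
from js import Response

async def on_fetch(request, env):
    arr = np.arange(100).reshape(10, 10)
    return Response.new(f"trace = {np.trace(arr)}")
```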

hdlothia
0 replies
1d1h

Wow, this is huge for LLM and data engineering. Many of the best libraries are in Python.

fastball
0 replies
8h59m

Tried this out today and it was great, was very quick to get up and running!

One question though – does anyone know how I can get my local dev environment to understand the libraries that are built into CFW's Python implementation? E.g. there is an `asgi` library that I do not want my linter to flag as unknown, but as it only exists at runtime in the `on_fetch` handler (and isn't actually present on my local dev machine), I couldn't figure this out.
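
The best I've come up with so far is guarding the runtime-only import so local tooling tolerates it; a sketch (only the `asgi` name comes from the runtime, the rest is a generic Python pattern, not an official workaround):

```python
# Generic workaround sketch: tolerate modules that only exist in the
# Workers runtime when linting/type-checking on a local machine.
try:
    import asgi  # type: ignore[import-not-found]  # provided by the runtime
except ImportError:
    asgi = None  # local dev: module isn't installed; guard uses accordingly
```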

devwastaken
0 replies
1d

It compiles Python to WebAssembly, which then runs on their modified V8 runtime. WebAssembly is generally a non-solution to any problem, and especially not this one. While it is convenient, it is a clear lack of engineering ability that they can't implement a proper Python runtime.

The reasons not to use Wasm are many: the toolchains for Emscripten are not well documented, are hacky, and were not built to the quality you'd expect from a compiler. After all, it's doing something nothing was designed for.

The performance will never be an improvement over a native engine; much of the context is lost in translation when compiling to Wasm.

anon373839
0 replies
18h41m

With Pyodide getting some serious backing, is there a glimmer of hope that we could end up with Python as a real alternative to JavaScript in the frontend?

adam_arthur
0 replies
1d1h

I would like to see Cloudflare implement workers with WASM as the first-class citizen, and a general-purpose API not tied to JS workers.

Up until now you've been able to deploy WASM code (i.e. effectively use any language), but it runs within a JS context rather than natively.

Just a bit more overhead/awkwardness in deployment. I believe eventually all services will be deployed directly to (securitized) WASM runtimes rather than via containers (similar to how we moved from images -> containers).

There's very little benefit currently to trying to use something like Rust on the edge (in CF), because a lot of the perf advantage is negated by the overhead and startup times.

e.g. https://github.com/WasmEdge/WasmEdge