I believe with version 3.3 Ruby is back in a big way! The language, focused on developer happiness and long derided for its slowness, is slow no more.
YJIT is an amazing technology, and together with other innovations like object shapes and various GC optimizations, Ruby is becoming seriously fast! Big Ruby shops such as Shopify [1] have been running 3.3 pre-release with YJIT and reporting double digit percentage performance improvements.
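For context, YJIT ships with CRuby and, since 3.3, can even be turned on from inside a running program (a minimal sketch; the output depends on how your Ruby was built):

```ruby
# Enable YJIT at runtime if this Ruby build ships it.
# RubyVM::YJIT.enable was added in Ruby 3.3; older builds fall through.
if defined?(RubyVM::YJIT) && RubyVM::YJIT.respond_to?(:enable)
  RubyVM::YJIT.enable unless RubyVM::YJIT.enabled?
  puts "YJIT enabled: #{RubyVM::YJIT.enabled?}"
else
  puts "This Ruby build does not support enabling YJIT at runtime"
end
```

Before 3.3 you would instead start the interpreter with `ruby --yjit` or set `RUBY_YJIT_ENABLE=1` in the environment.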
Personally I'm really excited about Ruby and its future. I can't wait to start working with Ruby 3.3 and using it on my clients' production sites...
[1] https://railsatscale.com/2023-09-18-ruby-3-3-s-yjit-runs-sho...
Edit: add percentage to performance improvements.
You mean like 10% faster, or 10x faster?
Edit: clicked the link; it's 10%. I don't think that's going to make any difference to the perception of Ruby's slowness given that it's on the order of 50-200x slower than "fast" languages like Rust, Java, Go and C++.
Note this is for a Ruby on Rails application. The slowness is I believe more due to the framework than the language runtime. In any case, it's still early to determine the impact on performance from upgrading to Ruby 3.3. I guess we'll be able to tell more in the coming months.
Hardly relevant; I'd think at least 80% of all Ruby usage everywhere is in Ruby on Rails applications.
It is relevant if the cause of the slowness is due to the framework rather than the language itself.
The performance can vary drastically depending on which Ruby framework you use, so it's not due to the language but to the layers above it.
Note, even if Rails is the most popular framework, there are still other alternatives, which makes the framework choice even more relevant to the performance impact.
I suspect there's not just one cause of slowness here. Pure language benchmarks also tend to rank Ruby very low. So I'd wager that a fast Ruby framework would still lose to a fast Go framework.
Apples to oranges again. A more relevant comparison would be Ruby to Python, and in recent years Ruby has edged ahead in performance if you factor out Python's C-based libraries such as NumPy.
Isn’t factoring out C based libraries ignoring a large part of the Python ecosystem?
Yes, but if you're comparing language performance that's important.
I haven’t used Python in years but from what I’ve understood by reading other comments, factoring out C-based libraries would rule out a large portion of what makes Python so popular. Especially on the scientific side.
So I think you’re right.
Absolutely, I should maybe clarify that I did not mean to say that we can fully rule out the language itself as a cause of the poor performance; Ruby always performs worse than Go in this example, for obvious reasons.
But saying that the framework built on top of the language has almost no relevance is wrong in my opinion! And that is what I was trying to point out.
How much of the time in a given request is even spent in Ruby code? The majority of slow web apps I got a chance to analyze were spending much of their time in DB queries, and the slowness was usually due to unoptimized queries. Even endpoints that were fast still spent a large percentage of their time not in Ruby but in DB and service requests.
That is true, yes, but still comparing e.g. Rust vs. Ruby I'd think Rust spends anywhere from 10ns to 100ns (outside of waiting on DB) and Ruby no less than 10 ms. Still pretty significant and can add up during times of big load.
Also I remember Rails' ActiveRecord having some pretty egregious performance footprint (we're talking 10ms to 100ms on top of DB waiting) but I hear that was fixed a while ago.
I seriously don’t think it’s worth comparing Ruby to languages like C++ and the rest.
One is a scripting language, the other compiled; the difference is already huge there.
It is worth comparing any two languages and ecosystems if they are used for the same things, in this case -- web backends.
Anything and everything that has a web backend is fair game for comparison.
Building a web backend with C++ is a very dangerous and complex ordeal, you're extremely likely to expose memory based vulnerabilities to the entire world.
I agree but people are doing it anyway. So technically Ruby on Rails and C++ are competitors in the web backend space.
RoR and whatever C++-based web backend there is count as a valid comparison in my book. But comparing the languages themselves is maybe a bit off.
On a side note, you can actually compare their performance here if you’re really curious. But take it with a grain of salt since these are synthetic benchmarks.
https://www.techempower.com/benchmarks
Google, Facebook, Amazon and Twitter beg to differ (not completely C++, but for services where it makes sense). Also nginx, passenger and other parts of your Ruby app.
Web backends in Rust and C++? Not saying web frameworks in these languages don't exist but to claim they're anything other than curiosities is misleading. In any case, good luck with the fraction of a millisecond such languages gain you while waiting on database i/o. There are use cases for switching to a compiled language like Rust or C++. Web back-ends isn't one of them.
Nah. One Rust-based server can handle the same number of connections as 10-100 Ruby-based servers.
This is not just a “pretend database IO” problem. This is an actual cost that no business should accept on the basis of “but but but database IO (that I’ve never actually measured, but it’s an easy cop-out because I read it on Medium once).”
I am not interested in being put in a corner where I have to defend web backend creation that I don't even practice. I only said it's being done by people. Feel free to reject it or degrade it as being "a curiosity", from where I am standing reality disagrees with you though.
Right I see your point, but in this case it’s about Ruby, the language itself, not RoR.
It may be worth comparing this new JIT to fast implementations of dynamic languages, like LuaJIT or SBCL for Common Lisp (SBCL is an AOT compiler and not a JIT though).
I'm so glad that you said this and weren't downvoted into the ground. Honestly, Ruby needs to die. It performs like a go-kart in a Formula 1 race. I'm actually just exhausted watching smart people tell me this is a language and toolchain worth dedicating brain cells to.
Ruby and Python are the only two ecosystems that seem to prioritize developer happiness. They're a pleasure to work with. So they're not going to die anytime soon.
Python might be popular, but developer happiness isn't a concept I'd associate with the language. There's no joy in being limited to a single expression in a lambda, for example.
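The Ruby side of that comparison: any block can hold multiple statements, guards, and early exits, where a Python lambda is restricted to one expression (a small illustrative sketch):

```ruby
# A Ruby block can hold several statements and early exits --
# the equivalent Python lambda would be limited to a single expression.
doubled_evens = [1, 2, 3, 4].filter_map do |n|
  next unless n.even?  # skip odd numbers (filter_map drops nil results)
  n * 2
end
puts doubled_evens.inspect  # => [4, 8]
```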
What a bizarre take. Ruby is primarily used in web applications, where round-trip http requests, database queries, and other 10s-of-ms things are commonplace. Ruby is very rarely the bottleneck in these applications. Choosing to make your job significantly more challenging in order to maximize the performance of a small portion of the total response time of a web application is not, in my estimation, a smart decision.
If you compare pure Ruby without Rails to fast languages like Rust, Go, and Java, it is probably closer to 10-20x.
The 100x to 200x mainly comes from Rails.
Apples to oranges, no?
Ruby the language may be fast, but the whole ecosystem is painfully slow. Try writing a server that serves 1 MB of JSON per request out of some DB query and some calls to other services. I get 100 requests per second in Rails. The same service rewritten in Go serves 100k requests/s.
But, like, why do you need a 1MB json response? That’s probably either a bad design or a use-case Rails is not designed for.
It's a paginated list of 1000 objects of 1 KB each. Any nontrivial API will have responses like that.
My entire point was that Rails is not designed for this.
But you could return the 1000 objects (or less? 1000 records sounds like a lot for any UI to show at once) of 1kb size and allow the clients to request specific pages with a request parameter. There may be applications where you need to ship the full 1M records I guess, but that seems like very much an edge case as far as web apps go.
1,000 records is absolutely not a lot on modern computers or connections. On a business LAN, this request should take well under a second full latency.
On an average mobile connection, it’s maybe a second or so.
True, you would not return 1000 objects at once to the frontend.
I first thought it's just a backend use-case, where processing 1000 records in a paginated result is common, but the parent mentions "rails", so it sounds like a frontend use-case.
The better question is
“Why would I pointlessly accept this clear case of massive technical debt for literally no reason what-so-ever?”
Rails does not present any sort of promise that go does not also present, so just saying “yeah, I’ll handcuff my app like this cause I feel like using Ruby” is, frankly, absurd.
When you ask the right questions, you never land on Ruby, and that’s why Ruby continues to decline.
You're pushing 100 GB/s of JSON (1 MB × 100k/s)? AND you're calling other services + a DB per request on a single server? I'm skeptical.
The test was local, ie using the loopback interface on a large server.
Why does that matter?
You’re likely in one of two situations as a business:
* You’re a struggling startup. Development velocity trumps literally everything. Your server costs are trivial, just turn them up.
* You’re a successful business (maybe in part because you moved fast with Ruby), you can pay down the debt on that absurdly large response. Chunk it, paginate it, remove unnecessary attributes.
1MB is "absurdly large"? This is not the Todo app industry sorry.
This is paginated (page size of 1000) and the caller chooses only the attribute they need already, thanks.
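For what it's worth, the pure-Ruby serialization cost of a payload like the one described above (1000 records of roughly 1 KB) can be timed separately from Rails, routing, and the database (a sketch; the record shape is made up):

```ruby
require "json"
require "benchmark"

# 1000 hypothetical records of roughly 1 KB each, mirroring the payload
# described in the thread (the field names are made up).
records = Array.new(1000) do |i|
  { id: i, name: "record-#{i}", payload: "x" * 950 }
end

# Time serialization alone, with no framework or I/O in the loop.
json = nil
elapsed = Benchmark.realtime { json = JSON.generate(records) }
puts "serialized #{json.bytesize / 1024} KB in #{(elapsed * 1000).round(2)} ms"
```

Numbers will vary by machine and Ruby version, but it helps separate "Ruby is slow at JSON" from "the Rails request path is slow."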
One area where Ruby could help improve developer experience is by providing a better debugging experience. I feel incredibly spoiled with Chrome Dev Tools. Meanwhile the last time I tried debugging heavy metaprogramming Ruby code it was a pain to figure out what was happening.
Meta programming is my largest complaint with Ruby. It creates huge surprises that are very difficult to inspect and debug.
We ban most meta programming in our own code. While the meta programming solutions are fun and clever they are often more code than a functional version and hard to maintain.
We do allow the occasional use of “send” but try to avoid it. Dynamic method definitions are strictly banned.
what is ruby debug not able to do that you want it to do?
https://github.com/ruby/debug
a nice ide integrated experience:
https://code.visualstudio.com/docs/languages/ruby#_debugging...
https://github.com/ruby/vscode-rdbg
https://code.visualstudio.com/docs/editor/debugging
heavy metaprogramming in any language is going to be a pain to debug so i'm not sure what you're expecting but there are tools to help. you can also call source_location on some method to figure out where things exist.
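As a concrete example of the `source_location` tip, it points back at the `define_method` call site even for dynamically generated methods (class and method names here are made up):

```ruby
# Dynamically defined methods -- the kind of metaprogramming that is hard
# to find by grepping for the method name.
class Report
  %w[daily weekly].each do |period|
    define_method("#{period}_summary") { "#{period} summary" }
  end
end

puts Report.new.daily_summary  # => daily summary

# source_location recovers the file and line of the define_method block.
file, line = Report.instance_method(:weekly_summary).source_location
puts "#{file}:#{line}"
```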
that's surprising considering `pry`[1] is such an amazing debugger IMO.
[1] https://github.com/pry/pry
I don't know if a slight performance increase is going to sell anyone on ruby but I'm glad they're making incremental improvements on things. Being overly concerned about performance is almost always premature optimization, and ruby is more than fast enough for everything I've ever asked of it (including the binding glue between our redis DNS record storage and PowerDNS, where the entire stack serves half a billion queries a month across 14 tiny VPSes without even a blip on htop). I probably could have just used ruby instead of PowerDNS but it's generally not great to roll-your-own on public facing encryption, HTTP, DNS, etc. It wasn't really a performance consideration for me.
The recent irony of the web is anyone that implemented a web app with "slow" ruby and backend rendering now has the fastest page loads compared to bloated front-end web apps backed by actually slow eventually consistent databases that take seconds to load even the tiniest bits of information. I see the spinner GIF far too often while doing menial things on the "modern" web.
> half a billion queries a month across 14 tiny VPSes
For reference:
I always do this when I see large-sounding query counts; a month has a lot of seconds in it, and it’s easier to visualize only one at a time: I can imagine hooking a speaker up to the server and getting a 14 Hz buzz, or do quick mental arithmetic and conclude that we have ~70 ms to serve each request. (Though peak RPS is usually more relevant when calculating perf numbers; traffic tends to be lumpy, so we need overhead to spare at all other times, which makes the average much less impressive-sounding.)
I also like to double-check these kinds of numbers and basically agree with your take. Although I like to use a 28-day month and an 8-10 hour day rather than assuming smooth traffic over 24 hours. Even with all that, half a billion is well under 50 req/sec per server, which is not a big deal for the rendering servers. All that traffic coming together on a database server might be a bottleneck, though.
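Those back-of-envelope numbers are easy to verify (assuming, as the comment does, a 30-day month, even traffic, and an even split across the 14 servers):

```ruby
queries_per_month = 500_000_000
servers = 14

# Average over a 30-day month, spread evenly across the servers.
per_server_hz = queries_per_month.to_f / servers / (30 * 24 * 3600)
puts "average: ~#{per_server_hz.round(1)} req/s per server"  # ~14 Hz
puts "budget:  ~#{(1000 / per_server_hz).round} ms/request"

# Pessimistic view: 28-day month, 9-hour business day.
busy_hz = queries_per_month.to_f / servers / (28 * 9 * 3600)
puts "busy-hours: ~#{busy_hz.round(1)} req/s per server"     # still under 50
```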
Unless those “double digits” are 98+%, Ruby is still going to be quadruple-digit percentage points behind strong competitors in terms of performance.
If I’m going to give it even a second glance, I can’t be seeing “ruby 10,000% slower than Java” for measured use cases.
I was pretty excited hearing “double digit” thinking 50 or 80%.
The link shows 13-15%.