I tried Zed for a few weeks because I'm generally sympathetic to the "use a native app" idea vs Electron. I generally liked it and its UX, but:
1. VSCode is pretty damn fast, to be honest. Very rarely is VSCode the thing slowing my work down. Maybe I don't open very large files? Probably 5k lines of TypeScript at most.
2. Integration with the TypeScript language server was just not as good as in VSCode. I can't pin down exactly what was wrong, but the autocompletions in particular felt much worse. I've never worked on a language server or editor, so I don't know what's on Zed/VSCode and what's on the TS language server.
Eventually all the little inconveniences wore on me and I switched back to VSCode.
I will probably try it again after a few more releases to see if it feels better.
I think people just have very different tolerances for latency and slowness.
I keep trying different editors (including VS Code), and I always end up going back to Neovim because everything else just feels sluggish, to the point where it annoys me enough that I'm willing to put up with all of Neovim's configuration burden.
I tried out Zed and it actually feels fast enough for me to consider switching.
Sublime Text 3 is still one of my favorite editors. I use VSCode lately because of its excellent "Remote SSH" integration - but when it comes to latency sublime has it beat.
Zed does not feel fast on my machine, which is a 13900K with 128GB of RAM. It is running in XWayland though, so that could be part of the problem. It feels identical to VSCode.
| It is running in xwayland though
It definitely isn't on my system, and I did not touch the configs at all; are you sure about that?
Fairly positive due to blurry cursors, but I have no way to verify.
If you run xeyes and the eyes follow your cursor when it's above the application you want to test, it's running under xwayland. If they don't follow your cursor, the application is running under native Wayland.
finally a use for xeyes?
I don't use vanilla xeyes but I use the Window Maker dockapp version (https://bstern.org/wmeyes/) to make it easier to find my cursor on the screen.
Ha. KDE 6 has something like that: if you jiggle the cursor a certain way, it temporarily grows larger.
Better than Windows's function of "hide all my windows"...
Pressing some key a few times in Windows highlights your cursor. I just can't remember what it was (Ctrl I think)
Yup, Ctrl twice.
Once works. It's an option you have to turn on: Settings > Mouse > Additional mouse options > Show location of pointer when I press the CTRL key.
Oh thank you thank you. I moved to Windows 11 and the feature disappeared - it is right where your path points to.
I think every OS has this feature. Sometimes it is hidden in an accessibility menu and needs to be turned on.
I always run xeyes in any net-enabled gui. iykyk.
Welp, looks like it is running native wayland yet the cursors are blurry. The only time I have ever experienced that is when an app is running under xwayland.
If you run xlsclients it will list all applications running through xwayland.
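If I remember right, it takes no arguments, so you can just grep for the app in question - something like:

$ xlsclients | grep -i zed

If nothing comes back, that window isn't going through XWayland.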
[0] https://archlinux.org/packages/extra/x86_64/xorg-xlsclients/
Oooh, thank you this is very convenient. Confirmed zed is not listed here.
Sublime Text gang, raise up.
I was always a fan of Sublime Text and I moved away from it once because VSC felt more "hassle-free". The extensions just worked, I didn't need to go through endless JSON files to configure things, I even uncluttered its interface but at the end of the day I returned to good old Sublime Text. Now with LSPs it requires way less tinkering with plugins. I only wish it had just a little bit more UI customizability for plugins to use (different panes etc). Maybe with Sublime Text 5 if that ever comes.
Also about the speed: VSC is fast but in comparison... Sublime Text is just insta-fast.
ST4 is my go-to for quickly viewing and editing individual files. It really is instant compared to VSC.
I don't really run ST with any complex plugins though and leave cases where I want those for VSC. The ones I have installed right now are just extra syntax highlighting and Filter Lines (which I find very handy for progressively filtering down logs)
I still use ST for opening huge files. 9 times out of 10 if a huge file cannot be opened in any other editor, I will open it in subl and it will be just fine.
It is hard when so many in our industry are cheapskates and don't want to pay for their tools, the way people in every other profession do.
They'd rather suffer with VSCode than pay a couple of dollars for Sublime Text.
I paid for Sublime, but moved to VSCode because at least at the time it had better hassle free support for more languages. Including linters, auto formatting and just generally convenient stuff.
I'm not sure where it stands now. My guess is that Sublime has caught up for mainstream languages, but that support for languages that are a bit more niche, like Clojure or Zig, is nowhere near as good.
I miss the speed and editing experience of Sublime though.
I feel the same way about Notepad++
notepad++ is a respectable editor but sublime defeats it at everything except price.
I have used Sublime Text my entire pro programming career. Before that I used emacs for a while.
I love it and will not switch it for anything. It is maybe one of the best pieces of software ever made. A lot of things such as multiple cursors, the command palette etc. were first popularized by ST.
Today, I use it to write Rust, Go, web stuff and with LSP I get all the autocomplete I need. I also use Kitty as a separate terminal (never liked the terminal in editor thing).
Things like Cmd-R and Cmd-Shift-R to show symbols in file and symbols in project work better, faster and more reliably than many LSP symbol completions.
I'm all for Sublime Text and Merge, my daily drivers for all kinds of writing..
Sublime's focused/minimalist UI is nice. VS Code sometimes feels like it tries to do too much.
My ideal editor would probably be something like a variation on Sublime Text that's modeled more closely after TextMate while keeping the bits that make Sublime better (like the command palette).
Sublime is the better Textmate. What would you do to subl to make it more like mate? I used textmate for years and years before switching to ST and it was a drop-in replacement.
The two are pretty close, but between the two TextMate feels more like a golden era OS X desktop app thanks to several small differences and tiny Mac-isms, and I'd like Sublime to have that feel too.
I also feel TextMate had the nicer overall UX. When I first tried Sublime, TextMate had the better text rendering (IMO). Sublime has more features but still doesn’t feel as slick somehow.
I've recently returned to Sublime from VSC. I prefer VSC's UI for following links to definitions/references, but in most other ways I prefer Sublime's nimbleness.
Not that this was necessarily better in terms of capabilities, but TextMate had a very pleasing Unix-style extension model where there was no mandated language and extension commands used scripts/executables written in any language. There was even a nice graphical editor for fine-tuning exactly what input they would be given and how their output would be acted upon.
TextMate was very much "Mac OS X UI sensibilities combined with Unix power", whereas ST pretty much has its own self-contained philosophy that's then brought to Mac/Windows/Linux in a slick way.
I'm begrudgingly stuck with VSCode because of language support in the smaller-community languages I work with, but any time it starts being a dog (and it doesn't take much, think a 20MiB test data file) I switch back for that purpose.
I'm also never letting it anywhere near a merge again, after the worst merge in my years of using git. Sublime Merge doesn't give me the same warm feelings as Sublime Text, but it works, and it won't choke on a big patch and apply a huge deletion without showing it to me first.
I use Helix and feel the same way. The pickers/fuzzy finder particularly have no equal for speed in any editor I’ve found. (Zed seems pretty fast but I didn’t get on well enough with it to find out how it performs with more serious use.)
fwiw I’ve also found the configuration overhead much lower with Helix than for pretty much any other editor I’ve seriously used.
This makes me want to use Helix, because while I love the idea of a terminal editor, I'm not the kind of person to whittle away a day screwing around with my config files.
Helix has been stalled for a few months, and there are issues that make it frustrating to use at times. For example, :Ex and friends have been relegated to the plugin system (the root cause of the stall; it hasn't been merged). I still prefer it to the config overhead of nvim (as well as the kakoune-style movements!), but the paper cuts have hit a threshold and I've started writing my own text editor (I'd probably use Zed, were it not for the lack of kakoune movement support): https://youtu.be/Nzaba0bCMdo?si=00k0D6ZfOUF8OLME
The Helix community is the worst part about Helix. Especially the not so benevolent dictator of the project. Way too many comments like “if you don’t like how it’s done go use a different editor” instead of listening to feedback. That’s fine if they don’t care about adoption (they publicly say they don’t), but an actively hostile community doesn’t give me confidence in the editor, despite it being quite nice.
Author here. I listen to feedback, but it's hard to incorporate every possible requested feature without the codebase becoming an unmaintainable mess.
We're a small team with limited time and I've always emphasized that helix is just one version of a tool and it's perfectly fine if there's a better alternative for some users. Someone with a fully customized neovim setup is probably going to have a better time just using their existing setup rather than getting helix to work the same way.
Code editors in particular are very subjective and helix started as a project to solve my workflow. But users don't always respond well to having feature requests rejected because they don't align with our goals. Plugins should eventually help fit those needs.
I like this response. Kudos to sticking to your vision; it's easy to be swayed by users into building a kitchen-sink-fridge-toilet. If you build for everyone, you build for no one.
I've found attitudes like this to be the worst parts of the community.
Maybe it's quite nice because of how they've approached building it? I've been actively watching Helix for quite a while now, and I've observed how hostile those who approach the project are.
From what I've seen, they do listen to feedback. Perhaps similar to the person who said it had stalled, people take not saying yes as not listening to feedback?
My experience is rather different.
The community is welcoming, and will help solve issues. However, it’s true (and good IMHO) that the project seems to have a strong idea of what is and is not a core feature. They prioritise building what you might call the Helix editing model and the Helix vision for what an editor should be.
Importantly, Helix isn’t (or doesn’t appear to be) trying to become something approaching an OS, or to be a faster, easier to configure way to get an editor that works like [your preferred configuration of] vim or emacs with lower input latency.
I applaud these things! I like the Helix model more than the vim or emacs models, and the project’s priorities for what should and shouldn’t be in an editor core are pretty well aligned with my own. I do not find I’m desperate for plugins to fix some major deficiency, though I’m sure I’ll use a few once they become available.
This is all what I want to see and fits my definition of a good “benevolent dictator”, maintaining focus and taking tough decisions.
I do maintain a reasonable set of extra keybindings and small configuration changes, as well as a very slightly modified theme [0], but I don’t think many of them are essential and I try pretty hard not to conflict with Helix defaults or radically diverge from the Helix editing model.
It works for me right now, and keeps getting better (rather quickly if you install from git as I do). I’m excited for the future, especially seeing some of the features and improvements moving through PRs.
YMMV.
[0] https://gist.github.com/barnabee/82f39d02a85291b0045f53f2473...
Stalled how? There was a release a couple of months ago. There's another on the way. There are regular changes merged in. There have been foundational changes (events) made to enable new features. The plugins are being worked on, and whilst the pace may not be for you, that doesn't mean it's stalled.
It's the main reason I switched from Neovim. I didn't want to maintain a thousand lines of Lua to have a good baseline editor. I only wanted to maintain my configuration idiosyncrasies on top of an editor with good defaults. I think there are Neovim distributions that accomplish mostly the same thing, but then I fell in love with Helix's Kakoune-inspired differences.
Give it a try! It's lovely.
Zed is still quite a bit slower than Neovim in my experience.
Interesting. That tells me there's something wrong with my neovim config. When I open a file for the first time, it takes some time before it shows the contents of the file. It's not even a big config, but maybe I'm using a plugin that slows things down or something.
Try using Neovim without loading a config, just like a fresh install, and see how it is.
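Something along these lines should do it (the file name is just an example):

$ nvim --clean yourfile.ts

--clean skips your config and plugins entirely, so it gives a fair baseline to compare against.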
Yeah, it's due to something in my config.
Neovim is quite a bit slower than cat and echo.. in my experience.
I've honestly never considered this and it's genius. I have always been surprised when people recommend kitty as a "fast" terminal when it takes 200ms (python interpreter) to start up, which is unbearable to me.
But yeah, people sometimes just open a couple and see speed in other areas that I don't care about.
I would actually say that this is more of a system/OS issue, to a point. Why doesn't my OS keep such often-used programs in memory, simply opening a new window when clicked, like mobile OSs do? Probably just because desktop hardware can get away with a lot more. I believe that making programs go to a background mode and pausing their threads would make everything so much smoother, with zero or even a beneficial effect on memory/battery consumption.
It’s not genius. It’s just very appealing to those on the side of wanting something faster, because - like all topics like this - everyone is always looking for subtle ways to signal themselves as somehow patrician. “Oh, well, some people just want more ownership of their computer, that’s why I use Linux :)”, is similarly thought-terminating. The conversation shouldn’t end there.
I wonder if it's because of a form of "touch typing". I'm not really looking at text appearing as I type. My fingers work off an internal buffer while my mind is planning the next problem. If not so deep in thought to almost be blind, I am reading other docs / code as I type. I am not an ultra fast typist but if I mistype, I can feel it and don't need the visual feedback to know it. I might be this way because I am old and have used tools with lag you measure in seconds.
I only care about latency if it interrupts me and I have to wait and that's typically not typing but heavier operations. I am utterly intolerant to animations. I don't want less I want zero, instant action. I don't want janky ass "smooth scrolling" I want crisp instant scrolling. I have no idea why animations are even popular.
Some of the text-editor latency discussion reminds me of high screen refresh rates for office work. When people "check the refresh rate" they have to do that violent wiggling of a window to actually have large content moving fast enough to see a difference. You have to look for it to then get upset about it.
The worst case would be if it's more of an illusion, like fancy wines - a fiction driven by context. Lie to someone that an editor is an Electron app and they will complain about the latency. Software judgement also has toxic fashion and tribal aspects. Something unfashionable will accrue unjustified complaints and something cool or "on your team" will be defended from them. I'm reminded of Apple fans making all sorts of claims about rendering, unaware that they were using Apple laptops that shipped not running at their native resolution and visibly blurry. Your lying eyes can't beat what the heart wants to believe.
I'm in the same camp, but in the end it turns out we were not putting it to the actual, hard, real-world test.
VSCode is very fast for me when I open it in the morning and am just starting my day.
But once I've opened the main project and 7 supporting libraries' projects, and I'm in a video call on Chrome sharing my screen (which is something that eats CPU for breakfast), and I'm live-debugging a difficult-to-reproduce scenario while changing code on the fly, then the test conditions are really set up where the differences between slow/heavy and fast/lightweight software can be noticed.
Things like slowness in syntax highlighting, or jankiness when opening different files. Not to mention what happened when I wanted to show the step-by-step debugging of the software to my colleagues.
In summary: our modern computers' sheer power is camouflaging poor software performance. The difference between using native and Electron apps is a huge reduction in the upper limit of how many things you can do at the same time on your machine, or having a lower ceiling on how many heavy-load work tasks your system can be doing before it breaks.
The same can be said about a lightweight web page vs "React" with tons of routers, all in an SPA with a vdom. Maybe the page is fine when it is the only page open, but when there are other SPAs also open, then even typing becomes sluggish. Please don't use modern computers' sheer power to camouflage poor software performance. Always make sure the code uses as few resources as possible.
That brings a Python "performance" talk to mind that I was recently listening to on YouTube. The first point the presenter brought up was that he thinks the laptops of developers need to be more modern for Python to not be so slow. I had to stop the video right there, because this attitude isn't going anywhere.
You know what? I actually believe in having developers work (or maybe just test) with slower computers (when writing native apps) or with crippled networking (when doing web) in order to force them to consider the real-world cases of not being in a comfy office with top-notch computers and ultra-high-bandwidth connections for testing.
I totally agree. However, I feel like this is ageism :-) Are you 40+, perhaps? :-)
Heheh no. I'm in my 30s. My opinion comes from experience. I like to travel a lot, and have been several times on trips that brought me to places where the norm is a subpar connection. Taking 30 seconds to load the simplest bloatware-infested blog that doesn't even display text without JavaScript enabled, teaches you a thing or two about being judicious with technology choices.
“Craptop duty”: https://css-tricks.com/test-your-product-on-a-crappy-laptop/.
I think dev containers can help here. You have a laptop that can run your editor, and a browser. The actual build is done on a remote machine so that we're not kneecapping you by subjecting you to compiling kotlin on a mid range machine, but your laptop still needs to be able to run the site.
I agree with this approach. I used to always have hardware that was no more than 2 years old and mid-high to high spec. When I helped troubleshoot my family's and extended family's devices and internet connections, I saw how normal people suffered on slow systems and networks. I've since switched to older devices and don't have gig internet at home. Every web and app designer should have to build or test with constraints.
This is giving me flashbacks to editors of yore: EMACS, Eight MB And Continually Swapping. I remember reading almost the exact same comments on Usenet from the 80s and 90s.
Flashbacks? It’s 2024 and Emacs is still single threaded
And it still performs better than vscode.
That’s not entirely surprising. Emacs’s UI is a character-cell matrix with some toolkit-provided fluff around it; VSCode’s is an arbitrary piece of graphics. One of these is harder than the other. (Not as harder as VSCode is slower, but still a hell of a lot.)
I use Emacs in textmode. It's super fast! But I've also never found VS Code slow, and that's with viewing multiple large log files at the same time
It’s also 2024 and you still can’t share JavaScript objects between threads. Do not underestimate the horror that is tracing garbage collection with multiple mutator threads. (Here[1] is Guile maintainer Andy Wingo singing praises to the new, simpler way to do it... in 2023, referring to a research paper from 2008 that he came across a year before that post.)
[1] https://wingolog.org/archives/2023/02/07/whippet-towards-a-n...
Sure. Just allocate 10x the engineering resources and I can make it as fast and bug free as you like.
Getting the same number of engineers, or possibly fewer, who actually care and know about performance can work. There's a reason applications are so much slower, relatively, than they were in the 80s. It's crazy.
Yeah, all I need to do to reliably show the drastic performance difference is open 5 different windows with 5 different versions of our monorepo. I frequently need to do that when e.g. reviewing different branches and, say, running some of the test suites or whatever — work where I want to leave the computer doing something in that branch, while I myself switch to reviewing or implementing some other feature.
When I start VS Code, it often re-opens all the windows, and it is slow as hell right away (on Linux 14900K + fast SSD + 64GB RAM, or on macOS on a Mac Studio M2 Ultra with 64GB RAM).
I'll save a file and it will be like jank...jank... File Save participants running with a progress bar. (Which, tbh, is better than just being that slow without showing any indication of what it is doing, but still.)
I've tried to work with it using one window at a time, but in practice I found it is better for my needs to just quit and relaunch it a few times per day.
I try Zed (and Sublime, and lapce, and any other purportedly performant IDE or beefed-up editor that I read about on this website or similar) like every couple months.
But VS Code has a very, very large lead in features, especially if you are working with TypeScript.
The remote development features are extremely good; you can be working from one workstation doing all the actual work on remote Linux containers — builds and local servers, git, filesystem, shell. That also means you can sit down at some other machine and pick up right where you left off.
The TypeScript completion and project-wide checking is indeed way slower than we want it to be, but it's also a lot better than any other editor I've seen (in terms of picking up the right completions, jumping to definition, suggesting automatic imports, and flagging errors). It works in monorepos containing many different projects, without explicit config.
And then there's the extensions. I don't use many (because I suspect they make it even slower). But the few I do use I wouldn't want to be without (e.g. Deno, Astro, dprint). Whatever your preferred set is, the odds are they'll have a VS Code extension, but maybe not one for the less popular editors.
So there is this huge gravity pulling me back to VS Code. It is slow. It is, in fact, hella fucking slow. Like 100x slower than you want, at many basic day-to-day things.
But for me so far just buying the absolute fastest machine I can is still the pragmatic thing to do. I want Zed to succeed, I want lapce to succeed, I want to use a faster editor and still do all these same things — but not only have I failed so far to find a replacement that does all the stuff I need to have done, it also seems to me that VS Code's pace of development is pretty amazing, and it is advancing at a faster clip than any of these others.
So while it may be gated in some fundamental way on the performance problem, because of its app architecture, on balance the gap between VS Code and its competitors seems to be widening, not shrinking.
VSCode is very snappy for me on a less powerful machine, a Ryzen 3900 (Ubuntu, X11). I have a good experience running multiple instances, big workspaces and 70+ actively used extensions, and even more that I selectively enable when I want them. It's only the MS C# support that behaves poorly for me (intentional sabotage?!).
I wonder if you have some problem with your machine/setup? I'd investigate it - try some benchmarking. It's open source, so don't be afraid to look under the hood to see what's happening.
I don't see that at all. Saving is instant/transparent to me.
There is so much possible configuration that could cause an issue. E.g. if you have "check on save" from an extension, then you enter "JS jank land", where plugins take plugins that take plugins, all configured in files with dozens of options and weird rules that change format every 6 months - e.g. your linter might take plugins from your formatter, your test framework, your UI test framework, your hot-reload framework, your bundler, your transpile targets...
If saving is really slow then I would suspect something like an extension is wandering around node_modules. Probing file access when you see jank might reveal that.
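On Linux, something like strace on the extension host process would show that (the PID is whatever your process list reports for the extension host - I'm guessing at the exact process name, so adjust as needed):

$ strace -f -e trace=openat -p <extension host pid> 2>&1 | grep node_modules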
I have that kind of fast, smooth experience with VS Code, too - but only when I open my small hobby monorepo, or when I don't leave it open all day. When I open a big work monorepo (250k files, maybe 10GB in size, or 200MB when you exclude all the node_modules and cache dirs), the slowness isn't instant, but it becomes slow after "a while" - an hour, or two.
I do actually regularly benchmark it and test with no/minimal extensions, because I share responsibility for tooling for my team, but the fact that it takes an hour or two to repro makes that sort of testing too cumbersome to do. (We don't mandate using any specific editor, either, but most of my team uses VS Code, so I am always trying to help solve pain points if I can.)
And it's not just the file saves that become slow - it's anything, or seemingly so. Like building the auto-import suggestions, or jumping to a definition by ⌘-clicking a symbol. Right after launch, it's snappy. After 2-3 hours and a couple hundred files having been opened, it's click, wait, wait... jump.
Eventually, even typing will lag or stutter. Quitting and restarting it brings it back to snappy-ish for a while.
It is true that maybe we have some configuration that I don't change, so even with no or minimal extensions there might be something about our setup that triggers the problems. Like, we have a few settings defined at the monorepo root. But very few.
But before you think "aha! the formatter!", know that I have tried every formatter under the sun over the past 5 years. (Because Prettier gave my team a lot of problems. Although we now use it again.) We also have a huge spelling dictionary. I regularly disable the spelling extension, though, but what if there were an edge-case bug where having more than 1000 entries in your "cSpell.words" caused a memory leak on every settings lookup, even when the extension wasn't running? I mean... it's software, anything is possible.
But I suspect it is the built-in support for TypeScript itself, and that yeah, as you work with a very large number of files it has to build out a model of the connections between apps and libs and that just causes everything to slow down.
But then, like I mentioned nothing else I've seen quite has the depth of TypeScript support. Or the core set of killer features (to us), which is mainly the remote/SSH stuff for offloading the actual dev env to some beefy machine down the hall (or across the globe).
To us, these things are worth just having to restart the app every few hours. It's kinda annoying, sure, but the feature set is truly fantastic.
Hmm. I've not experienced that. Something is leaking which can be identified/fixed. There are quick things you could do to narrow it down e.g. restart extension host or the language server or kill background node processes etc.
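If I remember the command palette names correctly, the relevant ones are along the lines of:

Developer: Restart Extension Host
TypeScript: Restart TS Server
Developer: Reload Window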
I generally have it running for weeks... although I do have to use "reload window" for my biggest/main workspace fairly often because rust-analyzer debugging gets screwed up and it's the quickest fix from a keyboard shortcut. I may be not seeing your issue for other reasons :)
FWIW I can recommend "reload window" because it only applies to the instance you have a problem with and restores more state than quit/restart e.g. your terminal windows and their content so it's not intrusive to your flow.
Yeah, I know what you mean. I now schedule time for "sharpening my tools" each day and making a deliberate effort to fix issues / create PRs for pain-points. I used to live with problems way too long because "I didn't have time". It's not a wall-clock productivity win.... but the intangibles about enjoying the tools more, less pain, feeling in control and learning from other projects are making me happy.
It's too bad VSCode doesn't "hydrate" features on an as-needed basis or on demand. Imagine it opens by default with just text editing and syntax highlighting, and you can opt in to all the bells and whistles as you have the need with a keystroke or click.
I somewhat disagree. Features sell the product, not performance[1], and for most of software development's history you could count on the rising CPU tide to lift all poorly performing apps. But now the tides have turned to drought, and optimizing makes a hell of a lot of sense.
[1] Performance is more of a negative sell, and relative to other feature-parity products. No one left Adobe Photoshop for Paint, no matter how much faster Paint was. But you could if feature parity were closer, e.g. Affinity vs Photoshop.
performance is a feature.
Yes, but more in a QoL way. I say negative as in - if you don't have it you lose a customer, rather than if you have it, you gain a customer.
If performance is a feature, then it's not an important one. Otherwise, people would use Paint for everything.
Or to put it another way: you want to do task X1. It's editing a picture to remove some blemishes from skin. You could use a console to edit individual pixels, but it would take months or a year to finish the task if you are making changes blindly, then checking. It could take several days if you are doing it with Paint. Or you could do it with Photoshop in a few minutes. What difference does a few ms make if you otherwise lose hours?
Now this is only task X1, which is editing blemishes. Now do this for every conceivable task and take an average. What percentage of each task do the millisecond losses amount to?
I completely agree with that take. That's exactly the reason why, for example, whenever I'm about to do some "Real Work" with my computer (read: heavyweight stuff), all Electron apps are the first to go away.
My work uses Slack for communications, and it is fine sitting there for the most part, but I close it when doing some demanding tasks because it takes an unreasonable amount of resources for what it is, a glorified chat client.
Well, I think you are missing a subtle issue. They may not switch, but they might pay more if it's faster. They also might not switch to Paint, but if Photoshop performed terribly they might switch to a dozen different tools for different purposes. This kind of thing already happens.
It's a Prisoner's Dilemma. Since apps are evaluated in isolation, there is an incentive to use all the resources available to appear as performant as possible. There is a further incentive to be as feature-rich as possible to appeal to the biggest audience reachable.
That this is detrimental to the overall outcome is unfortunate, but not unexpected.
There's no extra apparent performance in using Electron. A truly more performant solution will still be more performant under load from other applications.
The extra performance is on the side of the developers of the app. They can use a technology they already know (the web stack) instead of learning a new one (e.g. Rust) or hiring somebody who knows it.
I have had to open the parent folder of all the different code bases I need in a single VSCode window, instead of having an individual window for each.
I much prefer having individual windows for each code base, but the 32GB of RAM in my laptop is not enough to do that.
If I were to run multiple instances of VSCode, then the moment I need to share my screen or run specs some of them will start crashing due to OOM.
Like the sibling, I have no problem with keeping multiple windows open and I only have 16GB RAM (MacBook Pro). It must be language extensions or something like that.
I don't notice much of a problem from multiple windows. I sometimes have a dozen going.
It's the language extensions in the windows that can cause me problems e.g. rust-analyzer is currently using more than 10GB! If windows are just for reading code and I'm feeling some memory pressure then I kill the language server / disable the extension for that window.
I have more problems with jetbrains. 64GB isn't enough for a dev machine to work on 10s of Mbs of code any more...
My only problem with VSCode is that it's owned by Microsoft. I'm willing to put up with some extra friction if it allows me to escape their ecosystem even a little bit.
My general rule is if I can get at most of what I need from the open source version of something, I use it. Even if it's less user friendly.
but vscode is open source: https://github.com/microsoft/vscode
and there are third-party builds from the community that disable things like telemetry: https://vscodium.com/
Sorry, I should have been more specific and said FOSS. VSCode is still encumbered by the weight of a mega corp. It's like saying Chrome is open source. Sure it is, but it still exists to serve the corporation that owns it.
It's MIT licensed. So it's more FOSS than FOSS
It isn't 1860 anymore, "the freedom to take freedom away" no longer counts.
In what way is VSCode comparable to enslaving human beings?
Having the freedom to take away freedom does not make a society more free.
MIT takes freedom away from end users at the expense of the developer's freedom.
In a way that is comparable to enslaving human beings?
It uses indentured neural networks to write code for you. You're a neural network! You just have rights because you ain't digital (and way larger and possibly using quantum effects). Smh
Tortured analogy.
It's free software in letter, but not in spirit. True free software doesn't lock out non-official builds for zero technical reasons.
what about vscodium? for that reason, what was iceweasel?
vscodium and VSCode forks are legally prevented from using the normal VSCode extension site. They have their own: https://open-vsx.org/
As far as I know Chrome forks are not blocked from using extensions from the Chrome Web Store.
*according to microsoft
There is some sort of vendor lock-in in VSCode. It at least used to be extremely difficult to make GitHub Copilot work with Codium. There is something closed-source in VSCode that makes the difference.
It was so difficult to maintain that I ended up switching to VSCode. So the "lock-in" worked.
The software is free, the extension site is not. I agree that's a shitty practice by MS, but it doesn't somehow make VSCode not free software.
It's not F/OSS at all. It's proprietary software with some open-source components, which together comprise VSCodium.
The problem is that many parts of the ecosystem require that you use the official MS build.
You can't connect to the Marketplace and some extensions outright can't be used with a custom build.
You can, however, download the extension from the website and install it from the terminal.
codium --install-extension {path to .vsix}
You are able to do so, but is it allowed by the website's terms of service? It may say that you are granted the license to extensions only with Microsoft builds of vscode.
Microsoft isn't a stranger to distribution restrictions and software usage limitations. I remember uploading Visual C# Express 2010 (freely downloaded from Microsoft's website, without license keys) to a local file sharing website to ease the downloading for my local study group and got a letter from Microsoft's lawyer to take it down.
After that our study group transitioned to Mono with Monodevelop.
Terms of service? Who cares.
I don't remember going to the website and agreeing to anything. I got VSCodium from my package manager.
An actual example is that the Python LSP extension on the offical marketplace has some "DRM" that makes it pop up a fatal "You can't use this extension except with the real VSCode" error message. People have been playing whack-a-mole with it by editing the obfuscated JS to remove that check, or by using an older version from before they added the check. https://github.com/VSCodium/vscodium/discussions/1641
If you're ideologically opposed to Microsoft's editor, that doesn't seem to be a problem to me.
If you're ideologically opposed to Microsoft $FOO, you want to avoid putting yourself even further into their embrace.
https://ghuntley.com/fracture/
You mean except for all of the good plugins. Or the ability to use a custom plugin store. Last I read, the open builds struggled with removing all of the MS telemetry and some may still be leaking.
I, too, prefer to cut off my nose to spite my face.
Yes, living by principles is inconvenient sometimes.
A few weeks ago I had this giant json text blob to debug. I tried Gedit first, and it just fell over completely. Tried vim next, and it was for some reason extremely slow too, which surprised me.
VSCode loaded it nearly immediately and didn't hang when moving around the file. I have my complaints about VSCode, but speed definitely isn't one of them.
Did you have some plugins in vim? It is very odd if it was slower in this scenario.
Not to my knowledge, outside of whatever Debian comes with. Keep in mind this was on a Chromebook - so it would have been running in a VM on a rather memory restricted system. That said, VSCode would have been running in the same parameters.
Just found the file. 42MB on a single line. Takes 5 seconds to open in vim, and about 3 seconds for the right arrow to move the cursor one char over. Nothing like gedit, but slower than I expected.
I'm pretty sure this is syntax highlighting. It's a known issue to be slow for large files in Vim because it is synchronous. Try starting Vim with syntax highlighting off:
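For example (and -u NONE also works if you want to rule out your vimrc and plugins entirely):

$ vim -c 'syn off' tt.json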
Yep, that helps a ton, thanks. Now it behaves more like nvim, and cursors around much faster:
$ time vim -c 'syn off' tt.json
real    0m3.277s
user    0m1.690s
sys     0m0.349s
Interesting. I expected it to be near instant without syntax highlighting but it's still slow.
It is odd that it is slow. On my 2019 MacBook Pro:

Edit: here is a new, more realistic example:

% time vim -c 'syn off' <64 MB>.txt
vim -c 'syn off' <64 MB>.txt  0.41s user 0.20s system 32% cpu 1.848 total
---
Here is my first, pre-edit example, which is invalid: the file was a zip and my install of vim was not opening it as text or binary.

% time vim -c 'syn off' <48 GB file>
vim -c 'syn off' <48 GB file>  0.03s user 0.03s system 2% cpu 2.380 total
This makes sense. I recently learned that VSCode is clever enough to automatically disable some features (which includes syntax highlighting among I guess other things) when it detects that the file is too big according to some heuristics (like probably, length of the longest line, or maybe just total size of the file).
So IMO I think vim is being "too dumb" here and should be able to adapt like VSCode does. But, meanwhile, if you want to test under equal conditions, you can disable VSCode's optimization by disabling this setting:
Editor: Large File Optimizations
Or directly in settings.json:
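If I remember the setting ID correctly, it's:

"editor.largeFileOptimizations": false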
Disabling the advantages of one application vs another is just kneecapping the superior editor IMO.
Just curious, what if you do the same with bare neovim, for science?
Sure, just tried it. This is time to open, show the initial contents, then exit. nvim is much faster to cursor around, except when you hit the opening or closing of a json block it hangs a bit, so I'm guessing it has some kind of json plugin built in.
$ time vim tt.json

real    0m5.910s
user    0m4.120s
sys     0m0.343s

$ time nvim tt.json

real    0m2.894s
user    0m1.372s
sys     0m0.292s
I did some research and it seems that this particular slowness is due to the file being a single line: if there is any syntax highlighting enabled, vim/neovim reads the line completely in order to highlight it correctly.
VSCode reads only the visible content and does not load everything for that line. It tokenizes the first 20k chars of the line at maximum, defined by the "editor.maxTokenizationLineLength" setting.
This makes a world of difference when your editor is configured to wrap lines, or clip, or whatever.
You probably happened to have VSCode configured to do something that mitigates the problems of having an extremely long single line, while Vim was not configured to do that.
In case you don't want to investigate the problem, but want to make a more "fair" comparison: use a language that you are comfortable with to format the file with linebreaks and indentation and then load it in different editors.
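For example, something along these lines works (jq, or the json.tool module that ships with Python; the output file name is arbitrary):

$ jq . tt.json > tt_pretty.json
$ python3 -m json.tool tt.json tt_pretty.json

Either one preserves the content but adds newlines and indentation, which sidesteps the single-long-line problem.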
Defaults matter.
For mainstream users. Particularly in the case of vim, the end user is more likely to figure out that this is a configuration problem that can be adjusted.
Might want to try https://github.com/LunarVim/bigfile.nvim
I believe VsCode has told me several times that the file i want to open is too big.
It's more of a suggestion/question than a hard limit though, no? "Are you sure you want to open this 4GB file?"
I work with giant jsons every day and always have to fall back to nvim as vscode is terrible. Vscode even has a default size limit where it disables editor features for json files larger than a few megabytes.
Nvim works flawlessly tho even with syntax highlight and folding.
Weird, I had the exact opposite experience. I had a large Markdown file I was editing and VSCode would simply hang or crash when opening it. Neovim on the other hand actually was able to navigate and edit just fine.
For this kind of stuff Sublime has always been extremely good.
But I still struggle to find a reason to not use neovim, it's replaced all my editors.
Okay, call me weird, but why have our standards fallen so low?
VSCode may appear fast, but still has massive latency. The Zed website claims 97ms.
I can feel it is laggy.
Why can't we have response time under 1ms? Even 5ms would be a massive improvement.
For me latency is a massive productivity killer as it feels like walking in a swamp and it always puts me off.
A typical 60 Hz screen refresh is 16.7 ms
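(1000 ms / 60 ≈ 16.7 ms per frame; at 144 Hz that drops to ~6.9 ms, and at 240 Hz to ~4.2 ms.)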
If you haven't tried a 144hz or even a 240hz gaming PC, you should. You can really feel the difference dragging things around the screen.
(I'm not sure I would notice typing, but for dragging windows around I could never go back to 60fps.)
You can notice higher frame rates if you're in a competitive FPS, not a code editor. Unless you are playing CS2 in Emacs.
Choppy scrolling adds to the feeling of walking through a swamp.
Properly leveraged GUI editors have the potential to use the extra refresh rate for smoother animations/smooth scrolling, though that's pretty far away from Emacs territory.
I can certainly tell the difference between 60 and 120 Hz in fast-paced games, but I would not notice it in a UI.
I thought so too, but for a while I had 2 144Hz monitors on my Mac Pro[1] and very much noticed it in the UI, window dragging was smoother, browser scrolling too, absolutely noticeable.
[1] Then Apple released the Pro Display and Big Sur, and people wondered "how does the math work for a 6K display and bandwidth?" The answer: they completely fucking broke DP 1.4. Hundreds of complaints, different monitors, different GPUs, all broken by Big Sur to this day, just so Apple could make their 6K display work.
My screens could do 4K HDR10 @ 144 Hz. After Big Sur? SDR @ 95 Hz, HDR @ 60Hz. Ironically I got better results telling my monitors to only advertise DP 1.2 support, then it was SDR@120, HDR@95Hz.
Studiously ignored by Apple because they broke the standard to eke out more bandwidth.
I do not notice any difference between my 120Hz work MacBook Pro and my 60Hz home MacBook Air. I might notice if I did a side-by-side comparison and looked closely. But why would I?
60hz gives me a headache after a few hours, been like that since I was a kid.
I agree with you -- but aiming for 1ms performance is pretty hard. That is 1/1000th of a second. Your keyboard probably has higher latency than that. Physics cannot be defeated in this regard.
There are keyboards with 1kHz polling.
Yes, but it takes longer than that for the signal to reach the USB port. And I doubt many of us are typing at 1000 keystrokes/second. Apparently that's around 12,000 words/minute, assuming an average word length of 5 characters.
Expanding on this, there's a detailed analysis of the various contributors to editor latency (from keyboard electronics to scanout) by one of the jetbrains devs at[1]. They show average keypress-to-USB latency for a typical keyboard of 14ms!
1: https://pavelfatin.com/typing-with-pleasure/
I just want to point out that most keyboards have a latency of 10-20 ms[1], so 1 ms is impossible.
[1]https://danluu.com/keyboard-latency/
That includes the physical travel time, which is an extremely important caveat.
Sure. But that is what the experience is, right? When I press a key, the entire end to end latency is what I care about.
I grew up a damn good HPB Q1 player at 250ish ms.
If you type and wait for the letter, I could see that being annoying. My brain works more in waves, my hands type a block and it's there on the screen. I've never once thought of character latency, but maybe that's my HPB roots.
Honestly I don't think that the problem with VSCode is speed, even. It's bloat. It uses gobs of RAM just to open up a few text files. I compared it to Sublime Text a while back and it was something like 500 MB (for Sublime) to 1-1.5 GB (VSCode). That's not acceptable in my view.
Microsoft's latest embrace-extend-extinguish strategy is keeping just enough special sauce in (frequently closed-source) vscode extensions and out of the language servers. They do the same thing with Pyright/Pylance.
TS itself is lock-in. I mean, the entire point of JS is that it's portable, and there's certainly no lack of compile-to-JS languages that are already finished and have much more powerful type systems and existing libs/ecosystems.
Enjoy your VSCode projects exclusively on Windows a couple of years down the road, or rather, contribute to MS' coding ML models to make yourself obsolete even before then. Windows already phones home everything it has gathered on you the second it connects to the net, and I'd expect VSCode to as well.
But the infanterists in our profession manage to get it wrong, every single time.
Erm, you do know that a founding principle of TS is that the "compile" step is literally just stripping out the type annotations. You could implement it with a regex if you really wanted to.
The only place this rule is broken is TS enums, and that's generally considered to have been a mistake, but one that's too old to rectify.
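A quick sketch of what that means in practice (hypothetical snippet, not from any real project):

// add.ts - the annotations are the only "TypeScript" part
export function add(a: number, b: number): number {
  return a + b;
}

// what actually runs once the types are stripped - plain JS
export function add(a, b) {
  return a + b;
}

// the enum exception: this emits real runtime code (an object built up in an IIFE),
// so it can't simply be erased the way annotations can
enum Direction { Up, Down }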
Why is that?
Yeah, Bun for example can execute TypeScript files directly. It does not include tsc or anything; it just strips out the type annotations and executes the remaining file, which is valid JS.
esbuild does the same I believe.
Could you perhaps consider a worldview that doesn’t place you as being better than everyone else that doesn’t share your preferences? I bet you don’t think that LLMs are going to replace you, rather you’re suspending disbelief to paint the most bleak picture of the future you can come up with, and, again, maximise the blame you place on everyone that isn’t as GOOD as you!
And the remote SSH and C++ extensions, though that actually has a good alternative in the Clangd extension.
I'm kind of ok with it tbh. As a monetisation strategy it's not the worst, and I have no expectation that they just do all this for free.
Bandwagoners are keen to class everything Microsoft does to be competitive as EEE. This is just…them building a product. Throwing their weight around, building something really good, releasing it for free, something that only a handful of other companies could do? Hell yeah! It’s shady. But it’s not EEE.
Yeah, this was mostly my experience. The Zed editor was fast, but it just felt like it wasn't as good as other editors. For me, the version control integration was particularly poor - it shows some line information, but diffing, blame, committing, staging hunks, reviewing staged changes etc are all missing.
There were a bunch of decisions that felt strange, although I can imagine getting used to them eventually. For example, ctrl-click (or jump to usages) on a definition that is used in multiple places opens up a new tab with a list of the results. In most other editors I've used, it instead opens a popover menu where I can quickly select which result I want to jump to. Opening those results in a new tab (and having that tab remain open after navigating to a result) feels like it clutters up my tabs with no benefit over a simple popover.
Like you, I'll probably try again in a few releases' time, but right now the editor has so much friction that I'm not sure I actually save any time from the speed side of things.
Have to agree on the VCS story. I’d switched over to using Zed more or less permanently, but I eventually moved back because I kept having to open Intellij to resolve conflicts.
As I don't use either: can't you just open the file and look for the conflict markers?
Yes, JetBrains editors are particularly good at resolving merge conflicts. They also have a magic button to do all the obvious conflicts automatically.
A lot of IDEs these days offer a three-way-merge interface that massively improves on the conflict resolution process. Different tools have different interfaces, but generally you have three panes visible: one showing the diff original->A, one showing the diff original->B, and third showing the current state of the merged file, without conflicts. You can typically add chunks from either of the two diffs, or free edit your resolution based on a combination of the different options.
I find resolving conflicts through this sort of system tends to be a lot more intuitive than trying to mess around with conflict markers - it also helps with protecting against mistakes like forgetting conflicts or wanting to undo changes. If you're not used to it, I really recommend finding a good three-way merge plugin for your editor/IDE of choice.
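If your editor of choice has one, you can usually also wire it up as git's mergetool so that `git mergetool` drops you straight into that three-pane view. For VS Code it's something like this (from memory - double-check the flag order):

$ git config --global merge.tool code
$ git config --global mergetool.code.cmd 'code --wait --merge $REMOTE $LOCAL $BASE $MERGED'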
I don't use it as my main editor (I'm far too used to the Jetbrains editors to make the switch, they're just too smart), but it's the best one for CLI apps that use EDITOR, like git. It boots up basically instantly even when it hasn't been launched in a while and I can make my commit messages and immediately close stuff up at the speed of my thought.
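For anyone wanting the same setup, it's just the usual EDITOR wiring (hx being Helix's binary name):

$ export EDITOR=hx                    # e.g. in your shell profile
$ git config --global core.editor hx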
+1 for the Jetbrains gang.
The plus side for using Jetbrains is you can switch to literally anything else and it’ll feel lightning fast.
In the morning VSCode is OK; come noon, it's the primary thing eating my battery and it's getting slower and slower; by day's end it's unusable. Sure, restart it, I know, but it's fairly terrible.
I've never experienced this. Have you tried disabling all your extensions to see if one of them is causing it?
I have a monorepo. There's a lot in it. And a lot of files. TypeScript. Go. Python. I have a lower-end MacBook Air. Not having any issues with VS Code.
I had a similar experience with JavaScript, where it kept showing me errors (usually for ESM imports) even though things were fine. In VSCode, things worked without fuss. I've been testing out JetBrains Fleet [1] as well, and its language support is far superior compared to Zed.
[1]: https://www.jetbrains.com/fleet/
I came from Visual Studio to vscode. VsCode looks super lightweight to me.
You should give Theia IDE [1] a try. It's plugin-compatible with VSCode, same user experience. It's slower to start and takes more memory, but on my 3-year-old Intel Mac it is definitely snappier than VSCode.
[1] https://theia-ide.org/
Hah, similar here. I keep trying it out after seeing posts here and there, but I can't seem to switch from VSCode.
For nearly anything I do it is fast enough, it starts in less than 2 seconds, and the main thing I like about VSCode is the ability to switch projects with fuzzy autocomplete. That means I can jump between repos in a few seconds too, which is a huge lifesaver given I switch things frequently.
Zed looked pretty cool but the amount of extensions VSCode has makes it difficult to justify a switch. I do think that the SQL extensions for VSCode are pretty terrible, so maybe that's something where Zed can capitalize.
Interestingly the biggest issues we're having with VSCode have nothing to do with the IDE itself and are instead related to the TypeScript language server. There are so many bugs that require the TypeScript language server to be restarted, and there's little the VSCode team can do about that. Made a new file? Restart. Rename a file? Restart. Delete a directory? Restart. Refactor a couple of classes? Might need a restart.
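When it gets really bad, the tsserver log sometimes shows what it's choking on. If I remember correctly you turn it on with a setting along these lines and then use "TypeScript: Open TS Server Log" from the command palette:

"typescript.tsserver.log": "verbose"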
We're also having some serious language server slowdowns because of some packages we're using. And there's not much Zed can do here for us either. It's really unfortunate because the convenience of having a full-stack TypeScript application is brought down by all of these inconveniences. Makes me miss Go's language server.
Yeah, I agree about VSCode being sort of fast enough. Computers are getting faster and I'm on an M-series Mac, which makes web rendering much faster, but still, as far as Electron apps go, VSCode is basically the golden child.
Slack & Teams on the other hand, ouch.
Yeah, my experience has been that you aren't going to suffer performance problems with VSCode unless you have an incredibly large codebase. Past a certain point I'm sure Vim/NeoVim/Zed are probably much more performant, but the difference in smaller codebases is barely noticeable IME.
VSCode cheats a little in this area. It has its own autocomplete engine that can be guided by extension config, which it mixes seamlessly into the autocomplete coming back from the LSP. The net result is better autocomplete in all languages, that can’t be easily replicated in other editors, because the VSCode augmentations can often be better than what an LSP produces.
Zed is amazing for Rust and pretty good for C++. I feel like it's better for more systems-y languages than JS/TS etc.