
Zed AI

modernerd
39 replies
23h6m

I just want a fast programmable text editor with a native GUI and good defaults.

But that seems really tough to find, for some reason.

Zed is so close, but I’d much rather see a focus on the “programmable” part and let the AI and collaboration features emerge later out of rich extensibility (i.e. as plugins, perhaps even paid plugins) than have them built-in behind a sign-in and unknown future pricing model.

p4bl0
6 replies
22h6m

I just want a fast programmable text editor with a native GUI and good defaults.

What would that be for each OS?

Linux: Kate (at least if using KDE; which one would it be for GTK / Gnome?)

macOS: TextMate?

Windows: Notepad++?

m-s-y
1 replies
19h45m

macOS: BBEdit now and forever.

leejoramo
0 replies
18h9m

BBEdit Doesn’t Suck!

sunaookami
0 replies
19h58m

Can also recommend CotEditor for macOS as a Notepad++ replacement. Don't know how "programmable" it is or what even falls under that label.

jay_kyburz
0 replies
20h16m

I use Kate on Windows as well.

I really love the Documents Tree plugin and could never go back to old style tabs.

lovethevoid
5 replies
22h57m

Sublime Text is likely closer to what you're looking for.

modernerd
4 replies
21h32m

I retry it every now and then and miss the easy extensibility of Neovim. Can we build something that marries both worlds?

meiraleal
2 replies
21h4m

By "we" you mean anybody besides you? Complaint-driven development.

modernerd
1 replies
20h22m

Happy to try building one to free you from complaint-driven motivation.

meiraleal
0 replies
17h37m

Please do. Editors and IDEs need new ideas.

throwaway74354
0 replies
20h28m

RIP Onivim 2, it was so close to this niche.

kenjackson
4 replies
22h38m

What does “native GUI” mean?

modernerd
0 replies
21h38m

Using the same platform-specific graphics API the OS vendor builds their GUI apps with, ideally, but I'll also settle for "not a TUI, not a web application shipped as a desktop app, even if the OS vendor currently builds their GUI apps as web applications shipped as desktop apps".

layer8
0 replies
18h48m

A GUI that uses native controls and platform UI conventions with the native behavior expected on the given platform, or a near-indistinguishable equivalent of that.

lagniappe
0 replies
22h34m

I understood it as not a wrapped web application, like Electron or Tauri based applications.

Lord_Zero
0 replies
18h45m

Not electron

jsheard
3 replies
22h51m

In the case of Zed it was always inevitable: a text editor doesn't raise >$10M in venture capital unless there's a plan to stuff it full of premium subscription features.

Warp Terminal is a similar story, >$50M in funding for a terminal emulator of all things...

sangnoir
1 replies
22h43m

What I'd give to have a look at the roadmaps to see how they hope to 10x/100x VC investments with a text editor and terminal.

ramraj07
0 replies
22h41m

Just look at Postman.

gleenn
0 replies
16h28m

The funny thing is Atom was the origin story of Zed, written in some C++ and a lot of CoffeeScript exactly so it could be very programmable.

Also, Spacemacs? It's technically a terminal but definitely has a lot of UI features. Very programmable.

lambdaops
2 replies
23h0m

There are two ways to build extensions for Zed that provide context for AI. In the post they show off making a Rust -> WASM based extension and also mention a server-based model. There's also a third option: Zed is open source. You don't have to use their auth, it just makes collaboration easy.

modernerd
0 replies
22h9m

Extensions can add the following capabilities to Zed: Languages, Themes, Slash Commands

This is a great start but it's far from what most would accept as "programmable" or richly extensible.

jvmancuso
0 replies
22h50m

We call the latter a "Context Server": basically any process that communicates via JSON-RPC over stdio can do it. Documentation for that is here: https://zed.dev/docs/assistant/context-servers

andrewmcwatters
2 replies
22h14m

I don't know of a single modern desktop application that is deploying front-ends simultaneously in WinUI 3, AppKit, and GTK.

blacksmith_tb
0 replies
20h4m

Not Firefox?

antiframe
0 replies
16h41m

Emacs? (I don't have any WinUI 3 machines so can't verify, but does support GTK and AppKit if built with such support).

lenkite
1 replies
12h3m

with a native GUI

This means 100x more effort in the long run for a cross-platform editor. Maybe if developers lived for 200 years, this could be possible. We'll need to solve the human ageing problem before the cross-platform "native GUI" problem.

dualogy
0 replies
3h24m

For a text editor UI, it really isn't quite so challenging: see Sublime Text, or TextAdept, and others mentioned across this sub-thread.

layer8
1 replies
18h50m

The monetary incentives are not in your (and my) favor.

modernerd
0 replies
10h17m

If a motivated solo dev thought there might be at least 10,000 people who would pay 100 USD a year for a text editor with better extensibility and performance than VS Code and better defaults/richer GUI APIs than vim/Emacs, I can see why it might be tempting for them to try.

ivanjermakov
1 replies
8h21m

What are the benefits of having a "native GUI" that terminal interface cannot substitute?

Extensibility of neovim or emacs covers all my text editor use cases.

modernerd
0 replies
7h7m

Neovim and Emacs extensibility are great!

Native GUIs offer far better accessibility (TUIs are not screen-reader accessible, and neither is Emacs' GUI currently). They also offer hugely improved UI flexibility and consistent developer APIs: Emacs' GUI is inconsistent across platforms and tricky to work with, and every Neovim plugin reinvents ways to draw modals and text input because there's no consistent API. Finally, they bring fewer redraw quirks, better performance, and better debugging; as a Neovim plugin dev I don't want to spend time debugging user reports that relate to the user's choice of terminal emulator this week and not to Neovim or my plugin code.

onel
0 replies
11h23m

I would also cast my vote for Sublime Text. The performance is amazing, the defaults are great, and the extensions cover a lot of the use cases.

mrcwinn
0 replies
18h46m

If something is "so close," probably just use it.

laweijfmvo
0 replies
22h47m

What's your current text editor and what's wrong with it?

impulser_
0 replies
22h16m

To be fair, you can turn off all the AI stuff with a single config.

"assistant": { "enabled": false, }

dualogy
0 replies
3h29m

I just want a fast programmable text editor with a native GUI and good defaults.

It's called TextAdept. Much of it is itself built on its own Lua extensibility story, which runs on a fairly compact C core. Both native GUI and terminal versions, using the same user config (keybinds etc). Linux, Mac OS, Windows builds. LSP support built in. Plenty of community-produced extensions around (but of course not as vast a range as VSCode's VSX eco-system furnishes).

https://orbitalquark.github.io/textadept/

janice1999
31 replies
23h24m

JetBrains ships a local, offline AI auto-complete in PyCharm. Sure it's limited, but it shows it can be done. It's a pity other companies aren't trying the same, especially this one that boasts about millisecond editing times.

Edit: JetBrains, not IntelliJ. Auto-complete details - https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...

nsonha
8 replies
23h5m

AI autocomplete is just a toy; it has been there since the beginning of AI coding and I find it pretty useless. I prefer writing a proper prompt to get better answers. Also, most of the time engineers (use AI to) modify existing codebases rather than write new code.

simonw
2 replies
22h56m

What programming languages do you mostly work in?

I've been wondering if the benefits of AI-autocomplete are more material to people who work in languages like Python and JavaScript that are harder to provide IDE-based autocomplete for.

If you're a Java or TypeScript developer maybe the impact is reduced because you already have great autocomplete by default.

cjonas
1 replies
22h50m

I think TypeScript benefits even more than JavaScript because the type signatures add so much context. After defining the types and signatures, Copilot will often complete the body to within a few changes of perfect.

zackproser
0 replies
19h24m

Yeah +1, the current top LLMs do pretty well with TypeScript

brigadier132
2 replies
22h55m

AI autocomplete is just a toy

I disagree. When I'm writing code and it just autocompletes the line for me with the correct type, with the correct type params set for the generics, it saves me the mental effort of having to scroll around the file to find the exact type signature I needed. For these small edits it's always right if the information is in the file.

lupire
1 replies
22h51m

Why do you need an LLM for that? That's basic type inference.

brigadier132
0 replies
22h16m

I don't know what statically typed language you are using that never requires specifying types. As far as I know only OCaml works this way.

Different languages support different levels of type inference. When I'm writing Rust and TypeScript I'm often specifying types this way for structs and function signatures.

With LLMs what often happens is I write a parameter or variable name and it infers from the name what the type is, because I have used the same name somewhere else in my codebase. This would not work without LLM autocomplete.

lupire
0 replies
22h49m

I don't see a comparison to non-GenAI autocomplete there.

Eclipse wrote half my code characters 15 years ago.

raincole
5 replies
23h14m

I tried it with C#. It's not comparable to Copilot.

If I've never used Copilot I might be slightly impressed.

teaearlgraycold
2 replies
23h12m

I don’t see programming getting exponentially harder. So eventually local compute will catch up with programming needs and offline AI will be pretty good.

lovethevoid
0 replies
22h53m

The local compute problem is a cost problem: most people do not have the necessary hardware, and judging by trends this will continue for another ~decade.

__loam
0 replies
22h43m

Local compute is also getting progressively harder to make faster. I have a CPU from 6 years ago and checked some benchmarks to see if upgrades would be worth it. It's a 90% increase in performance with a 27% improvement in single threaded. Pretty substantial but not exponential. GPUs are advancing more quickly than CPUs right now but I wouldn't be surprised if they hit a wall soon.

sangnoir
0 replies
22h48m

For those who don't have the latitude of uploading their code to Microsoft servers, smaller, limited but fully local AI is better than nothing.

rileymichael
0 replies
19h57m

You're right, it's not comparable. JetBrains' code completion actually integrates with the IDE to guarantee the suggested code isn't entirely a hallucination -- for example, it checks for unresolved references so that its suggestions don't reference non-existent variables/methods.

I've disabled Copilot as it produces garbage way too often. I found myself "pausing" to wait for the suggestion that I'd ultimately ignore because it was completely invalid. I've left JetBrains' code completion on, though, because it's basically just a mildly "smarter" autocomplete that I'll occasionally use and don't find myself relying on.

jstummbillig
5 replies
23h17m

What it shows is that it can be done — in a limited way. Other people might not like those limits and choose to go a different way. I am not sure what's worth lamenting here.

0cf8612b2e1e
2 replies
23h9m

Zed is trying to position itself as a next-gen hacker's editor: text editing with lessons learned from the multitude of experiments in the space. A flagship feature that is online-only and requires me to trust a third party is a deal breaker to many. For instance, my employer would not be pleased if I suddenly started sharing code with another party.

Take the AI out of the conversation: if you told your employer you shared the codebase, that’s an insta-fire kind of move.

nprateem
1 replies
22h58m

Yeah, but so is "I sent a message to a random person about our business" vs. "I sent a message to one of our suppliers".

0cf8612b2e1e
0 replies
22h28m

Is Zed a supplier? Sounds like a random developer can sign up at will without any corporate NDAs or other established legal agreements. Will my employer hash out the NDAs for every developer tool that wants to incorporate cloud AI for the lulz?

janice1999
1 replies
23h12m

I am not sure what's worth lamenting here.

The normalisation of surrendering all our code to remote AI cloud gods, maybe? The other is a super-responsive IDE now having major features delayed by network requests, although HW requirements likely make that faster for most people.

jstummbillig
0 replies
11h40m

That sounds a little too spooky for my taste, but you do you. What anything beyond that means, in effect, is that you (not necessarily you) want to choose my values for me (not necessarily me).

I don't see why I should care to have you do that.

xemoka
1 replies
23h12m

Zed will hook into Ollama as well, so you can run your own model locally; just not as part of the same process (yet?).
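A speculative sketch of what that looks like in settings.json, assuming the same assistant settings shape Zed uses for its cloud providers (the model name is a placeholder for whatever you have pulled locally, and Ollama is assumed to be serving on its default localhost:11434):

```json
"assistant": {
  "version": "2",
  "default_model": {
    "provider": "ollama",
    "model": "llama3.1:8b"
  }
}
```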

vineyardmike
0 replies
22h43m

Hooking into standard APIs like Ollama is definitely preferred (at least to me) because it means it’s more composable and modular. Very Unix-y. Lets you use whatever works for you instead of the vendor.

jsheard
1 replies
23h9m

What's the impact on laptop battery life roughly, when you have a local LLM constantly running in the background of your text editor?

jeroenhd
0 replies
23h3m

IntelliJ is an IDE, though, not a text editor. If you want a text editor with AI, you may need to wait for Microsoft to bring ChatGPT to notepad.

It seems to be limited to a single CPU core at a time, so depending on your CPU boosting settings, some but not too much.

It's quite snappy despite not using a lot of resources. I tried replicating the effect using a CUDA-based alternative but I could get neither the snappiness nor the quality out of that.

_joel
1 replies
23h18m

Interesting, have you used it / found it to be usable? I use IntelliJ myself, but it's not known for being a lean beast. With the addition of stuff like Testcontainers and a few plugins, I'd be surprised if my machine didn't melt adding a local LLM too.

janice1999
0 replies
23h16m

I mainly use PyCharm and I found the auto-complete to be good. It doesn't always kick in when I expect but some of the suggestions have been surprisingly complex and correct.

treesciencebot
0 replies
23h12m

Running a much worse model at higher latency (since local GPU power is limited) would be a worse experience for Zed users.

rkharsan64
0 replies
22h34m

I agree that the completions might not be that great, but for context: this is a 100M-parameter model, while most models people compare it to are at least 100x bigger.

They also focus on single line completions, and ship different models per programming language. All these make it possible to ship a decent completion engine with a very small download size.

kilohotel
0 replies
22h55m

Sure, but if it were good, more people would use it.

TechDebtDevin
0 replies
22h41m

Maybe it's because there's literally no point in using a local LLM for code completion. You'd be spending 90% of your time correcting it. It's barely worth it to use Copilot.

acedTrex
22 replies
22h55m

AI assistants just slow me down. It's a very rare case where I find them actually useful. I am generally concerned by the number of devs who claim they are useful. What on earth are y'all accepting?

owenpalmer
10 replies
21h53m

I'm curious, what kind of work do you do? Does stack overflow slow you down?

skydhash
3 replies
17h58m

Not GP, but the kind of search I do mostly are:

- Does this language have X (function, methods,...) probably because I know X from another language and X is what I need. If it does not, I will code it.

- How do I write X again? Mostly when I'm coming back to a language I haven't touched for a while. Again, I know what I want to do; I've just forgotten the minutiae of how to write it.

- Why is X happening? Where X is some cryptic error from the toolchain, especially with proprietary stuff. There's also "how do I do X", where X is a particular combination of steps and the documentation is lacking. I head to forums in that case to learn what's happening or get sample code.

I only need the manual/references for the first two, and the last one only needs to be done once. Accuracy is a key thing for these use cases, and I'd prefer snippets and scaffolds (deterministic) over LLMs for basic code generation.

theo0833
2 replies
17h30m

I use LLMs exactly and exclusively for the first two cases - I just write comments like:

// map this object array to extract data, and use reduce to update the hasher

And let LLMs do the rest. I rarely find myself going back to the browser: 80% of the time they spit out a completely acceptable solution, and for the other 20% at least the function/method is correct. This has saved me a lot of time on context switching.
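To make that concrete, here is a sketch of the kind of completion such a comment tends to yield (in Python rather than the commenter's JavaScript, with made-up record data):

```python
import hashlib
from functools import reduce

records = [
    {"name": "a.txt", "checksum": "09f7"},
    {"name": "b.txt", "checksum": "1c3d"},
]

# "map this object array to extract data, and use reduce to update the hasher"
checksums = [record["checksum"] for record in records]
hasher = reduce(
    # hashlib's update() returns None, so "or h" passes the hasher along
    lambda h, chunk: h.update(chunk.encode()) or h,
    checksums,
    hashlib.sha256(),
)
print(hasher.hexdigest())
```

The result is the same as hashing the concatenated checksums, since each update() appends to the running digest.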

skydhash
1 replies
16h15m

For me the quick refresh is better, as I only need to do it once (until I haven't used the language/library in a while again), and it can be done without internet (local documentation) or high power consumption (as with local models). With a good editor (or IDE) all of this can be automated (snippets, bindings to the doc browser, ...), and for me it's a better flow state than waiting for an LLM to produce output.

P.S. I type fast, so as soon as I have a solution in my head I can write it quickly, and with a good REPL or edit-compile-run setup I can test just as fast. Writing the spec, waiting for the LLM's code, and then reviewing it feels more like being a supervisor than a creator, and that's not my kind of enjoyable moment.

theo0833
0 replies
15h34m

I agree with you, creating something just feels better than reviewing code from a LLM intern ;D

That's why I almost never use the 'chat' panel in those AI-powered extensions, for I have to wait for the output and that will slow me down/kick me out of the flow.

However, I still strongly recommend that you have a try at *LLM auto completion* from Copilot(GitHub) or Copilot++(Cursor). From my experience it works just like context aware, intelligent snippets and heck, it's super fast - the response time is 0.5 ~ 1s on average behind a corporate proxy, sometimes even fast enough to predict what I'm currently typing.

I personally think that's where the AI coding hype is going to bear fruit - faster, smarter, context+documentation aware small snippets completion to eliminate the need for doc lookups. Multi file editing or full autonomous agent coding is too hyped.

acedTrex
2 replies
21h35m

Lots of golang kubernetes work these days.

Stack Overflow is used when I'm stuck and searching around for an answer. It's not attempting to do the work for me. At a code level I almost never copy-paste from Stack Overflow.

I also use Claude and 4o at the same time while attempting to solve a problem, but they are rarely able to help.

kamaal
1 replies
15h38m

Kubernetes, AWS, Cloudformation and Terraform etc sort of work is still not good with AI.

The current AI code rocket ship is VS Code + Perl/Python/Node+ReactJS + Copilot.

This is basically a killer combination. Mostly because large amounts of Open source code is available out there for training models.

I'm guessing there will be industry-wide standardisation, and Python use will see a further mad rise. In the longer run some AI-first programming language and tooling will be available, with first-class integration with the whole workflow.

For now, forget about golang. Just use Python for turbo charged productivity.

acedTrex
0 replies
3h29m

For now, forget about golang

I write kubernetes controllers. Golang is here to stay.

Just use Python for turbo charged productivity

This is my problem with all the "AI" bros. They seem to consistently push the idea that quickly writing code is the be-all and end-all of "productivity". It's akin to "just shovel more shit faster, it's great".

Speed != productivity

elicksaur
1 replies
19h45m

Do you use stack overflow for every keystroke?

owenpalmer
0 replies
13h19m

At minimum 6 stack overflow searches per keystroke.

dkersten
0 replies
3h39m

I'm just as baffled by the people who use Stack Overflow daily. It's increasingly rare that I use it these days, to the point where I deleted my account a few years back and haven't missed it. Don't people read docs anymore? In many ways I feel lucky that I learned at a time when I only had offline docs, which forced me to become good at understanding documentation since it's all I had.

nsonha
8 replies
22h26m

Did Google make you less productive when looking up things about code? If it did not, then I don't see how looking things up with AI can be worse.

acedTrex
5 replies
21h34m

If it did not then I dont see how looking up with AI can be worse

Looking things up with AI is worse because it's WRONG a lot more often: random rabbit holes, misdirection, stuff that SOUNDS right but is not. It takes a lot of time and energy to discern the wheat from the chaff.

Sure, you can find misleading or outdated blog posts or forum discussions with a Google search, but the information is far more grounded in correctness than anything from an LLM.

ziml77
2 replies
20h21m

This is my experience with trying to use AI for coding tasks too. I've had back-and-forths with AI that involve me trying to get it to fix things to get a final working function, but since it doesn't actually understand the code, it fails to implement the fixes correctly.

Meanwhile, the stuff you find through a traditional web search tends to either be from a blog post where someone is posting actual working code snippets, or from StackOverflow where the code tends to be untested initially but then gets comments, votes, and updates over time that help boost confidence in the code. It's far more reliable to do a web search.

nsonha
1 replies
18h26m

Why do people pretend that Google search is straightforward?

the stuff you find through a traditional web search tends to either be from a blog post

Is that so? Most of my hits have been Stack Overflow and GitHub issues, where there are false positives: the same problem as AI hallucination.

skydhash
0 replies
17h50m

Because it is? I tend to phrase my queries as keywords instead of questions, and I tend to get good results. But most of the time I'm just seeking the online manual to understand how things work and what is happening, not an exact solution. It's the equivalent of using a library to write a thesis: it only requires getting familiar with the terminology of the domain, knowing where the best works are, and knowing how to use indexes and tables of contents.

kamaal
1 replies
15h30m

>Sure you can find misleading or outdated blogposts or forum discussions with a google search but the information is far more grounded in correctness then anything from an LLM.

This was the case only 2-3 months back. But the assistants have all moved to GPT-4/Sonnet, and the newer versions are just a whole lot better and more accurate.

That's the whole idea behind AI: when you do find something wrong, the error function kicks in and the weights are tweaked to more correct values.

When GPT-5 comes along it will be a whole other level of accurate. In fact, it's already close to 90% accurate for most tasks; with GPT-5 you could say that number might go to 95% or so, which is actually good enough for nearly all the production work you could do.

Of course, in coming years, I'm guessing coding without AI assistance will be somewhat similar to writing code on paper. You can still do it for fun, but you won't be anywhere near as productive at a job.

acedTrex
0 replies
3h31m

I use GPT-4o and Sonnet regularly. They are so often wrong. Just yesterday GPT-4o spat out consistently incorrect Tree-sitter queries and refused to accept it was wrong. It's all so pointless, and it slowed me down compared to just reading the documentation.

elicksaur
1 replies
19h34m

What’s the point of this argument? If the user you’re replying to has been on this site, they’ve probably seen this counterpoint before.

“Aha!” They say, “I only realized my folly after the 25th time someone pointed out googling also takes time!”

Maybe there’s some interesting difference in experiences that shouldn’t just be dismissed.

nsonha
0 replies
18h25m

What’s the point of this argument? Maybe I have heard people bashing AI... 26 times?

swah
0 replies
6h19m

Some of Cursor's features appeal to my laziness, say: "convert to JavaScript" and hit apply. For now it's still a bit slow (streaming words), but when this becomes immediate? Not even the fastest Vimmer stands a chance. Select code, dictate the change, review, apply: it will save my wrists.

SpaghettiCthulu
0 replies
22h22m

I find the only "AI" I need is really just IntelliSense. Just autocomplete repetitive lines or symbol names intelligently; that doesn't even require an AI model.

cube2222
17 replies
22h55m

As I’ve said in yesterday’s thread[0], contrary to many others I find the AI integration in Zed to be extremely smooth and pleasant to use, so I’m happy they’re doubling down on this.

However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.

More importantly, I’m happy that they might be closing in on a good revenue stream. I don’t yet see the viability of the collaboration feature as a business model, and I was worried they’re gonna have trouble finding a way to sensibly monetize Zed and quit it at some point. This looks like a very sensible way, one that doesn’t cannibalize the open-source offering, and one that I can imagine working.

Fingers crossed, and good luck to them!

[0]: https://news.ycombinator.com/item?id=41286612

cube2222
12 replies
22h37m

Also, I know some people say “just let me pay for the editor” but I don’t think that’s actually a viable path.

The editor is open-source, and it being open-source is great. Others can contribute, it generally helps adoption, it will probably help with getting people to author plugins, and it means if they go in an undesirable way, the community will be able to fork.

So without making it non-open-source, they’d need to do open-core, which is incredibly hard to pull off, as you usually end up cannibalizing features in the open-source version (even if you have contributors willing to contribute them, you block it to not sabotage your revenue stream).

andrewmcwatters
6 replies
22h29m

I would like us as an industry to promote paying for things we use. Crazy idea, I know.

Open source is great and fun and an incredible force multiplier for the world, but when you want to do this stuff for a living, you have to charge money for it somehow, and if you're a software business, and not just software adjacent, it means charging for software.

phero_cnstrcts
2 replies
22h0m

As long as it is not via subscriptions…

pjmlp
1 replies
20h37m

Everyone goes to subscriptions because trying to land enough new customers every month to pay everyone's salaries (at a typical engineering level, plus everyone else in the company, plus office rental) doesn't scale, especially in developer tools, where many prefer to suffer with lesser tooling rather than pay.

Nathanba
0 replies
20h25m

Subscriptions are only okay for me if they're done the way JetBrains does it, where you can keep the old versions permanently. Partly because it implies what has to be true for monthly payments to make sense: that the software keeps getting better and keeps gaining support for new tools, databases, and other tech advances as they happen. If I'm paying monthly for something that doesn't cost them anything, that feels illogical on my side. This should become a thing with online services too: at least let me keep some kind of upgraded version of your service after I stop paying. Something, please, anything to make me feel less like a chump for having paid and then ending up with nothing in the end.

jarule
1 replies
19h4m

Pay-for closed-source is profitable for about a week. Then some asshole offers a shittier but free facsimile and community contributions render it good enough to kill the original.

andrewmcwatters
0 replies
18h34m

I think we're all pretty familiar with this cycle, but there do exist durable products that don't succumb to this issue within reason and the burden rests on the original authors to find ways to create themselves a moat like any other business.

cube2222
0 replies
22h23m

I’m not sure if that’s clear or not from my comments, but I completely agree (and pay for JetBrains editors, amongst other tools).

Though in the case of “fundamental / ecosystem center” tools like code editors which e.g. others will build plugins for I believe there’s immense value in it being open-source.

Thus, I’m rooting for them to find a business model where they get our money without making it non-open-source.

miki123211
3 replies
21h23m

A lot of Zed users are developers at medium-to-large businesses, so I think full source-available with licensing is very much something they could do.

They could put the code on GitHub, allowing contributions, with a license that turns into BSD or MIT after two years, and with the caveat that you can only run the (new) code if you purchase a license key first.

In companies of reasonable size, the deterrent against piracy is the existence of a license itself; the actual copy protection and its strength aren't as important. The reason those companies (mostly) don't crack software isn't that the software is hard to crack, it's that their lawyers wouldn't let them.

Sure, this would make Zed somewhat easier to crack, but I think the subset of users who wouldn't use a Zed crack if Zed was binary-only but would use one if there was source code available is very small.

cube2222
2 replies
19h35m

But then it wouldn't be open-source anymore, would it? Making the willingness of people to contribute likely much smaller. You don't really have the safety of having an easy way to fork either, if all you can fork is a 2-year old version (which plugins are likely not compatible with anymore).

It would also (deservedly) likely anger a large part of the existing community, especially the most involved members, like plugin authors, who put their time in assuming they're contributing to an open-source ecosystem.

Thus, I believe that ship has sailed the moment they made it open-source in the first place.

austhrow743
1 replies
17h33m

My understanding of their comment is that the source is made available immediately. It’s just that you need to pay for a license to use it for the first couple years.

cube2222
0 replies
17h8m

Yes, and open-source and source-available are two different things. The comment I responded to suggested they switch to a source-available license which falls back to an open-source license after a time period passes.

Aeolun
0 replies
18h29m

Leave it open source exactly as it is now, do not put convenient downloadable binaries up on Github. Allow me to either compile it myself or pay for the binaries?

There must be some other way to monetize open source.

cedws
3 replies
22h24m

However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.

Same. I can kind of feel OK about my code going to Anthropic, but I can't have it going through another third party as well.

This is unfortunately IT/security's worst nightmare. Thousands of excitable developers are going to be pumping proprietary code through this without approval.

(I have been daily driving Zed for a few months now - I want to try this, I'm just sceptical for the reason above.)

swdunlop
0 replies
19h44m

Add the following to your settings.json:

  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620"
    }
  }
  
Once this is done, you should be able to use Anthropic if you have an API key. (This was available before today's announcement and still works today as of Zed 0.149.3)

digging
0 replies
20h47m

I really wanted to try this out with a difficult bug, as I've had Zed installed for a while and haven't actually used it. But I have no idea if I'd get in trouble for that... even though our whole team uses VSCode which I'm sure has scanned all our codebases anyway.

campers
0 replies
15h43m

That was a part of the reasoning of open sourcing my AI assistant/software dev project. Companies like Google have strict procedures around access to customer data. The same can't always be said about a startup racing to not run out of cash.

stephc_int13
12 replies
23h13m

Not releasing a cross-platform code editor on the dominant OS seems quite weird to me. (I know they plan to do it, but as someone who has built cross-platform apps, it's not rocket science to have Win32 support from the start.)

koito17
7 replies
22h5m

Anecdotally, I have never seen Windows widely used for development, outside of .NET shops (but that shouldn't be a surprise).

Moreover, there's plenty of quirks Windows has with respect to:

- Unicode (UTF-16 whereas the world is UTF-8; even Java uses UTF-8 nowadays, so it's only Windows where UTF-8 is awkward to use)

- filenames (lots of restrictions that don't exist on Mac or Linux)

- text encoding (the data in the same string type changes depending on the user's locale)

- UUIDs (stored in a mixed-endian format)

- limit of open files (much lower than Mac and Linux; breaks tools like Git)

If you write software in Java, Golang, or Node.js, you'll quickly encounter all of these issues and produce software with obscure bugs that only occur on Windows.
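The UUID quirk in particular is easy to see. A quick illustration using Python's standard library (nothing Zed-specific; just the two byte layouts side by side):

```python
# Sketch: Windows GUIDs store the first three UUID fields little-endian,
# while RFC 4122 uses network (big-endian) byte order throughout.
# Python's uuid module exposes both layouts.
import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")
big = u.bytes.hex()        # RFC 4122 / network byte order
little = u.bytes_le.hex()  # Windows GUID layout

print(big)     # 00112233445566778899aabbccddeeff
print(little)  # 33221100554477668899aabbccddeeff
```

Only the first three fields (time_low, time_mid, time_hi_and_version) are byte-swapped; the trailing eight bytes match, which is why the format is called mixed-endian.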

I'm not sure about Rust, but using languages that claim cross-platform support isn't enough to hide the details of an OS.

In every job I've had, the vast majority of devs were on Mac OS, with Linux coming in at a close second (usually senior devs). So I wasn't surprised Zed first supported Mac then Linux. Windows support is nice for students, game developers, and people maintaining legacy .NET software, but those people won't be paying for an editor.

koito17
1 replies
19h39m

The internals of OpenJDK may use either Latin1 or UTF-16, but the Java API, as of Java 18, defaults to the UTF-8 character set for string operations and text codecs.[1] Just like the standard APIs of Mac OS default to UTF-8 text encoding even though CFString internally uses various different representations to optimize memory usage.[2]

[1] https://docs.oracle.com/en/java/javase/18/docs/api/java.base...

[2] https://developer.apple.com/library/archive/documentation/Co...

neonsunset
0 replies
18h48m

The macOS reference is irrelevant to Java here.

The default charset first and foremost governs I/O behavior: how an otherwise untyped stream of bytes is converted to or from the UTF-16 text that Java stores internally.

https://openjdk.org/jeps/400 is yesterday's news and something .NET has been doing for a long time (UTF-8 encoding has been the unconditional default since .NET Core 1.0 in 2017).

Whether Win32 APIs take UTF-8 or something else (well, it's usually ANSI or UTF-16) is something for the binding libraries or similar abstraction packages for a language of choice to deal with, and has rather minor impact on the overall flamegraph if you profile a sample application.

I find it strange having to defend this, but the UTF-8 vs UTF-16 argument really has no place in 2024, as dealing with popular encodings is as solved a problem as it gets in any language with an adequate standard library.

bigstrat2003
1 replies
20h31m

Everywhere I've worked, most users (including devs) are on Windows. Windows is the major OS in enterprise environments, it's a huge mistake to not support it imo.

hi-v-rocknroll
0 replies
12h58m

Anecdotal evidence. Although I agree about OS support parity, because OS religiosity is generally dumb.

stephc_int13
0 replies
5h54m

None of that matters in practice, because in this context it would be trivial to solve with a correctly built OS abstraction layer.

And Windows is by far the development platform of choice for any serious gamedev work.

8n4vidtmkvmk
0 replies
11h8m

- Why is unicode an issue? The editor should use UTF-8. I don't think it's forced to follow the Windows standard if it has its own font rendering.

- Filenames are an issue, but it shouldn't be too hard to pull in a path library to do most of that for you. Which is why you should code for Windows from the start: you will quickly realize, oh yeah, maybe I can't hardcode forward slashes everywhere because it's going to be a pain to refactor later. And oh yeah, maybe case-insensitivity is a thing that exists.

- Text encoding? Just don't do that. Encode it the same way. Every modern editor, including Notepad, can handle \n instead of \r\n now. You needn't be re-encoding code that was pulled down from Git or whatever.

- I also don't see why UUIDs are relevant.

- The limit on open files is probably a legit issue. But it can be worked around, of course.

Anyway, none of these sound like major hurdles. I think the bigger hurdles are going to be low-level APIs that Rust probably doesn't have nice wrappers for. File change notifications and... I don't know what. Managing windows. Drivers.
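To the filenames point: a path library really can absorb most of the quirks. A quick sketch with Python's pathlib pure paths, chosen here only because they apply the Windows rules on any OS:

```python
# Sketch: "pure" paths implement Windows or POSIX semantics regardless
# of the host OS, so separator and case quirks are handled for you.
from pathlib import PurePosixPath, PureWindowsPath

# On Windows, / and \ are interchangeable and comparisons fold case...
assert PureWindowsPath("src/main.rs") == PureWindowsPath("SRC\\MAIN.RS")

# ...while POSIX paths are case-sensitive.
assert PurePosixPath("src/main.rs") != PurePosixPath("SRC/main.rs")

print(PureWindowsPath("src", "main.rs"))  # src\main.rs
```

The same idea applies in Rust via OS abstraction crates: do the joining and comparing through the library, never by string concatenation.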

ksylvestre
1 replies
22h48m

It's especially weird they released Linux (with X11 AND Wayland backends) before Windows.

zamalek
0 replies
22h36m

* Both Linux and MacOS are unices, there is less effort.

* The framework they use supports X11 and Wayland out of the box, it wasn't as much effort as you'd think.

* They accept contributions.

ivanjermakov
0 replies
8h15m

Give them time, developing cross-platform GUI apps was never that simple

ghh
0 replies
22h55m

For those that want to try it, and have a rust development environment installed, the following runs Zed on Windows. (The editor part at least, I haven't tried the collaborative functions).

  git clone https://github.com/zed-industries/zed
  cd zed
  cargo run --release

siscia
12 replies
23h20m

The main issue with this set of tools is that I mostly read and understand code rather than write it myself.

Not enough attention has been given to this imbalance.

It is impressive having an AI that can write code for you, but an AI that helps me understand which code we (as a team) should write would be much more useful.

Cu3PO42
6 replies
23h11m

My immediate instinct was to agree with you whole-heartedly. Then I remembered that Copilot has an "Explain this" button that I never click.

Maybe this is because I'm just not used to it, maybe the workflow isn't good enough, or maybe it's because I don't trust the model enough to summarize things correctly.

I do agree that this is an area that could use improvement and I see a lot of utility there.

siscia
2 replies
23h2m

My issue is not explaining how the code works. My issue is understanding why the code exists at all.

The default workflow is to follow the commit history until I get to where and when the code in its current shape was introduced, then try reading the commit message, which generally links to a ticket, and then acquire from the team's tribal knowledge why it was done like that, whether it is still necessary, what we could do today instead, etc.

And similarly when designing new code that needs to integrate with an existing piece of code: why are such constraints in place? Why was it done like that? Who in the team knows best?

candiddevmike
0 replies
22h57m

I get the frustration with this workflow too. This is where having all of your issues in Git would be great, but alas no one wants to collaborate in ticket comments via Git commits...

Inevitably the issue tracker gets moved/replaced/deprecated, and so all of that context disappears into the void. The code, however, is eternal.

8n4vidtmkvmk
0 replies
11h19m

AI could help with this. It could pull the version history for a chunk of code you have highlighted, pull up the related tickets, sift through them, and then try to summarize from commit messages and comments on the ticket why this was added.

What I could have used the other day was "find the commit where this variable became unused". I wanted to know if the line that used it was intentionally deleted or if it got lost in a refactor. I eventually found it but I had to click through dozens of commits to find the right one.
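For what it's worth, git can already do that particular search without an AI: the pickaxe option lists every commit that changed the number of occurrences of a string. A minimal sketch (`max_retries` is a made-up variable name; the throwaway repo exists only so the demo is self-contained):

```python
# Sketch of the "when did this variable become unused?" hunt:
# `git log -S <string>` lists every commit that changed how many times
# <string> appears, instead of clicking through dozens of commits.
import subprocess
import tempfile

def git(*args, cwd):
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "you@example.com", cwd=repo)
git("config", "user.name", "You", cwd=repo)

# First commit introduces `max_retries`; the second silently drops it.
for message, body in [("use max_retries", "retries = max_retries\n"),
                      ("refactor", "retries = 3\n")]:
    with open(f"{repo}/app.py", "w") as f:
        f.write(body)
    git("add", "app.py", cwd=repo)
    git("commit", "-q", "-m", message, cwd=repo)

# Equivalent CLI inside a real repo: git log -S max_retries --oneline
hits = git("log", "-S", "max_retries", "--oneline", cwd=repo)
print(hits)
```

`git log -G <regex>` does the same with a regex match, and adding `-p` shows the diffs, which answers the "intentionally deleted or lost in a refactor?" question directly.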

golergka
2 replies
22h54m

This is a great feature, but targeted mostly at junior developers who don't yet understand how a particular library or framework works. But when I read code, I spend most of my effort trying to understand what the original developer meant, and LLMs are not yet very helpful in that regard.

siscia
0 replies
22h50m

I was trying to express exactly what you say!

Sorry if it wasn't clear.

Not only the intention, but also the reason behind it.

Cu3PO42
0 replies
12h22m

I partially agree with you: not knowing what particular functions etc. do is one use case, another for me would be to detangle complicated control flow.

Even if I can reason about each line individually, sometimes code is just complicated and maintains invariants that are never specified anywhere. Some of that can come down to "what were the constraints at the time and what did we agree on", but sometimes it's just complicated even if it doesn't have to be. The latter is something I would love for a LLM to simplify, but I just don't trust it to do that consistently correct.

madeofpalk
2 replies
22h44m

No one should be submitting code that's difficult to understand, regardless of how it was 'written'. This problem exists just the same as a developer who's going to copy large blocks of StackOverflow without much thought.

wnolens
0 replies
22h32m

No one should be submitting code that's difficult to understand

what? ok. Nice ideals.

jonnycomputer
0 replies
5h34m

It isn't always the piece of code that is hard to read. It's about how it fits into the 200,000+ line application you're working with, what it was trying to solve, whether it solved the right problem, and whether there will be unexpected interactions.

teekert
0 replies
23h17m

Sounds like you’re better helped by something like codescene?

raincole
0 replies
23h6m

There are a lot of tools that promise to help you understand existing code. However, it's a really hard problem and to me none of them is production ready.

Personally, I think the problem is that if the AI gets it wrong, it wastes a lot of your time trying to figure out whether it's wrong or not. It's similar to outdated comments.

nichochar
11 replies
23h5m

I recently switched from neovim to zed, and I overall like Zed. I miss telescope, and think some vim navigation was better, but I suspect that it has to do with how much effort I put into configuring one over the other, so time will tell.

My biggest gripe was how bad the AI was. I really want a heavy and well-crafted AI in my editor, like Cursor, but I don't want a fork of the (hugely bloated and slow) vscode, and I trust the Zed engineering team much more to nail this.

I am very excited about this announcement. I hope they shift focus from the real-time features (make no sense to me) to AI.

xwowsersx
2 replies
22h15m

Have you tried using Codeium in Neovim[0]? It may not have all the features shown in this post, but still quite good. I will admit though that I'm enticed to try out AI in Zed now.

[0] https://github.com/Exafunction/codeium.nvim

roland35
0 replies
22h5m

Codeium works well but I really like the copilot chat plugin as well - it generally does a good job of explaining highlighted code, fixing errors, and other code interactions.

nichochar
0 replies
19h27m

I tried codeium for 6 months and eventually went back to copilot. I think codeium is tier 2

mynameisvlad
2 replies
22h51m

After using Cursor for some hobby stuff, it's really good. I was surprised at how well it managed the context, and the quick suggestions as you're refactoring really add up since they're generally exactly what I was about to do.

avarun
1 replies
22h40m

Are you referring to cursor or zed here as really good? It’s unclear to me as somebody that doesn’t regularly use either.

robinsonrc
0 replies
4h42m

The way it reads (to me), it’s about Cursor being good

nichochar
0 replies
19h26m

second person to share this with me recently, will have to try it out. Looks pretty alpha, but interesting

linsomniac
0 replies
22h29m

Agreed about the AI, the last time I tried Zed I was also trying Cursor at the same time, and the Cursor AI integration vs what Zed offered was just night and day. So I got the Cursor subscription. But I haven't used it in 2 months (I don't get to code a lot in my job).

This was maybe 3-4 months ago, so I'm excited to try Zed again.

ivanjermakov
0 replies
8h19m

What were the Zed features that made you switch? I feel like with today's ecosystem it's easier to complete the Neovim experience with plugins than to wait for Zed devs to catch up.

forrestthewoods
9 replies
23h12m

Hrm. Still not quite what I crave.

Here's roughly what I want. I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

LLMs often get code slightly wrong. That's fine! Doesn't bother me at all. What I need is an interface that allows me to iterate on code AND helps me understand the changes.

As a concrete example I recently used Claude to help me write some Python matplotlib code. It took me roughly a dozen plus iterations. I had to use a separate diff tool so that I could understand what changes were being made. Blindly copy/pasting LLM code is insufficient.

iamnbutler
2 replies
22h46m

Hey, this is Nate from Zed. Give the inline assistant a try. Here is a little demo: https://share.cleanshot.com/F2mg2lXy

You can even edit the prompt after the fact if the diff doesn't show what you want and regenerate without having to start all over.

forrestthewoods
1 replies
22h6m

Ah interesting. I missed that when browsing the page.

Can you make the diff side-by-side? I’ve always hated the “inline” terminal style diff view. My brain just can’t parse it. I need the side-by-side view that lets me see what the actual before/after code is.

iamnbutler
0 replies
21h53m

Haha I actually agree – I always prefer them side by side.

We don't have side-by-side diffs yet, but once we do I'll make sure it works here :)

zackproser
0 replies
19h19m

Excited to try Zed, but FWIW, this is exactly Cursor's default behavior if you use the side chat panel

simonw
0 replies
22h50m

"Here's roughly what I want. I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes."

That's exactly what this new set of Zed features lets you do.

Here's an animated GIF demo: https://gist.github.com/simonw/520fcd8ad5580e538ad16ed2d8b87...

rtfeldman
0 replies
22h55m

I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

Zed does that - here's a clip of it on some Python code:

https://youtu.be/6OdI6jYpw9M?t=66

Terretta
0 replies
3h59m

I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

If you squint, that's the same as using an IDE with first class git support and co-editing with a (junior) pair programmer that commits each thing you ask them to do locally, or just saves the file and lets you see stageable diffs you can reject instead of push.

Try the /commit workflow using aider.chat as a REPL in your terminal, with the same git repo open in whatever IDE you like that supports real time git sync.

The REPL talks to you in diffs, and you can undo commits, and of course your IDE shows you any Aider changes the same as it would show you any other devs' changes.

That said, I use Zed and while it doesn't have all the smarts of Aider, its inline integration is fantastic.

DiabloD3
7 replies
23h15m

I had hoped Zed would be a good editor for junior developers, but that ship apparently has sailed, and its destination isn't where we need to go.

marksomnian
5 replies
23h11m

Seems like a non-sequitur? Why does LLM integration mean that Zed is less good of an editor for junior devs?

simonw
4 replies
22h53m

Right, LLM integration should be a huge boon for junior developers.

retrofuturism
3 replies
22h42m

Not sure if it's what GP is talking about - but haven't you noticed how many juniors seem to be shooting themselves in the foot with LLMs these days, becoming over-reliant on it and gaining expertise slower?

DiabloD3
1 replies
22h7m

This is exactly what I'm talking about. Ever since LLMs took over, I've noticed an uptick in my fellow senior developers complaining about the quality of work, and I've also seen a huge increase in poor-quality PRs to open source projects.

Like, normally my primary complaint about LLMs is their copyright violating nature, and how it is hanging everyone who has ever published the output of an LLM out to dry, but unless LLM output improves, I think it may end up being a non-issue: their use will die out, and every AI startup will die, just like all the Alt-coin startups did.

Want to change my mind on LLM quality? Have it produce code so good that I can't tell the difference between an inexperienced developer (the kind that would be hired into a junior role, not the kind that would be hired for an internship) and the output of this thing.

simonw
0 replies
22h2m

I've noticed the opposite: people who had never even started learning to program who are now getting stuck in because they don't have to wade through six months of weird error messages about missing semicolons to get started.

But I guess I'm talking about complete newbie developers, which is a different category from junior developers.

I have 25+ years experience at this point so I'm pretty far removed from truly understanding how this stuff benefits newcomers!

TIPSIO
0 replies
23h8m

Are you talking specifically about the general UX? On initial glance it does look a little like there is a bit more of a learning curve to navigate.

swyx
6 replies
23h20m

Am a Cursor main, don't really have any burning pains that make me want to change tools, but open to what I don't know.

Zed vs Cursor review anyone?

iamnbutler
2 replies
21h33m

Hey! I'm Nate from Zed. There are a lot of questions about this, here are some quick thoughts...

Cursor is great – We explored an alternate approach to our assistant similar to theirs as well, but in the end we found we wanted to lean into what we think our super power is: Transforming text.

So we leaned into it heavily. Zed's assistant is completely designed around retrieving, editing and managing text to create a "context"[0]. That context can be used to have conversations, similar to any assistant chatbot, but can also be used to power transformations right in your code[1], in your terminal, when writing prompts in the Prompt Library...

The goal is for context to be highly hackable. You can use the /prompt command to create nested prompts, use globs in the /file command to dynamically import files in a context or prompt... We even expose the underlying prompt templates that power things like the inline assistant so you can override them[2].

This approach doesn't give us the _simplest_ or most approachable assistant, but we think it gives us and everyone else the tools to create the assistant experience that is actually useful to them. We try to build the things we want, then share it with everyone else.

TL;DR: Everything is text because text is familiar and it puts you in control.

[0]: https://zed.dev/docs/assistant/contexts.html

[1]: https://zed.dev/docs/assistant/inline-assistant

[2]: https://zed.dev/docs/assistant/prompting#overriding-template...

thm76
1 replies
16h23m

How do I enable Zed AI?

I'm logged in, using Zed Preview, and selecting the model does nothing. In the configuration it says I "must accept the terms of service to use this provider" but I don't see where and how I can do that.

tdomhan
0 replies
15h32m

I'm having the same problem.

jvmancuso
0 replies
22h23m

- open-source vs closed fork of vscode

- transparent assistant panel vs opaque composer. you control your own prompts (cf. [0])

- in Zed the assistant panel is "just another editor", which means you can inline-assist when writing prompts. super underrated feature imo

- Zed's assistant is pretty hackable as well, you can add slash commands via native Zed extensions [1] or non-native, language-agnostic Context Servers [2]

- Zed's /workflow is analogous to Cursor's composer. to be honest it's not quite as good yet, however it's only ~1 week old. we'll catch up in no time :)

- native rust vs electron slop. Zed itself is one of the larger Rust projects out there [3], can be hard to work with in VS Code/Cursor, but speedy in Zed itself :)

[0]: https://hamel.dev/blog/posts/prompt/

[1]: https://zed.dev/docs/extensions/slash-commands

[2]: https://zed.dev/docs/assistant/context-servers

[3]: https://blog.rust-lang.org/inside-rust/2024/08/15/this-devel...

diroussel
0 replies
23h0m

Cursor is electron/vscode based. Zed uses a custom built rust UI and editor model that gives 120fps rendering. (Or was it 60fps)

It is really smooth on a Mac with ProMotion.

DevX101
0 replies
23h10m

They just launched it a few minutes ago. It's going to be a while before there's a good review.

mcpar-land
5 replies
22h55m

I hope that in a few years we look back at this era of "prompt an LLM for a shell command and instantly run whatever it spits out by pressing enter" with collective embarrassment.

pb7
4 replies
19h18m

What does this mean?

skydhash
3 replies
17h45m

People that don't like Bash|Zsh|... and are afraid of ImageMagick|ffmpeg|curl|...'s manuals and want AI to generate the perfect script for them.

stemlord
2 replies
16h11m

Why not both? What do you see wrong with having gpt spit out a complicated ffmpeg filter chain and explaining every step of the output, which you can then look up in the docs and unit test before implementing? I find verifying the gpt output is still quicker than consulting giant wall of text man pages

skydhash
1 replies
13h55m

Fish vs. learning to fish. The latter widens your perspective and gives you more options, especially if there are a lot of rivers nearby!

stemlord
0 replies
2h11m

You are saying one should learn to fish with a spear not a fishing rod

yewenjie
4 replies
22h55m

Has any long-term Emacs user delved into Zed and ported the cool features yet?

Don't take it as sarcasm, I am genuinely interested. I think Emacs' malleability is what still keeps it alive.

pton_xd
1 replies
16h38m

What are the cool Zed features? Also genuinely interested.

ivanjermakov
0 replies
8h13m

In my understanding Zed is "Figma for code". Huge focus on collaboration (hence the slogan "multiplayer code editor") and AI.

It's hard for me to understand what the text editor itself has to do with LLM completions.

rgrmrts
0 replies
21h52m

Long-term Emacs user here: I actually just switched entirely to Zed.

1a527dd5
4 replies
22h46m

I know I'm not the target market. I don't want my editor to have AI.

I was really looking forward to trying Zed, but this just means I'll stick to VS Code with the AI gunk disabled.

In general, if any product comes with "AI" I'm turned off by it.

icholy
2 replies
22h41m

Yeah, I also don't want my editor to have syntax highlighting. I refuse to use any editor that has it. Don't even get me started on native vcs support. My rule of thumb is: "If it auto-completes then it gets the auto-yeet."

elpocko
1 replies
22h30m

It's "Find and replace" that makes my blood boil. I won't stand for it.

icholy
0 replies
21h41m

Amen brother.

elitan
0 replies
22h41m

I hope all our competitors find your answer inspirational!

s3tt3mbr1n1
3 replies
23h18m

Interesting that this seems to be the announcement of Anthropic's Copilot alternative:

A private beta of the Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale.
grrowl
1 replies
19h14m

Sounds a lot like existing Claude integration + Caching (as most queries rely on relatively static code context)

skp1995
0 replies
19h5m

It's a bit more than that; you can clearly see that the decoding speed is super fast.

skp1995
0 replies
19h6m

Yeah, that caught my eye too. Looks to me like speculative editing (they mentioned that it's faster to echo its input) + prompt caching; it would literally build on all the tech they have.

mikkelam
3 replies
23h38m

I've been using Zed's assistant panel heavily and have really enjoyed the experience. The UI can be a bit frustrating: sometimes, when you write, it's hard to get it to send your query. The new /workflow seems to really bridge the last gap, effectively editing the parts I'm asking for help with.

I'm already paying for OpenAI API access, definitely gonna try this

andrepd
2 replies
23h21m

It's the opposite for me ahaha. I'm very excited for Zed as a performant and powerful text editor, an "updated + open-source sublime text" if you will. But I have absolutely no interest in AI and copilot and github integrations and whatnot.

This is not a criticism of Zed though, I simply have no interest. Quite the contrary: I can only praise Zed for how simple it is to disable all these integrations!

CapeTheory
0 replies
23h18m

Same here. If I found Zed now I would probably avoid it, but having tried it already I'm glad to have it, and will just turn off the intrusive bits.

owenpalmer
2 replies
22h6m

Hey Zed team, just one little nitpick about the page. I love the keyboard shortcuts at the top for the download page and login. However, when I try to Ctrl-L and select the url, it triggers the login page shortcut.

Brave Browser Windows 10

maxdeviant
1 replies
18h18m

Thanks for the report, this should be fixed now!

owenpalmer
0 replies
13h23m

Yep fixed!

jmakov
2 replies
23h21m

I wonder if there's already a solution that allows me to ask questions about local codebases. e.g. how does this subsystem work.

simonw
1 replies
22h52m

This feature does exactly that. You can open up the chat panel, run "/tab name-of-tab" or "/file path-to-file" and then start asking questions about the code.

iamnbutler
0 replies
22h21m

Hey! I'm Nate from Zed. You can also use the /file command to drop entire directories, or even globs into the assistant.

For example, you can do /file *.rs to load all of the rust files in your project into context.

Here is a simple but real example I used a while back:

"/file zed/crates/gpui/src/text_system.rs

I have a font I want to check if it exists on the system. I currently have a &'static str.

Is there something in here that will help me do that?"

I haven't interfaced with the lower level TextSystem that much, so rather than dig through 800 lines of code, I was able to instantly find `is_font_available()` and do what I needed to do.

bitbasher
2 replies
22h39m

I like the idea of Zed, and I recently went editor hopping. I installed Zed but was immediately hit with "your gpu is not supported/ will run incredibly slow" message. Gah...

leejoramo
1 replies
18h2m

In my case this pointed out a problem with my NVIDIA drivers that I didn’t know about. Once I fixed that issue my whole KDE system ran much faster and allowed Zed to run.

bitbasher
0 replies
15h44m

In my case:

CPU: 12th Gen Intel i7-1255U (12) @ 4.700GHz

GPU: Intel Device 46a8

bearjaws
2 replies
22h42m

You can do this with aider in any IDE:

https://aider.chat/

I think the focus on speed is great, but I don't feel my IDE's speed has held me back in a decade.

Terretta
0 replies
4h8m

Anthropic working with Paul Gauthier and Zed being aider-aware would be phenomenal. He's been working on this for a while:

https://news.ycombinator.com/item?id=35947073

When familiar with Aider, it feels as if this Zed.ai post is chasing Paul's remarkably pragmatic ideas for making LLMs adept at codebases, without yet hitting the same depth of repo understanding or bringing automated smart loops to the process.

Watching Aider's "wait, you got that wrong" prompt chains kick in before handing the code back to you is a taste of "AI".

If your IDE is git savvy, then working with Aider in an Aider REPL terminal session with frequent /commits that update your IDE is like pair programming with a junior dev that happens to have read all the man pages, docs wikis, and stackoverflow answers for your project.

8n4vidtmkvmk
0 replies
11h4m

What has slowed me down is all the garbage pop ups and things that get in my way. Every time I open VSCode it tries reconnecting to some SSH I had open before it lets me do anything. And god forbid I have two different workspaces. The constant "what's new" and "please update me now"s don't help either.

I love IntelliJ but it does not start up quickly, which is a problem if I just want to look at a little code snippet.

sebzim4500
1 replies
22h29m

Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale

I wonder what this is. Have they finetuned a version which is good at producing diffs rather than replacing an entire file at once? In benchmarks sonnet 3.5 is better than most models when it comes to producing diffs but still does worse than when it replaces the whole file.

maeil
1 replies
22h55m

Let me be very direct - what's the strength over the competition, e.g. Cody? The fact that it's its own text editor? I'm seeing the assistant emphasized but that just looks like Cody to me.

sunaookami
0 replies
19h53m

Agreed and Cody has recently upped their free tier and Sonnet 3.5 can be used for free for completions and up to 200 chat messages per month. Plus you can use it in VS Code and IntelliJ - no need to learn a new text editor.

jcsnv
1 replies
22h47m

What's the difference between VSCode (with co-pilot), Zed, & Cursor?

nsonha
0 replies
22h15m

Cursor is a fork of VSCode with code AI that, in my opinion, is better than Zed and other competitors, because they implement the MODIFYING-existing-code workflow better. Most other code AI products are only good at code generation or being a better Stack Overflow. I don't use Copilot, so I can't tell: does it show you a diff like Cursor does when modifying code?

heeton
1 replies
23h23m

I’d be curious how this compares to supermaven, my current favourite AI autocomplete.

jvmancuso
0 replies
22h11m

Add `"features": {"inline_completion_provider": "supermaven"}` to your Zed settings and you can have the best of both worlds :D

swah
0 replies
3h57m

It's a fast graphical editor. Not sure?

simonw
0 replies
22h54m

The Zed configuration panel includes tools for adding an Anthropic API key, a Google Gemini API key, an OpenAI API key or connecting to a local instance of ollama.

CrimsonCape
1 replies
19h7m

Is all the overhead required to use the AI features easily disabled with a feature flag such that zero CPU cost and zero network transmission occurs?

xinayder
0 replies
7h8m

First the unsolicited package installation controversy now they jumped onto the AI bandwagon. Is this a speedrun attempt at crashing a newly created company?

What's next? Web3 integration? Blockchain?

thekevan
0 replies
22h43m

I had a brief processing error; for a moment I was really confused about how quickly Xed had progressed.

https://en.wikipedia.org/wiki/Xed

sharms
0 replies
21h33m

Missing from this announcement is any language around privacy. Cursor, for example, has a Privacy Mode that promises not to store code, and that seems like a critical feature for any AI-enhanced dev tool.

poetril
0 replies
5h28m

Meanwhile, HTML tags in JSX/TSX files still do not autocomplete/close. Speaking as someone who used Zed for nearly 7 months, it seems like they should be prioritizing features that will make the editor more usable. I'd be excited to go back to Zed, but these issues drove me to Neovim.

nrvn
0 replies
21h43m

Until "AI" means a system-thinking capability that can analyze the codebase and give real suggestions, I don't buy it. Everything I have seen so far is a waste of my time and resources, and at best is useful for generating tests or docstrings.

mcemilg
0 replies
22h49m

It's great news that they provide it for free. It's hard to subscribe to all the LLM providers; even with a pro subscription, you need to buy credits to use the models from your editor, which gets very expensive if you use them a lot.

On another note, I really like the experience of coding with GitHub Copilot. It suggests code directly in your editor without needing to switch tabs or ask separately, which feels much more natural and faster than having to request changes from an AI elsewhere and slowing down the coding process.

marcus_holmes
0 replies
15h12m

I've been playing with it this morning, and fuck me this is awesome. I finally feel that an LLM is being actually useful for coding. I still have to check everything, but it's like having a talented junior helping me.

kamaal
0 replies
15h47m

At some point an AI first programming language will have to come along which will integrate well with the AI models, Editor and Programmer input seamlessly.

I'm not sure what that is, but I'm guessing it will be something along the lines of Prolog.

You will basically give it some test cases, and it will write code that passes those test cases.

jmull
0 replies
23h4m

Just looking at the "inline transformations animation"...

How is typing "Add the WhileExpression struct here" better or easier than copy/pasting it with keyboard and/or mouse?

I want something that more quickly and directly follows my intent, not makes me play a word game. (I'm also worried it will turn into an iterative guessing game, where I have to find the right prompt to get it to do what I want, and check it for errors at every step.)

hemantv
0 replies
22h37m

Biggest thing missing is templates that are available in vscode.

gloosx
0 replies
14h6m

Anthropic is an evil company. I wanted to try their subscription for a month; now they are storing my credit card info forever without an option to remove it, yet a single click of a button will instantly resubscribe me. I don't understand how one can seriously think they won't sell all your data at the first opportunity to make money, because such shady subscription/payment practices instantly give money-before-all vibes for the whole product and the company.

eksu
0 replies
19h12m

Excited to use Zed, but the AI chat panel does not work on a Mac host connected to a Linux guest.

dcchambers
0 replies
22h46m

I've been trying to mainline Zed for the past few months...and overall I really do like it - but there are enough quirks/bugs that make me frustrated.

A simple example: the hotkey for opening or closing the project panel with the file tree isn't consistent and doesn't work all the time.

To be clear: I am excited about this new addition. I understand there's a ton of value in these LLM "companions" for many developers and many use cases, and I know why Zed is adding it...but I really want to see the core editor become bullet proof before they build more features.

conradludgate
0 replies
12h18m

I found zed to be pretty unusable. I don't use AI features but I'd love to replace vscode anyway. I just need an editor that actually works first

bww
0 replies
23h9m

Hmm. I was excited about Zed, but it now seems painfully clear they’re headed in a completely different direction than I’m interested in. Back to neovim, I guess…

benreesman
0 replies
20h13m

TLDR: Zed is pretty sweet. Amodei doesn’t have a next model on raw capability.

bartekpacia
0 replies
16h53m

Oh, they changed their website. It's stunning!

asadm
0 replies
22h1m

I just want a Perplexity-style agentic integration that researches dozens of pages first and does internal brainstorming before printing output in my editor.

I just had a many-hour-long hacking session with Perplexity to generate a complex code module.

arghwhat
0 replies
21h59m

I don't mind them having AI features, but I wish they'd fix some of the most basic performance issues considering that their entire reason to exist was "performance".

You know, things like not rerendering the entire UI on the smallest change (including just moving your mouse) without damage reporting.

adamgordonbell
0 replies
22h54m

This looks cool!

Feature requests: have something like aider's repo-map, where context always contains high level map of whole project, and then LLM can suggest specific things to add to context.

Also, a big use case for me is building up an understanding of an unfamiliar code base, or part of one: "What's the purpose of module X?", "How does X get turned into Y?"

For those, it's helpful to give the LLM a high-level map of the repo and let it request more files into the context until it can answer the question.

(Often I'm in learning mode, so I don't yet know which files are the right ones to include.)
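The repo-map idea described above can be sketched in a few lines of Python. This is a rough illustration of the concept, not aider's actual implementation; `repo_map` and its parameters are invented here:

```python
import os

def repo_map(root: str, max_depth: int = 2) -> str:
    """Build a compact text outline of a project tree, suitable to hand
    an LLM as high-level context before it requests specific files."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        if depth > max_depth:
            dirnames[:] = []  # prune: don't descend any deeper
            continue
        dirnames.sort()  # deterministic walk order
        indent = "  " * depth
        lines.append(f"{indent}{os.path.basename(dirpath) or rel}/")
        for name in sorted(filenames):
            lines.append(f"{indent}  {name}")
    return "\n".join(lines)
```

An LLM given this outline can then name the files it wants pulled into context, which matches the "request more files" loop described above.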

Janymos
0 replies
1h32m

Tried Zed AI for a bit as a heavy user of Cursor; a few thoughts:

- I like that they are trying something different with the assistant panel view by giving end users full control of the context, as opposed to Cursor's "magic" approach. There is a huge tradeoff between granularity of control and efficiency, however: the friction of manually filling in context for the assistant window might repel devs from using it constantly.

- Zed AI is still missing a lot of UX details in its inline assistant. E.g. pressing ctrl+? for inline assist only selects the current line, so users have to manually select a block of code, which is really annoying. In Cursor, cmd+k automatically selects the surrounding code block.

- It's definitely a huge plus that we get to choose our own LLM providers with Zed AI.