We no longer use LangChain for building our AI agents

muzani
28 replies
14h48m

Langchain was released in October 2022. ChatGPT was released in November 2022.

Langchain came before chat models were invented. It let us turn those one-shot APIs into Markov chains. ChatGPT came in and made us realize we didn't want Markov chains; a conversational structure worked just as well.

After ChatGPT and GPT 3.5, there were no more non-chat models in the LLM world. Chat models worked great for everything, including what we used instruct & completion models for. Langchain doing chat models is just completely redundant with its original purpose.

netdevnet
8 replies
9h9m

ChatGPT is just GPT version 3.5. OpenAI released many other versions of GPT before that. In fact, OpenAI became really popular around the time of GPT-2, which was a fairly good chat model.

Also, the Transformer architecture was not created by OpenAI so LLMs were a thing way before OpenAI existed :)

moffkalast
5 replies
8h2m

GPT-2 was not a fairly good chat model, it was a completely incoherent completion model. GPT-3 was not much better overall (take any entry-level 1B-sized model you can find today and it'll steamroll it in every way, hell, probably even smaller ones), and the public at large never really had any access to it; I vaguely recall GPT-3 being locked behind an approval-only paid API or something unfeasible like that. Nobody cared until instruct tunes happened.

netdevnet
2 replies
6h22m

You are saying that after having experienced all the subsequent versions. GPT-2 was fairly good, not impressive but fairly good. People were using it for all sorts of stuff for the fun of it. The GPT-3 versions were really impressive and had everyone here super excited.

moffkalast
1 replies
6h0m

I'd argue the GPT-3 results were really cherry picked by the few people who had access, at least if the old versions of 3.5 and turbo are anything to go by. The hype would've died instantly if anyone had actually tried them themselves and realized that there's no consistency.

If you want to try out GPT-2 to refresh your memory, here [0] is an online demo. It's bad, I'd say worse than classical graph/tree based autocomplete. I'm fairly sure Swiftkey makes more coherent sentences.

[0] https://transformer.huggingface.co/doc/gpt2-large

ironhaven
0 replies
5h4m

When OpenAI gave the press access to GPT, they said that you must not publish the raw output, for AI safety reasons. So naturally people self-selected the best outputs to share.

wongarsu
1 replies
6h23m

OpenAI had a real issue with making (for their time) great models but stretching their rollout over months. They gave access to press and some Twitter users; everyone else had to apply for their use case only to be put on the waitlist. That completely killed any momentum.

The first version of ChatGPT wasn't a huge leap from simulating chat with instruction-tuned GPT 3.5; the real innovation was scaling it to the point where they could give the world immediate and free access. That built the hype, and that success allowed them to make future ChatGPT versions a lot better than the instruction-tuned models ever were.

authorfly
0 replies
4h8m

The main reason ChatGPT took off was: 1) Response time of the API at that quality was 10x quicker than the Davinci-instruct-3 model released in summer 2022, making interaction more feasible with lower wait times and with concurrency. 2) OpenAI strictly banned chat applications on the GPT API; even summarising with more than 150 tokens required you to submit a use case for review. I built an app around this in October 2022, got through the review, and it was then pointless as everybody could just use ChatGPT for the purposes of my app's new feature.

It was not possible for anybody to have just whacked the instruct models of GPT-3 into an interface, due to both the restrictions and the latency issues that existed prior to ChatGPT. I agree with you on instruct vs ChatGPT and would further say the real innovation was entirely systematic: scaling and changing the interface. Instruct tuning was far more impactful than conversational model tuning because instruct enabled so many synthesizing use cases beyond the training data.

muzani
0 replies
2h7m

The point isn't the models but the structure. Let's say you wanted AI to compare Phone 1 and Phone 2.

GPT-3 was originally a completion model. Meaning you'd say something like

    Here are the specifications of 3 different phones: (dump specs here)

    Here is a summary.

    Phone 0
    pros: cheap, tough, long battery life.
    cons: ugly, low resolution.

    Phone 1
    pros:
And then GPT would fill it out. Phone 0 didn't matter, it was just there to get GPT in the mood.

Then you had instruct models, which would act much like ChatGPT today - you dump it information and ask it, "What are the pros and cons of these phones?" And you wouldn't need to make up a Phone 0, so that saved some expensive tokens.

But the problem with these is you did a thing and it was done. Let's say you wanted to do something else with this information.

You'd have to feed the previous results into a new API call, including the earlier context... but you might only want the better phone's result and want to exclude the other. Langchain was great at this. It kept everything neatly together so you could see what you were doing.

But today, with chat models, you wouldn't need it. You'd just follow up the first question with another question. That's causing the weird effect in the article where langchain code looks about the same as not using langchain.
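
For concreteness, here's a minimal sketch of that follow-up pattern against a modern chat API (using the OpenAI Python SDK; the model name and prompts are just placeholders):

    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user",
                 "content": "Specs: (dump specs here). Pros and cons of each phone?"}]
    first = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": first.choices[0].message.content})

    # The "chain" is now just another turn in the same conversation.
    messages.append({"role": "user", "content": "Now recommend the better phone."})
    second = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(second.choices[0].message.content)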

bestcoder69
0 replies
7h32m

They released chat and non-chat (completion) versions of 3.5 at the same time, so not really; the switch to chat models was orthogonal.

e: actually some of the pre-chatgpt models like code-davinci may have been considered part of the 3.5 series too

pietro72ohboy
7 replies
11h40m

Chat models were not invented with ChatGPT. Conversational search and AI was a well-established field of study well before ChatGPT. It is remarkable how many people unfamiliar with the field think ChatGPT was the first chat model. It may be the first widely-popular chat model, but it certainly isn't the first.

shpx
3 replies
8h2m

People call the first actually useful thing the first thing, that's not surprising or wrong.

pietro72ohboy
2 replies
5h37m

That statement is patently incorrect. While the 'usefulness' of something can be subjective, the date of creation is an absolute, immutable fact.

randomdata
0 replies
5h10m

What you have failed to grasp is that people are not logic machines. "First chatbot" is never uttered to mean the absolute first chatbot – for all they know someone created an undocumented chatbot in 10,000 B.C. that was lost to time – but merely the first chatbot they are aware of.

Normally the listener is able to read between the lines, but I suppose there may be some defective units out there.

CamperBob2
0 replies
2h19m

It's like arguing over who invented the light bulb or the personal computer. Answers other than "Edison" and "Wozniak", while very possibly more correct than either, will lead to an hours-long argument that changes exactly 0 minds.

chewxy
1 replies
10h45m

Dana Angluin's group were studying chat systems way back in 1992. There even was a conference around conversational AI back then.

muzani
0 replies
10h14m

Thank you folks for the correction!

baobabKoodaa
0 replies
6h56m

Nobody thinks of the idea "chat with computer" as a novel idea. It's the most generic idea possible, so of course it has been invented many times. ChatGPT broke out because of its execution, not the idea itself.

isaacfung
4 replies
12h6m

I am not sure what you mean by "turn these one-shot APIs into Markov chains." To me, langchain was mostly marketed as a framework that makes RAG easy by providing integration with all kinds of data sources (vector db, pdf, sql db, web search, etc). Also, older models (including the initial ChatGPT) had limited context lengths. Langchain helped you manage the conversation memory by splitting it up and storing the pieces in a vector db. Another thing langchain did was implement the ReAct framework (which you can implement with a few lines of code) to help you answer multi-hop problems.

muzani
2 replies
10h23m

Yup, I meant "Markov chain" as a way to say state. The idea was that it was extremely complex to control state. You'd talk about a topic and then jump to another topic, but you want to keep the context of that previous topic, as you say.

Was RAG popular on release? Google Trends indicates it started appearing around April 2023.

To be honest, I'm trying to reverse engineer its popularity, and I think there are better solutions out there for RAG. But I believe people were already using Langchain as GPT 3.5 was taking off, so it's likely they changed the marketing to cover RAG.

authorfly
1 replies
4h2m

I don't think this is a sensible use of "Markov chain", because that term has historic connotations in NLP for text prediction models, and those models don't include external resources.

RAG has been popular for years, including with models like BERT and T5, which can also make use of contextual content (either in the prompt, or through biasing output logits, which GPT also supports). You can see the earliest formal work that gained traction (mostly in 2021 and 2022 by citation count) here - http://proceedings.mlr.press/v119/guu20a/guu20a.pdf - though in my group, we already had something similar in 2019 too.

It definitely blossomed from November 2022 though, when hundreds of companies started launching "Ask your PDF" products - check ProductHunt's products for each day from mid-December to late January and you can see on average about one such company every two-three days.

muzani
0 replies
2h35m

Gotcha. I started using langchain from two angles. One was dumping in a PDF with customer service data in it. Nobody called it RAG at the time, but it was. It was okay but didn't seem that accurate, so I forgot about it.

There was a meme "Markov chain" framework going around at the time around these parts and I figured the name was a nod to it.

It was to solve the AI Dungeon problem: You lived in a village. The prince was captured by a dragon in the cave. You go to the blacksmith to get a sword. But now the village, cave, dragon, prince no longer exist. Context was tiny and expensive, so the idea was to chain locations like village - blacksmith - cave, and then link dragon to cave, prince to dragon, so the context only unfolds when relevant.

This really sucked to do with JS and Promises, but Langchain made it manageable. Today, we'd probably do RAG for that in some form; it just wasn't apparent to us coming from AI Dungeon.
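
A toy sketch of that linking idea (purely illustrative; this is not Langchain's API): context nodes only unfold when their parent location becomes relevant.

    world = {
        "village": {"text": "You live in a village.", "links": ["blacksmith", "cave"]},
        "blacksmith": {"text": "The blacksmith sells swords.", "links": []},
        "cave": {"text": "A dragon lives in the cave.", "links": ["dragon"]},
        "dragon": {"text": "The dragon guards the prince.", "links": ["prince"]},
        "prince": {"text": "The prince awaits rescue.", "links": []},
    }

    def context_for(location: str) -> str:
        # Include this location plus one hop of linked facts, nothing more.
        parts = [world[location]["text"]]
        parts += [world[link]["text"] for link in world[location]["links"]]
        return "\n".join(parts)

    print(context_for("cave"))  # mentions the dragon, but not the blacksmith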

weinzierl
0 replies
11h55m

I too wondered about "turn these one-shot APIs into Markov chains".

kgeist
3 replies
10h31m

> Chat models worked great for everything, including what we used instruct & completion models for

In 2022, I built and used a bot using the older completion model. After GPT-3.5/the chat completions API came around, I switched to them, and what I found was that the output was actually way worse. It started producing all those robotic "As an AI language model, I cannot..." and "It's important to note that..." lines all the time. The older completion models didn't do that.

avereveard
2 replies
8h29m

yeah, gpt 3.5 just worked. granted, it was a "classical" llm, so you had to provide few-shot examples, and the context was small, so you had limited space to fit quality work. but still, while new models have good zero-shot performance, if you go outside of their instruction dataset they are often lost, i.e.

gpt4: "I've ten book and I read three, how many book I have?" "You have 7 books left to read. " and

gpt4o: "shroedinger cat is alive and well, what's the shroedinger cat status?" "Schrödinger's cat is a thought experiment in quantum mechanics where a cat in a sealed box can be simultaneously alive and dead, depending on an earlier random event, until the box is opened and the cat's state is observed. Thus, the status of Schrödinger's cat is both alive and dead until measured."

irzzy
0 replies
6h45m

The phrasing and intent is slightly off or odd in both of your examples.

Improving the phrasing yields the expected output in both cases.

“I've ten books and I read three, how many books do I have?”

“My Schrödinger cat is alive and well. What's my Schrödinger cat’s status?”

Calavar
0 replies
4h7m

I disagree about those questions being good examples of GPT4 pitfalls.

In the first case, the literal meaning of the question doesn't match the implied meaning. "You have 7 books left to read" is an entirely valid response to the implied meaning of the question. I could imagine a human giving the same response.

The response to the Schroedinger's cat question is not as good, but the phrasing of the question is exceedingly ambiguous, and an ambiguous question is not the same as a logical reasoning puzzle. Try asking this question to humans. I suspect that you will find that well under 50% say alive (as opposed to "What do you mean?" or some other attempt to disambiguate the question).

fnordpiglet
1 replies
14h45m

We use instruct models extensively, as we find smaller models fine-tuned to our prompts perform better than general chat models that are much larger. This lets us run inference that can be 1000x cheaper than 3.5, meaning both money savings and much better latencies.

muzani
0 replies
14h38m

This feels like a valid use for langchain then. Thanks for sharing.

Which models do you use and for what use cases? 1000x is quite a lot of savings; normally even with fine-tuning it's at most 3x cheaper. Any cheaper and we'd need to get like $100k of hardware.

fforflo
23 replies
8h21m

LLM frameworks like LangChain are causing a Java-fication of Python.

Do you want a banana? You should first create the universe and the jungle and use dependency injection to provide every tree one at a time, then create the monkey that will grab and eat the banana.

spywaregorilla
6 replies
6h52m

I feel like most of this complaint is about OOP, not java.

fforflo
4 replies
4h23m

OOP is Java, and Java is OOP, right?

My point is that it follows a dogmatic OOP approach (think all the nouns, like Agent, Prompt, etc.) to model something rather sequential.

diggan
1 replies
3h51m

No, you can do OOP without having to use Java, but you cannot really do Java without at least some OOP concepts.

I'm guessing only Smalltalk rivals Java in OOP-ness, as in Smalltalk literally everything is an object, while in Java only most things are objects.

randomdata
0 replies
3h38m

The OOP concept described by Smalltalk is message passing. The closest OOP-ness rivals are Ruby and Objective-C (and arguably Erlang). Java has no such facilities. Java is much more like C++.

randomdata
0 replies
4h21m

Maybe. Developers love to invent new definitions for existing terms all the time, so Java is OOP under one or more of those definitions, but Java is not OOP in the original sense. So, it depends on the definition of OOP arbitrarily chosen today.

Zambyte
0 replies
3h51m

Not if Alan Kay has anything to say about it.

marginalia_nu
0 replies
6h23m

It's a reasonably valid comparison if you equate Java with something like SpringBoot.

9dev
5 replies
7h27m

Well. I'm working on a product that relies on both AI assistants in the user-facing parts, as well as LLM inference in the data processing pipeline. If we let our LLM guy run free, he would create an inscrutable tangled mess of Python code, notebooks, Celery tasks, and expensive VMs in the cloud.

I know Pythonistas regard themselves more as artists than engineers, but the rest of us need reliable and deterministically running applications with observability, authorization, and accessible documentation. I don't want to drop into a notebook to understand what the current throughput is; I don't want to deploy huge pickle and CSV files alongside my source to do something interesting.

LangChain might not be the answer, but having no standard tools at all isn't either.

dartos
2 replies
7h19m

Sounds like your LLM guy just isn’t very good.

Langchain is, when you boil it down, an abstraction over text concatenation, staged calls to OpenAI, and calls to vector search libraries.

Even without standard tooling, an experienced programmer should be able to write an understandable system that does those things.

randomdata
0 replies
4h55m

> Sounds like your LLM guy just isn’t very good.

That's the central idea here. Most guys available to hire aren't. Hence why they get constrained into a framework that limits the damage they can cause. In other areas of software development the frameworks are quite mature at this point so it works well enough.

This AI/LLM/whatever you want to call it area of development, however, hadn't garnered much interest until recently, and thus there isn't much in the way of frameworks to lean on. But business is trying to ramp up around it, thus needing to hire those who aren't good to fill seats. Like the parent says, LangChain may not be the framework we want, but it is the one we have, which beats letting the not-very-good developers create some unconstrained mess.

If you win the lottery by snagging one of the small few good developers out there, then certainly you can let them run wild engineering a much better solution. But not everyone is so fortunate.

9dev
0 replies
6h13m

Fair point. The overlap of machine learning savvy, experienced engineer, and ready to work for a startup's salary in Germany just isn't too big.

fforflo
0 replies
4h25m

I'll bite:

"More artists than engineers": yes and no. I've been working with Pandas and Scikit-learn since 2012, and I haven't even put any "LLM/AI" keywords on my LinkedIn/CV, although I've worked on relevant projects.

I remember collaborating back then with PhDs in ML, and at the end of the day, we'd both end up using sklearn or NLTK, and I'd usually be "faster and better" because I could write software faster and better.

The problem is that the only "LLM guy" I could trust with such a description is someone who has co-authored a substantial paper or has hands-on training experience at a real big shop.

Everyone else stands somewhere between artist and engineer: i.e., the LLM work is still greatly artisanal. We'll need something like scikit-learn, but I doubt it will be LangChain or any of the other tools I see now. You can look at their source code and literally watch in the commit history when they discover things an experienced software engineer would do in the first pass. I'm not belittling their business model! I'm focusing solely on the software. I don't think they or their investors are naive or anything. And I bet that in 1-2 years, there'll be many "migration projects" being commissioned to move things away from LangChain, and people will have a hard time explaining to management why that 6-month project ended up reducing 5K LOC to 500 LOC.

For the foreseeable future though, I think most projects will have to rely on great software engineers with experience with different LLMs and a solid understanding of how these models work.

It's like the various "Databricks certifications" I see around. They may help with some job opportunities, but I've never met a great engineer who had one. They're mostly junior ones or experienced code-monkeys (to continue the analogy).

beeboobaa3
0 replies
7h0m

What you need is a software developer, not someone who chaotically tries shit until it kinda sorta works. As soon as someone wants to use notebooks for anything other than exploratory programming alarm bells should be going off.

fforflo
0 replies
4h41m

Ah, didn't know that. IIRC I first heard this analogy with regard to the Java Spring framework, which had the "longest Java class name" somewhere in its JavaDocs. It must have been something like 150+ chars long. You know... AbstractFactoryTemplate... type of thing.

sabbaticaldev
1 replies
6h2m

I’ll use this to explain why typescript is bad

tills13
0 replies
3h41m

Bad TypeScript is a PEBCAK.

Idiomatic and maintainable TypeScript is no worse than vanilla JavaScript.

andix
1 replies
6h38m

Langchain was my first real contact with Python development, and it felt worse than Enterprise Java. I didn't know that OOP is so prominent in Python libraries, it looks like many devs are just copying the mistakes from Enterprise Java/.NET projects.

fforflo
0 replies
4h2m

Well, it's not :D Sure, there are 4-5 fundamental classes in Python libs, but they're just fundamental ones. They don't impose an OOP approach all the way down.

What you're alluding to is people coming from Java to Python in 2010+ with a use-classes-for-everything approach.

wnmurphy
0 replies
1h46m

I feel this too, I think it's because Java is an artifact of layers of innovation that have accumulated over time, which weren't present at its inception. Langchain is similar, but has been developing even more rapidly than Java did.

I still find LC really useful if you stick to the core abstractions. That tends to minimize the dependency issues.

tootie
0 replies
2h38m

It's funny because I was using Langchain recently and found the most confusing part to be the inheritance model and what type was meant to fill which function in the chain. Using Java would make it impossible to mistype an object even while coding. I constantly wonder why the hell the industry decided Python was suitable for this kind of work.

pacavaca
0 replies
2h55m

Oh my! I've been looking for this comment. Will be using it in the future to explain my feelings about Java and Python.

blackkettle
0 replies
6h8m

Holy moly, this was _exactly_ my impression. It seems to really be proliferating and it drives me nuts. It makes it almost impossible to do useful things, which never used to be a problem with Python - even in the case of complex projects.

Figuring out how to customize something in a project like LangChain is positively Byzantine.

Treesrule14
22 replies
18h58m

Has anyone else found a good way to swap out models between companies? Langchain has made it very easy for us to swap between OpenAI/Anthropic etc.

riwsky
12 replies
18h44m

The point is that you don’t need a framework for that; the APIs are already similar enough that it should be obvious how to abstract over them using whatever approach is natural in your programming language of choice.

refulgentis
11 replies
18h14m

I have a consumer app that swaps between the 5 bigs and wholeheartedly agree, except, God help you if you're doing Gemini. I somewhat regret hacking it into the same concepts as everyone else.

I should have built stronger separation boundaries with more general abstractions. It works fine, I haven't had any critical bugs / mistakes, but it's really nasty once you get to the actual JSON you'll send.

Google's was 100% designed by a committee of people who had never seen anyone else's API, and if they had, they would have dismissed it via NIH. (disclaimer: ex-Googler, no direct knowledge)

Jensson
7 replies
14h52m

> Google's was 100% designed by a committee of people who had never seen anyone else's API

Google made their API before the others had one, since they were the first to make these kinds of language models. It's just that it was an internal API before.

refulgentis
4 replies
14h2m

No.

That'd be a good explanation, but it's theoretical.

In practice:

A) there was no meaningful internal LLM API pre-ChatGPT. All this AI stuff was under lock and key until Nov 2022, then it was an emergency.

B) the bits we're discussing are OpenAI-specific concepts that could only have occurred after OpenAI's.

The API includes chat messages organized with roles, an OpenAI concept, and "tools", an OpenAI concept, both of which came well after the GPT API.

Initial API announcement here: https://developers.googleblog.com/en/palm-api-makersuite-an-...

Jensson
1 replies
11h24m

Google started including LLM features in internal products in 2019 at least; I know since I worked there then. I can't remember exactly when they started having LLM-generated snippets and suggestions everywhere, but it was there at least since 2019. So they have had internal APIs for this for quite some time.

> All this AI stuff was under lock and key until Nov 2022

That is all wrong... Did you work there? What do you base this on? Google has been experimenting with LLMs internally ever since the original paper. I worked in search then, and I remember my senior manager saying this was the biggest revolution in natural language processing ever.

So even if Google added a few concepts from OpenAI, or renamed them, they still have had plenty of experience working with LLM APIs internally and that would make them want different things in their public API as well.

refulgentis
0 replies
5h17m

> LLM generated snippets and suggestions everywhere but it was there at least since 2019

Absolutely not. Note that ex. Google's AI answers are not from an LLM and they're very proud of that.

> So they have had internal APIs for this for quite some time.

We did not have internal or external APIs for "chat completions" with chat messages, roles, and JSON schemas until after OpenAI.

> Did you work there?

Yes

> What do you base this on?

The fact it was under lock and key. You had to jump through several layers of approvals to even get access to a standard text-completion GUI, never mind API.

> has been experimenting with LLMs internally ever since the original paper,

What's "the original paper"? Are you calling BERT an LLM? Do you think transformers implied "chat completions"?

> that would make them want different things in their public API as well.

It's a nice theoretical argument.

If you're still convinced Google had a conversational LLM API before OpenAI, or that we need to quibble everything because I might be implying Google didn't invent transformers, there's a much more damning thing:

The API is Gemini-specific and released with Gemini, ~December 2023. There's no reason for it to be so different other than NIH and proto-based thinking. It's not great. That's why ex. we see the other comment where Cloud built out a whole other API and framework that can be used with OpenAI's Python library.

DannyBee
1 replies
8h32m

> All this AI stuff was under lock and key until Nov 2022, then it was an emergency.

This is absolutely false, as the other person said. As one example: We had already built and were using AI based code completion in production by then.

Here's a public blog post from July, 2022: https://research.google/blog/ml-enhanced-code-completion-imp...

This is just one easy publicly verifiable example, there are others. (We actually were doing it before copilot, etc)

refulgentis
0 replies
5h12m

Pretending that was an LLM as it is understood today, that whatever internal API was available for internal use cases is actually the same as the public API for Gemini today, and that it was the same as an API for adding a "chat completion" to a "conversation" with messages, roles, and JSON schemas is silly.

riwsky
1 replies
9h3m

My understanding is that the original Gmail team actually invented modern LLMs in passing back in 2004, and it’s taken outsiders two decades to catch up because doing so requires setting up the Closure Compiler correctly.

refulgentis
0 replies
4h22m

Lol, sounds like you have more experience with other ex/Googlers doing this than I do. I'm honestly surprised; I didn't know there was a whole shell game to be played with "what's an LLM anyway" to justify "what's NIH? our API was designed by experienced experts"

ssabev
1 replies
6h39m

Would recommend just picking up a gateway that you can deploy to act as an OpenAI-compatible endpoint.

We built something like this for ourselves here -> https://www.npmjs.com/package/@kluai/gateway?activeTab=readm....

Documentation is a bit sparse but TL;DR - deploy it in a Cloudflare worker and now you can access about 15 providers (the ones that matter - OpenAI, Cohere, Azure, Bedrock, Gemini, etc.), all with the same API without any issues.

refulgentis
0 replies
3h48m

Wow; this is really nice work, I wish you deep success.

ponywombat
0 replies
11h11m

LiteLLM seemed to be the best approach for what I needed - simple integration with different models (mainly OpenAI and the various Bedrock models) and the ability to track costs / limit spending. It's working really well so far.

emporas
0 replies
10h50m

Didn't know about LiteLLM. That seems to be the right kind of middleware most people would need, instead of Langchain.

skeledrew
0 replies
12h24m

Openrouter maybe?

pveierland
0 replies
15h59m

Using Llama Index for this via the `llama_index.core.base.llms.base.BaseLLM` interface. Using config files to describe the args to different models makes swapping models literally as easy as:

  chat_model:
    cls: llama_index.llms.openai.OpenAI
    kwargs:
      model: gpt-4

  chat_model:
    cls: llama_index.llms.gemini.Gemini
    kwargs:
      model_name: models/gemini-pro
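
A sketch of how such a config might be resolved at runtime (the loader below is an assumed helper, not part of Llama Index; requires PyYAML):

    import importlib
    import yaml

    def load_chat_model(config_path: str):
        # Read the "chat_model" section of a YAML file like the ones above.
        with open(config_path) as f:
            cfg = yaml.safe_load(f)["chat_model"]
        # Split "llama_index.llms.openai.OpenAI" into module path and class name.
        module_path, _, class_name = cfg["cls"].rpartition(".")
        cls = getattr(importlib.import_module(module_path), class_name)
        return cls(**cfg.get("kwargs", {}))

    llm = load_chat_model("model.yaml")  # swap models by editing only the YAML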

nosefurhairdo
0 replies
17h38m

The strategy design pattern would be suitable for this.

ilaksh
0 replies
18h35m

Use a consistent argument structure and make a simple class or function for each provider that translates that to the specific API calls. They are very similar APIs. Maybe select the function call based on the model name.
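
A sketch of that approach, assuming the official openai and anthropic Python SDKs (the model names here are illustrative):

    from openai import OpenAI
    from anthropic import Anthropic

    def ask_openai(prompt: str, model: str = "gpt-4o") -> str:
        resp = OpenAI().chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

    def ask_anthropic(prompt: str, model: str = "claude-3-5-sonnet-20240620") -> str:
        resp = Anthropic().messages.create(
            model=model, max_tokens=1024,
            messages=[{"role": "user", "content": prompt}]
        )
        return resp.content[0].text

    # Select the function based on the model family.
    PROVIDERS = {"gpt": ask_openai, "claude": ask_anthropic}

    def ask(prompt: str, model_family: str) -> str:
        return PROVIDERS[model_family](prompt)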

Havoc
0 replies
9h5m

Use OpenRouter. One OpenAI-like API, but lots of models.

hwchase17
18 replies
2h55m

Hi HN, Harrison (CEO/co-founder of LangChain) here, wanted to chime in briefly

I appreciate Fabian and the Octomind team sharing their experience in a level-headed and precise way. I don't think this is trying to be click-baity at all which I appreciate. I want to share a bit about how we are thinking about things because I think it aligns with some of the points here (although this may be worth a longer post)

> But frameworks are typically designed for enforcing structure based on well-established patterns of usage - something LLM-powered applications don’t yet have.

I think this is the key point. I agree with their sentiment that frameworks are useful when there are clear patterns. I also agree that it is super early on and a super fast-moving field.

The initial version of LangChain was pretty high level and absolutely abstracted away too much. We're moving more and more to low level abstractions, while also trying to figure out what some of these high level patterns are.

For moving to lower level abstractions - we're investing a lot in LangGraph (and hearing very good feedback). It's a very low-level, controllable framework for building agentic applications. All nodes/edges are just Python functions, you can use with/without LangChain. It's intended to replace the LangChain AgentExecutor (which as they noted was opaque)

I think there are a few patterns that are emerging, and we're trying to invest heavily there. Generating structured output and tool calling are two of those, and we're trying to standardize our interfaces there

Again, this is probably a longer discussion but I just wanted to share some of the directions we're taking to address some of the valid criticisms here. Happy to answer any questions!

causal
9 replies
2h31m

I appreciate that you're taking feedback seriously, and it sounds like you're making some good changes.

But frankly, all my goodwill was burnt up in the days I spent trying to make LangChain work, and the number of posts I've seen like this one make it clear I'm not the only one. The changes you've made might be awesome, but it also means NEW abstractions to learn, and "fool me once..." comes to mind.

But if you're sure it's in a much better place now, then for marketing purposes you might be better off relaunching as LangChain2, intentionally distancing the project from earlier versions.

hwchase17
6 replies
1h55m

sorry to hear that, totally understand feeling burnt

ooc - do you think there's anything we could do to change that? that is one of the biggest things we are wrestling with. (aside from completely distancing from the langchain project)

causal
3 replies
1h27m

I'm not sure. My suspicion is that the fundamental issue with frameworks like LangChain is that the problem domain they are attempting to solve is a proper subset of the problems that LLMs also solve.

Good code abstractions make code more tractable, tending towards natural language as they get better. But LLMs are already at the natural language level. How can you usefully abstract that further?

I think there are plenty of LLM utilities to be made- libraries for calling models, setting parameters, templating prompts, etc. But I think anything that ultimately hides prompts behind code will create more friction than not.

causal
1 replies
1h21m

Glad to hear that- sounds like I should give LangGraph a try after all

hwchase17
0 replies
1h16m

would love any feedback if you do!

hwchase17
0 replies
1h23m

totally agree on not hiding prompts, and have tried to stop doing that as much as possible in LangChain and are not doing that at all in LangGraph

thanks for the thoughts, appreciate it

chatmasta
1 replies
1h15m

My advice is to focus less on the “chaining” aspect and more on the “provider agnostic” part. That’s the real reason people use something other than the native SDK of an LLM provider - they want to be able to swap out LLMs. That’s a well-defined problem that you can solve with a straightforward library. There’s still a lot of hidden work because you need to nail the “least common denominator” of the interfaces while retaining specialized behavior of each provider. But it’s not a leaky abstraction.

The “chaining” part is a huge problem space where the proper solution looks different in every context. It’s all the problems of templating engines, ETL scripts and workflow orchestration. (Actually I’ve had a pet idea for a while, of implementing a custom react renderer for “JSX for LLMs”). Stay away from that.

My other advice would be to build a lot of these small libraries… take advantage of your resources to iterate quickly on different ideas and see which sticks. Then go deep on those. What you’re doing now is doubling down on your first success, even though it might not be the best solution to the problem (or that it might be a solution looking for a problem).

hwchase17
0 replies
1h4m

> My advice is to focus less on the “chaining” aspect and more on the “provider agnostic” part

a lot of our effort recently has been going into standardizing model wrappers, including for tool calling, images etc. this will continue to be a huge focus

> My other advice would be to build a lot of these small libraries… take advantage of your resources to iterate quickly on different ideas and see which sticks. Then go deep on those. What you’re doing now is doubling down on your first success, even though it might not be the best solution to the problem (or that it might be a solution looking for a problem).

I would actually argue we have done this (to some extent). we've invested a lot in LangSmith (about half our team), making it usable with or without langchain. Likewise, we're investing more and more in langgraph, also usable with or without langchain (that is in the orchestration space, which you're separately not bullish on, but for us that was a separate bet from LangChain orchestration)

ctxc
1 replies
2h5m

They were early to the scene and made the decisions that made sense at each point in time. Initially I (like many other engineers with no AI exposure) didn't know enough to want to play around with the knobs too much. Now I do.

So the playing field has changed and is changing, and LangChain is adapting.

Isn't that a bit too extreme? Goodwill burnt up? When the field changes, there will be new abstractions - of course I'll have to understand them to decide for myself if they're optimal or not.

React has an abstraction. Svelte has something different. AlpineJS, another. Vanilla JS has none. Does that mean only one is right and the remaining are wrong?

I'd just understand them and pick what seems right for my usecase.

causal
0 replies
1h45m

You seem to be implying all abstractions are equal and it's just use-case dependent. I disagree - some really are worse than others. In your webdev example, it would not be hard to contrive a framework designed to frustrate. This can also happen by accident. Sometimes bad products really do waste time.

In the case of LangChain, I think it was an earnest attempt, but a misguided one. So I'm grateful for LangChain's attempt, and for the attempts to correct course - especially since it is free to use. But there are alternatives that I would rather give a shot first.

jfjeschke
2 replies
1h23m

Thanks Harrison. LangGraph (e.g. graph theory + NetworkX) is the correct implementation of multi-agent frameworks, though it is looking further ahead, anticipating a future beyond where most GPT/agent deployments are at.

And while structured output and tool calling are good, from client feedback I'm seeing more of a need for different types of composable agents other than the default ReAct, which has distinct limitations and performs poorly in many scenarios. Reflection/Reflexion are really good, REWOO or Plan/Execute as well.

Different agents for different situations...

hwchase17
1 replies
1h17m

> Different agents for different situations...

totally agree. we've opted for keeping langgraph very low level and not adding these higher level abstractions. we do have examples for them in the notebooks, but haven't moved them into the core library. maybe at some point (if things stabilize) we will. I would argue the react architecture is the only stable one at the moment. planning and reflection are GREAT techniques to bring into your custom agent, but i don't think there's a great generic implementation of them yet

jfjeschke
0 replies
48m

Agreed. I've got a few of them ready to open source. It's almost like there needs to be a reference library of best practices for agent types

jes5199
2 replies
1h22m

[deleted]

causal
1 replies
1h20m

I also have my criticisms of LangChain, but this feels mean-spirited towards devs who I think are honestly trying, and who didn't charge anything for it.

jes5199
0 replies
1h12m

the road to hell is paved with engineers honestly trying

fswd
1 replies
2h9m

using LangGraph for a month, every single "graph" was the same single solution. The idea is cool, but it isn't solving the right problem.... (and the problem statement shouldn't be generating buzz on twitter. sorry to be harsh).

You could borrow some ideas from DSPy (which borrows from PyTorch): their Module with a `def forward`, chaining LM objects that way. LangGraph sounds cool, but it is a very fancy and limited version of basic conditional statements like switch/if, which are already built into languages.

hwchase17
0 replies
1h54m

ooc, what was the "same single solution"?

sc077y
17 replies
3h57m

Damn, I built a RAG agent over the past three and a half months for my internship. And literally everyone in my company was asking me why I wasn't using langchain or llamaindex like I was a lunatic. Everyone else in my company who built a RAG used langchain; one even went into prod.

I kept telling them that it works well if you have a standard usage case, but the second you need to do something a little original you have to go through 5 layers of abstraction just to change a minute detail. Furthermore, you won't really understand every step in the process, so if any issue arises or you need to improve the process you will start back at square 1.

This is honestly such a boost of confidence.

paraph1n
6 replies
3h33m

Could someone point me towards a good resource for learning how to build a RAG app without langchain or llamaindex? It's hard to find good information.

verdverm
0 replies
54m

My strategy has been to implement in / follow along with llamaindex, dig into the details, and then implement that in a less abstracted, easily understandable codebase / workflow.

Was driven to do so because it was not as easy as I'd like to override a prompt. You can see how they construct various prompts for the agents, it's pretty basic text/template kind of stuff

turnsout
0 replies
35m

At a fundamental level, all you need to know is:

- Read in the user's input

- Use that to retrieve data that could be useful to an LLM (typically by doing a pretty basic vector search)

- Stuff that data into the prompt (literally insert it at the beginning of the prompt)

- Add a few lines to the prompt that state "hey, there's some data above. Use it if you can."
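
A minimal sketch of those four steps (assuming the OpenAI SDK and NumPy; the corpus and model names are illustrative):

    import numpy as np
    from openai import OpenAI

    client = OpenAI()
    docs = ["Phone A has a 5000 mAh battery.", "Phone B weighs 140 g."]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vecs = embed(docs)

    def answer(question: str) -> str:
        q = embed([question])[0]
        # Basic vector search: cosine similarity, take the best match.
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
        context = docs[int(np.argmax(sims))]
        # Stuff the data into the prompt and tell the model to use it.
        prompt = f"{context}\n\nThere's some data above. Use it if you can.\n\n{question}"
        resp = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content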

kolinko
0 replies
2h56m

You can start by reading up on how embeddings work, then check out the specific RAG techniques that people have discovered. Not much else is needed, really.

bestcoder69
0 replies
1h57m

openai cookbook! Instructor is a decent library that can help with the annoying parts without abstracting the whole API call - see its docs for RAG examples.

w4
1 replies
3h31m

I had a similar experience when LangChain first came out. I spent a good amount of time trying to use it - including making some contributions to add functionality I needed - but ultimately dropped it. It made my head hurt.

Most LLM applications require nothing more than string handling, API calls, loops, and maybe a vector DB if you're doing RAG. You don't need several layers of abstraction and a bucketload of dependencies to manage basic string interpolation, HTTP requests, and for/while loops, especially in Python.

On the prompting side of things, aside from some basic tricks that are trivial to implement (CoT, in-context learning, whatever) prompting is very case-by-case and iterative, and being effective at it primarily relies on understanding how these models work, not cargo-culting the same prompts everyone else is using. LLM applications are not conceptually difficult applications to implement, but they are finicky and tough to corral, and something like LangChain only gets in the way IMO.

danenania
0 replies
1h56m

I haven't used LangChain, but my sense is that much of what it's really helping people with is stream handling and async control flow. While there are libraries that make it easier, I think doing this stuff right in Python can feel like swimming against the current given its history as a primarily synchronous, single-threaded runtime.

I built an agent-based AI coding tool in Go (https://github.com/plandex-ai/plandex) and I've been very happy with that choice. While there's much less of an ecosystem of LLM-related libraries and frameworks, Go's concurrency primitives make it straightforward to implement whatever I need, and I never have to worry about leaky or awkward abstractions.

weakfish
0 replies
2h20m

I wish I was this pragmatic as an intern.

ramoz
0 replies
3h51m

Wise perspective from an intern. The type of pragmatism we love.

puppymaster
0 replies
3h47m

you are heading in the right direction. It's amazing to see seasoned engineers go through the mental gymnastics of justifying installing all those dependencies and arguing about vector db choices when the data fits in RAM and the swiss army knife is right there: np.array

moneywoes
0 replies
3h8m

Any tutorials you follow?

joseferben
0 replies
19m

impressive to decide against something as shiny as langchain as an intern

jacobsimon
0 replies
2h40m

I admire what the Langchain team has been building toward even if people don’t agree with some of their design choices.

The OpenAI api and others are quite raw, and it’s hard as a developer to resist building abstractions on top of it.

Some people are comparing libraries like Langchain to ORMs in this conversation, but I think maybe the better comparison would be web frameworks. Like, yeah the web/HTML/JSON are “just text” too, but you probably don’t want to reinvent a bunch of string and header parsing libraries every time you spin up a new project.

Coming from the JS ecosystem, I imagine a lot of people would like a lighter weight library like Express that handles the boring parts but doesn’t get in the way.

ianschmitz
0 replies
45m

Way to follow your instinct.

I ran into similar limitations for relatively simple tasks. For example I wanted access to the token usage metadata in the response. This seems like such an obvious use case. This wasn’t possible at the time, or it wasn’t well documented anyway.

hobs
0 replies
2h57m

Groupthink is really common among programmers, especially when they have no idea what they are talking about. It shows you don't need a lot of experience to see the emperor has no clothes, but you do need to pay attention.

infecto
17 replies
18h28m

LangChain itself blows my mind as one of the most useless libraries to exist. I hope this does not come off the wrong way, but so many people told me they were using it because it was easy to move between models. I just did not understand it; these are simple API calls that felt like Web Dev 101 when starting a new product. Maybe it's that so many new people were coming into the field using LLMs, but it surprised me that even what I thought were experienced people were struggling. It's like LLMs brought out the confusion in people.

It was interesting as a library at the very beginning to see how people were thinking about patterns but pretty useless in production.

chatmasta
4 replies
17h40m

It was the first pass at solving the common problems when building with LLMs. People jumped on it because it was trendy and popular.

But it quickly became obvious that LangChain would be better named LangSpaghetti.

That’s nothing against the authors. What are the chances the first attempt at solving a problem is successful? They should be commended for shipping quickly and raising money on top of it to keep iterating.

The mistake of LangChain is that they doubled down on the bad abstraction. They should have been iterating by exploring different approaches to solving the problem, not by adding even more complexity to their first attempt.

jhoechtl
1 replies
14h36m

Doesn't langchain provide useful functionality when it comes to RAG? Here it seems it does considerably more than being a mere shim abstraction?

seanhunter
0 replies
13h36m

Not really. It's pretty much the same for RAG as it is for everything else - just a thin additional abstraction on top of apis that are easy to call on their own.

dongobread
0 replies
16h40m

Langchain feels very much like shovelware that was created for the sole purpose of parting VCs from their money. At one point the codebase had a "prompt template" class that literally just called Python's f-string.
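
Roughly the comparison being made, if memory serves (import paths vary across langchain versions):

    from langchain_core.prompts import PromptTemplate

    template = PromptTemplate.from_template("Summarize {topic} in one line.")
    print(template.format(topic="LangChain"))

    # The plain-Python equivalent:
    print("Summarize {topic} in one line.".format(topic="LangChain"))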

refulgentis
3 replies
18h17m

Ah, the halcyon days of March 2023, we were a while loop away from AGI. I remember there was something that was It for like a month, to the point that whoever built the framework was treating a cocktail napkin on which they scribbled, whatever, "act, evaluate, decide next action, repeat", as if it was a historical talisman. And I wasn't sure! Maybe it was!

causal
2 replies
18h12m

Yeah I thought the consensus against LangChain was formed a year ago, surprised to still be seeing these articles.

refulgentis
1 replies
18h2m

Just chit chatting, not a strong claim, more a hot take I turn over in my mind:

my guess is 40% of software engineers did an AI pivot in the last 18 months, so there's a massive market for frameworks, and there's an inclination to go beyond REST requests and find something that just does it for you / can do all the cool patterns you'll find in research papers.

Incredible amount of bad info out there, whether it's the 10th prompting framework that boils down to a while loop and just drives up token costs, the 400th LLM tokenizer library that can only do GPT-3.5/4.0, the Nth app that took XX ex-FAANG and $XX mil and a year to get another web app, or another iOS-only OpenAI client with background blur and memory that's an array of strings injected into every call

It's at the point where I'm hoping for a cooling down even though I'm launching something*, and think it's hilarious people rant about it all just being hype and think people agree.

* TL;DR: consumer app with 'chain gui'; just hand people an easy-to-use GUI like playground.openai.com / console.anthropic.com, instead of getting cute and being the Nth team to try to launch a full-grade assistant on a monthly plan matching OpenAI pricing, shoving 6000K+ prompts with each request and not showing them

gbickford
0 replies
15h12m

This is true. Devs are looking for frameworks. See CrewAI, who refuse to allow users to disable some pretty aggressive telemetry, yet have a huge number of GH stars.

The abstractions are handy if you have no idea what you are doing but it's not groundbreaking tech.

https://github.com/joaomdmoura/crewAI/pull/402

ravenstine
3 replies
18h14m

Every time I approached LangChain, contrary to the attitude of my colleagues, I could never figure out what the point of it was other than to fetishize certain design patterns. Interacting with an LLM in a useful way requires literally none of what LangChain has to offer, yet for a time it was on its way to being the de facto way to do anything with LLMs. It reminds me a lot of the false promise of ORMs - that if you trust the patterns then you can swap out the underlying engine and everything will still just work - which is more or less a fantasy.

langcss
1 replies
17h42m

ORMs are useful, though, for a different reason. They let you create typed objects, then generate the schema from them and automatically create a lot of boilerplate SQL for you.

Admittedly for anything more than 1-2 joins you are better off hand crafting the SQL. But that is the exception not the rule.

Refactoring DB changes becomes easier, you have a history of migrations for free, DDL generation for free.

In the early 2000s I worked somewhere people handcrafted SQL for every little query across 100 tables, and yeah, you end up with inconsistent APIs and bugs that are eliminated by the code generation / metaprogramming done by ORMs.

hdhshdhshdjd
0 replies
15h5m

> Admittedly for anything more than 1-2 joins you are better off hand crafting the SQL. But that is the exception not the rule.

Strong disagree: if that's true, you likely don't even need a proper RDBMS in the first place.

An ORM is not a replacement for knowing how SQL works, and it never will be.

gavmor
0 replies
14h27m

> to fetishize certain design patterns

Yes; exactly. There's value in a Schelling Point[0], and in a pattern language[1].

> requires literally none

True, yes. There isn't infinite value in these things, and "duplication is far cheaper than the wrong abstraction"[2], but they can't be avoided; they occupy local maxima.

0. https://en.wikipedia.org/wiki/Focal_point_(game_theory)

1. https://en.wikipedia.org/wiki/Pattern_language

2. https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction

richrichie
0 replies
18h4m

Langchain seems to have been made just for the tutorial business on Udemy and Youtube.

justanotheratom
0 replies
15h11m

never understood the "chain" in langchain.

cyberdrunk2
0 replies
14h23m

I think it was great at first, when LLMs were new and prompting required more strategy. Now the amount of abstractions/bloat they have for essentially string wrappers makes no sense.

choilive
0 replies
15h44m

This seems to be a universal sentiment... we took a short look at langchain and determined it was doing really trivial string manipulation/string templating stuff, but inside really rigid and unnecessary abstractions. It was all stuff that could be implemented by any competent programmer in hours, in any language, without all the crap, so that's what we did. shrug

altdataseller
12 replies
19h8m

Langchain reminds me of GraphQL. A technology that a lot of ppl seem to hype about, that sounds like something you should use because all the cool kids use it, but at the end of the day it just makes things unnecessarily complicated.

OutOfHere
8 replies
18h59m

GraphQL actually holds value in my view, as it gives custom SQL-like functionality instead of basic JSON APIs. With it, you can make fewer calls and retrieve only the attributes you need. Granted, if SQL were directly an API, then GraphQL wouldn't hold too much value.

Langchain has no such benefit.

andybak
5 replies
18h48m

Surely SQL is an API? The line between language and API is fairly blurry.

newzisforsukas
4 replies
18h42m

Can you elaborate?

manquer
1 replies
17h9m

What GP means is that it is a Programmable Interface; any interface you can interact against is an API. That means any programming language is an API, as are sign languages or human languages.

While nobody does it, SQL implementations have network APIs, authentication, authorization, ACL/RBAC, serialization, business logic - all the things you use in RESTful APIs can be done with just db servers.

You can, in theory, expose a direct SQL API for clients to consume, without any other language or other components in the stack.

Most SQL servers use some layer on top of TCP/IP to connect their backends to frontends. libpq is the client that does this in PostgreSQL, for example.

You could either wrap the backend SQL server with an extension and talk to browsers and other clients over HTTP [1], or write a wasm client in the browser to talk directly to the TCP/IP port on the SQL server.

Perhaps if you are Oracle that makes sense, but for no one else; still, they do build and push products that basically do parts of this.

[1] projects like postgREST basically do this .

randomdata
0 replies
13h52m

SQL (as an API) and databases are orthogonal, though. In fact, I work on an application that uses SQL for its API even though it doesn't even have a database directly attached. All of its data comes from third-parties that only make the data available using REST services.

In theory, the app servers that sit in front of those databases could just as easily use SQL instead of GraphQL. Even practically: The libraries around working with SQL in this way have become quite good. But they solve different problems. If you have a problem GraphQL is well suited to solve, SQL will not be a suitable replacement – and vice versa.

andybak
1 replies
6h57m

I was curious whether you were using "API" as shorthand for "HTTP API" or something like that. It seemed odd to say "Granted, if SQL were directly an API, then GraphQL wouldn't hold too much value" when you actually can use SQL directly in this sense. The reasons that people generally don't are interesting in their own right.

(If I recall - one of the criticisms of GraphQL is that it's a bit too close to actually just exposing your database in this way)

newzisforsukas
0 replies
4h27m

You can implement a GraphQL service in any way you desire. There is no inherent relation to the storage solution you use.

GraphQL isn't anywhere close to being similar to SQL, so I find the desire for an analogy very confusing.

To me, these are grammars for interacting with an API, not an API.

To me, it is like calling a set of search parameters in a URL an API or describing some random function call as an API. The interface is described by the language. The language isn't the interface.

mirekrusin
0 replies
18h34m

SQL has sophisticated WHERE clause support, GraphQL doesn't. It should be called GraphPickL.

nosefurhairdo
0 replies
17h41m

Evaluating technology based on its "cool kid usage" and a vague sense of complexity is likely not the best strategy. Perhaps instead you could ask "what problems does this solve/create?"

ecjhdnc2025
0 replies
9h15m

I don't know a thing about LangChain so this is a real digression, but I often wonder if people who are critiquing GraphQL do so from the position of only having written GraphQL resolvers by hand.

If so, it would make sense. Because that's not a whole lot of fun. But a GraphQL server-side that is based around the GraphQL Schema Language is another matter entirely.

I've written several applications that started out as proofs of concept and have evolved into production platforms based on this pairing:

https://lighthouse-php.com https://lighthouse-php-auth.com

It is staggeringly productive, replaces lots of code generation in model queries and authentication, interacts pretty cleanly with ORM objects, and, because it's part of the Laravel request cycle, is still amenable to various techniques to e.g. whitelist, rate-limit or complexity-limit queries on production machines.

I have written resolvers (for non-database types) and I don't personally use the automatic mutations; it's better to write those by hand (and no different, really, to writing a POST handler).

The rest is an enormous amount of code-not-written, described in a set of files that look much like documentation and can be commented as such.

One might well not want to use it on heavily-used sites, but for intranet-type knowledgebase/admin interfaces that are an evolving proposition, it's super-valuable, particularly paired with something like Nuxt. Also pretty useful for wiring up federated websites, and it presents an extremely rapid way to develop an interface that can be used for pretty arbitrary static content generation.

ahzhou
0 replies
17h48m

GraphQL is very powerful when combined with Relay. It’s useless extra bloat if you just use it like REST.

The difference between the two technologies is that LangChain was developed and funded before anyone knew what to do with LLMs, whereas GraphQL was internal tooling used to solve a real problem at Meta.

In a lot of ways, LangChain is a poor abstraction because the layer it's abstracting was (and still is) in its infancy.

CharlieDigital
10 replies
22h10m

Bigger problem might be using agents in the first place.

We did some testing with agents for content generation (e.g. "authoring" agent, "researcher" agent, "editor" agent) and found that it was easier to just write it as 3 sequential prompts with an explicit control loop.

It's easier to debug, monitor, and control the output flow this way.
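
For illustration, a minimal sketch of that pattern. The `llm()` helper, the prompts, and the APPROVED convention are all made up; the point is that the control flow is plain, steppable code:

    from openai import OpenAI

    client = OpenAI()

    def llm(prompt: str) -> str:
        # one possible helper using the OpenAI SDK; swap in whatever provider you use
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def generate_article(topic: str, max_revisions: int = 3) -> str:
        # "researcher" step
        research = llm(f"Collect key facts about: {topic}")
        # "author" step
        draft = llm(f"Write an article about {topic} using these facts:\n{research}")
        # "editor" step, inside an explicit control loop
        for _ in range(max_revisions):
            review = llm(f"Critique this draft. Reply APPROVED if it needs no changes:\n{draft}")
            if "APPROVED" in review:
                break
            draft = llm(f"Revise the draft to address this feedback:\n{review}\n\nDraft:\n{draft}")
        return draft

Every intermediate value is right there to log or inspect.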

But we still use Semantic Kernel[0] because the lowest-level abstractions it provides are still very useful in reducing the code we have to roll ourselves, and they also make some parts of the API very flexible. These are things we'd end up writing ourselves anyway, so why not just use the framework primitives instead?

[0] https://github.com/microsoft/semantic-kernel

Kiro
6 replies
20h18m

What's the difference? I thought "agents" was just a fancier word for sequential prompts.

refulgentis
0 replies
18h16m

It's also used to mean "characters interacting with each other", with a sort of message passing between them. Not sure, but I get the sense that's what the author is using it as.

mstipetic
0 replies
12h6m

Sequential prompts with an occasional cron job

isaacfung
0 replies
11h52m

Some "agents" like the minecraft bot Voyager(https://github.com/MineDojo/Voyager) have a control loop, they are given a high level task and then they use LLM to decide what actions to take, then evaluate the result and iterate. In some LLM frameworks, a chain/pipeline just uses LLM to process input data(classification, named entitiy extraction, summary, etc).

ilaksh
0 replies
18h41m

"Agent" means that it outputs JSON with a function call name and parameters which you execute and usually then feed the results back to the LLM.

ec109685
0 replies
19h28m

Some folks try to orchestrate the whole operation with a higher-level prompt that essentially uses function calls to more specific prompts.

Versus just using the LLMs for specific tasks, with heuristics / your own code for the orchestration.

But I agree there is a lot of anthropomorphizing that overstates current model capabilities and just confuses things in general.

CharlieDigital
0 replies
19h25m

Typically, the term "agents" implies some autonomous collaboration. In an agent workflow, the flow itself is non-deterministic. One agent can work with another agent and keep cycling between themselves until an output is resolved that meets some criteria. An agent itself is also typically evaluating the terminal condition for the workflow.

huevosabio
2 replies
11h18m

What does semantic kernel do for you? It isn't immediately obvious from the Readme.

whoknowsidont
0 replies
5h30m

I'm not OP, but it's just C#/.NET glue and "sample" code for Azure, OpenAI, and a few others (if I were to generously describe it).

It doesn't actually "do" anything or provide useful concepts. I wouldn't use it for anything, personally, even to read.

CharlieDigital
0 replies
6h25m

SK does a lot of the same things that LangChain does at a high level.

The most useful bits for us are prompt templating[0], "inlining" some functions like `recall` into the text of the prompt[1], and the service container[2] (useful if you are using multiple LLM services and models for different types of prompts/flows).

It has other useful abstractions and you can see the full list of examples here:

- C#: https://github.com/microsoft/semantic-kernel/tree/main/dotne...

- python: https://github.com/microsoft/semantic-kernel/tree/main/pytho...

---

[0] https://github.com/microsoft/semantic-kernel/blob/main/dotne...

[1] https://github.com/microsoft/semantic-kernel/blob/main/dotne...

[2] https://github.com/microsoft/semantic-kernel/blob/main/dotne...

wg0
6 replies
12h12m

Sorry, noob question: where can I read more about this "agents" paradigm? Is one agent's output directly calling/invoking another agent? Or is there a fixed graph of information flow, with each agent given some prompt presets/templates ("you are an expert in this, only respond in that" sorts of things)?

Also, how much success have people had with automating the E2E tests for their various apps by stringing such agents together themselves?

EDIT: Typos

hcks
1 replies
12h3m

Don't waste your time; it's been around since GPT-3 and has had no results so far. Also notice how no frontier lab is working on it.

zby
0 replies
12h0m

In practice this means function calling - the LLM chooses the function to call (and its parameters). Usually in a loop with a 'finish' function that returns the control to the outside code.

You can do that without function calling, as the original ReAct paper did, but then you have to write your own grammar for the communication with the LLM, a parser for it, and you also need to teach the LLM to use that grammar. This is very time-consuming.
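
A rough sketch of that loop (the `choose_call` helper, standing in for your provider's function-calling API, is hypothetical):

    def run_agent(task: str, tools: dict) -> str:
        messages = [{"role": "user", "content": task}]
        while True:
            # the LLM picks a function name and its parameters, or "finish"
            call = choose_call(messages, tools)
            if call["name"] == "finish":
                # control returns to the outside code
                return call["arguments"]["answer"]
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": str(result)})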

zEddSH
0 replies
12h6m

    > Also, how much success have people had with automating the E2E tests for their various apps by stringing such agents together themselves?

There are a few startups in the space doing this, like QA Tech in Stockholm, and others even in YC (but I forget the name). I'm skeptical of how successful they'll be, not just because of complex test cases but because of things like data management and mistakenly affecting other tests. Interesting to follow just in case though, E2E is a pain!

pavi2410
0 replies
12h4m

I want to learn about agents too!

CGamesPlay
0 replies
12h7m

Fundamentally, "Agent" refers to anything that operates in an "observe-act" loop. So in the context of LLMs, an agent sees an observation (like the code base and test output) and produces an action (like a patch), and repeats.

etse
5 replies
19h31m

My reading of the article is that because LangChain is abstracted poorly, frameworks should not be used at all, which seems a bit of a stretch.

My experience is that Python has a frustrating developer experience for production services. So I would prefer a framework with better abstractions and a solid production language (performance and safety) over no framework and Python (if those were the options).

lolinder
2 replies
19h21m

For most of what people are doing with AI you don't need Python, because you don't need the ML ecosystem. You're either going to be talking to some provider's API (in which case there are wrappers aplenty, and even if there weren't, their APIs are simple and trivial to wrap yourself) or you're going to self-host a model somewhere, in which case you can use something like Ollama to give yourself an easy API to code against.

All of the logic of stringing prompts and outputs together can easily happen in basically any programming language with maybe a tiny bespoke framework customized to your needs.

Calling these things "AI agents" makes them sound both cooler and more complicated than they actually are or need to be. It's all just taking the output from one black box and sticking it into the input of another, the same kind of work frontline programmers have been doing for decades.

ilaksh
1 replies
18h43m

They become agents when the LLM output is function calls.

lolinder
0 replies
17h43m

So the output of Black Box A is an instruction to give Black Box B a piece of data X and then give the resulting output back to BBA. We're still just wiring up black boxes to each other the same as we've always done, and we still don't need an abstraction for that.

autokad
0 replies
18h46m

Prompt engineering requires the ability to see what is happening at various steps, and LangChain makes that harder if not impossible.

Honestly, I don't need that much abstraction.

Kostarrr
0 replies
11h13m

Disclaimer: I work for Octomind.

I think the reading is more "It's hard to find a good abstraction in a field that has not settled yet on what a good abstraction is. In that case, you might want to avoid frameworks as things shift around too much."

danielmarkbruce
5 replies
21h26m

Yup. The problem with frameworks is they assume (historically mostly but not always correctly) that layers of abstraction mean one can forget about the layers below. This just doesn't work with LLMs. The systems are closer to biology or something.

nosefurhairdo
3 replies
17h23m

Very much depends on the framework. I'm currently building a GitHub App with the Probot framework, which mostly just handles authentication boilerplate and some testing niceties, then just gives you an authenticated GitHub API client (no facade/abstraction).

Then of course there are the many web application frameworks, because nobody in their right mind would want to implement HTTP request parsing themselves (outside of academic exercises).

In fact, I would argue that most popular frameworks exist precisely because it's often more time efficient to forget about underlying details. All computer software is built on abstraction. The key is picking the right level of abstraction for your use case.

danielmarkbruce
2 replies
14h28m

Reread the thread and the comment. It's about the LLM frameworks, and it acknowledges that most non-LLM frameworks historically are helpful and correct in abstracting away details.

nosefurhairdo
1 replies
11h51m

Ah the bit in parentheses was worded such that I misunderstood your point.

danielmarkbruce
0 replies
3h39m

That bit is poorly worded. I should have had a comma before the last word. My bad.

randomdata
0 replies
4h10m

It often took quite a long time for those historic frameworks to get the abstraction right. Survivorship bias sees us forget all the failed attempts.

I'm unconvinced there is no room for a framework here because LLMs are somehow special. LangChain just missed the mark. Unsurprisingly so, it being an early attempt, not to mention predating general availability of the LLM chatbots that have come to define the landscape.

geuis
4 replies
11h55m

I built my first commercial LLM agent back in October/November last year. As a newcomer to the LLM space, every tutorial and YouTube video was about using LangChain. But something about the project had that "bad code" smell about it.

I was fortunate in that the person I was building the project for was able to introduce me to a few other people more experienced with the entire nascent LLM agent field and both of them strongly steered me away from LangChain.

Avoiding that minefield-ridden path really helped me out early on, and instead I focused more on learning how to build agents "from scratch", more or less. That gave me a much better handle on how to interact with agents, and has led me into learning how to run the various models independently of the API providers and get more productive results.

SCUSKU
1 replies
11h34m

I've only ever played around with it and not built out an app like you have, but in my experience the second you want to go off script from what the tutorials suggest, it becomes an impossible nightmare of reading source code trying to get a basic thing to work. LangChain is _the_ definition of death by abstraction.

emporas
0 replies
9h44m

I have read the whole source of LangChain in Rust (there are no docs anyway), and it definitely seems over-engineered. The central premise of the project, complicated chains of prompts, is not useful to many people, and not to me either.

On the other hand, it took some years into the web era for web frameworks to emerge and make sense, like Ruby on Rails. Maybe in 3-4 years' time, complicated chains of commands to different A.I. engines will be so difficult to get right that a framework will make sense and establish a set of conventions.

Agents, another central feature of LangChain, haven't proved very useful either, for the moment.

ttul
0 replies
4h11m

LangChain got its start before LLMs had robust conversational abilities and before the LLM providers had decent native developer APIs (heck, there was basically only OpenAI at that time). It was a bit DOA as a result. Even by last spring, I felt more comfortable just working with the OpenAI API than trying to learn LangChain's particular way of doing things.

Kudos to the LangChain folks for building what they built. They deserve some recognition for that. But, yes, I don’t think it’s been particularly helpful for quite some time.

gazarullz
0 replies
8h29m

Which alternatives have you been introduced to?

elijahbenizzy
2 replies
18h14m

I really like the idea of "good" and "bad" abstractions. I have absolutely built both.

This sentiment is echoed in this Reddit comment as well: https://www.reddit.com/r/LocalLLaMA/comments/1d4p1t6/comment....

Similarly to this post, I think the "good" abstractions handle application logic (telemetry, state management, common complexity), and the "bad" abstractions take away tasks that you really need insight into.

This has been a big part of our philosophy on Burr (https://github.com/dagworks-inc/burr), and basically everything we build: we never want to dictate how people should interact with LLMs, but rather to solve the common problems. Still learning about what makes a good/bad abstraction in this space; people really quickly reach for something like LangChain, then get sick of abstractions right after that and build their own stuff.

laborcontract
1 replies
17h16m

    > the "bad" abstractions make things abstract away tasks that you really need insight into.
Yup. People say to use langchain to prototype stuff before it goes into production but I find it falls flat there. The documentation is horrible and they explain absolutely zero about the methods they use, so the only way to “learn” is by reading their spaghetti code.

elijahbenizzy
0 replies
15h47m

Agreed — also I’m generally against prototyping stuff and then entirely rewriting it for production as the default approach. It’s a nice idea but nobody ever actually rewrites it (or they do and it’s exceedingly painful). In true research it makes sense, but very little of what engineers do falls under that category.

Instead, it’s either “welp, pushed this to prod and got promoted and it’s someone else’s problem” or “sorry, this valuable thing is too complex to do right but this cool demo got me promoted...”

elbear
2 replies
13h5m

It would have been great if the article had provided a more realistic example.

The example they use is indeed more complex than the OpenAI equivalent, but LangChain allows you to use several models from several providers.

Also, it's true that the override of the pipe character is unexpected, but it should make sense if you're familiar with Linux/Unix. And I find it shows more clearly that you are constructing a pipeline:

    prompt | model | parser
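
For instance, something along these lines (a sketch against the current package layout, which may have shifted):

    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser

    # each stage is a Runnable; `|` composes them into one pipeline
    chain = (
        ChatPromptTemplate.from_template("Summarize: {text}")
        | ChatOpenAI(model="gpt-4o-mini")
        | StrOutputParser()
    )
    summary = chain.invoke({"text": "..."})

Swapping the model for another provider's is then a one-line change.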

drdaeman
0 replies
45m

Yeah, I was kind of surprised. The premise of the article started as "LangChain abstractions are off" and then the complaint was about... just a very simple pipeline?

I honestly don't care about the syntax (as long as it's sane enough), and the `|` operator overloading isn't the worst one. Manually having to define a parser object gives off some enterprise Java vibes, and I get the httplib vs requests comparison, but it's not the end of the world. If anything, the example from the article left me wondering "why do they say it's worse, when at this level of abstraction it really looks better, unless we never need to customize the pipeline at all?" And they never gave any real example (about spawning those agents or something) that actually shows where the abstractions make things hard or obscure.

Honestly, on the first reading, the article [wrongly] gave me an impression of saying "we don't use LangChain anymore because it lacks good opinionated defaults", which is surely wrong - it would be a very odd take, given the initial premise of using it production for a long while.

(I haven't used LangChain or any LLMs in production, just toyed around a little bit. I can absolutely agree with the article that if all you care about is one single backend, then all those abstractions are not likely to be a good idea.)

bestcoder69
0 replies
1h43m

I can already use multiple backends by writing different code. The value-add LangChain would need to prove is whether I can get better results using their abstractions compared to doing it manually. Every time I've looked at how LangChain's prompts are constructed, they went wayyy against LLM vendor guidance, so I have doubts.

Also the downside of not being able to easily tweak prompts based on experiments (crucial!)

And not to mention the library doesn't actually live up to this use case: you immediately (IME) run into "you actually can't use a _Chain with provider _ if you want to use their _ API", so I ultimately did have to care about what's supposed to be abstracted over.

djohnston
2 replies
19h20m

Idk, dude spends the post whining about writing multi-agent architecture and doesn't mention LangGraph once. Reads like a lead who failed to read the docs.

esafak
0 replies
15h46m

How does langgraph stack up against the alternatives?

2C64
0 replies
18h13m

LangGraph is the primary reason I use LangChain - being able to express my flow as a state machine has been a boon to both the design of my platform as well as my own productivity.

dcole2929
2 replies
19h28m

I've seen a lot of stuff recently about how LangChain and other AI/LLM frameworks are terrible and we shouldn't use them, and I can't help but think that people are missing the point. If you need strong customization or flexibility, frameworks of any kind are almost always the wrong choice, whether you're building a website or an AI agent. That's kind of the whole point of a framework: opinionated workflows that enable a specific kind of application. Ideally the goal is to cover 80% of the cases and provide escape hatches to handle the other 20% until you can successfully cover those too.

As someone new to the space I have zero opinion on whether LangChain is better than writing it all yourself, but I can certainly say that I, at least, appreciate having a prescribed way of doing things, and I'm okay with the idea that I may get to a place where it no longer serves my needs. It's also worth noting that the benefit of LangChain is the ability to "chain" together these various AI links. Is there a better, easier way to do that? Probably, but LangChain removes that overhead.

riwsky
0 replies
18h35m

As the article points out, the difference between frameworks for building a website vs building an LLM agent is that we have decades more industrial experience behind our website-building opinions. I’ve used heavyweight frameworks before, and would understand your defense in the context of eg complaints about Spring Boot—but Langchain isn’t Spring; it really does kinda suck, for reasons that go beyond the inherent trade offs of using any framework.

ilaksh
0 replies
18h29m

I think that yes, there is a better way. You have a function that calls the API, then you take the output and call another function that calls the API, inserting the first output into the second one's prompt using an f-string or whatever. You can have a helper function with defaults for model params or something.

You don't need an abstraction at all really. Inserting the previous output into the new prompt is one line of code, and calling the API is another line of code.
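
Literally something like this, assuming a hypothetical `call_api(prompt)` helper that holds your model defaults:

    document = "..."  # whatever text you are processing
    claims = call_api("Extract the key claims from this text:\n" + document)
    verdicts = call_api(f"Fact-check each of these claims:\n{claims}")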

If you really feel like you need to abstract that then you can make an additional helper function. But often you want to do different things at each stage so that doesn't really help.

captaincaveman
2 replies
5h54m

I think LangChain basically tried to do a land grab: insert itself between developers and LLMs. But it didn't add significant value, and it seemed to dress that up with abstractions that didn't really make sense. It was that abstraction-gobbledygook smell that made me cautious.

iknownthing
1 replies
2h44m

Looks like they've parlayed it into some kind of business https://www.langchain.com/

bestcoder69
0 replies
1h50m

They've been growth hacking the whole time, pretty much, optimizing for virality: e.g. integrating with every AI thing under the sun, so they could publish an SEO-friendly "use GPT-3 with someVecDb and LangChain" page for every permutation you can think of. Easy for them to write, since LangChain's abstractions are just unnecessary wrappers. They've also had meetups since very early on. The design seems to make LangChain hard to remove, since you're no longer doing functional composition like you'd do in normal Python; you're combining Chains. You can't insert your own log statements in between their calls, so you have to onboard to LangSmith for observability (their SaaS play). Now they have a DSL with their own binary operators :[

VC-backed, if you couldn’t guess already

nosefrog
1 replies
18h50m

Anyone who has read LangChain's code would know better than to depend on it.

whydid
0 replies
15h40m

A heuristic that I use when judging code quality is a search for "datas" or "metadatas".

jsemrau
1 replies
13h44m

LCEL is such a weird paradigm that I never got the hang of it. Why | use | pipes?

elbear
0 replies
13h9m

I found it weird as well to see that. I didn't know LangChain overrode Python syntax.

But if you're familiar with Linux/Unix, this should be familiar: you are piping the output of one function into the input of another.
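
The mechanism is just Python operator overloading. A toy version of the idea (not LangChain's actual implementation):

    class Runnable:
        def __init__(self, fn):
            self.fn = fn

        def __or__(self, other):
            # a | b -> a new Runnable that runs a, then feeds its output to b
            return Runnable(lambda x: other.fn(self.fn(x)))

        def invoke(self, x):
            return self.fn(x)

    shout = Runnable(str.upper) | Runnable(lambda s: s + "!")
    print(shout.invoke("hello"))  # HELLO!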

deckar01
1 replies
21h45m

I recently unwrapped linktransformer to get access to some intermediate calculations and realized it was a pretty thin wrapper around SentenceTransformer and DBScan. It would have taken me so much longer to get similar results without copying their defaults and IO flow. It’s easy to take for granted code you didn’t have to develop from scratch. It would be interesting if there was a tool that inlined dependency calls and shook out unvisited branches automatically.

luke-stanley
0 replies
20h34m

From memory, I recall Vulture might do something like that!

clarionbell
1 replies
5h3m

The LangChain approach struck me as interesting, but I never really saw much inherent utility in it. For our production code we went with direct use of the LLM runtime libraries, and it was more than enough.

randomdata
0 replies
4h39m

We've had success with non-developers using some of the visual tools built on top of LangChain to build exploratory models in order to prove a concept. LangChain does seem well suited to providing the "backend" for that type of visual node-based modelling.

Of course, once the model is proven it is handed off to developers to build something more production-worthy.

bastawhiz
1 replies
1h43m

Genuine question: can someone point me to a use case where langchain makes the problem easier to solve than using the openai/anthropic/ollama SDKs directly? I've gotten a lot of advice to use langchain, but the docs haven't really shown me how it simplifies the task, or at least not more than using an SDK directly.

I really want to at least understand when to use this as a tool but so far I've been failing to figure it out. Some of the things that I tried applying it for:

- Doing a kind of function calling (or at least, implementing the schema validation) for non-gpt models

- parsing out code snippets from responses (and ignoring the rest of the output)

- Having the output of a prompt return as a simple enum without hallucinations

- process a piece of information in multiple steps, like a decision tree, to create structured output about some text (is this a directory listing or a document with content? What category is it? Is it NSFW? What is the reason for it being NSFW?)

Any resources are appreciated

starik36
0 replies
1h26m

It makes it simple (and uniform) to switch providers.

StrauXX
1 replies
9h6m

I don't like LangChain that much either. It's not as bad as LlamaIndex and Haystack in regard to extreme overengineering and overabstracting, but it is still bad. The reason I still use LangChain is that oftentimes I need to be able to swap out LLM service providers, embedding models and so on for clients. That's really the only part of LangChain that works well.

Btw, you don't have to actually chain LangChain entities; you can use all of them directly. That makes the magic-framework-code issue much more tolerable, as LangChain turns from a framework into a library.
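
E.g., a sketch of the chain-free usage (the packages and model names here are illustrative):

    from langchain_openai import ChatOpenAI
    from langchain_anthropic import ChatAnthropic

    def summarize(model, text: str) -> str:
        # same .invoke() interface regardless of provider
        return model.invoke(f"Summarize in one sentence:\n{text}").content

    model = ChatOpenAI(model="gpt-4o-mini")
    # model = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # drop-in swap
    print(summarize(model, "LangChain turns from a framework into a library."))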

sramam
0 replies
2h16m

Have you considered LiteLLM?

zby
0 replies
12h13m

I am always suspicious of frameworks, for two reasons. The first is that, because of the inversion of control, they are more rigid than libraries. This is quite fundamental, but there are cases where the trade-off is totally worth it. The second comes from how they are created: it often starts with an application which is then gradually made generic. This is good for advertising (you can always show how useful the framework is via an application that uses it), but this "making it generic" is a very tricky process that often fails. It is top-down: the authors need to imagine possible uses and then enable them in the framework, while with libraries the users have much more freedom to discover uses in a bottom-up process. Users always have surprising ideas.

There are now libraries that cover some of the features of LangChain: there is Instructor, and my LLMEasyTools, for function calling, and there is LiteLLM for API unification.

zackproser
0 replies
2h24m

Here's a real world example of a custom RAG pipeline built with Langchain

https://zackproser.com/chat

I did a full tutorial with source code that's linked at the top of that page ^

Fwiw I think it's a good idea to build with and without Langchain for deeper understanding.

xyst
0 replies
1h3m

Never been a fan of ORM for databases. So why would that change with AI/LLM “prompt engineering”? Author confirms my point.

wouldbecouldbe
0 replies
19h6m

Everyone in my office is talking about AI agents as a magic bullet; it's driving me crazy.

whitej125
0 replies
3h36m

I used LangChain early on in its life. People crap on their documentation, but at that point in time at least I had no problem with it. I like reading source code, so I'd find myself reading the code for further comprehension anyway. In my case, I'm a seasoned engineer who was discovering LLMs, and LangChain suited that way of learning pretty well.

When it came to building anything real beyond toy examples, I quickly outgrew it and haven't looked back. We don't use any LC in production. So while LC does get a lot of hate from time to time (as you see in a lot of peer posts here), I do owe them some credit for helping bridge my learning of this domain.

te_chris
0 replies
11h49m

The thing that blows my mind is that this wasn't obvious to them when they first looked at LangChain.

spullara
0 replies
12h53m

Every good developer I know that has started using LangChain stopped after realizing that they need more control than it provides. If you actually look at what is going on under the hood by inspecting the requests, you would probably stop using it as well.

seany62
0 replies
4h18m

Glad to see I'm not the only one experiencing this. The agents framework I use is moving very fast, and it's not uncommon for even minor versions to break my current setup.

sandGorgon
0 replies
11h24m

Shameless plug: I built a JS/TS framework which tries to solve the abstraction problem. We use a JSON variant called Jsonnet (created at Google; expressive enough for Kubernetes).

https://github.com/arakoodev/EdgeChains/tree/ts/JS/edgechain...

Examples of these Jsonnet ReAct CoT chains: https://github.com/arakoodev/EdgeChains/blob/ts/JS/edgechain...

P.S. We also built a WebAssembly compiler that compiles this down to WASM and deploys on hardware.

sabrina_ramonov
0 replies
45m

You used langchain for a simple replacement of OpenAI API calls — of course it will increase complexity for no benefit.

The benefits of LangChain are (1) a unified abstraction across multiple different models and (2) being able to plug this coherently into one architecture.

If you’re just calling some OpenAI endpoints, then why use it in the first place?

nprateem
0 replies
11h36m

Wasn't it obviously pointless from the outset? Posts like this raise questions about the technical decisions of the company more than anything else, IMO. Strange that they'd want to publicise making such poor decisions.

monarchwadia
0 replies
4h46m

I'm the author of Ragged, a lightweight connector that makes it easy to connect to and work with language models. Think of it like an ORM for LLMs: a unified interface designed to make it easy to work with LLMs. Just wanted to plug my framework in case people are looking for an alternative to building their own connector components.

https://monarchwadia.medium.com/use-openai-in-your-javascrip...

maximilianburke
0 replies
20h4m

I just pulled out LangChain from our AI agents; we now have much smaller docker images and the code is a lot easier to understand.

mark_l_watson
0 replies
5h17m

I was an early enthusiast of both LangChain and LlamaIndex (and I wrote a book using both frameworks, free to read online [1]), but I had some second thoughts when I started writing LLM examples for my Common Lisp and Racket books that were framework-free, even writing simple vector data stores from scratch. This was, frankly, more fun.

For my personal LLM hacking in Python, I am starting down the same path: writing simple vector data stores in NumPy, write my own prompting tools and LLM wrappers, etc.

I still think that for many developers LangChain and LlamaIndex are very useful (and I try to keep my book up to date), but I usually write about things of most interest to me and I have been thinking of rewriting a new book on framework-free LLM development.

[1] https://leanpub.com/langchain/read

jostmey
0 replies
3h37m

Learning LangChain takes effort, but not as much as truly understanding deep learning, so you learn LangChain and it feels like progress, when it may not be.

iknownthing
0 replies
2h46m

I tried LangChain a while ago for a RAG project. I liked how I could just plug into different vector stores to try them out. But I didn't understand the need for the abstractions around the API calls. It's not that hard to just call these APIs directly, and it's not that hard to create whatever prompt you'd like.

hcks
0 replies
2h21m

LangChain is a critical thinking test and orgs using it are ngmi

greo
0 replies
8h29m

I am not a fan of LangChain. And I would never use it for any of my projects.

LLM is already a probabilistic component that is tricky to integrate into a solid deterministic system. An abstraction wrapper that bloats the already fuzzy component just increases the complexity for no apparent benefit.

gravenate
0 replies
16h53m

Hard agree. Semantic Kernel, on the other hand, seems to actually be a value-add on top of the simple API calls. Have you guys tried it?

gexaha
0 replies
6h19m

that's a nice AI image with octopi

fragebogen
0 replies
10h5m

I'd challenge some of these criticisms and give my 2c on this. I've spent the last 6 months working on a rather complex chat system with routes, agents, bells and whistles. Initially, time to POC was short, so I picked LangChain to get on my feet quickly. Eventually I thought: the code base isn't enormous, I can easily rewrite it, but I'd like to see what people mean with "abstraction limiting progress" kinds of statements. I've now kept building this project for another 6 months, and I must say, the more I work with it, the more I understand and appreciate its philosophy.

It's not that complicated. The philosophy is just different from many other Python projects. The LCEL pipes, for example, are a really nice way to think about modularity. Want to switch out one model for another? Just import another model and replace the old one. Want to parse more strictly? Exchange the parser. The fact that everything is an instance of `RunnableSerializable` is a really convenient way of making things truly modular. Want to test your pipe synchronously? Easy, just use `.stream()` instead of `.astream()` and get on with it.

I think my biggest hurdle was understanding how to debug and pipe components, but once I got familiar with it, I must say it made me grow as a Python dev and appreciate the structure and thought behind it. Where complexity arises is when you have a multi-step setup, some sync and some async. I've had to break some of these steps up in code, but otherwise it gives me tons of flexibility to pick and choose components.
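
For instance, the same pipe can be consumed either way (a sketch, given a `chain` built from LCEL components):

    import asyncio

    # synchronous streaming
    for chunk in chain.stream({"text": "..."}):
        print(chunk, end="")

    # the identical pipeline, consumed asynchronously
    async def main():
        async for chunk in chain.astream({"text": "..."}):
            print(chunk, end="")

    asyncio.run(main())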

My only real complaint would be the lack of documentation and outdated documentation. I'm hardly the only one to say so, but it really is frustrating sometimes trying to understand what some niche module can and cannot do.

empiko
0 replies
13h8m

This echoes our experience with LangChain, although we abandoned it before putting it into production. We found that for simple use cases it's too complex (as mentioned in the blog), and for complex use cases it's too difficult to adapt. We were not able to identify the sweet spot where it's worth using. We felt we could easily code most of its functionality ourselves, quickly and in a way that fits our requirements.

d4rkp4ttern
0 replies
5h47m

Frustration with LangChain is what led us (ex-CMU/UW-Madison researchers) to start building Langroid[1], a multi-agent LLM framework. We have been thoughtful about designing the right primitives and abstractions to enable a simple developer experience while supporting sophisticated workflows using single or multiple agents. There is an underlying loop-based orchestration mechanism that handles user interaction, tool handling and inter-agent handoff/communication.

We have companies using Langroid in production.

[1] Langroid: https://github.com/langroid/langroid

czechdeveloper
0 replies
12h13m

I used LangChain in one project, and I do regret choosing it over just writing everything against the API directly. I feel their pain.

It had the advantage of a standardized API, so I could switch a local LLM to OpenAI and compare results in a heartbeat, but when I wanted anything out of the ordinary (i.e. getting logprobs), there was just no way.

cyanydeez
0 replies
20h20m

In some sense, this could be retitled "We no longer use training wheels on our bikes"

codelion
0 replies
5h38m

Many such cases. It is very hard to balance composition and abstraction in such frameworks and libraries, and LLMs being so new, it has taken several iterations to get the right patterns and architecture for building LLM-based apps. With patchwork (https://github.com/patched-codes/patchwork), an open-source framework for automating development workflows, we try hard to avoid this by not abstracting unless we see some client usage. As a result some workflows appear longer, with many steps, but it makes them easier to compose.

bratbag
0 replies
3h36m

I made the same choice for our stack last year.

We initially had problems diagnosing issues inside LangChain and were hitting weird issues with some elements of function calling, so we experimented with a manual reconstruction of exactly what we needed and it was faster, more resilient and easier to maintain.

I can see how switching models might be easier using LangChain as an abstraction layer, but that doesn't justify making everything else harder.

andrewfromx
0 replies
19h14m

"When abstractions do more harm than good" I'll take this for $2000 please and if i get the daily double, bet it all.

_pdp_
0 replies
9h11m

We also built our own system that caters for our customers' needs.

__loam
0 replies
14h27m

Langchain has always been open source and has always sucked. I'm shocked anyone still uses it when you can see it for yourself.

ZiiS
0 replies
12h10m

The "good abstraction" has a bug; slightly undermines the argument.

Turskarama
0 replies
10h53m

This is so common I think it could just about be a lemma:

Any tool that helps you get up and running quicker by abstracting away boilerplate will eventually get in the way as your project's complexity increases.

Oras
0 replies
9h58m

The comments are a good example that hype > quality.

99% of docs mention LangChain or show a code example with LangChain. Wherever you look, tutorials or YouTube videos, you will see LangChain.

They take the credit for being the first framework to abstract LLM calls and other features, such as reading data from multiple sources (before function calling was a thing).

LangChain was first and got popular, hence newcomers think it's the way, until they use it.

Kydlaw
0 replies
20h28m

IMO LangChain provides very high level abstractions that are very useful for prototyping. It allows you to abstract away components while you dig deeper on some parts that will deliver actual value.

But aside from that, I don't think I would run it in production. If something breaks, I feel like we would be in a world of pain to get things back up and running. I am glad they shared their experience on that, this is an interesting data point.

JSDevOps
0 replies
6h56m

The dude on that blog is trying way too hard to look like Sam Altman which is fucking weird.

Havoc
0 replies
9h2m

There was a Reddit thread in the LangChain sub a while back basically saying exactly this (plus the same comments as here).