I run a startup that does legal contract generation (contracts written by lawyers turned into templates) and have done some work using GPT to analyze contracts so laypersons can interact with and ask questions about the contract they are getting.
In terms of contract review, what I've found is that GPT is better at analyzing a document than generating one, which is what this paper supports. However, I have used several startups' AI document review offerings and they all fall apart under any sort of prodding for specific answers. This paper looks like it only had to locate the relevant section, not sustain the back-and-forth conversation about the contract that a lawyer and client would have.
There is also no legal liability for GPT for giving the wrong answer. So it works well for someone smart who is doing their own research, just as a smart person could previously use Google to do their own research.
My feeling on contract generation is that in the majority of cases, people would be better served if there were simply better boilerplate contracts available. Lawyers hoard their contracts, and in our journey it was very difficult to find lawyers willing to write contracts we would turn into templates, because they are essentially putting themselves and their professional community out of future income streams. But people don't need a unique contract generated on the fly by GPT every time when a template of a well-written and well-reviewed contract does just fine. It cost hundreds of millions to train GPT-4. If $10m were instead spent building a repository of well-reviewed contracts, it would be more useful than spending the equivalent money training a GPT to generate them.
People ask a pretty wide range of questions about what they want to do with their documents, and GPT didn't do a great job with them, so for the near future it looks like lawyers still have a job.
I notice the same thing in other professions, especially those that require a huge upfront investment in education.
For example (at least where I live), there was a time about 20 years ago when architects also didn't want to produce designs that would then be sold to multiple people for cheap. The thinking was that this reduces the market for architectural output. But of course it is easy to see that most people do not really need a unique design.
So the problem solved itself, because the market does not really care: the moment somebody is able to compile a small library of usable designs and a usable business model, as an architect you can either cooperate to salvage what you can, or lose.
I believe the same is coming for lawyers. Lawyers will live through some harsh times while their easiest and most lucrative work gets automated; the market for their services is going to shrink, and whatever work is left for them will be of the more complex kind that the automation can't handle.
I think you greatly underestimate this group's ability to retain their monopoly position. A huge chunk of politicians are lawyers, and most legal jurisdictions have hard requirements around what work you must have a lawyer perform. These tools may make their practices more efficient internally, but that doesn't mean the value is being passed on to the consumer of the service in any way. They're a cartel, and one with very close relationships with country leadership. I don't see this golden goose getting cooked any time soon.
I think what you are missing is businesses doing what businesses have always done: finding a niche for themselves to make a good profit.
When you can hire fewer lawyers and get more work done, cheaper, and at the same (or better) quality, you are going to upend the market for lawyering services.
And this does not require replacing lawyers. It is enough to equip a lawyer with a set of tools that help them quickly do the typical tasks they are already doing.
I work a lot with lawyers, and a lot of what they do is going to be stupidly easy to optimise with AI tools.
Please elaborate with some examples of what legal work you think will be optimized with AI tools.
Sometimes it feels like people look at GPT and think "This thing does words! Law is words! I should start a company!" but they actually haven't worked in legal tech at all and don't know anything about the vertical.
It's a logical reaction, at least superficially, to the touted capabilities of Gen AI and LLMs. But once you start trying to use the tech for actual legal applications, it doesn't do anything useful. It would be great if some mundane legal tasks could be automated away--for example negotiation of confidentiality agreements. One would think that if LLMs are capable of replacing lawyers, they could do something along those lines. But I have not seen any evidence that they can do so effectively, and I have been actively looking into it.
One of the top comments on this thread says that LLMs are going to be better at summarizing contracts than generating them. I've heard this in legal tech product demos as well. I can see some utility in that--for example, automatically generating abstracts of key terms (like term, expiration, etc.) for high-level visibility. That said, I've been told by legal tech providers that LLMs don't do a great job with some basic things, like total contract value.
I question how the document summarizing capabilities of LLMs will impact the way lawyers serve business organizations. Smart businesspeople already know how to read contracts. They don't need lawyers to identify / highlight basic terms. They come to lawyers for advice on close calls--situations where the contract is unclear or contradictory, or where there is a need for guidance on applying the contract in a real-world scenario and assessing risk.
Overall I'm less enthusiastic about the potential for LLMs in the legal space than I was six months ago. But I continue to keep an eye on developments and experiment with new tools. I'd love to get some feedback from others on this board who are knowledgeable.
As a side note, I'm curious if anyone knows about the impact of the context window on contract interpretation: a lot of contracts are quite long and have sections, separated by a lot of text, that nonetheless interact with each other for purposes of a correct interpretation.
I think one of the biggest problems with LLMs is the accountability problem. When a lawyer tells you something, their reputation and career are on the line. There's a large incentive to get things right. LLMs will happily spread believable bullshit.
Lawyers are legally liable to their clients for their advice, it's a lot more than just reputation and career.
In fairness some lawyers will too, haha. I take your point, though. Good lawyers care about their reputation and strive to protect it.
A friend of mine is a highly ranked lawyer, a past general consul of multiple large enterprises. I sent him this paper, he played with ChatGPT-3.5 (not even GPT-4) and contract creation, he said it was 99% fine and then told me he's glad he is slowly retiring from law and is not envious of any up-and-coming lawyers entering the profession. One voice from the vertical.
General Consul? Is he a Roman general?
That’s why he exited the market - it’s tough out there for Roman consuls and other Latin-based professions generally.
People have started companies and succeeded for dumber reasons. Generalized "businessing" skills and placing yourself somewhere in the space where money is made count for much more than actually knowing anything about the specific product beforehand.
That's a very good point.
I would add that sometimes being a newcomer is a benefit. Many times a particular industry is stuck in groupthink, with a shared understanding of what is and isn't possible. It sometimes takes a person with a fresh perspective to upend it. See the example of Elon upending multiple industries by doing essentially that.
https://xkcd.com/1570/
Lots of corporate and government law work.
It’s just like programmers and artists. As the tools improve, you’ll need fewer, smarter humans.
Yeah, lawyers will go away as soon as doctors do.
AI has outperformed radiologists for a while now, and I don't care how much better AI performs, radiologists aren't going away.
Radiologists get to decide the laws of the field essentially.
Why would they vote to kick themselves out of the highest paying job in the world?
Like the medical industry, the legal industry is designed around costing you as much as possible - not really anything related to your benefit.
I just can't see disruption here. The industry will fight it tooth and nail at every chance.
I'm a doctor and I've been following this saga for a while. What you wrote does not match my experience. Firstly, you overestimate our political reach by a large margin. If you can have acceptable service for massively cheaper, it will happen regardless of any lobbying. What _is_ true is that AI systems outperform docs in very select, often trivial situations (e.g. a routine chest X-ray). I don't believe this will shrink the market for radiologists significantly. Secondly, the non-trivial work is exceedingly difficult to automate, because those cases currently have a prevalence-to-complexity ratio that makes them impossible to train for. The US is making great strides towards that kind of thing, though. But it's not available yet.
If you're counting mammograms as routine chest X-rays, AI has been able to outperform and do this substantially cheaper since 2019.
Yet this is not an option for patients, and likely won't be in the next 10 years.
What is a realistic compromise?
Do you work in the medical industry? (To be frank, it sounds like you're spewing BS about a topic you have no intimate knowledge of - seems rather naive, actually.)
Refer me to the evidence where AI is outperforming radiologists in the entire body of cross-sectional imaging. Are you seriously claiming that there is AI technology today that can take a brain MRI or CT A/P and produce a full findings and impression without human intervention? You have a reference for that?
Lawyers are uniquely well equipped to legislate their continued employment into existence.
Lawyers, yes. Junior associates, not so much.
Kinda? Lawyers are myopic, vain, and we don't really do much to innovate.
We wanted to make sure there would be no cross-pollination between legal advisory services and other professional services in most jurisdictions, but the only thing that division did was significantly restrict our ability to widen our service offerings to provide more value.
The end result is that we protected our little nest egg while our share of the professional services pie has been getting eaten by consulting and multi-service accounting firms for the past 20 years.
Lawyers seem to be the prime group to prevent this outcome using some kind of regulation. Many politicians are lawyers.
They were lawyers, they aren't still practicing attorneys representing clients.
They moved into the much more lucrative career of representing corporations.
Doctors, for instance. You hear no end of stories about how incredibly high-pressure medicine is, with insane hours and stress, but will they increase university placements so hospitals can actually hire enough trained staff to meet the workload? Absolutely fkn not; that would impact salaries.
University placements aren’t the problem. Medical residencies are funded through Medicare and have been numerically capped. You could graduate a million MDs a year and if none of them have a training pipeline, we still lose.
I have no problem with your logic so long as it is applied uniformly to all of society. By that I mean society must abolish copyright, patents and all forms of intellectual property. Otherwise I'll be forced to defend lawyers in this case. Why can people hoard "IP" but lawyers can't hoard contracts?
I don't think that is the point he is making.
More that lawyers will hoard contracts but there will be financial incentives to defect early (the idea being to sell your contracts while the price is high).
Then that lawyer's contracts will become the mass-produced "good enough" boilerplate and be used widely, driving down the value of hoarding contracts at all.
So, you will get the template for free. And then a lawyer will have to put it on their letterhead, and they will charge you exactly what they charge right now, because that will be made a requirement.
No. As a business owner you will hire a couple of lawyers and give them a bunch of programs to automate searching through texts, answering questions, and writing legalese from a human description of what the text is supposed to do. In my experience, these three are the great majority of the work. The two people you hire will now perform like ten people did without the tools. Then you will use part of that saved money to reduce prices, and if you are business-savvy, you will use the rest to research the automation further.
Then another business that wants to compete with you will no longer have an option, they will have to do this or more to be able to stay in the business at all.
IIRC about 40% of us politicians are lawyers, unfortunately I’m sure they will find a way to gatekeep these revenue streams for their peers.
I’m assuming by the use of “us” and “they” you meant US - not that you are a politician.
“easiest and most lucrative work”
I think this overlooks a big part of how the legal market works. Our easiest work is only lucrative because we use it to train new lawyers, who bill at a lower rate. To the extent the easy stuff gets automated, 1) it’s going to be impossible to find work as a junior associate and 2) senior attorneys will do the same stuff they did last year. If there’s a decrease in prices for a while, great, but a generation from now it’s going to be a lot harder to find someone knowledgeable because the training pathway will have been destroyed.
It was my understanding that there is also no legal liability for a lawyer for giving the wrong answer. In extreme cases there might be ethical issues that result in sanctions by the bar, but in most cases the only consequences would be reputational.
Are there circumstances where you can hold an attorney legally liable for a badly written contract?
There is plenty of legal, ethical, and professional liability for a lawyer giving the wrong answer. We don't often see the outcome of these things because, like everything in the courts, they take a long time to get resolved, and also some answers are not wrong, just "less right" or "not really that wrong."
I think the reason you rarely see the outcomes is because those disputes are typically resolved through mediation and/or binding arbitration, not in the courts.
Look at your most recent engagement letter with an attorney. I’d bet that you agreed to arbitrate all fee disputes, and depending on your state you might have also agreed to arbitrate malpractice claims.
I believe all practicing attorneys carry malpractice insurance as well as E&O (errors and omissions) insurance. I think one of those would "cover" the attorney in your example, but obviously insurance doesn't prevent poor Google reviews, nor would it protect the attorney from anything done in bad-faith (ethical violations), or anything else that could land an attorney before the state bar association for a disciplinary hearing.
Nit: Malpractice insurance is (a species of) E&O insurance.
It's called malpractice
I mean, sure, if the attorney is operating below the usual standards of care -- it's exceptionally uncommon in the corporate world, but not unheard of. In the case of AI assistance, you run into situations where a company offering AI legal advice direct to end-users is either operating as an attorney without licensing, or, if an attorney is on the nameplate, they're violating basic legal professional responsibilities by not reviewing the output of the AI (if you do legal process outsourcing -- LPO -- there's a US-based attorney somewhere in the loop who's taking responsibility for the output).
About the only case where this works in practice is someone going pro se and using their own toolset to gin up a legal AI model. There's arguably a case for acting as an accelerator for attorneys, but the problem is that if you've got an AI doing, say, doc review, you still need lawyers to review not just the output for correctness, but also go through the source docs to make sure nothing was missed, so you're not saving much in the way of bodies or time.
Yes. If the drafted language falls below reasonable care and damages the client, absolutely you can be sued for malpractice.
Wrong Word in Contract Leads to $2M Malpractice Suit[1].
[1]https://lawyersinsurer.com/legal-malpractice/legal-malpracti...
I recently used Willful to create a will and was pretty disappointed with the result. The template was extremely rigid on matters that I thought should have been no-brainers to be able to express (if X has happened, do Y, otherwise Z) and didn't allow for any kind of property division other than percentages of the total. It was also very consumed with several matters that I don't really feel that strongly about, like the fate of my pets.
I was still able to rewrite the result into something that more suited me, but for a service with a $150 price tag I kind of hoped it would do more.
Our philosophy at GetDynasty is that the contract (in our case estate planning documents) itself is a commodity which is why we give it away for free. Charging $150 for a template doesn't make sense.
Our solution, as you point out, is more rigid than having a lawyer write it, but for the majority of people having something accessible and free is worth it, and then layering services on top makes the most sense. It is easier to have a well-written contract whose features or sections you can "turn on and off" than to try to have GPT write a custom contract for you.
Interestingly, this is almost exactly how I draft a contract as an attorney. Westlaw has tons of what you might call “templates” but which have tons of information concerning when and why a client might need a certain part of the template.
The difference is that when Westlaw presents me with a decision point and I choose the wrong option for my client, my client sues my insurer and is made whole. (My premiums increase accordingly).
If you make the wrong choice in your choose-your-own-legal-adventure, you lose.
(For some contracts, this is probably the right approach).
I applaud the efforts to give away free documents like this. That is actually what happens when you have a lawyer do it: you pay pretty much nothing for the actual contract clauses to be written (they start with a basic form and form language for all of the custom clauses you may want), but you pay a lot for them to be customized to fit your exact desires and to ensure that your custom choices all work.
The idea of "legalzoom-style" businesses has always seemed like a bamboozle to me. You pay hundreds of dollars for essentially the form documents to fill in, and you don't get any of the flexibility that an actual lawyer gives you.
As another example, Northwest Registered Agent gives you your corporate form docs for free with their registered agent services.
Knowing someone who works in Trusts & Estates, that is terrible. I've often heard complaints about drafting by percentages of anything but straight financial assets with an easily determined value, because anything else requires appraisals. Yes, there are mechanisms to work it out in the end, but it is definitely better to be able to say $X to Alice, $Y to Bob, and the remainder to Claire.
You have to think of not only what you want, but how the executors will need to handle it. We all love complex formulae, but we should use our ability to handle complexity to simplify things for the heirs - it's a real gift in a bad time.
Heh, okay I guess what I wanted was going to end up as the worst of both— fixed amounts off the top to some particular people/causes, and then the remainder divided into shares for my kids.
I guess there's an understanding that being an executor is a best-effort role, but maybe you could specifically codify that +/-5% on the individual shares is fine, just to take some of the burden off needing it to be perfect, particularly if there are payouts occurring at different times and therefore some NPV considerations in play.
Pet trusts [1]! My lawyer literally used their existence, which I find adorable, to motivate me to read my paperwork.
[1] https://www.aspca.org/pet-care/pet-planning/pet-trust-primer
"There is also no legal liability for GPT for giving the wrong answer."
I mean, I get your point, but let's be real: I cannot count the number of times I sat in a meeting, looked back at a contract, and wished some element had a different structure. In law there are a lot of "wrong answers" someone could foolishly provide, but it's far more often a question of how "wrong" the answer is than a binary good/bad piece of advice.
I personally feel the ability to have more discussion about a clause is extremely helpful, versus getting a hopefully "right answer" from a lawyer while counting the clock/$ as you try to wrap your head around the advice you're being given. If you have deep pockets, you invite your lawyer to a lot of meetings, they have context, and away you go... but for a lot of people, you're just involving the lawyer briefly and trying to avoid billable hours. That's been me at the early stage of everything, and it's a very tricky balance.
If you're a startup trying to use GPT, I say do it, but also use a lawyer. Augmenting the lawyer with GPT to save billable hours, so you can turn up to a meeting with your lawyer and extract the most value in the shortest time, seems like the best play to me.
You can read my other reply which agrees with you that law is a spectrum rather than a binary.
The only way to have something "bulletproof" is to have experience with the ways in which it can go wrong. It's just like writing a program: the "happy path" is rather obvious, but then you have to come up with all the different attack vectors and use cases in which the program can fail.
The same is true of lawyers. Lawyers at big firms have the experience of the firm to guide them on what to do and what they should include in a contract. A small-town family lawyer might have no experience in what you ask them to do.
Which is why I advocate for more standardized agreements as opposed to one-off generated agreements (whether by GPT or a lawyer). Think of the Y Combinator SAFE: it made a huge impact on financing because it was standardized and there were really no terms to negotiate, compared to the world before, in which the terms of convertible notes were complex and had to be scrutinized and negotiated.
The issue is that a lot of lawyers have a conflict of interest and a "not invented here" way of doing business. If you have a trust, for instance, written by one lawyer and bring it to another lawyer, the majority of lawyers we talked to would actually prefer to throw out the document and use their own. This method works well if you are a smart, savvy person, but for the population at large, people have some crazy and weird ideas about how the law works and need to be talked out of what they want into something more sane.
Another common lawyer response, besides "It depends," is "Well, you can, but why would you want to?" So many people have a skewed view of what they want, and part of a lawyer's job is interpreting what they really want and guiding them down that path.
So the hybrid method really only works if you find a lawyer who accepts whatever crazy terms you came up with and is willing to work with what you generated.
That's all very reasonable, thanks for taking the time to reply!
When I suggest going down a hybrid path, I mostly mean use GPT on your own (disclose this to your lawyer at your own risk) as a means to understand what they're proposing. I've spent so many hours asking questions to clarify why something is done a certain way, and most of that is about understanding the language and trying to rationalize the perspective the lawyer has taken. I feel I could probably have done a lot of that on my own time, just as fast, if GPT had been around during those moments. And then, of course, I would have confirmed at the end that my understanding aligned with the lawyer's.
I need to be upfront: I really don't know that I'm right here... it's just a hunch and a gut reaction to how I'd behave in the present moment, but I find myself using AI more and more to get up to speed on issues that are beyond my current skill level. This makes me think law is probably another good area in which to augment my own disadvantages, in that I have a very limited understanding of the rules and exceptional scenarios that might come up. I also often find myself on the edge of new issues, trying to structure solutions that don't exist or are intentionally different from present solutions... so that means a lot of explaining to the lawyer and subsequently a fair bit of back and forth on the best way to progress.
It's a fun time to be in tech, I'm hoping things like GPT turn out to be a long term efficiency driver, but I'm fearful about the future monetization path and how it'll change the way we live/work.
If you need a GPT to explain your lawyer's explanations to you, you need a new attorney.
Eh, no... I mean, maybe... I honestly feel the issue is me. I always want a lot of detail, and that can become expensive. Sometimes the detail I want is more than I needed, but I don't know that until after I've asked the question.
If your attorney is not adequately explaining things to you or providing you with resources to understand things he does not need to spend his time explaining to you, then you need a new attorney.
Hi hansonkd,
I'm working on Hotseat - a legal Q&A service where we put regulations in the hot seat and let people ask sophisticated questions. My experience aligns with your comment that vanilla GPT often performs poorly when answering questions about documents. However, if you combine focused effort on squeezing out GPT's performance with careful product design, you can go pretty far.
I wonder if you have written about specific failure modes you've seen in answering questions from documents? I'd love to check whether Hotseat handles them well.
If you're curious, I've written about some of the design choices we've made on our way to creating a compelling product experience: https://gkk.dev/posts/the-anatomy-of-hotseats-ai/
Thanks for the response. I will check it out.
Specific failure modes can be something as simple as extracting beneficiary information from a trust document. Sometimes it works, but a lot of the time it doesn't, even with startups whose AI products are specifically for extracting information from documents. For example, the tool will produce an incomplete list of beneficiaries, or if there are contingent beneficiaries, it won't know what to do. And that's not even a hard question about the contingency, just a simple list of percentages for the distribution if no one dies.
Furthermore, trying to get an AI to describe the contingency is a crapshoot.
While I expect these options to get better and better, I have fun trying them out and seeing what basic thing will break. :)
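To make the failure concrete, here is a minimal sketch of the kind of extraction I'd want to work reliably, assuming the OpenAI Python client; the JSON schema, model name, and sanity check are my own illustration, not any vendor's actual product:

    import json
    from openai import OpenAI

    client = OpenAI()

    SCHEMA_HINT = ('Return JSON only: {"beneficiaries": '
                   '[{"name": str, "share_percent": float, "contingent": bool}]}')

    def extract_beneficiaries(trust_text: str) -> list[dict]:
        resp = client.chat.completions.create(
            model="gpt-4o",  # any capable chat model
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract every beneficiary, primary and contingent, "
                            "from the trust below. " + SCHEMA_HINT},
                {"role": "user", "content": trust_text},
            ],
        )
        data = json.loads(resp.choices[0].message.content)
        # Cheap sanity check for the "incomplete list" failure mode:
        # the primary shares should account for the whole distribution.
        total = sum(b["share_percent"] for b in data["beneficiaries"]
                    if not b["contingent"])
        if abs(total - 100.0) > 0.01:
            raise ValueError(f"primary shares sum to {total}, not 100")
        return data["beneficiaries"]

The schema-plus-check pattern at least turns silently missing beneficiaries into a loud error instead of a plausible-looking partial list.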
Thanks for the response! I'm not familiar with Trust documents but I asked ChatGPT about them: https://chat.openai.com/share/c9d86363-b64a-4e44-9fd4-1d5b18...
If the example is representative, I see two problems: simple extraction of information that is laid out in plain sight (the list of beneficiaries), and reasoning to interpret the section on contingent beneficiaries and connect it to facts from other parts of the document. Is that correct?
If that's the case, then Hotseat is miles ahead when it comes to analyzing regulations (from the civil law tradition, which is different from the US), and dealing with the categories of problems you mentioned.
Your post is very interesting. Thanks for sharing.
If your focus is narrow enough, vanilla GPT can still provide good enough results. We narrow down the scope for the GPT and ask it to answer binary questions. With that we get good results.
Your approach is better for supporting broader questions. We support that as well and there the results aren’t as good.
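For illustration, a minimal sketch of the binary-question pattern I'm describing, assuming the OpenAI Python client; the prompt wording, model name, and example question are made up:

    from openai import OpenAI

    client = OpenAI()

    def ask_binary(clause: str, question: str) -> bool:
        # Ask a strictly yes/no compliance question about a single clause.
        resp = client.chat.completions.create(
            model="gpt-4o",
            temperature=0,  # keep answers stable across review runs
            messages=[
                {"role": "system",
                 "content": "Answer with exactly one word: YES or NO."},
                {"role": "user",
                 "content": f"Clause:\n{clause}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    # e.g. ask_binary(clause, "Does this clause cap liability at 12 months of fees?")

Forcing the one-word answer is what makes the results easy to audit in bulk.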
Thanks for reading it! I agree that binary questions are easy enough for vanilla GPT to answer. If your problem space fits them - great. Sadly, the space I'm in doesn't have an easy mode!
How do you think organizations can best use the contractual interpretations provided by LLMs? To expand on that, good lawyers don't just provide contractual interpretations, they provide advice on actions to take, putting the legal interpretation into the context of their client's business objectives and risk profile. Do you see LLMs / tools based on LLMs evolving to "contextualize" and "operationalize" legal advice?
Do you have any views on whether the context window limits the ability of LLMs to provide sound contractual interpretations of longer contracts that have interdependent sections far apart in the document?
Has your level of optimism for the capabilities of LLMs in the legal space changed at all over the past year?
You mentioned that lawyers hoard templates. Most organizations you would have as clients (law firms or businesses) have a ton of contracts that could be used to fine-tune LLMs. There are also a ton of freely available contracts on the SEC's website. There are also companies like PLC, Matthew Bender, etc., that create form contracts and license access to them as a business. Presumably some sort of commercial arrangement could be worked out with them. I assume you are aware of all of these potential training sources, and am curious why they were unsatisfactory.
Thanks for any response you can offer.
Not OP, but someone who currently runs an AI contract review tool.
To answer some of your questions:
- Contract review works very well for high-volume, low-risk contract types. Think SLAs, SaaS... these are contracts commercial legal teams need to review for compliance reasons but hate doing.
- It's less good for custom contracts.
- What law firms would benefit from is simply natural language search over their own contracts.
- It also works well for due diligence. Normally lawyers can't review all the contracts a company has. With a contract review tool they can extract all the key data/risks.
- The LLM doesn't need to provide advice. The LLM can just identify whether x or y is in the contract. This improves the review process.
- Context windows keep increasing, but you don't need to send the whole contract to the LLM. You can just identify the right paragraphs and send those (see the sketch after this list).
- Things have changed a lot in the past year. It used to cost us $2 to review a contract; now it's $0.20. Responses are more accurate and faster.
- I don't do contract generation but have explored it. I think the biggest benefit isn't generating the whole contract but helping the lawyer rewrite a clause for a specific need. The standard CLMs already have contract templates that can be easily filled in. However, after the template is filled, the lawyer needs to add one or two clauses. Having a model trained on the company's documents would be enough.
Hope this helps.
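As promised above, a rough sketch of the "send only the right paragraphs" step. The keyword-overlap scoring here is a deliberately naive stand-in for illustration; in practice you'd likely swap in embeddings:

    import re

    def top_paragraphs(contract: str, question: str, k: int = 5) -> list[str]:
        # Split on blank lines, then rank paragraphs by relevance to the question.
        paras = [p.strip() for p in re.split(r"\n\s*\n", contract) if p.strip()]
        q_terms = set(re.findall(r"[a-z]+", question.lower()))

        def score(p: str) -> int:
            # Naive relevance: number of terms shared with the question.
            return len(q_terms & set(re.findall(r"[a-z]+", p.lower())))

        return sorted(paras, key=score, reverse=True)[:k]

    # Only the selected paragraphs, not the whole contract, go into the LLM prompt.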
Thanks. Appreciate your feedback.
Do you think LLMs have meaningfully greater capabilities than existing tools (like Kira)?
I take your point on low-stakes contracts vs. sophisticated work. There has been automation at the "low end" of the legal totem pole for a while. I recall that even ten years ago banks were able to replace attorneys with automation for standard form contracts. Perhaps this is the next step on that front.
I agree that rewriting existing contracts is more useful than generating new ones--that is what most attorneys do. That said, I haven't been very impressed by the drafting capabilities of the LLM legal tools I have seen. They tend to replicate instructions almost word for word (in plain English) rather than draw upon precedent to produce quality legal language. That might be enough if the provisions in question are term/termination, governing law, etc. But it's inadequate for more sophisticated revisions.
Didn't try Kira, but tried zuva.ai, which is a spin-off from them. We found that a standard LLM performed at the same level for the classification we needed. We didn't try everything, though. They let you train their model on specific contracts, and we didn't do that.
For rewriting contracts, keep in mind that you don't have to use the LLM to generate the text entirely from scratch. It is helpful if you can feed all of a law firm's contracts into a vector DB and help them find the right clause from previous contracts. Then you can have the LLM rewrite the template based on what was found in the vector DB.
Many lawyers still just use folders to organize their files.
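A minimal sketch of that clause-retrieval flow, assuming chromadb as the vector store; the collection name and example clauses are made up:

    import chromadb

    client = chromadb.Client()  # in-memory store, enough for a sketch
    clauses = client.create_collection("firm_clauses")

    # Index the firm's historical clauses once (chroma embeds them by default).
    clauses.add(
        documents=[
            "Either party may terminate on 30 days' written notice...",
            "Liability is capped at fees paid in the prior 12 months...",
        ],
        ids=["termination-01", "liability-cap-07"],
    )

    # At drafting time, find the closest precedent clause for what's needed.
    hits = clauses.query(query_texts=["termination for convenience"], n_results=1)
    precedent = hits["documents"][0][0]

    # `precedent` then goes into an LLM prompt such as
    # "Rewrite this clause for a 60-day notice period: ..." so the model
    # adapts the firm's own language instead of inventing new legalese.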
I'm in the energy sector and have been thinking of fine-tuning a local LLM on energy-specific legal documents, court cases, and other industry documents. Would this solve some of the problems you mention about producing specific answers? Have you tried something like that?
You're welcome to try, but we had mixed results.
Law in general is interpretation. The most "lawyerese" answer you can expect is "It depends." Technically, in the US everything is legal unless it is restricted, and then there are interpretations of what those restrictions are.
If you ask a lawyer whether you can do something novel, chances are they will give a risk assessment as opposed to a yes-or-no answer. Their answer typically depends on how well they think they can defend it in a court of law.
I have received answers from lawyers before that were essentially, "Well, it's a gray area. However, if you get sued, we have high confidence that we will prevail in court."
So outside of the more obvious cases, the actual function of law is less binary and more a gradient of defensibility, weighted by the confidence of the individual lawyer.
I spent a lot of time with M&A lawyers and this is 100% true. The other answer is "that's a business decision".
So much of contract law boils down to confidence in winning a case, or it's a business issue that just looks like a legal issue because of legalese.
We're building exactly this for contract analysis: upload a contract, review the common "levers to pull", make sure there's nothing unique/exceptional, and escalate to a real lawyer if you have complex questions you don't trust with an LLM.
In our research, we found out that most everyone has the same questions: (1) "what does my contract say?", (2) "is that standard?", and (3) "is there anything I can/should negotiate here?"
Most people don't want an intense, detailed negotiation over a lease, or a SaaS agreement, or an employment contract... they just want a normal contract that says normal things, and maybe it would be nice if 1 or 2 of the common levers were pulled in their direction.
Between the structure of the document and the overlap in language between iterations of the same document (i.e. literal copy/pasting for 99% of the document), contracts are almost an ideal use case for LLMs! (The exception is directionality: LLMs are great at learning correlations like "company, employee, paid biweekly," but bad at discerning that it's super weird if the _employee_ is paying the _company_.)
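To make the directionality point concrete, a toy sketch: have the LLM extract who pays whom into structured fields, then apply a deterministic rule for the case the model shrugs at. The roles and the rule here are hypothetical:

    def check_payment_direction(payer_role: str, payee_role: str) -> list[str]:
        # Flag directions that read as plausible text but are legal nonsense.
        warnings = []
        # In an employment agreement the employer pays the employee; the
        # reverse looks statistically fine to an LLM but is a red flag.
        if payer_role == "employee" and payee_role == "employer":
            warnings.append("Unusual: the employee is paying the employer.")
        return warnings

    # Fed with LLM-extracted fields:
    # check_payment_direction("employee", "employer")
    # -> ["Unusual: the employee is paying the employer."]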
That makes sense. How are you guys approaching breaking down what should be present and what is expected in contracts? I've seen a lot of chatbot-based apps that just don't cut it for my use case.
What's your objection to Nolo Press? They seem to have already done that.
That was more directed towards people who are trying to train AIs to be a competitor to Nolo. Lots of document repositories exist, but they won't work with you if you want to sell legal contracts yourself. I have seen a lot of startups raise money to try to build an AI solution to this, but the results haven't been great so far.
You said: "However, I have used several startups options of AI document review and they all fall apart with any sort of prodding for specific answers. "
I think you will find that this is because they "outsource" the AI contract document review "final check" to real lawyers based in Utah... so it's actually a person, not really a wholly-AI-based solution (which is what the company I am thinking of suggests in their marketing material).
Which company is that? I don't see any point in obfuscating the name on a forum like this.
In other words, LLMs are great examples of the 80/20 rule.
They’re going to be great for a lot of stuff. But when it comes to things like the law the other 20% is not optional.
So the world needs 1/5 as many attorneys? Or 1/100? How will 6-figure attorneys replace that income?
Which is mostly how I feel about LLMs producing code as well. Useful to start with, but not more than that. We programmers have still got a job. For the moment.
Producing code is like producing syntactically correct algebra. It has very little value on its own.
I've been trying to pair on system design with ChatGPT, and it feels just like talking with a person who's confident and regurgitates trivia but doesn't really understand. No sense of self-contradiction, doubt, or curiosity.
I’m very, very impressed with the language abilities and the regurgitation can be handy, but is there a single novel discovery by LLMs? Even a (semantic) simplification of a complicated theory would be valuable.
I launched a contract review tool about year ago[1].
Legal liability is an issue in several countries, but contract generation can be as well. If you are providing whatever is defined as legal services and are not a law firm, you will have issues.
[1]legalreview.ai
Thanks for the link.
That is a big reason why we haven't integrated AI tools into our product yet. Currently our business essentially works as a free product that is the equivalent of a "stationery store": you are filling out a blank template, and what happens with it is your responsibility. This has a long history of precedent, since for decades people could buy these templates off the shelf and fill them out themselves.
Giving our users a tool to answer legal questions opens a can of worms, like you say. We decided that the stationery-store templates are a commodity and should be free (even though our competitors charge hundreds for them), so we make money providing services on top of them.
That's a trap: if you don't have prior expertise, then you can't distinguish plausible-sounding fact from fiction. If you think you are "smart," then AFAIK research shows you are easier to fool, because you are more likely to think you already know.
Google finds lots of mis/disinformation. GPTs are automated traps: they generate the most statistically likely text from ... the Internet! Not exactly encouraging. (Of course, it really depends on the training data.)
I'd like to know your company, and talk to you about GPT as it applies to legal - this is the most disruptive space available to GPTs, and it's being poorly addressed.
This is why Lawyers must die.
If "letter of the law" is a thing - then "opinions" in legal parlance should be BS.
We should have every SC decision eviscerated by GPTs.
And any single person saying "You need a lawyer to figure this out" is a grifter and a charlatan.
--
Anyway - I'd like to know more about your startup. I'd like to know what you do for legal DMS, a la Hummingbird for biotech. There are so many real applications of GPT to legal troves, such as auto index/summary/parties/contacts/dates/blah blah, that GPTs make legal searching all the more powerful, and ANY legal person in any position telling you anything bad about computers keeping them in check is a fraud.
The legal industry is the most low hanging fruit for GPTs.
"So It works well for someone smart who is doing their own research."
That's a concise explanation that also applies to GPTs and software engineering. GPT4 boosts my productivity as a software engineer because it helps me travel the path quicker. Most generated code snippets need a lot of work because I'm prodding it for specific use cases and it fails. It's perfect as an assistant though.
As I've said before, one of my biggest concerns with LLMs is that they somehow manage to concentrate their errors in precisely the places we are least likely to notice: https://news.ycombinator.com/item?id=39178183
If this is dangerous with normal English, how much more so with legal text.
At least if a lawyer drafts the text, there is at least one human with some sort of intentionality and some idea of what they're trying to say when they draft the text. With LLMs there isn't.
(And as I say in the linked post, I don't think that is fundamental to AI. It is only fundamental to LLMs, which despite the frenzy, are not the sum totality of AI. I expect "LLMs can generate legal documents on their own!" to be one of those things the future looks back on our era and finds simply laughable.)
Unfortunately people stop at step #1, they use Google and that is their research. I don't think ChatGPT is going to be treated any different. It will be used as an oracle, whether that's wise or not doesn't matter. That's the price of marketing something as artificial intelligence: the general public believes it.
How is that good for the end user? Malpractice claims are often all that is left for a client after the attorney messes up their case. If you use a GPT, you wouldn't have that option.
Are you using GPT-3 + RAG by any chance?