On a philosophical level, this sort of thing deeply concerns me. We often get mired in debates over whether people should be able to learn to make bombs or how best to kill themselves (and those are important debates), but our solutions to those problems will have big implications for future access to knowledge, and for who gets to control that access. In the process of trying to prevent a short-term harm, we may end up causing unintended long-term harm.
As time moves on, the good blog posts, tutorials, books, etc. where you currently learn deeper knowledge (such as memory management) will stop being written and will slowly become very outdated as information is reorganized.
I've already seen this happen in my career. When I first started, the way you learned some new technology was to buy a book on it. Hardly anybody does this anymore, and as a result there aren't many books out there. People have turned instead to tutorials, videos, blog posts, and Stack Overflow. The quick iteration of knowledge in these faster delivery mechanisms makes books even more outdated by the time they're written, which further makes them less economical.
As AI becomes the primary way to learn (and I definitely believe that it will), the tutorials, videos, blog posts, and even Stack Overflow are going to taper off just like books did. I honestly expect AI to become the only way to learn about things in the future (things that haven't yet been invented/created, and will never get the blog post because an AI will just read the code and tell you about it).
It could be an amazing future, but not unless Google and others change their approach. I think we may need to go through a new Enlightenment period where we discover that we shouldn't be afraid of knowledge and unorthodox (and even heretical) opinions and theories. Hopefully it won't take 1,500 years next time.
Quite the contrary: soon AI will be able to write high quality books for us about each field with state of the art knowledge.
Imagine books written in the style of the greatest writers, with the knowledge of the most experienced and brightest minds, accompanied by a Q&A AI assistant to further enhance your learning experience.
If AI does get democratized, then there is a strong possibility that we are about to enter a golden age of wisdom from books.
Where is the evidence for this? There is no such system available at the moment — not anything that even comes close.
I’m guessing your answer will be some variation on ‘just look at the exponential growth, man’, but I’d love to be wrong.
The same could be said about Moore’s Law and the progression of silicon-based processors. Yet it has held true.
The same applies to AI. Plus, even now, with custom tools and custom models, one can already churn out high-quality output; it’s just that accuracy is still the uncracked part. But I think we’ll get there soon too.
Of course, this is why I said “soon” and not “now”.
In what domain exactly? Be precise with your claims.
I've yet to see an LLM-produced insight on... anything. The ability of these models to manipulate (and, yes, seemingly understand) language is freakishly good. Utterly scarily good, in fact. But keep probing and (if you yourself have sufficiently deep knowledge of the topic to allow good enough questions to be asked) you'll quickly find it lacking and come to the conclusion that the words are all there is — they're not a serialisation of the model's abstract free-floating conceptualised understanding; they are the understanding.
I suspect, though this may sound arrogant, that the vast majority of those making such grandiose claims about 'AI' are not subject matter experts in anything. They're the same people who think being able to pump out endless Python scripts to solve various canned problems means a system can 'code', or that cranking out (perhaps even very relevant!) rhyming sentences means it 'understands poetry'... and I say this as someone who is neither a programmer nor a poet and so has no skin in the game.
Again: I'm happy to be proven wrong. But I'm not expecting much.
I don't think anyone has claimed this so far in the conversation.
No one is claiming they are intelligent, and a subject matter expert doesn't need a model to tell them anything insightful -- they need a model to do the annoying gruntwork for them as proficiently as possible, which is what they are good for.
They have claimed something that makes it necessary. It was claimed that LLMs will 'soon' be able to 'write high quality books for us about each field'. If you can't produce a single insight on the topic, any book you write is fundamentally going to be a (very high-level, likely almost verbatim) rehash of others, and therefore wouldn't usually be considered high quality. Not to mention that an inability to generate real insight suggests there is no underlying model beyond one overfitted to the language used to express the examples.
I'm glad not, because I wouldn't be sure what they mean by that.
Indeed not. Having existing ideas be juxtaposed, recontextualised or even repeated can be valuable. But to claim that AI will be writing entire books that are worth reading seems laughable to me. There's no sign that we're close to that. Just play with the state of the art models for a while and try and push them.
I'm convinced that anyone who believes otherwise just doesn't have the technical expertise (and that's not a generic insult; it's true of most of us) to probe deeply enough to see the fragility of these models. They look at the output and (erroneously, due to the unknown unknowns involved) think themselves perfectly qualified to judge it high quality — and they do so, with great vigour.
If your only claim is simply 'LLMs can remix natural language in a way that sometimes proves useful', then fine. I absolutely agree.
Anyway, such a conversation would be useless without defining exactly what 'intelligence' even means.
'It doesn't seem like it's something I would value, and anyone who doesn't agree isn't an expert'.
There are experts in technical fields who would find value in having an AI triage patients, double-check material strengths for building projects, or cross-reference court cases and check arguments for them. As the parent to whom you are replying noted: it is just the accuracy that needs to be fixed.
There are no grand insights in technical reference books or instructional materials, just "existing ideas...juxtaposed, recontextualised" and "repeated".
Don't even pretend you think that's what I'm trying to say. Read my response again if you need to.
I already said this. I'm not saying experts in technical fields cannot find value in AI; I'm saying that they have enough knowledge to be able to expose the fragility of the models. Triaging patients doesn't require expert knowledge, which is why AI can do it.
I seriously doubt that, actually. Are you, or is anyone else saying this, a lawyer? My position is that you have to be an expert to be able to evaluate the output of the models. It looking like it knows what it's talking about simply isn't enough.
The parent you replied to said that they could write books which would be useful for instruction if the accuracy problem were solved. You said they would never write anything worth reading because they can't be insightful.
What I said was 1) they could write instructional and reference books and no one claimed they were insightful and 2) that they are useful to experts even if they are not insightful.
I'm not sure what we are arguing about anymore if you are not going to dispute those two things.
Not an argument. Plenty of things seemed like they would progress/grow exponentially when they were invented, yet they didn’t.
Perhaps, but why?
> soon AI will be able to write high quality books for us about each field with state of the art knowledge
And where is the AI getting that state of the art knowledge from, if people stop writing the content that trained the AI on those topics in the first place?
Younger generations are having their attention spans trained on Instagram Reels and YouTube Shorts, and even watching a full-length movie is sometimes a big ask. They are completely used to - at most - skimming through a couple of Google results to find the answer to any question they have.
Once the AI replaces even that with a direct answer, as it's already doing, why do you think those young people will actually be reading amazing books prepared by the AI?
I'm worried that we're headed for another Dark Ages, between declining literacy rates, forced climate migrations, falling population (increasing population = higher specialization = more knowledge generation), overreliance on advanced technology, and complex systems that grow ever more rigid and lose their ability to adapt. With complex systems, when you get a shock to the fundamentals of the system, you often see the highest and most interconnected layers fail first, since it's easier for simpler systems to reconfigure themselves into something that works.
I wonder if something like Asimov's Encyclopedia Galactica is needed to preserve human knowledge for the upcoming dark ages, and if it's possible to keep the most impactful technologies (e.g. electricity, semiconductors, software, antibiotics, heat pumps, transportation) but with a dramatically shortened supply chain, so that they can continue to be produced when travel and transport outside of a metropolitan region may not be safe.
Some years ago, I bought a nice electronics set from Amazon that essentially taught a basic electronics course. I was looking for one yesterday, but couldn't find one. All of them were centered around interfacing to an Arduino.
I had to go to eBay to find a used one.
What about an Arduino makes it inappropriate to teach basic electronics?
Basic electronics is how resistor-capacitor-inductor circuits work, along with diodes and the various transistor devices, up to op-amps. Computers have nothing to do with it.
Nothing is stopping you from demonstrating any basic circuit with an Arduino.
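For instance, here's a minimal sketch of the idea. Everything about it is hypothetical (the pins, the part values, the wiring: say a 10 kOhm resistor from pin 8 to a 100 uF capacitor to ground, with A0 reading the junction), but it lets the board itself measure the RC time constant, the most basic of basic-electronics exercises:

    // Hypothetical wiring: pin 8 -> 10 kOhm resistor -> 100 uF cap -> GND,
    // with A0 reading the resistor/capacitor junction.
    const int chargePin = 8;   // drives the RC network
    const int sensePin = A0;   // reads the capacitor voltage

    void setup() {
      Serial.begin(9600);
      pinMode(chargePin, OUTPUT);
      digitalWrite(chargePin, LOW);  // discharge the capacitor first
      delay(5000);                   // ~5 time constants at 10k x 100uF
    }

    void loop() {
      digitalWrite(chargePin, HIGH);          // start charging
      unsigned long start = micros();
      while (analogRead(sensePin) < 647) { }  // 647/1023 ~ 63.2% of full scale
      unsigned long tau = micros() - start;   // elapsed time ~ R x C
      Serial.print("time constant (us): ");
      Serial.println(tau);
      digitalWrite(chargePin, LOW);           // discharge for the next run
      delay(5000);
    }

The board stands in for the oscilloscope you'd otherwise need, which is exactly the two-for-one these kits are going for.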
It is a way to learn and use basic electronics, do something functional with them, and also learn how to code.
I think the problem is more that -- assuming a comparable price -- they're probably going to skimp on the "lower level" stuff in a kit with an Arduino. If you have a perfect timer, you don't need to showcase basic resistor-capacitor timer circuits, or pack a huge bag of related components. If you can define any logic function you want in code, there's no need to discuss wiring 74xx gates to each other.
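To illustrate that last point with a made-up example (nothing from an actual kit): a three-input majority vote, which would take several AND/OR gates from the 74xx series, is a one-liner in code:

    // Hypothetical example: a 3-input majority vote in software, the sort
    // of function a kit once taught by wiring 74xx AND/OR gates together.
    bool majority(bool a, bool b, bool c) {
      return (a && b) || (a && c) || (b && c);
    }

    void setup() {
      pinMode(2, INPUT_PULLUP);  // three imagined switches on pins 2-4
      pinMode(3, INPUT_PULLUP);
      pinMode(4, INPUT_PULLUP);
      pinMode(13, OUTPUT);       // onboard LED as the output
    }

    void loop() {
      digitalWrite(13, majority(digitalRead(2), digitalRead(3), digitalRead(4)));
    }

Convenient, but it's also exactly why the bag of logic chips (and the lesson in wiring them together) drops out of the kit.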
I'm also really worried about teaching chemistry/physics in this day of hypervigilant anti-terrorism. I saw a family setting off model rockets on a mudflat in the Bay Area; a couple Karens standing next to us threatened to call the police on them. With the departure of LUNAR from Moffett because the Air Force didn't want them back, there are now no acceptable model rocketry clubs or sites in the Bay Area. And now you've got to worry that you'll end up on some FBI watch list for ordering basic chemistry sets for kids.
I was fortunate to get a decent chemistry set, as the next year they were emasculated. I had a lot of fun with it.
Isn't this a little hyperbolic? Go back to the 1920s and try and find a way to gain knowledge on something accessible but uncommon, like how to make glass. Would you have been able to do it without finding someone who knew already and would teach you? It is a relatively short chapter in our history that we have had access to such a large amount of information easily.
Right, and I worry that this is the apogee of human knowledge and we're going to start a steep downhill slide relatively soon.
Books on "new things" mostly have died off because the time-to-market is too long, and by the time it's released, the thing is different.
I agree, but why are things moving so quickly now that books become outdated fast? I believe it's because of the superior speed of information delivery of tutorials, blog posts, etc.
And I believe the same thing will happen with AI. Writing tutorials, blog posts, etc will be snail-pace slow compared to having AI tell you about something. It will be able to read the code and tell you about it, and directly answer the questions you have. I believe it will be orders of magnitude more efficient and enjoyable than what we have now.
Tutorials, blog posts, etc. will have too long a time-to-market compared to AI-generated information, so the same thing will happen: those things will stop being written, just like books have.
No, it’s the superior patch speed.
It used to cost infinity dollars, or be nearly impossible, to patch software, so it was extensively tested and designed before shipping; now you ship it, and if it breaks, patch it - anyway, ship a new feature and a UI redesign, quick!
Books can still be made, but they take too long, and as you note, most people don't want to learn the tools; they just want to get on with their task.
Patches are not necessarily good.
This is a massive issue with software development today.
That being said, the books can still exist - dynamically updated, well-versioned books with frequent updates and low-cost distribution methods such as websites or modular print formats.
The problem with frequent updates is the lack of rigor, which is partly why we don't see as much of this...
My personal impression is that modern software is a lot less buggy than 1980s software.
I'm not convinced.
Most software I use today is awful, but then again it is so much more complicated. It is rare, though, that I use something that has one purpose and does it well.
Interacting with a computer has never been more frustrating imo. It might be a lot of small niggly things but it feels like death by a thousand cuts.
They are not. We just killed all the fast-moving formats in favor of the internet.
Books about new things were always outdated before they got distributed, and information circulated in magazines, booklets, journals, etc. We got a short time window when books became faster, and people tried to use them that way, but as the experience showed, fast books are still not fast enough.
I don't really understand what you mean by "We just killed all the fast-moving formats in favor of the internet." The internet is the delivery medium for the fast moving information. Prior to this, it was even slower because books, magazines, articles etc had to be printed on paper and shipped in the physical world.
But considering book (e-book) and blog post delivery on the internet, think about how fast a blog post is compared to a book. Maybe a few days to create instead of a year? It depends, of course; some are longer, some are shorter, but that makes blog posts roughly 100 times faster at disseminating information than books.
AI can generate a blog post's worth of information in maybe a minute. Compared to human-written (a few days, i.e. several thousand minutes), that's maybe 5,000 times faster: more than three and a half orders of magnitude. And this is still early days!
You don't think AI is going to disrupt blog posts in the same way they did books?
‘AI’ in its current state requires vast amounts of data. It will only understand (to the degree that such systems do) new subjects after thousands of books have been written on them. So I don’t see how the original sources are going to completely ‘taper off’. Most people might not look at them once a subject has matured to the point at which its form can reasonably be replicated and interpolated by machine learning models, but by that point it’s old knowledge anyway.
I've tried to debate with college-educated people who would cite TikTok videos as their sources. The shallowness of their source material is only exceeded by their certainty of rectitude.
This is partly because access to university education has widened tremendously. Modern academic research is not really TikTok-based.
I think the implication isn't that it's an aspect of academia but just that college graduates of all people should know to do better.
point is there will be no market for books so there's no reason to write them
unless the AI companies are going to start commissioning manuscripts from experts, but they feel entitled not to pay for ingested material
this is a major impediment to LLM corps' "fair use" claim as their derivative work takes market share from the source material
My primary concern about the move towards the more rapid delivery channels for knowledge is that the knowledge delivered has become much, much shallower.
Books and even magazine articles could spend words delving deep into a subject, and readers expected to spend the time needed to absorb it. It's really very rare to see any online sources that approach that level of knowledge transfer.
That represents a real loss, I think.
I could not agree more actually. I personally feel like we've lost a great deal. Having a good book that has been carefully and logically constructed, checked, and reviewed is the best.
Perhaps with AI, we'll be able to generate books? Certainly that is far off, but what an amazing thing that would be!
It's not far off at all; some people have been selling AI-generated books on Amazon (which has banned them, in theory). Good and/or factual books, however... Who knows.
I think a big piece of that was that the physical learning materials let you skip over stuff, but you still had to lug around the skipped-over stuff, so you never really stopped being reminded that it was there, and probably were able to return to it should it suit you.
(Of course, I also hold the opinion that the best learning materials are pretty modular, and very clear about what you're getting, and those go hand-in-hand. I think most things these days are not clear enough about what they are, and that's a problem.)
But if AI becomes the primary way to learn, how will the AI learn new things? Everything AI has learned about the outside world has to come from somewhere else.
From interacting with the outside world, just like humans do.
Current AI is only trained on web text and images, but that's only step 1. The same algorithms work for just about any type of data, including raw data from the environment.
Whatever the AI is doing is fine, but everyone should be able to see what filters are being applied. This should be accessible information. To not know how information is being managed, for that to be a secret, is terrible.
This to me is just indicative of a larger problem that is already in play (and has been all of my life), and that's the set of issues surrounding the internet. I would prefer NOT to use AI to learn about things. I'd much rather read a first-hand account or an opinion from an expert presented alongside facts. So why are we so unable to do that?
It's becoming increasingly impossible to find useful information on the internet, and a giant part of that problem is that a single company essentially controls 99% of the access to all of humanity's information. Things like Wikipedia, the Internet Archive, and government archiving are becoming increasingly important. It's time that we think about decoupling corporate control of the internet and establish some hard and fast ground rules that protect everyone's rights while also following common sense.
It's not that people are afraid of knowledge; it is purely that corporations want to be perceived a certain way, and that those same corporations are covering their ass from lawsuits and scrutiny. Corporations will never change. You may as well call that a constant. So the solution isn't going to be focused on how corporations choose to operate. They have no incentive to ever do the right thing.
We do, and it probably will. We are extremely bad at learning from history. Which is, ironically, proven by history.
I agree -- I call this the "Age of Dislightenment". I wrote about this at length last year, after interviewing dozens of experts on the topic:
https://www.fast.ai/posts/2023-11-07-dislightenment.html
We should stop with this question. It is never asked in good faith, and I always see it presented in very weird ways. As a matter of fact, we already have an answer to it, though maybe it hasn't been said explicitly (enough): we've decided that the answer is "yes." In fact, I can't see how the answer could be anything but yes - otherwise, how would anyone know how to make bombs and kill people in the first place?
They always have vague notions like whether an LLM can teach you how to build "a bomb." What type of bomb matters very much. A pipe bomb? I don't really care; Barnes & Noble sold instructions for years, and The Anarchist Cookbook is still readily available. High explosives? I'd like to see how it could give instructions by which someone who does not already have the knowledge to build such a device could build it __and__ maintain all their appendages.
A common counter-argument to these papers is that the same information is on the internet. Sure, probably. You can find plans for a thermonuclear weapon. But that doesn't mean much, because you still need high technical skill to assemble it. Even more to do so without letting everyone in a 50 km radius know. And even harder without appearing on a huge number of watch lists or setting off our global monitoring systems.

They always mask the output of the LLM, but I'd actually like to see it. I'm certain you can find some dangerous weapon for which there are readily available instructions online and compare. Asking experts is difficult to evaluate as a reader. Are they answering that the instructions are accurate in the sense that a skilled person could develop the device? Is that skill level high enough that you wouldn't need the AI instructions anyway? I really only care about the novices, not the experts; the experts can already do such things.

But I would honestly be impressed if an LLM could give you detailed instructions for building a thermonuclear weapon that were actually sufficient to assemble one, and not just the instructions any undergraduate physics student could give you. Honestly, I doubt such instructions could ever be sufficient through text alone; the LLM would have to teach you a lot of things along the way and force you to practice skills like operating a lathe.
It would also have to teach you how to get all the materials without getting caught which is how we typically handle this issue today: procurement.
I just don't have confidence that this presents a significant danger, even if the AI were far more advanced, or even AGI. An AGI robot operating independently is a whole other story, but I'd still be impressed if it could build a thermonuclear or biological weapon and do so without getting caught. If teaching someone how to build a nuclear weapon first requires giving them a physics and engineering degree, then I'm not worried.
So I do not find these arguments worth spending significant time thinking about. There are much higher-priority questions at play, and greater dangers from AI than this, that are worth spending that time on. Maybe I've characterized this incorrectly, but I'm going to need some strong evidence. And if you're going to take up this challenge, you must consider what I've said carefully, especially the gap between theory and practice. If you have no experience making things, I doubt you will be able to respond appropriately, but you're more than welcome to give it a go.

If you want a proof by example: take anything you think you know how to do but haven't done before, and then try to go do it. If you fail, we've made my case. If you succeed, consider whether you could have done so through instructions alone, and what underlying skills you needed to bridge the gap.
Imagine being a researcher from the future and asking this same question of the AI. The safety concern would be totally irrelevant, but the norms of the time would be dictating access to knowledge. Now imagine a time in the not too distant future where the information of the age is captured by AI, not books or films or tape backups, no media that is accessible without an AI interpreter.
You might enjoy reading (or listening to) Samo Burja's concept of "Intellectual Dark Matter".
The change in how people learn has been interesting. There still are new books being published on technical topics. They just don't have a very long shelf life, and don't get advertised very much.
Just do a quick pass through Amazon's "Last 90 days" section and you'll find hundreds of newly released technical books.
I don't have numbers, but there seems to be a constant stream of new programming books coming out from Manning, Packt, and O'Reilly.
So it seems to me that it's not that people don't like books; they just take longer to produce and thus have less visibility to those not explicitly looking for them.