This sentence caught my attention:
You can’t just Google these kinds of things. That’s why they’re so interesting.
So what Google did is dilute the information advantage. The more interconnected we are, the harder it becomes to maintain that advantage.
Google, ChatGPT, Gemini, Perplexity, etc. are all retrieval services (with some rudimentary remixing capabilities thrown in, for the latter few). Some human has to actually have interesting insights and write them down. Retrieval services help with distribution & discovery. The less connected we are, the harder it is to find the right audience who values your (niche) insights.
Of course, attribution and linking are important.
AI is so much more than retrieval, since it knows how to correctly apply existing concepts to previously unexplored areas.
You can ask ChatGPT how one might implement and enforce laws in a world where magic exists. Or time travel. Or whatever. Because it knows laws and it knows common fiction tropes and can reason well enough to surmise how to apply them together.
You make the following comment regarding artificial stupidity systems (AI):
and yet these systems know nothing at all. Far too many people (including the developers of such systems) have failed to understand that none of these systems can go beyond the programming that humans have incorporated into them. Yes, they appear to exhibit certain [abilities], but no more so than any other essentially mechanical device with the limited capabilities we have been able to design into them.
You can certainly pose questions, and these systems (within the constraints of the programming involved in them) can retrieve and correlate data that is accessible. But the insights drawn will require human involvement.
Over decades, I have built tools to help in the analysis of all sorts of data sets and it has taken a human asking the [right] questions to get appropriate outcomes.
We do not understand our own intelligence, let alone how to build any artificially intelligent system that can operate on its own.
What does amaze me though is that we create natural intelligence systems all the time and they are called children. I have been avidly watching the development of my youngest grandchild and she makes an absolute mockery of any artificial system we have built anywhere.
In a very real sense, every artificial stupidity system (AI) that we have built is no more sophisticated than a hammer or a knife.
It is what we (human beings) do with these simple tools that determine the outcomes.
I get what you’re saying, and will echo that these systems are still just tools that enhance what we, as humans, are capable of doing. But I think you’re failing to consider that even comparatively simple systems can exhibit emergent behavior, well beyond their programming. You’re being unreasonably reductive and dismissive.
The specific behavior you get is a result of the programming. What does "beyond the programming" mean? I think it's actually a meaningless statement.
I would love to see an actual example where a program does something not encoded in itself already.
In what sense is a novel, never-before-seen image that comes out of DALL·E 3 "encoded in" the system? The image may be, in a sense, encoded in the latent space of images, but so is every image (of the same dimensions) that ever has been or will be.
The images generated are based on two things:
1) the code within the system, and
2) the data entered into the system.
Those images are the result of how the data is processed by the code. Even if you feed in a random number generator, the code processes those numbers based on its internal constraints.
Now, a well designed and built code base may be able to generate every image of the same dimensions; you can certainly write code that exhaustively generates every possible image. But the question here is whether that exhaustive generation is intelligent.
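To make that concrete, here's a toy sketch of exhaustive generation (Python, purely illustrative). In principle it emits every possible grayscale image of a given size, including every masterpiece, yet nobody would call it intelligent:

```python
import itertools

def all_images(width, height, levels=256):
    """Yield every possible grayscale image of the given size.

    For even a tiny 8x8 canvas at 256 gray levels there are 256**64
    candidates, so this never finishes in practice. But in principle
    it enumerates every image, exhaustively and unintelligently.
    """
    n_pixels = width * height
    for pixels in itertools.product(range(levels), repeat=n_pixels):
        yield pixels  # one flat tuple of pixel values per image
```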
This isn't really true except in the most reductive sense.
If you ask for a picture of Shakespeare dressed as Spiderman reading a McDonald's advert on the moon, no one has programmed that, and the fact that it knows these individual concepts is no different from the fact that a human has seen them too.
The laws of physics encode every motion and interaction of every particle.
None of those fundamental laws directly says "water is wet".
You see it every time you use a machine learning system. Even the simplest models, say a linear regression line in a spreadsheet, "learn" from the data, not from the code.
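A concrete version of that point: in the sketch below (plain numpy, just for illustration), the slope and intercept appear nowhere in the code; they come entirely from the data.

```python
import numpy as np

# The "program" is identical for any dataset; only the data varies.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # roughly y = 2x

# Fit a degree-1 polynomial (a line). The fitted slope and intercept
# are not written anywhere above; they are "learned" from x and y.
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # ~2.0 and ~0.05 for this data
```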
Emergent behaviour can occur, no argument there. But if you study such systems, I think you will find that the emergent behaviour is grounded in the programming involved and is not "beyond the programming".
When it comes to intelligence, we cannot yet say that it is actually emergent.
There are currently a number of projects looking into intelligence and free will. Researchers on the same teams hold quite different opinions, and the results of these projects are not at all conclusive.
I admire you for pressing home your point that others are missing. I practice a visual art form (which I won't name; many other smaller cultures around the world have their own too) which will never "emerge" from AI _unless_ it is programmed in, or trained on the visual art itself. Even then, I don't see how it could ever figure out the intricate, detailed meanings without their being programmed in. The people trying to counter you are thinking only within the culture in which these AIs were created, so to them anything AI creates seems emergent, because it apparently created something they haven't, didn't, couldn't, or wouldn't. Without the programming (never mind the electricity), AI is still a blunt tool.
It is shocking to me how many people miss the fact that the big prediction machines, trained on lots of data, are fundamentally historical: they are based entirely on that data.
I think the point of machine learning is that programming isn't "incorporated into them".
These systems may not be intelligent yet, but we certainly didn't program them - the majority of their features and abilities arise from the training data they were exposed to.
Well, in my experience it more often knows how to incorrectly apply existing concepts to incompatible areas, exerting great confidence while doing it.
I have just two recent examples where I thought it would be faster to use ChatGPT instead of reading the manual.
1. How to asynchronously (e.g. with asyncio) record from the microphone in Python. ChatGPT gave me code which looked elegant, but didn't work. Turns out pyaudio isn't compatible with asyncio (a pattern that does work is sketched after this list).
2. Trying to install a Docker image of some server (I think it was wallabag?) with SSO using OpenID. Again I got an elaborate docker compose file and instructions for the server configuration. After trying to implement it for a while, I realised that those configuration options don't exist. ChatGPT just invented them.
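For what it's worth on the first example: pyaudio's reads are blocking, but you can wrap them in an executor so the event loop stays responsive. A minimal sketch (default input device, 16 kHz mono assumed), not the code ChatGPT produced:

```python
import asyncio
import pyaudio

CHUNK = 1024   # frames per read
RATE = 16000   # sample rate in Hz

async def record(seconds: float) -> bytes:
    """Record from the default microphone without blocking the event loop.

    pyaudio's stream.read() is a blocking call, so it is pushed onto a
    worker thread via run_in_executor instead of being called directly.
    """
    loop = asyncio.get_running_loop()
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    frames = []
    try:
        for _ in range(int(RATE / CHUNK * seconds)):
            data = await loop.run_in_executor(None, stream.read, CHUNK)
            frames.append(data)
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()
    return b"".join(frames)

# asyncio.run(record(3.0))
```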
My conclusion from these is that one really should only ask AI questions where one already knows the answer/correct approach and just wants it to create the boilerplate. I shudder at the thought that people are proposing to use AI for medical purposes without supervision by a professional.
When you say ChatGPT, do you mean 4 or 3.5?
4 does this, but I don't think it's the show-stopper some people seem to think it is. You can ask it to correct itself by feeding it the error messages. It's like having a very knowledgeable, extremely fast programmer on call. Sometimes their guesses aren't right on the first try. So what?
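That loop can even be automated. A minimal sketch, assuming the openai Python client and a hypothetical run_code helper; real use would need to extract the code from the reply and sandbox it:

```python
from openai import OpenAI

client = OpenAI()

def run_code(source: str) -> str | None:
    """Hypothetical helper: run the code, return the error text or None.

    exec() of untrusted model output is unsafe; this stands in for a
    proper sandbox, and assumes the reply is plain code, not markdown.
    """
    try:
        exec(compile(source, "<llm>", "exec"), {})
        return None
    except Exception as exc:
        return repr(exc)

messages = [{"role": "user", "content": "Write a Python function that ..."}]
for _ in range(3):  # a few correction rounds
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    code = reply.choices[0].message.content
    error = run_code(code)
    if error is None:
        break  # it runs; stop iterating
    messages += [
        {"role": "assistant", "content": code},
        {"role": "user", "content": f"That raised: {error}. Please fix it."},
    ]
```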
Maybe. I just don't like to confuse the two; it's like talking to a middle schooler vs talking to a graduate student. In both cases you can say the experience sucked, but you should specify which it was.
I've seen similar, although in my cases the wrong answers turned out to be useful. They used commands or functions or APIs that I had not known about, and when I looked into those I found that they could solve my problem when used correctly.
Wanted to give it a try. I didn't even consider many of these, and I doubt I would've come up with such a good list even if I had a lot of time. It took much less than a second to generate the response.
In a world where magic exists, laws and the legislative system would likely undergo significant adaptations to address the unique capabilities and risks presented by magic. Here are several ways in which these systems might be different, based on various assumptions about the nature and scope of magic:
1. *Regulation of Magical Practices*: Just as with professions and practices in our world, there would be regulations governing who can practice magic, how it can be learned, and where it can be used. Licenses or permits might be required for practicing certain types of magic, similar to medical licenses or driving licenses.
2. *Magic-Related Offenses*: New categories of crimes would exist, such as unauthorized enchantments, illegal curses, magical fraud (using illusions or mind control for deceit), and unlicensed necromancy. Laws would specify what constitutes a magical assault versus a non-magical one, with potentially different penalties.
3. *Enforcement and Jurisdiction*: Special law enforcement units trained in counter-magic or magical defense would be necessary to handle crimes involving magic. The jurisdiction might depend on the nature of the magic used; for example, cross-dimensional or time-altering magic could involve a higher authority or international magical council.
4. *Evidence and Investigation*: The collection of evidence might include magical means, such as truth spells, scrying (viewing distant or future events), or reviewing memories directly. However, there would also need to be laws protecting individuals against invasive magical investigations.
5. *Rights and Protections*: There would be laws to protect non-magical beings from discrimination or harm by magical individuals or entities. Similarly, sentient magical creatures or beings might have their own rights, leading to more complex legal considerations regarding interspecies interactions.
6. *Ethical Use of Magic*: Legislation would likely address the ethical implications of spellcasting, summoning, and other magical practices, including the use of magic on unwilling subjects or creating sentient beings. Regulations might prevent or restrict the use of magic that can alter someone's will, memories, or identity.
7. *Magical Contracts and Agreements*: The legal system would need to address the binding nature of magical contracts, which could have irrevocable consequences. Laws might specify how such contracts can be fairly negotiated and what remedies exist for breaking a magical contract.
8. *Intellectual Property*: In a world where spells, potions, and magical items are created, there would be intellectual property laws protecting the creators of magical inventions. This could include patents on spell formulas, copyright on magical texts, or trademarks on magical item designs.
9. *International Relations and Security*: On a larger scale, there might be international laws governing the use of magic, especially in conflicts or espionage. Treaties could limit the use of particularly destructive spells or ban the creation of magical weapons of mass destruction.
10. *Education and Public Safety*: Laws would govern the education and training of individuals with magical abilities, possibly requiring mandatory schooling to ensure that magic is used safely and responsibly. Public safety regulations would address the storage of magical materials and the construction of magically protected buildings.
In this magical world, the legal and legislative system would need to be flexible and innovative, capable of addressing the ever-evolving nature of magic and its impact on society. It would be a blend of traditional legal principles and new rules designed to manage the unique challenges posed by the existence of magic.
Fun little exercise: while reading above, see what happens when mentally substituting “magic” with “AI”, words like “enchantments” with “algorithms” and so on.
All of these are basically "what we have now, but with magic".
I'm sorry, but how magic and the legal system interact is a subject already explored in fiction. Hell, even Harry Potter touched on it. And that's not even touching authors like Jim Butcher and Piers Anthony, who have entire series about "this, but with magic".
You basically just described the anime Witch Hunter Robin.
https://en.wikipedia.org/wiki/Witch_Hunter_Robin
To an extent, but it's only about 85-95% correct even in the good domains, which is fantastic for a few steps… and then it makes a fundamental error and half the time doesn't understand corrections given by domain experts that would get it back on track.
This limitation is the only reason all of us still have jobs, so do plan your economic future on the assumption that AI gets past this flaw with no warning; how many steps it can do before falling over isn't something I can predict, despite regular use of even the OpenAI playground before ChatGPT came out.
The longer I think about something, the dumber AI gets.
I have given our internal knowledgebase documents to ChatGPT and asked for summaries, and it adds in things from the general internet that are wrong to very wrong, contradicting our KB (we are very far off the beaten path of conventional audio studio design and testing).
I think this effect is because we learn and the AI doesn't. (In the short term, for any given release of the AI.) Start asking about a topic that's new to you, and it will "know" things you don't. Then you learn those things, and it keeps making the same mistakes.
Over the longer term, new releases have improvements, but the feedback loop is currently pretty slow. It will likely get faster.
It's like the Gell-Mann Amnesia for AI.
That's also something I often encounter with ChatGPT. It is often very wrong about the things I ask, because I like to wonder about and try things off the beaten path. That's our advantage against LLMs.
Purely probabilistically, trying things off the beaten path is just a matter of higher LLM temperature. Turning up GPT-4's temperature is basically an expensive /dev/urandom pipe today, but I don't see any fundamental reason why LLMs can't catch up. Maybe all it takes is tinkering with how temperature is calculated.
No - temperature is a sampling parameter applied to a distribution. If you adjust temperature, you still ultimately pull from the same distribution.
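Concretely, temperature just rescales the logits before the softmax; the support of the distribution never changes, so tokens the model assigns near-zero probability stay near-zero at any temperature. A minimal sketch:

```python
import numpy as np

def sample_probs(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Softmax with temperature: higher T flattens, lower T sharpens.

    The set of tokens with nonzero probability is identical at every
    temperature, so you never leave the learned distribution.
    """
    scaled = logits / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    p = np.exp(scaled)
    return p / p.sum()

logits = np.array([4.0, 2.0, 0.5])
print(sample_probs(logits, 0.5))  # sharply peaked on the top token
print(sample_probs(logits, 2.0))  # flatter, but the same support
```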
Parent is saying the advantage is in pulling ideas from outside that encoded distribution.
You've missed the metaphor, I think. Higher temperatures will make it "more creative" for lack of a better term, but there's a lot of specialist knowledge it doesn't have and which you can't give it just by twiddling a dial.
It has a massive speed advantage that lets it read the whole internet, but it's dumb enough that it also needs to, and when even that doesn't give it enough examples, it's like asking me to improvise Latin based on what I recognise of the loanwords in English.
No, the advantage is that you have context. That context is HUGE. That's why you can't, and won't be able to, have ChatGPT actually do your job for you. You still need to ask it questions, and to even know what questions you need to ask, you need to have context.
You don't just need context, you need to finetune a model around that context, since working memory isn't enough. That is what human workers do: they spend the first months finetuning their model around the company context before they can start to provide valuable code.
Once you can pay to get your own fine-tuned version of the big LLMs, maybe we can start to do real work with these.
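For the smaller hosted models something like this already exists, e.g. OpenAI's fine-tuning endpoint. A minimal sketch, assuming you've already prepared chat-format examples in a JSONL file (company_context.jsonl is a placeholder name):

```python
from openai import OpenAI

client = OpenAI()

# Each JSONL line holds a {"messages": [...]} chat example distilled
# from company docs, review threads, internal Q&A, and so on.
training = client.files.create(
    file=open("company_context.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training.id,
    model="gpt-3.5-turbo",  # the biggest models don't offer this yet
)
print(job.id)  # poll the job, then chat with the resulting model name
```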
The fact is that we're not that interconnected though. Finding people that have useful, actionable information, and want to share it with you, is extremely difficult.
Or maybe we have just conditioned people to not share useful information publicly anymore.
Amazon selling what used to be open source projects has certainly had a chilling effect on people publishing source code under a permissive licence.
That is a search issue not an interconnectivity issue.
If you want to maintain an information advantage, you should definitely not rely solely on the web. Simply reading a book is often 10x better. Comment from a few months ago:
https://news.ycombinator.com/item?id=38369583
One thing I've found comically underestimated is books. It's not whether the information is physical or electronic, but what actual corpus of information is available
...
So yeah I'd say 3 main repos of knowledge are: the open Internet, printed books, and Scihub, and many people today only use the first one
Though I'd also put "talking to the right people" and by extension "working at the right place" pretty high up there in getting novel information.
I think for a certain generation, Google probably diluted their ability to access other knowledge bases, and act upon that info. And for the generation currently learning how to learn, LLMs may do the same thing.