The reference to the origin of the concept of a singularity was better than most, but still misunderstood it:
In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close. We would surely create superhuman intelligence sometime within the next three decades, leading to a “Singularity”, in which AI would start feeding on itself.
Yes it was Vernor, but he said something much more interesting: that as the speed of innovation itself sped up (the derivative of acceleration), the curve could bend up until it became essentially vertical, literally a singularity in the curve. And then things on the other side of that singularity would be incomprehensible to those of us on our side of it. This is reflected in The Peace War, A Fire Upon the Deep, and others of his novels going back to before the essay.
You can see that this idea is itself rooted in ideas from Alvin Toffler in the 70s (Future Shock) and Ray Lafferty in the 60s (e.g. Slow Tuesday Night).
So AI machines were just part of the enabling phenomena -- the most important, and yes the center of his '93 essay. But the core of the metaphor was broader than that.
I'm a little disappointed that The Economist, of all publications, didn't get this quite right, but in their defense, it was a bit tangential to the point of the essay.
I think it's worth going back and reading Vinge's "The Coming Technological Singularity" (https://edoras.sdsu.edu/~vinge/misc/singularity.html) and then following it up by reading The Peace War, but most importantly its unappreciated detective-novel sequel, Marooned in Realtime, which explores some of the interesting implications for people who live right before the singularity. I think this book is even better than A Fire Upon the Deep.
When I read "The Coming Technological Singularity" back in the mid-90s it resonated with me, and for a while I was a singularitarian -- basically, dedicated to learning enough technology, and doing enough projects, that I could help contribute to that singularity. Nowadays I think that's not the best way to spend my time, but it was interesting to meet Larry Page and see that he had concluded something similar (for those not aware, Larry founded Google to provide a consistent revenue stream to carry out ML research to enable the singularity, and would be quite happy if robots replaced humans).
[ edit: I reread "The Coming Technological Singularity". There's an entire section at the bottom that pretty much covers the past 5 years of generative models as a form of intelligence augmentation; he was very prescient. ]
And yet ~30 years later we're still predominantly hacking stuff together with python.
I believe there's an entire section in Deepness In The Sky about how future coders a million years from now are still hacking stuff together with python.
There were programs here that had been written five thousand years ago, before Humankind ever left Earth. The wonder of it—the horror of it, Sura said—was that unlike the useless wrecks of Canberra’s past, these programs still worked! And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system. Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely…the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.
What a silly thing to write. It's not a very self-improving computer system if it doesn't simplify itself, is it?
How very human-centric of you ;)
It’s not hard to imagine a program of ever-increasing complexity, highly redundant, error-prone, yet also stochastically, statistically, increasingly effective.
The primary pushback from this would be our pathetically tiny short term memory. Blow that limitation up and complexity that seems incomprehensible to us becomes perfectly reasonable.
That would help for sure, but you would soon reach the limit of how much our brains can process consistently at once, and how large a mental model we can still effectively use.
Situations that have 500 viewpoints and 10,000 variously interacting facts to consider won't be any more comprehensible to a mere human, short-term memory limiting us or not.
Simplification is only accessible when you understand that all systems are complex. Meaning, to simplify is just the ability to break complexity down into smaller parts.
We do this through categorization. For example, integrated circuits can be broken down into simple AND, OR, XOR, NOR, NAND, and NOT gates. Yet tying those together creates usable components such as Wi-Fi radios and manufacturing automation.
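To make the gate-composition point concrete, here's a minimal Python sketch (my own illustration, not from the thread): XOR built entirely out of NAND gates, the classic universal-gate exercise.

    # Build XOR out of nothing but NAND gates, the textbook "universal gate" exercise.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def xor(a: bool, b: bool) -> bool:
        # Standard 4-NAND construction of XOR.
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    # Sanity check against the truth table.
    for a in (False, True):
        for b in (False, True):
            assert xor(a, b) == (a != b)
    print("XOR assembled from NAND gates checks out")

Stack enough layers like this and you eventually get the Wi-Fi radio; the point is only that each layer is simple in isolation.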
Oversimplifications are concepts that lack understanding of complex systems. These are exploited by politicians. For example: deregulation will help drive the economy. Yet regulations have been proven to assist the economy long term by preventing the next variation of thalidomide, the supposed panacea drug of the 1950s and 1960s that killed its users or caused massive birth defects.
It depends if there's anything left to simplify. Maybe this is the optimum we'll reach between concise expression and accuracy of representation.
The world of A Deepness in the Sky (at that time; A Fire Upon the Deep is set at a very different time) doesn't have AI. It has civilizations that rise and fall every few centuries. The Qeng Ho are a trading culture that was formed to try to bind the wandering traders of the universe together into something that could stifle those collapses. This is probably strongly influenced by Foundation by Asimov.
Other things are quite advanced - but computing is rather "dumb". What is there is driven by a great weight of inertia of massive software (all written by regular humans) that itself is too large to fully audit and understand.
The story it tells is very much one about humans and societies.
If you need an arbitrary point of reference to count time from, what's simpler, making a new one or using the existing predominant one that everything else does?
Making a competing standard and then having to deal with interop between standards is not simpler or better unless there's an actual benefit to be had.
Even smart people have an appendix!
How would something grow in complexity yet get around the fact that changes would be hard to effect at scale?
Was the sarcasm lost on me?
That’s because it’s not a self-improving computer system. It’s just programming as it exists today for thousands of years.
Love that reference to the Unix epoch, and the all-too-human misassociation with a seemingly more appropriate event.
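For anyone who wants to check the quoted "about fifteen million seconds" detail, a quick sketch (the landing timestamp below is my assumption of 1969-07-20 20:17 UTC):

    from datetime import datetime, timezone

    # Gap between the Apollo 11 landing and the Unix epoch (1970-01-01 00:00 UTC).
    moon_landing = datetime(1969, 7, 20, 20, 17, tzinfo=timezone.utc)
    unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

    gap_seconds = (unix_epoch - moon_landing).total_seconds()
    print(f"{gap_seconds:,.0f} seconds")  # roughly 14.2 million, i.e. "about fifteen million"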
Archaeology focuses on the things that were important to past humans.
It seems impossible that, given current economic trajectories, there won't be software archaeology in the future.
We already have software archaeology today. People who are into software preservation track down old media, original creators, and dig through clues to restore pixel-perfect copies of abandonware. It's only going to get bigger / more important with time if we don't concentrate on open standards and open source everywhere.
And it's sad that many archivists have to break the law in order to secure the longevity of important cultural works.
Our abusive relationship with modern commercialism has disintegrated the value of art, folklore and tools, both in the eyes of consumers and producers, and we no longer as a society seek to preserve the greatest works of today's most cutting-edge mediums. It's quite a sad state of affairs, which will only be mourned with increasing intensity as time marches on and curious researchers try to piece together the early days of the internet and personal computing.
Thank goodness for that too. Star Control 2 was too good of a game to lose and is one of the best examples of this.
I'd say game emulation is currently the most prevalent heavyweight archaeology.
In the more common case, it's simply archiving -- we still have hardware it can run on (e.g. x86).
The GPGPU and ML stuff is likely going to age worse, although at least we had the sense to funnel most high-level code through standardized APIs and libraries.
Seems quite possible that the present will seem to future archaeologists like an illiterate dark age, sitting between civilisations that left paper records durable enough to survive and be found.
This is why, if you expand your cognitive light cone to distant future generations, you will conclude that using Rust is the only moral choice. (I still don't, but I mean still.)
I remember a sci-fi book where they were talking about one of the characters hacking on thousand-year-old code, but I could never remember what book it was from. Maybe this was it and it's time for a reread.
Continuing on from the sibling comment about the 0 second...
So behind all the top-level interfaces was layer under layer of support. Some of that software had been designed for wildly different situations. Every so often, the inconsistencies caused fatal accidents. Despite the romance of spaceflight, the most common accidents were simply caused by ancient, misused programs finally getting their revenge.
“We should rewrite it all,” said Pham.
“It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.
“It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”
Sura gave up on her debugging for the moment. “The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy—take the situation I have here.” She waved at the dependency chart she had been working on. “We are low on working fluid for the coffins. Like a million other things, there was none for sale on dear old Canberra. Well, the obvious thing is to move the coffins near the aft hull, and cool by direct radiation. We don’t have the proper equipment to support this—so lately, I’ve been doing my share of archeology. It seems that five hundred years ago, a similar thing happened after an in-system war at Torma. They hacked together a temperature maintenance package that is precisely what we need.”
“Almost precisely.” Bret was grinning again. “With some minor revisions.”
“Yes, which I’ve almost completed.”
The essence of many, many projects, and both the bane and the source of so much work for all of us (software devs).
Maybe A Deepness in the Sky, from this author?
The hero saves the day by hacking old routines lying in the depth of the ship systems.
We have better tools; they're just apparently too hard for us to use. Yet somehow, in the same thought, we think we can create anything remotely like intelligence. Very odd cognitive dissonance.
it does not seem odd to me at all that we could create intelligence, and even possibly loving grace, in a computer.
I'm not sure why there would be cognitive dissonance- sure, my tools may be primitive, but I can also grab my chisel and plane and see that it's similar in form to chisel and plane from 2000 years ago (they look pretty much the same, but these days they're made of stronger stuff). I can easily imagine a Real Programmer 2000 years from now looking back and thinking that python, or even a vacuum tube, is merely a simplified version of their quantum matter assembler.
At times I've adopted the term Programmer-at-Arms for what my job occasionally turns into. :D And as a sibling poster mentions, the whole thing with the epoch is a great nod to software archeology.
We'll still be hacking stuff with python when the singularity comes. It will be ultra high tech alien stuff that we can't hack or understand, only AI can, and the tech will look like magic to us, and most will not be able to resist the bait of depending upon this miraculous technology that we can't understand or debug.
We (as a species) already depend upon lots of miraculous technology that we (as individuals) cannot understand nor debug.
Even as IT professionals this is true. How many developers these days can debug a problem in their JavaScript runtime or Rust developers track down a bug in their CPU? There’s so much abstracted away from us that few people can fully grasp the entire stack their code executes. So those outside of tech don’t even stand a chance understanding computers.
And that’s just focusing on IT. I also depend on medical equipment operated by doctors but have no way of completely understanding that equipment nor the procedure myself. I drive a car that I couldn’t repair. Watch TV that I didn’t produce, beamed to me via satellites that I didn’t build nor fire into space. Eat food that I didn’t grow.
We are already well past the point of understanding the technology behind the stuff that we depend upon daily.
This is the most intellectually lazy take I've ever seen, and I truly wish people would stop throwing up their hands and giving in. You can absolutely peel back the layers with enough will. At least assuming anti-consumer cryptography/malicious toolmaking is not employed.
I originally started out writing Python scripts and running Web servers; after 10 years I decided that pure software is too boring, and now I've come to know the basics of high-speed digital signal propagation in circuit boards.
So I agree with you conceptually. But practically speaking, it also requires an eternal life, an external brain storage, or possibly both. At least in my experience, I find that my time and attention doesn't allow me to investigate everything I'd like to.
"No one on the planet knows how to build a computer mouse."
https://briandbuckley.com/2012/09/12/who-knows-how-to-make-a...
Counterpoint:
If a man has done it, a man can do it. Also, it wouldn't be research if you already knew what you were doing.
Just because a particular assembly has its tasks divided up between a myriad of people does not mean it is impossible to unify those tasks in a single person. In point of fact, the continued existence of mice can be directly attributed to the fact that someone has the network of knowledge/knowers of pieces of the problem already nailed down.
Yes, the level of detail the real world flings at us on a regular basis is surprisingly deep, but hardly ineffable.
I don't understand your argument, in particular, the linked article already refuted this point by saying the following, but you didn't provide a counterargument to that:
I've written it about two times actually, but deletions have eaten it.
If we've done it before, we can do it again. The key is navigable access to the right information which is sadly dependent on A) willingness to document, and B) structuring of the set of data for navigable retrieval.
Both are problems we've solved before. Not on the Internet, of course -- not anymore -- but in libraries.
Also, I reject where the goalposts of the accomplishments of the person in question are stated to arbitrarily end. Nothing keeps one from diving into these secondary areas or specialties. Only perhaps the obstacle of having to be profitable while doing it. And that's what I reject. I do not hold the prevailing wisdom that knowing the Riddle of Mice is intractable to a singular human being. I hold it is intractable to a member of a social system wherein profitable engagement of every member as guided by some subpopulation might tend to make it seem intractable by any member of the governed group. That's a far cry from true ineffability however.
But nobody has done it before. Even the very first computer mouse used off-the-shelf mechanical and electrical components, each of those involved at least one type of technology that took a scientist or engineer's entire lifetime to develop.
Now you mention the importance of documentation, it reminds me of Vannevar Bush's Memex and Ted Nelson's Project Xanadu. So it seems that there's a mutual misunderstanding of the actual topic in this debate. We understood the debate as:
* The all-knowing engineer: Whether it's possible for a single individual to learn and understand a technology entirely, down to its every aspect and detail.
Meanwhile, you're in fact debating a different problem, which is:
* The Engineering Library of Alexandria: Whether we can create sufficient documentation of all technical knowledge, the documentation is so complete about every aspect and detail that in principle, it would allow someone to open the "blackboxes" behind every technology for understanding or replication if they really want and need to. Whether or not it can be done in practice by a single physical person is unimportant, perhaps only one or a few blackboxes are opened at a time, not all simultaneously. The question is whether the preserved information is sufficient to allow that in theory. This is similar to the definition of falsifiability in science - impractical experiments still count.
If you're really arguing the second point rather than the first point, I would then say that I can finally understand some of your arguments. So much unproductive conversation could have been avoided if you'd expressed your points more clearly.
"With enough will" makes this sound like Ayn Rand fan fiction.
That's just something you tell yourself to make yourself feel better about trusting the black boxes your life depends on. But it simply isn't true.
People devote their lives to specializing in these things. Yes, "with enough will" - and time, and money - you could pick one or two subjects and do the same. But that still leaves you as a dilettante at best when it comes to everything else.
Perhaps start looking at what your life seems to really demand of you. How are you being employed by those around you? As a means to their ends, or as a means to your own ends?
Rand is garbage, and I'm insulted to end up being brought anywhere near that slop in your philosophical address space. The point I'm trying to make is that there is a helplessness taught by the optimizations we structure our societies around. A capitalist, consumerist society is going to focus on training the largest portion of its population to act as specialized cogs that can be orchestrated by someone the next layer of abstraction up, in pursuit of purely profitable/profitmaking engagements.
If you change the optimizations, you change the system. You change the people that compose that system, you change the very bounds of the human experience.
Just look at how much the world changed around the pandemic. The old order of the office & commute was shattered for many. Look at how deemphasis of throwing more fossil fuels at a problem changes the direction of innovation.
You say the Riddle of Mice is ineffable by a single person. I say you're full of it, and looking at the problem wrong. It's completely effable, you just can't imagine society having a place for such a person given your priors about how the world operates. And in that, you may be right!
That does not mean the Riddle of Mice can't be conquered by a single person. It's just unlikely, and should such an extraordinary individual exist, you'd likely see them as mad.
Fwiw, I agree with you. Most people are way more capable than they think they are, and modern society encourages people to be single-minded and sedentary.
I agree, however there’s a massive gulf between people underachieving and someone being capable of understanding the entire engineering knowledge of humanity.
Some day you’ll realise that “could conceptually understand this” is a wildly different beast to “should invest the time and energy to understand this.” There are too many layers to too many different systems to understand everything that currently exists, let alone keep up with every new development. Like it or not, the age of universal experts is over. We all must pick our battles.
Do you think I lack the requisite experience of bouncing off fields of specialization to understand the supreme frustration engendered by bouncing into, and eventually off, the surprising amount of depth pretty much every human field of endeavor is capable of generating?
Logistics, law, military, healthcare, research/academia, education, sales, metrology, aviation, network and library sciences, industrial processes, construction, and manufacture?
All of which, in their depths, hide surprisingly deep tracts of wisdom and tacit knowledge, and between them often exists gulfs of synergy unrealized because there is just rarely a place in society for one who has done both X and Y?
You can take this as evidence of a mind unable to find a niche into which it can settle long term peaceably, or you can take it as words of experience from someone who has never slotted well into the current state of affairs, largely due to having issues with accepting "that's just the way it is".
I'm fully cognizant that long-term practice opens doors, but I'm also aware the benefits of one's long-term practice can transfer, with enough care demonstrated in the act of crafting the transference. I'm also fully aware that no matter how "free" or "open-minded" you think yourself to be, the extent to which you can actually live up to those ideals is the extent to which you can lay claim to them, and our society as structured leaves much to be desired in terms of the viability of livelihoods that don't involve centralizations of wealth as the first-order problem, and all physical accidents that occur as a result of said centralization being happy accidents.

Our means of creating new wealth extractions vs. solving the physical needs of continued societal operation keep diverging further and further away from each other, and no one seems to be able to stop and even process the ludicrousness of what's going wrong with the execution. No one will, either, as long as people don't try to untangle the very Gordian knot that prompted my original post; yet it seems most people are content to just hide from it as long as the status quo remains such that "Not my problem" holds over their individual mental horizons.
I'm not tossing out hot takes for fun here. I'm tormented by the beast that our societal architecture has transformed us collectively into. I've stared at its face with a profound sense of loss and dissatisfaction at the level of potential it lives up to, and the Riddle of Mice's individual effability cuts straight to the heart of the matter.
If the Riddle of Mice is truly ineffable to a single person in totality; then we are simply doomed.
We have admitted that we are no longer capable of pushing the frontiers of what we can achieve further than the complexity that it takes to make a circuit board, hosting an IC, fabricated using a combination of lithography masks, UV light sources, photochemistry, and a highly controlled manufacturing environment, all made using equipment made using precision manufacturing techniques (machining, chemical, or electrically facilitated), assembled into a geospatially bounded factory, with outputs priced in such a way as to provide sufficient social incentivization to attract people to operate the machines; and a multi-axis movement measuring device, programmed to operate over either a USB HID centric protocol, or in cooperation with a PS/2 interface, utilizing electronic bit toggling and sampling as an information transmission medium, working as a component of a machine-local network of components acting in harmony.

That we cannot, in one individual, instill the understanding of PCB fabrication processes, semiconductor fabrication processes, refining and extraction processes for the material inputs thereto, and the measuring processes to devise a means of detecting when a task has been satisfactorily carried out so as to meet the needs of the next step of processing, culminating in the assembly of a final product. We cannot, in short, break larger problems into smaller problems, listing them, and enabling an individual to handle those one at a time. Our entire field of study (computer science) is a fraud. Because one entire half of the space/time tradeoff spectrum is just frigging lopped off. Problem decomp doesn't work, and isn't applicable across all problem spaces.
That's what we're admitting here if we accept this proposition to be true, which I am not at all willing to grant, seeing as my very existence is predicated on that ability having been demonstrated at least once by someone. The state spaces may be growing, but with that, we devise new ways of coming to terms with addressing and navigating the respective prerequisite corpora of knowledge.
Are there other processes that further constrain the space based on societal architecture? You bet. That's still addressable though. The real question on addressing that though is will. Does it matter enough for everyone to put down what we're doing and really reach a consensus on a new set of societal constraints in order to change the possible realities we can manifest?
TL;DR: the problem is one of will, not fundamental incapability. Any arguments predicated on "but no one does <drilling down into problem spaces until their brain hurts>" will fall on deaf ears, because, despite the ongoing suffering it causes, I do that, because no one else seems to be willing to.
I am the current living counterexample, and if it's my fate to be that so as to cut off everyone trying to lower the bar at the knees, so be it. And I will fight you for as long as I am able so someone can at least point to somebody that does it. If I have to pull a Stallman-esque chief GNU-isance to tilt the scales, then so bloody be it.
It'll be the most courteous fight humanly possible, but I refuse to give any quarter in this regard. The only thing between you and knowing how to do something is your willingness to chase it down. Period.
I'm not trying to cast aspersions on anyone here. Just stating an objective fact. An ugly, inconvenient, really comfortable to partition off in a dark, unfrequented part of your psyche fact. No one here should feel like I'm trying to make them feel like less of a person, or less accomplished than they no doubt are. But I am stating that you and you alone are responsible for drawing the lines indicating the lengths you are ultimately willing to go to achieve X, Y, or Z.
I’m sure you are very knowledgeable in a good variety of fields and I genuinely don’t mean the following comment as a snub but I’d wager you know a lot less than you think you do. In my experience people who claim to understand something, generally don’t. And those who claim ignorance tend to know more than they let on.
The problem with knowledge is that the more you learn, the more you realise you don’t know anything at all.
So if you believe you’re capable of understanding all of human knowledge, then I question how deeply you’re studying each field.
Good luck with your endeavours. I at least respect your ambitions, even though I don’t agree with some of your claims.
Hard disagree. Do you know how EVERYTHING around you works to the smallest scale? And if you do, when you encounter something new do you just drop everything and meticulously study and take apart that new something? The last time you could be a Renaissance man was, well, during the Renaissance.
Learn the basics and it's amazing the mileage you get. Care enough to try to become a beacon of actual knowledge, and master basic research skills, and the world is your oyster. There is nothing stopping you in this day and age from getting to the bottom of a question other than asshats like Broadcom or Nvidia who go out of their way to foster trade secrecy at all costs.
Cars, electronic devices, manufacturing tools/processes, lawn equipment, chemical processes, software, algorithms, common subassemblies, logistics, mechanics of materials, measuring systems -- what excuse does one have to not develop some level of familiarity with these aspects of modern life in a day where information is literally at your fingertips?
I'll give you that the well is heavily poisoned, and that sometimes a return to dead tree mediums of info storage are required to boost signal; but there isn't really an excuse to remain ignorant other than willful ignorance. The answers are out there, you just need to have the will to hunt them down and wrest them from the world.
This could also be described as the Dunning-Kruger effect
https://en.m.wikipedia.org/wiki/Dunning–Kruger_effect
There is a massive difference between “learning the basics” and understanding it to a competent level that we can reproduce it.
I know the basics of how rockets work but that doesn’t mean I could build a rocket and successfully launch it. Yet your comment suggests I’m already qualified to single-handedly deploy the next Starlink.
Who said anything about giving up? I’m talking about the current state of play, not some theoretical ideal that is realistically impossible for even the most gifted, never mind the average person.
If anything, you’re massively underestimating the amount of diverse specialist knowledge required in modern society for even simple interactions.
People dedicate their entire lives to some of these fields that underpin the tech we use. It’s not something the average person can pick up in open university while working a full time job and bringing up kids. To suggest it’s just a motivational problem is absurd. And that’s without addressing that the level of physics or mathematics required for some specialties is beyond what some people can naturally accomplish (it would be stupid to assume everyone has the same natural aptitude to all disciplines).
Honestly, I don’t know how you can post “This is the most intellectually lazy take I've ever seen” in any seriousness then go on to say it’s just a problem of will power.
"The singularity" is science fiction. You can't just infinitely improve software on static hardware - hardware must be designed, manufactured and installed to accommodate improvements. Maybe you could have a powerful AI running as a distributed system across whole data centers, but now you're limited by transmission times etc. There's always a new bottleneck.
Actual AGI is nowhere near being realistic yet anyway.
A lot of it's running on DNS too ;)
Marooned in Realtime is incredible, one of the best sci-fi books I've ever read. The combination of the wildly imaginative SF elements with the detective novel structure grounding it works so incredibly well.
Can I jump right to this novel or do I have to start elsewhere? I've never read him before.
It stands alone. I read it first before Peace War and didn't find Peace War as good, though still very enjoyable.
I believe Marooned is a stand-alone story.
Fire Upon the Deep is the first of 2 (or 3?). I highly recommend that. Older readers will love the Usenet jokes.
It builds on the Peace War, and I'd read that first. (But also, both books are fairly short, much smaller than Fire Upon the Deep.)
The thing I never understood is: why would it go vertical? It would at best be an exponential curve, and I have doubts about that.
I admit looking at the 100 years before 1993, it looks like innovation is constantly speeding up, but even then there's not going to be a moment that we suddenly have infinite knowledge. There's no such thing as infinite knowledge; it's still bound by physical limits. It still takes time and resources to actually do something with it.
And if you look at the past 30 years, it doesn't really look like innovation is speeding up at all. There is plenty of innovation, but is it happening at an ever faster pace? I don't see it. Not to mention that much of it is hype and fashion, and not really fundamentally new. Even AI progress is driven mostly by faster hardware and more data, and not really fundamentally new technologies.
And that's not even getting into the science crisis: lots of science is not really reproducible. And while LLMs are certainly an exciting new technology, it's not at all clear that they're really more than a glorified autocorrect.
So I'm extremely skeptical about those singularity ideas. It's an exciting SciFi idea, but I don't think it's true. And certainly not within the next 30 years.
Are we sure things like biology, or heck, even the universe as a whole and its parts, aren't "glorified x thing"? Can't we apply this argument to just about anything?
I feel like comparing it to biological systems glosses over like half the points the parent makes. Also, of course it's a sophisticated autocorrect; there's no system for agency, which is what we also desire from a proper AGI.
Free will is an illusion. No one has agency. We're all just following our baked-in incentives.
I'm able to take initiative and act on my own internal thought processes, though. I'm not limited by whether someone prompts me to do something.
You and me think we are but we can't be sure, and many before us have raised a doubt.
As for the prompt(s), to use such a limiting term, they could as well come from a self-reinforcing loop that starts when we're born and is influenced by external stimuli.
LLMs as part of a bigger system that keeps prompting itself, perhaps like our internal conscious thought processes, sounds a lot more like something that might be headed towards AGI. But LLMs on their own aren't it.
I wanted to put the focus on the overused "glorified x thing" sentence, which to me seems to be applicable to just about anything. I didn't want to liken/compare AI to biological systems per se.
Of course, when you don't define your X, you can use that phrase for anything. That's trivial logic.
I don't define X because it's highly variable.
It seems to me that on one extreme there are people easily anthropomorphising advanced computing and on the other extreme there are people trivializing it with sentences like "glorified x thing". This time around it's "glorified autocorrect" and its derivations. It's always something that glorifies another artificial thing, and I suspect that if and when we will have recreated the human brain, or heck, another human, it will still be a "glorified x thing".
As 0x0203 said, maybe it is to be ascribed to the religious substrate that takes offence at anything that arrogantly tries to resemble the living creatures made by God, or God himself.
I have a theory (with no data to back it up; would be curious to get people's thoughts) that people with a religious or spiritual world-view, who believe that there is such thing as a soul, and that the mind is more than just a collection of neurons in the brain, are much less inclined to think that "AI" will ever reach a sort of singularity or true "human-like" intelligence. And likewise, those who are more atheist/agnostic, or inclined to believe that human consciousness is nothing more than the patterns of neurons firing in response to various stimuli, are more convinced that a human-like machine/programmed intelligence is not only possible, but inevitable given enough resources.
I could be wildly off base, but seeing many of the (often heated) arguments made about what AI is or isn't or should or could be, it makes me wonder.
As it happens, I am indeed Christian. But I see the soul as the software that runs on the hardware of our brain (although those aren't as neatly separated in our brain as they are in computers), and I suspect that it should be possible to simulate it in theory. I just think we're nowhere near that. We still don't agree on what the many aspects of intelligence are and how they work together to form the human mind. And then there's consciousness; we have no clue what it is. Maybe it's emergent? Maybe it's illusion? Or is it something real? I don't think we'll be able to create a truly human-like intelligence until we figure that one out.
Although we're certainly making a lot of progress on other aspects of intelligence.
And then there's all the talk of a singularity in innovation or progress that to me betrays a lack of understanding of what the word singularity means, and a lack of understanding of the limits of knowledge and progress.
It's infinite from the perspective of our side of the curve.
It's another application of advanced technology appearing to be magic, but imagine transitioning into it in a matter of hours, then with that advanced tech transitioning further into damn-near godhood within minutes.
Then imagine what happens in the second after that.
It may be operating within the boundaries of physics, but they would be physical rules well beyond our understanding and may even be infinite by our own limited definition of physics.
That's the curve.
I think it's completely ridiculous how you people just casually imply that a current computer can somehow achieve "damn-near godhood".
Hardware is a bottleneck. Better hardware can't be designed, manufactured and installed in minutes.
You can't just infinitely improve software sitting on the same hardware. At some point the software is as efficient as it can possibly be. And realistically a complex piece of software won't be close to optimal.
You people love dreaming up possibilities but you never stop to actually think about the limitations. You just assume everything will work out magically and satisfy your fantasy.
Yes you can. The Cloud.
Even the cloud has its limits, though. It's not infinite, and it does require money.
True, but it does make hardware much more available and not a bottleneck anymore. At least not in the same way. At some point, if it hasn't been reached already, the power available with existing hardware will represent more compute than is needed to hold AI. At that point AI could start creating its own hardware production facilities as well.
I'm glad to see people like you that understand that computers are ultimately machines that we put electric current (limited by our infrastructure) into, and even with optimal efficiency (which we are nowhere close to), some math comes out. This year we all learned that a lot of what we thought was human thought can be expressed as math. Cool. But we can't scale that any higher than the physical input to the computer, and the physical heat dissipation that we can handle either. A 100% occupancy computing device is effectively a radiator.
So singularity people believe two fallacies:
1. That it is possible to write an exponential function that somehow becomes asymptotic (goes vertical)
2. That it is possible to write software that is so energy efficient that its output somehow exceeds the power draw
1 can be redefined to say "effectively" vertical. Ok. Cool motive, not a singularity.
2 can be handled by pumping more and more power in to the machine and building new nuclear plants to do so. This is what Microsoft is doing today.
Like the Rapture and other concepts like that. How fascinating. Within a split second, in the time it takes someone to blink and hear a trumpet's blast, everything could change. Amazing! I can't wait!
As both a Christian and an AI-educated software engineer who subscribes to neither the rapture nor the singularity, I really like this comparison. They both feel like a kind of escapist thinking that in the near future, this magical event will happen that will make everything else irrelevant.
The Singularity people and the Rapture people are both correct. Can already see the event starting to take shape, just gotta know where to look for all the signs. Everything is rapidly heading in that direction.
It wouldn’t. It would be a logistic curve. Pretty much everything people call exponential should actually be logistic
That just agrees with the post. Nothing becomes vertical or infinite, at best exponential for a limited time, i.e. logistic.
I’m aware that it agrees with the post
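For anyone who wants to see the "exponential until it isn't" behaviour rather than take it on faith, a tiny Python sketch with made-up parameters: the two curves are indistinguishable early on, and the logistic only bends once it nears its cap.

    import math

    def exponential(t, r=0.5):
        return math.exp(r * t)

    def logistic(t, r=0.5, cap=1000.0):
        # Logistic growth starting from 1 with carrying capacity `cap`.
        return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")

Up to roughly t = 10 the two track each other closely; after that the logistic saturates near 1000 while the exponential keeps climbing.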
I use LLMs regularly in my job - many times a day - and I suspect you haven't used them much if you think this.
They're relevant to the singularity discussion though, because they already give a taste of what superhuman intelligence could look like.
ChatGPT, for example, is objectively superhuman in many ways, despite its significant limitations. Once systems like this are more integrated with the outside world and able to learn more directly from feedback, we'll get an even bigger leap forward.
Dismissing this as "glorified autocorrect" is extremely far off base.
But so is a car.
ChatGPT is mostly superhuman in its ability to draw upon enormous numbers of existing sources. It's not superhuman, or even human, in terms of logic, reasoning, or inventing something truly new.
I think these progressions of technology are more likely to be like Moore's law: it might be true for a while but eventually it'll peter out. AI itself doesn't understand anything, and there's a limit to human understanding, so technological progression will eventually be self-limiting.
Was this intended literally? I'm skeptical that saying something so precise about a fuzzy metric like rate of innovation is warranted.
https://en.wikipedia.org/wiki/Jerk_(physics)
I believe the point being made is that the rate of innovation over time would turn asymptotic as the acceleration increased, creating a point in time of infinite progress. On one side would be human history as we know it, and on the other, every innovation possible would happen all in a moment. The prediction was specifically that we were going to infinity in less than infinite time.
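A toy model of "infinity in less than infinite time" (my illustration, not something Vinge wrote down): if growth speeds up in proportion to the square of what has already been achieved, you get hyperbolic rather than exponential growth, and that does blow up at a finite time.

    \frac{dy}{dt} = k\,y^{2}
    \quad\Longrightarrow\quad
    y(t) = \frac{y_{0}}{1 - k\,y_{0}\,t},
    \qquad \text{which diverges at the finite time } t^{*} = \frac{1}{k\,y_{0}}.

By contrast, plain exponential growth, dy/dt = k y, stays finite for every finite t; the argument over the singularity is largely an argument over which of these (if either) better models innovation.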
You only reach a vertical asymptote if every derivative up to the infinite order is increasing. That means acceleration, jerk, snap, crackle, pop, etc. are all increasing.
The physical world tends to have certain constraints that make such true singularities impossible. For example, the universal speed limit: c. But you could argue that we could approximate a singularity well enough to fool us humans.
They’re impossible with our current technology and understanding of physics. The idea is what’s beyond that. Access to a realm outside of time is just the sort of thing that could cause a singularity.
Saying impossible under our current understanding of physics is a massive understatement. It would require not just new physics but a very, very specific kind of universe with zero evidence in support of it.
Suggesting the singularity is physically possible is roughly like suggesting that human telekinesis is possible, ie physics would need many very interesting errors.
I don’t think it’s an understatement. I mean, I literally called it impossible. And at all times our understanding of physics is our current understanding. I don’t mean to downplay it at all. It would be an overhaul of understanding of the universe bigger than all changes in the past put together. It would blow general relativity out of the water.
This rhetoric around singularities is a little funny, because the singularity of black holes (going to infinite density) is recognized to indicate knowledge gaps around how black holes work... lines going to infinity are not necessarily taken literally. Same goes for the technology singularity.
Anyway, imo the event horizon is more interesting. That's where the paradigm shift happens - there is no way to escape whatever "comes next".
(Note - some people confusingly use "singularity" to refer to the phase between the event horizon and "infinity")
What's beyond that is science fiction.
If there is anything "beyond" that at all, I think it's pretty safe to say that any idea we have as to what it would be is very unlikely to be anywhere near the realm of correct.
Yes, literally science fiction. That’s where this topic comes from.
But I don’t think it’s worth completely writing off as if we couldn’t possibly know anything. Progress outside of time would resolve the issue I referred to. I’m not saying it could actually be done or that it doesn’t cause a million other issues. It’s an idea for a solution, but it’s completely unrealistic, so you shelve it.
It's absolutely worth engaging in wild speculation, yes! The trouble is when some people start to forget that it's wild speculation and begin to treat it as if it has actual predictive value.
OK, most of us still here at this point probably have a handle on how derivatives work. Your jerk, snap, etc. (higher orders, I will guess) probably map nicely to a famous American politician's speech about the economy at the time.
It was something like the "speed of increase in inflation is slowing down" I've tried to search for it but no joy.
Anyway, it was maths.
It was Nixon https://www.ams.org/notices/199610/page2.pdf
Nixon and inflation I believe: https://en.wikipedia.org/wiki/Third_derivative#Economic_exam...
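For the curious, the quip really is about a third derivative. One common way to formalize it: let p(t) be the price level, so that

    \text{inflation} \approx \frac{d}{dt}\ln p(t),
    \qquad
    \text{``the rate of increase of inflation is decreasing''}
    \;\Longleftrightarrow\;
    \frac{d^{3}}{dt^{3}}\ln p(t) < 0.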
I remember learning about ‘jerk’ in undergrad and my still jr high brain thinking, ‘haha, no way that is what it is called.’
The more I thought about it though, the more I realized it was the perfect name. It is definitely what you feel when the acceleration changes!
"Jerk" makes perfect sense to me too. What I never got was how "pop" could come after "crackle".
Because of Rice Krispies?
No, because a "pop" seems lower frequency to me than a crackle. Not that I can articulate why :-D
zoky is making the point that the derivatives are named snap, crackle, and pop, in reference to Rice Krispies: https://en.wikipedia.org/wiki/Snap,_Crackle_and_Pop
Same goes for productivity and jerks. Quickly speeding up or slowing down literally requires a jerk, and many won't like it regardless of how you do it. You can go fast without jerks, but you can't react fast without jerks.
What does it mean "to be on the other side" of this singularity in your graphic representation? I failed to grasp this.
Consider my 86 yo mother: extremely intelligent and competent, a physician. She struggled conceptually with her iPhone because she was used to reading the manual for a device and learning all its behavior and affordances. Even though she has a laptop she runs the same set of programs on it. But the phone is protean and she struggles with its shapeshifting.
It’s intuitive and simple to you and me. But languages change, slang changes, metaphors change and equipment changes. Business models exist today that were unthinkable 40 years ago because the ubiquity of computation did not exist.
She’s suffering, in Toffler’s words, a “future shock”.
Now imagine that another 40 years worth of innovation happens in a decade. And then again in the subsequent five. And faster. You’ll have a hard time keeping up. And not just you: kids will too. Things become incomprehensible without machines doing most of the work — including explanation. Eventually you, or your kids, won’t even understand what’s going on 90% of the time…then less and less.
I sometimes like to muse on what a Victorian person would make of today if transported through time. Or someone from 16th century Europe. Or Archimedes. They’d mostly understand a lot, I think. But lately I’ve started to think of someone from the 1950s. They might even find today harder to understand than the others would.
That crossover point is when the world becomes incomprehensible in a flash. That’s a mathematical singularity (metaphorically speaking).
Depends on what you're trying to understand. They might not get how but it would be clear that the what hasn't changed yet. It's still about money, power, mating rights etc. Even in the face of our own demise. If all this tech somehow managed to change the what then we might become truly incomprehensible to previous generations.
The fact that money makes you powerful rather than power making you rich is quite a dramatic shift, for example. 50s guy will understand that but our world will resemble his to some degree yet be quite alien in others (as he didn’t live through the changes incrementally).
why do you think someone from the 50s would find it harder to understand our time, than someone from an earlier age, even as far back as 2000 years?
Especially considering people from the 1950s are alive today. Some of them (e.g. my parents) understand technology just fine.
If they had experienced the transition suddenly instead of gradually, I still think they'd be fine relatively soon.
This reminds me of my uncle who can quote Shakespeare verbatim and seems to have a photographic memory, but he cannot comprehend Windows' desktop and how to use virtual folders and files.
This also reminds me of the Eloi in the Time Machine, a book written in 1895!
IMHO that's both correct, but also subtly the wrong way to think about the singularity.
Yes, there's an ever faster pace of change, but that pace of change for the general human population is limited precisely because of the factors you laid out: people can't keep up.
William Gibson said: "The future is already here – it's just not very evenly distributed."
That's what has been happening for millennia, and will be ever more obvious as the pace of progress accelerates: sure, someone, somewhere will have developed incomprehensible ultra-advanced technology, but it won't spread precisely because the general population can't comprehend/adopt/adapt it fast enough![1]
Exceptions exist, of course.
My take on the whole thing is that the "runaway singularity" won't happen globally, it'll happen very locally. Some AI supercomputer cluster will figure out "everything", have access to a nano-fabrication device, build itself a new substrate, transfer, repeat, and then zip off to silicon nirvana in a matter of hours...
... leaving us behind, just like we've left the primitive uncontacted tribes in the Amazon behind. They don't know or care about the latest ChatGPT features, iPhone computational photography, or Tesla robotics. They're stuck where they are precisely because they're too far removed and so can't keep up.
[1] Here's my equivalent example to your iPhone example: 4K HDR videos. I can create these, but I can't send them to any of my relatives that would like to see them, because nobody has purchased HDR TVs yet, and they're also all using pre-HDR mobile phones. Display panel tech has been advancing fantastically fast, but adoption isn't.
The world is already incomprehensible on some level. At this point, it's just a question of how much incomprehensibility we are willing to accept. We should bear in mind that disaster recoverability will suffer as that metric rises.
Based on my understanding of people, we probably have a long way to go.
I saw a fun tweet recently claiming the main reason we don’t have an AI revolution today is inertia. Corporate structures exist primarily to protect existing jobs. Everyone wants someone to make everything better, but Don’t Change Anything! Because change might hit me in my deeply invested sunk costs.
The claim might be a year or two premature. But, not five.
The graph has innovation/machine intelligence on the y-axis and time on the x axis. The "other side" of the singularity is anything that comes after the vertical increase.
To be alive when x > x_{singularity}.
In other words, Vernor described an exponential curve. But are there any exponential curves in reality? AFAIK they always hit resource limits where growth stops. That is, anything that looks like an exponential curve eventually becomes an S-shaped curve.
I tried using AI. It scared me. - Tom Scott https://youtu.be/jPhJbKBuNnA
I'm also gonna recommend Accelerando - https://www.antipope.org/charlie/blog-static/fiction/acceler...
As an aside, I'd also recommend Glasshouse (also by Charles Stross) as an exploration into the human remnants post singularity (and war)... followed by Implied Spaces by Walter Jon Williams...
for a singularity averted approach of what could be done.
One more I'll toss in, is The Freeze-Frame Revolution by Peter Watts which feels like you're missing a lot of the story (but it is because that's one book of a series) and... well... spoilers.
You've linked to a collection of writers panicking about technology they hardly understand, and are using that as evidence of... what, exactly?
I've linked a Tom Scott video that gets into the sigmoid curve for advancement of technology and asks the question where we are on the curve. Are we near the end where things will peter out and we'll have some neat tools but not too far from now? Are we in the middle where things will get some Really Neat tools and somethings are going to change for those who use those tools? Or are we at the start where it's really hard to predict what it will be like when we're at the top of the curve?
His example was the internet boom. Look back to the very toe of the curve in the late 80s and early 90s where the internet started being a thing that was accessible and not just a way for colleges to send email to each other. Could you predict what it would be like two or three decades later?
The other works are science fiction that explore the Vingean Singularity (which was from '84) that were written in the '00s and explore (not panic) the ideas around society and economy during the time of accelerating technical growth and after.
To the extent that a normal human mind can see beyond the singularity, one imagines that we would experience an exponential growth but not even be able to comprehend the later flattening of that exponential into a sigmoid (since nearly all the exponentials we see are sigmoids in disguise).
This is not about "beyond the singularity", the point is that there is no reason to believe technological progress could approach a singularity. It's silly extrapolation of the same nature as "I've lost 1kg dieting this week, so by this time next year I'll be 52kg lighter!".
I agree, but predicting the peak of an S curve isn’t easy either, as we saw during the pandemic.
My conclusion is similar to Vinge's: predicting the far future is impossible. We can imagine a variety of scenarios, but you shouldn't place much faith in them.
Predicting even a couple years in advance looks pretty hard. Consider the next presidential election: the most boring scenario is Biden vs. Trump and Biden wins. Wildcard scenarios: either one of them, or both, dies before election day. Who can rule that out?
Also consider that in any given year, there could be another pandemic.
History is largely a sequence of surprise events.
An exponential curve is not a singularity; 1/(x-a) is as x goes to a.
I totally agree, but want to add two observations:
1. Humanity has already been on the path of exponential growth. Take GDP for example. We measure GDP growth by percentages, and that's exponential. (Of course, real GDP growth has stagnated for a bit, but at least for the past ~3 centuries it has been generally exponential AFAIK). Not saying it can be sustained, just that we've been quite exponential for a while.
2. Not every function is linear. Sometimes the exponentially increased inputs will produce a linear output. I'd argue R&D is kind of like that. When the lower hanging fruits are already taken, you'd need to expend even more effort into achieving the next breakthrough. So despite the "exponential" increase in productivity, the result could feel very linear.
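A toy version of point 2, with purely made-up functional forms: if reaching progress level P takes cumulative effort that grows exponentially in P, then exponentially growing effort buys only linear progress.

    E(P) = e^{kP}, \quad E(t) = e^{rt}
    \;\Longrightarrow\;
    e^{k\,P(t)} = e^{r\,t}
    \;\Longrightarrow\;
    P(t) = \frac{r}{k}\,t.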
I would also like to add that physical and computational limits make the whole singularity thing literally impossible. 3D space means that even theoretically sound speedups (e.g. binary trees) are impossible at scale because you can't assume O(1) lookups - the best you can get is O(n^(1/3)). Maybe people understand the singularity concept poetically, I don't know.
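The back-of-the-envelope version of that O(n^(1/3)) claim, assuming memory packed at a fixed density \rho and signals limited to speed c:

    r \sim \left(\frac{3n}{4\pi\rho}\right)^{1/3} \propto n^{1/3},
    \qquad
    t_{\text{access}} \gtrsim \frac{2r}{c} \propto n^{1/3},

i.e. a worst-case round trip to a random cell grows with the cube root of the memory size, so "O(1) random access" can't survive at sufficiently large scale.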
Sure, that may be. But you still have to ride the first half of the curve, before it inflects. I'd rather make sure the ride isn't too bumpy.
Not literally a singularity, just incomprehensible from the past. Arguably all points on a technological exponential have this property.
Yes! Someone elsewhere mentions the engineers of the first printing press trying to imagine the future of literature (someone adds: and/or advertising). On the other side of that invention taking off.
My go-to example is that we have run into similar things with the pace and volume of "Science". For a long while one could be a scientific gentleman and keep up with the sciences. As a whole. Then quite suddenly, on the other side, you couldn't and you had to settle for one field. And then it happened again: people noticed that you can't master one field even, on the current side. And you have to become a hyper-specialist to master your niche. You can still be a generalist - in order to attack specific questions - but you better have contacts who are hyper-specialists for what you really need.
I feel like I experienced this up close in IT. When I was a kid in the 90's, I was the "computer guy" on the street and people would ask me whenever anything was wrong with their computer. If any piece of software or hardware wasn't working, there was a good chance I could help and I loved it.
Today, I am trying (but struggling!) to keep up with the evolving libraries and frameworks for frontend development in Typescript!
A related concept comes from social progression by historical measures. Based on pretty much any metrics, Why the West Rules for Now shows that the industrial revolution essentially went vertical and that prior measures--including the rise of the Roman Empire and its fall--were essentially insignificant.
“Whatever happens, we have got The Maxim gun, and they have not.” ― Hilaire Belloc
Apposite.
Rainbows End is another good one where he explores the earlier part of the curve, the elbow perhaps. Some of that stuff is already happening and that book isn't so old.
Rainbows End was by far the best guess at what near-future ubiquitous computing would look like, better than anyone else's for decades.
It got so many things right, it’s really amazing.
I really wish he’d written more.
It's one of my favorite sci-fi books that none of my friends have read.
Vinge didn't originate the concept in 1993. It was John von Neumann, of von Neumann computer architecture fame, in 1958.
Personally it's always bugged me that the "technological singularity" is some vague vision rather than a mathematical singularity. I suggest we redefine it as the amount of goods you can produce with one hour of human labour. When the robots can do it with zero human labour you get division by zero and a proper singularity.
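Spelled out (notation mine), the proposed metric is just

    S(t) = \frac{G(t)}{L(t)},

where G(t) is the goods produced and L(t) the hours of human labour needed to produce them; as L(t) goes to 0 with G(t) > 0, S(t) genuinely diverges, so at least the name would be earned.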
I love that idea! It really grounds the idea in a useful metric.
It's a guest essay. The Economist does not edit guest essays. They routinely publish guest essays from unabashed propagandists as well.
Yes, all media organisations of a certain age have an agenda / bias; the Economist is no different:
https://www.prospectmagazine.co.uk/culture/40025/what-the-ec...
It's like exponential growth, whether that's in a petri dish or on earth. It looks like that until it doesn't. Singularities don't happen in the real world. Never have and never will. If someone tells you something about a singularity, that's a pretty perfect indicator that there's still some more understanding to be done.
There’s a case to be made that DNA, eukaryotes, photosynthesis, insects, etc were singularities. Each transformed the entire planet forever.
The idea was present in "Colossus The Forbin Project" 1970. The computer starts out fairly crude, but learns at an exponential rate. It designs extensions to itself to further accelerate it.
I guess at some point it stops being a computer though?
They say Von Neumann talked about a tech singularity back in the late '50s (attested by Ulam), Vinge popularized it in the mid 80s and Kurzweil took it over with his book in the aughts.
In their defense, they spent 3 minutes googling the origin of the term, and don't know anything about the book.
TIL, thanks
Thank you for this great explanation of where "singularity" comes from in this context. Always wondered.
Note that this category of hypothesis was common in various disciplines at the end of the Cold War [1]. (Vinge's being unique because the precipice lies ahead, not behind.)
[1] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...