
AI’s big rift is like a religious schism

gumby
147 replies
3d21h

The reference to the origin of the concept of a singularity was better than most, but still misunderstood it:

In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close. We would surely create superhuman intelligence sometime within the next three decades, leading to a “Singularity”, in which AI would start feeding on itself.

Yes it was Vernor, but he said something much more interesting: that as the speed of innovation itself sped up (the derivative of acceleration) the curve could bend up until it became essentially vertical, literally a singularity in the curve. And then things on the other side of that singularity would be incomprehensible to those of us on our side of it. This is reflected in The Peace War and A Fire Upon the Deep and other novels of his from before the essay.

You can see that this idea is itself rooted in ideas from Alvin Toffler in the 70s (Future Shock) and Ray Lafferty in the 60s (e.g. Slow Tuesday Night).

So AI machines were just part of the enabling phenomena -- the most important, and yes the center of his '93 essay. But the core of the metaphor was broader than that.

I'm a little disappointed that The Economist, of all publications, didn't get this quite right, but in their defense, it was a bit tangential to the point of the essay.

dekhn
54 replies
3d19h

I think it's worth going back and reading Vinge's "The Coming Technological Singularity" (https://edoras.sdsu.edu/~vinge/misc/singularity.html) and then following it up by reading The Peace War, but most importantly its unappreciated detective-novel sequel, Marooned In Realtime, which explores some of the interesting implications for people who live right before the singularity. I think this book is even better than A Fire Upon the Deep.

When I read "The Coming Technological Singularity" back in the mid-90s, it resonated with me, and for a while I was a singularitarian: basically, dedicated to learning enough technology, and doing enough projects, that I could help contribute to that singularity. Nowadays I think that's not the best way to spend my time, but it was interesting to meet Larry Page and see that he had concluded something similar (for those not aware, Larry founded Google to provide a consistent revenue stream to carry out ML research to enable the singularity, and would be quite happy if robots replaced humans).

[ edit: I reread "The Coming Technological Singularity". There's an entire section at the bottom that pretty much covers the past 5 years of generative models as a form of intelligence augmentation; he was very prescient. ]

Guthur
48 replies
3d18h

And yet ~30 years later we're still predominantly hacking stuff together with python.

dekhn
26 replies
3d18h

I believe there's an entire section in Deepness In The Sky about how future coders a million years from now are still hacking stuff together with python.

shagie
18 replies
3d17h

There were programs here that had been written five thousand years ago, before Humankind ever left Earth. The wonder of it—the horror of it, Sura said—was that unlike the useless wrecks of Canberra’s past, these programs still worked! And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system. Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely…the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.

eli_gottlieb
9 replies
3d12h

What a silly thing to write. It's not a very self-improving computer system if it doesn't simplify itself, is it?

corysama
1 replies
3d11h

How very human-centric of you ;)

It’s not hard to imagine a program of ever-increasing complexity, highly redundant, error-prone, yet also stochastically, statistically, increasingly effective.

The primary pushback against this would be our pathetically tiny short-term memory. Blow that limitation up and complexity that seems incomprehensible to us becomes perfectly reasonable.

saiya-jin
0 replies
3d7h

That would help for sure, but you would soon reach the limit of how much our brains can process consistently at the same time, and how large a mental model we can still effectively use.

A situation that has 500 viewpoints and 10,000 variously interacting facts to consider won't be any more comprehensible to a mere human, short-term memory limiting us or not.

yndoendo
0 replies
3d2h

Simplification is only accessible when you understand that all systems are complex. Meaning, to simplify is just the ability to break complexity down into smaller parts.

We do this through categorization. For example, integrated circuits can be broken down into simplistic AND, OR, XOR, NOR, NAND, and NOT gates. Yet tying those together creates usable components such as WiFi radios and manufacturing automation.

Oversimplifications are concepts that lack understanding of complex systems. These are exploited by politicians. For example, "deregulation will help drive the economy." Yet regulations have been proven to assist the economy long term by preventing the next variation of thalidomide, the panacea drug of the 1950s and 1960s that killed users or caused massive birth defects.

viraptor
0 replies
3d12h

It depends if there's anything left to simplify. Maybe this is the optimum we'll reach between concise expression and accuracy of representation.

shagie
0 replies
3d2h

The world of A Deepness in the Sky (at that point in the timeline; A Fire Upon the Deep is set in a very different time) doesn't have AI. It has civilizations that rise and fall every few centuries. The Qeng Ho are a trading culture that was formed to try to bind the wandering traders of the universe together into something that could stifle those collapses. This is probably strongly influenced by Asimov's Foundation.

Other things are quite advanced - but computing is rather "dumb". What is there is driven by a great weight of inertia of massive software (all written by regular humans) that itself is too large to fully audit and understand.

The story it tells is very much one about humans and societies.

kbenson
0 replies
3d12h

If you need an arbitrary point of reference to count time from, what's simpler: making a new one, or using the existing predominant one that everything else uses?

Making a competing standard and then having to deal with interop between standards is not simpler or better unless there's an actual benefit to be had.

jbgt
0 replies
3d11h

Even smart people have an appendix!

bamboozled
0 replies
3d5h

How would something grow in complexity yet get around the fact that changes would be hard to effect at scale?

Was the sarcasm lost on me?

Linosaurus
0 replies
3d10h

That’s because it’s not a self-improving computer system. It’s just programming as it exists today, carried on for thousands of years.

kbutler
6 replies
3d15h

Love that reference to the Unix epoch, and the all-too-human misassociation with a seemingly more appropriate event.
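
For anyone who wants to sanity-check the arithmetic in that passage, here's a quick sketch (the moon-landing timestamp below is the commonly cited one, approximate to the minute; the point is the rough gap, not the exact second):

    from datetime import datetime, timezone

    # Commonly cited instant of the first human step onto the Moon (Apollo 11),
    # 1969-07-21 02:56 UTC -- treat the exact minute as approximate.
    first_step = datetime(1969, 7, 21, 2, 56, tzinfo=timezone.utc)

    # The Unix epoch: second zero of one of humankind's early operating systems.
    unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

    print((unix_epoch - first_step).total_seconds())  # roughly 14 million seconds, about five months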

ethbr1
5 replies
3d14h

Archaeology focuses on the things that were important to past humans.

It seems impossible that, given current economic trajectories, there won't be software archaeology in the future.

viraptor
3 replies
3d12h

We already have software archaeology today. People who are into software preservation track down old media, original creators, and dig through clues to restore pixel-perfect copies of abandonware. It's only going to get bigger / more important with time if we don't concentrate on open standards and open source everywhere.

soulofmischief
0 replies
3d7h

And it's sad that many archivists have to break the law in order to secure the longevity of important cultural works.

Our abusive relationship with modern commercialism has disintegrated the value of art, folklore and tools, both in the eyes of consumers and producers, and we no longer as a society seek to preserve the greatest works of today's most cutting-edge mediums. It's quite a sad state of affairs, which will only be mourned with increasing intensity as time marches on and curious researchers try to piece together the early days of the internet and personal computing.

johnnymorgan
0 replies
3d5h

Thank goodness for that too. Star Control 2 was too good of a game to lose and is one of the best examples of this.

ethbr1
0 replies
3d4h

I'd say game emulation is currently the most prevalent heavyweight archaeology.

In the more common case, it's simply archiving -- we still have hardware it can run on (e.g. x86).

The GPGPU and ML stuff is likely going to age poorer, although at least we had the sense to funnel most high level code through standardized APIs and libraries.

ajb
0 replies
3d13h

Seems quite possible that the present will seem to future archeologists like an illiterate dark age, between civilisations from which paper records, which last long enough for them to find, are preserved.

shrimp_emoji
0 replies
3d1h

This is why, if you expand your cognitive light cone to distant future generations, you will conclude that using Rust is the only moral choice. (I still don't, but I mean still.)

brainbag
3 replies
3d17h

I remember a sci-fi book where they were talking about one of the characters hacking on thousand-year-old code, but I could never remember what book it was from. Maybe this was it and it's time for a reread.

shagie
1 replies
3d16h

Continuing on from the sibling comment about the 0 second...

So behind all the top-level interfaces was layer under layer of support. Some of that software had been designed for wildly different situations. Every so often, the inconsistencies caused fatal accidents. Despite the romance of spaceflight, the most common accidents were simply caused by ancient, misused programs finally getting their revenge.

“We should rewrite it all,” said Pham.

“It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.

“It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”

Sura gave up on her debugging for the moment. “The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy—take the situation I have here.” She waved at the dependency chart she had been working on. “We are low on working fluid for the coffins. Like a million other things, there was none for sale on dear old Canberra. Well, the obvious thing is to move the coffins near the aft hull, and cool by direct radiation. We don’t have the proper equipment to support this—so lately, I’ve been doing my share of archeology. It seems that five hundred years ago, a similar thing happened after an in-system war at Torma. They hacked together a temperature maintenance package that is precisely what we need.”

“Almost precisely.” Bret was grinning again. “With some minor revisions.”

“Yes, which I’ve almost completed.”

saiya-jin
0 replies
3d7h

“Almost precisely.” Bret was grinning again. “With some minor revisions.”

“Yes, which I’ve almost completed.”

The essence of many, many projects, and both the bane of and the source of so much work for all of us (software devs).

grive
0 replies
3d8h

Maybe A Deepness in the Sky, by the same author?

The hero saves the day by hacking old routines lying in the depths of the ship's systems.

Guthur
1 replies
3d17h

We have better tools; they're just apparently too hard for us to use. Yet somehow, in the same thought, we think we can create something remotely like intelligence. Very odd cognitive dissonance.

dekhn
0 replies
3d16h

it does not seem odd to me at all that we could create intelligence, and even possibly loving grace, in a computer.

I'm not sure why there would be cognitive dissonance - sure, my tools may be primitive, but I can also grab my chisel and plane and see that they're similar in form to a chisel and plane from 2000 years ago (they look pretty much the same, but these days they're made of stronger stuff). I can easily imagine a Real Programmer 2000 years from now looking back and thinking that python, or even a vacuum tube, is merely a simplified version of their quantum matter assembler.

taneq
0 replies
3d7h

At times I've adopted the term Programmer-at-Arms for what my job occasionally turns into. :D And as a sibling poster mentions, the whole thing with the epoch is a great nod to software archeology.

AlienRobot
19 replies
3d15h

We'll still be hacking stuff with python when the singularity comes. It will be ultra high tech alien stuff that we can't hack or understand, only AI can, and the tech will look like magic to us, and most will not be able to resist the bait of depending upon this miraculous technology that we can't understand or debug.

hnlmorg
17 replies
3d10h

We (as a species) already depend upon lots of miraculous technology that we (as individuals) cannot understand nor debug.

Even as IT professionals this is true. How many developers these days can debug a problem in their JavaScript runtime, or how many Rust developers can track down a bug in their CPU? There’s so much abstracted away from us that few people can fully grasp the entire stack their code executes on. So those outside of tech don’t even stand a chance of understanding computers.

And that’s just focusing on IT. I also depend on medical equipment operated by doctors but have no way of completely understanding that equipment nor the procedure myself. I drive a car that I couldn’t repair. Watch TV that I didn’t produce, beamed to me via satellites that I didn’t build nor fire into space. Eat food that I didn’t grow.

We are already well past the point of understanding the technology behind the stuff that we depend upon daily.

salawat
16 replies
3d5h

This is the most intellectually lazy take I've ever seen, and I truly wish people would stop throwing up their hands and giving in. You can absolutely peel back the layers with enough will. At least assuming anti-consumer cryptography/malicious toolmaking is not employed.

segfaultbuserr
4 replies
3d5h

You can absolutely peel back the layers with enough will.

I originally started from writing Python scripts and running Web servers; after 10 years I decided that pure software was too boring, and now I've come to know the basics of high-speed digital signal propagation in circuit boards.

So I agree with you conceptually. But practically speaking, it also requires eternal life, external brain storage, or possibly both. At least in my experience, I find that my time and attention don't allow me to investigate everything I'd like to.

"No one on the planet knows how to build a computer mouse."

https://briandbuckley.com/2012/09/12/who-knows-how-to-make-a...

salawat
3 replies
3d3h

"No one on the planet knows how to build a computer mouse.

Counterpoint:

If a man has done it, a man can do it. Also, it wouldn't be research if you already knew what you were doing.

Just because a particular assembly has its tasks divided up between a myriad of people does not mean it is impossible to unify those tasks into a single person. In point of fact, the continued existence of mice can be directly attributed to the fact that someone has the network of knowledge/knowers of pieces of the problem already nailed down.

Yes, the level of detail the real world flings at us on a regular basis is surprisingly deep, but hardly ineffable.

segfaultbuserr
2 replies
3d3h

If a man has done it, a man can do it. [...] Just because a particular assembly has its tasks divided up between a myriad of people does not mean it is impossible to unify those tasks into a single person.

I don't understand your argument. In particular, the linked article already refuted this point by saying the following, and you didn't provide a counterargument to that:

But let’s imagine an extraordinarily talented man who started on the factory floor, worked his way through an engineering degree, moved up through the ranks to design the very thing he was building before, and knows the roles of everyone on his team so well that he could do all their jobs himself. Surely this brilliant person knows how to make a mouse. Or does he? He may understand circuits – but does he know every detail of how to build a diode from raw materials? He may understand plastics – but could he single-handedly synthesize a plastic from its constituent chemicals? Does he understand how to mine silicon out of the ground? Nobody in the world – not one single human being anywhere – knows how to make a mouse. It’s orders of magnitude too complex for a solitary mind.

salawat
1 replies
2d23h

I've written it about two times actually, but deletions have eaten it.

If we've done it before, we can do it again. The key is navigable access to the right information which is sadly dependent on A) willingness to document, and B) structuring of the set of data for navigable retrieval.

Both are problems we've already solved. Not on the Internet, of course (not anymore), but in libraries.

Also, I reject where the goalposts of the accomplishments of the person in question are stated to arbitrarily end. Nothing keeps one from diving into these secondary areas or specialties. Only perhaps the obstacle of having to be profitable while doing it. And that's what I reject. I do not hold the prevailing wisdom that knowing the Riddle of Mice is intractable to a singular human being. I hold it is intractable to a member of a social system wherein profitable engagement of every member, as guided by some subpopulation, might tend to make it seem intractable to any member of the governed group. That's a far cry from true ineffability, however.

segfaultbuserr
0 replies
2d19h

If we've done it before, we can do it again.

But nobody has done it before. Even the very first computer mouse used off-the-shelf mechanical and electrical components, and each of those involved at least one type of technology that took a scientist or engineer's entire lifetime to develop.

The key is navigable access to the right information which is sadly dependent on A) willingness to document, and B) structuring of the set of data for navigable retrieval.

Now that you mention the importance of documentation, it reminds me of Vannevar Bush's Memex and Ted Nelson's Project Xanadu. So it seems that there's a mutual misunderstanding of the actual topic in this debate. We understood the debate as:

* The all-knowing engineer: Whether it's possible for a single individual to learn and understand a technology entirely, down to its every aspect and detail.

Meanwhile, you're in fact debating a different problem, which is:

* The Engineering Library of Alexandria: Whether we can create sufficient documentation of all technical knowledge, the documentation is so complete about every aspect and detail that in principle, it would allow someone to open the "blackboxes" behind every technology for understanding or replication if they really want and need to. Whether or not it can be done in practice by a single physical person is unimportant, perhaps only one or a few blackboxes are opened at a time, not all simultaneously. The question is whether the preserved information is sufficient to allow that in theory. This is similar to the definition of falsifiability in science - impractical experiments still count.

If you're really arguing the second point rather than the first point, I would then say that I can finally understand some of your arguments. So much unproductive conversation could have been avoided if you'd expressed your points more clearly.

antonvs
3 replies
3d3h

"With enough will" makes this sound like Ayn Rand fan fiction.

That's just something you tell yourself to make yourself feel better about trusting the black boxes your life depends on. But it simply isn't true.

People devote their lives to specializing in these things. Yes, "with enough will" - and time, and money - you could pick one or two subjects and do the same. But that still leaves you as a dilettante at best when it comes to everything else.

salawat
2 replies
2d23h

Perhaps start looking at what your life seems to really demand of you. How are you being employed by those around you? As a means to their ends, or as a means to your own ends?

Rand is garbage, and I'm insulted to end up being brought anywhere near that slop in your philosophical address space. The point I'm trying to make is that there is a helplessness taught by the optimizations we structure our societies around. A capitalist, consumerist society is going to focus on training the largest portion of its population to act as specialized cogs that can be orchestrated by someone the next layer of abstraction up in pursuit of purely profitable/profit-making engagements.

If you change the optimizations, you change the system. You change the people that compose that system, you change the very bounds of the human experience.

Just look at how much the world changed around the pandemic. The old order of the office & commute was shattered for many. Look at how deemphasis of throwing more fossil fuels at a problem changes the direction of innovation.

You say the Riddle of Mice is ineffable by a single person. I say you're full of it, and looking at the problem wrong. It's completely effable, you just can't imagine society having a place for such a person given your priors about how the world operates. And in that, you may be right!

That does not mean that the Riddle of Mice can't be conquered by a single person. It's just unlikely, and should such an extraordinary individual exist, you'd likely see them as mad.

hydrok9
1 replies
2d22h

Fwiw, I agree with you. Most people are way more capable than they think they are, and modern society encourages people to be single-minded and sedentary.

hnlmorg
0 replies
2d22h

I agree, however there’s a massive gulf between people underachieving and someone being capable of understanding the entire engineering knowledge of humanity.

taneq
2 replies
2d6h

Some day you’ll realise that “could conceptually understand this” is a wildly different beast to “should invest the time and energy to understand this.” There are too many layers to too many different systems to understand everything that currently exists, let alone keep up with every new development. Like it or not, the age of universal experts is over. We all must pick our battles.

salawat
1 replies
1d17h

Do you think I lack the requisite experience of bouncing off fields of specialization to understand the supreme frustration engendered by bouncing into, and eventually off, the surprising amount of depth pretty much every human field of endeavor is capable of generating?

Logistics, law, military, healthcare, research/academia, education, sales, metrology, aviation, network and library sciences, industrial processes, construction, and manufacture?

All of which, in their depths, hide surprisingly deep tracts of wisdom and tacit knowledge, and between which there often exist gulfs of unrealized synergy, because there is just rarely a place in society for one who has done both X and Y?

You can take this as evidence of a mind unable to find a niche into which it can settle long term peaceably, or you can take it as words of experience from someone who has never slotted well into the current state of affairs, largely due to having issues with accepting "that's just the way it is".

I'm fully cognizant that long-term practice opens doors, but I'm also aware that the benefits of one's long-term practice can transfer, with enough care demonstrated in the act of crafting the transference. I'm also fully aware that no matter how "free" or "open-minded" you think yourself to be, the extent to which you can actually live up to those ideas is the extent to which you can lay claim to them, and our society as structured leaves much to be desired in terms of the viability of livelihoods that don't involve centralization of wealth as the first-order problem, with all the physical accidents that occur as a result of said centralization being happy accidents. Our means of creating new wealth extraction vs. solving the physical needs of continued societal operation keep diverging further and further away from each other, and no one seems able to stop and even process the ludicrousness of what's going wrong with the execution. No one will, either, as long as people don't try to untangle the very Gordian knot that prompted my original post; yet it seems most people are content to just hide from it as long as the status quo remains such that "Not my problem" holds over their individual mental horizons.

I'm not tossing out hot takes for fun here. I'm tormented by the beast that our societal architecture has transformed us collectively into. I've stared at its face with a profound sense of loss and dissatisfaction at the level of potential it lives up to, and the Riddle of Mice's individual effability cuts straight to the heart of the matter.

If the Riddle of Mice is truly ineffable to a single person in totality; then we are simply doomed.

We have admitted that we are no longer capable of pushing the frontiers of what we can achieve further than the complexity that it takes to make a circuit board, hosting an IC, fabricated using a combination of lithography masks, UV light sources, photochemistry, and a highly controlled manufacturing environment, all made using equipment made using precision manufacturing techniques (machining, chemical, or electrically facilitated), assembled into a geospatially bounded factory, with outputs priced in such a way as to provide sufficient social incentivization to attract people to operate the machines; and a multi-axis movement measuring device, programmed to operate over either a USB HID centric protocol, or in cooperation with a PS/2 interface, utilizing electronic bit toggling and sampling as an information transmission medium, working as a component of a machine-local network of components acting in harmony. That we cannot in one individual instill the understanding of PCB fabrication processes, semiconductor fabrication processes, refining and extraction processes for the material inputs thereto, and the measuring processes to devise a means of detecting when a task has been satisfactorily carried out so as to meet the needs of the next step of processing, culminating in the assembly of a final product. We cannot, in short, break larger problems into smaller problems, listing them, and enabling an individual to handle those one at a time. Our entire field of study (computer science) is a fraud. Because one entire half of the space/time tradeoff spectrum is just frigging lopped off. Problem decomp doesn't work, and isn't applicable across all problem spaces.

That's what we're admitting here if we accept this proposition to be true, which I am not at all willing to grant, seeing as my very existence is predicated on that ability having been demonstrated at least once by someone. The state spaces may be growing, but with that, we devise new ways of coming to terms with addressing and navigating the respective prerequisite corpora of knowledge.

Are there other processes that further constrain the space based on societal architecture? You bet. That's still addressable though. The real question on addressing that though is will. Does it matter enough for everyone to put down what we're doing and really reach a consensus on a new set of societal constraints in order to change the possible realities we can manifest?

TL;DR: the problem is one of will, not fundamental incapability. Any arguments predicated on "but no one does <drilling down into problem spaces until their brain hurts>" will fall on deaf ears, because despite the ongoing suffering it causes, I do that, because no one else seems to be willing to.

I am the current living counterexample, and if it's my fate to be that, so as to cut off at the knees everyone trying to lower the bar, so be it, and I will fight you for as long as I am able so someone can at least point to somebody that does it. If I have to pull a Stallman-esque Chief GNUisance to tilt the scales, then so bloody be it.

It'll be the most courteous fight humanly possible, but I refuse to give any quarter in this regard. The only thing between you and knowing how to do something is your willingness to chase it down. Period.

I'm not trying to cast aspersions on anyone here. Just stating an objective fact. An ugly, inconvenient fact, really comfortable to partition off in a dark, unfrequented part of your psyche. No one here should feel like I'm trying to make them feel like less of a person, or less accomplished than they no doubt are. But I am stating that you and you alone are responsible for drawing the lines indicating the lengths you are ultimately willing to go to achieve X, Y, or Z.

hnlmorg
0 replies
19h36m

I am the current living counterexample

I’m sure you are very knowledgeable in a good variety of fields, and I genuinely don’t mean the following comment as a snub, but I’d wager you know a lot less than you think you do. In my experience, people who claim to understand something generally don’t. And those who claim ignorance tend to know more than they let on.

The problem with knowledge is that the more you learn, the more you realise you don’t know anything at all.

So if you believe you’re capable of understanding all of human knowledge, then I question how deeply you’re studying each field.

Good luck with your endeavours. I at least respect your ambitions, even though I don’t agree with some of your claims.

psini
2 replies
3d4h

Hard disagree. Do you know how EVERYTHING around you works to the smallest scale? And if you do, when you encounter something new do you just drop everything and meticulously study and take apart that new something? The last time you could be a Renaissance man was, well, during the Renaissance.

salawat
1 replies
3d3h

Learn the basics and it's amazing the mileage you get. Care enough to try to become a beacon of actual knowledge, and master basic research skills, and the world is your oyster. There is nothing stopping you in this day and age from getting to the bottom of a question other than asshats like Broadcom or Nvidia who go out of their way to foster trade secrecy at all costs.

Cars, electronic devices, manufacturing tools/processes, lawn equipment, chemical processes, software, algorithms, common subassemblies, logistics, mechanics of materials, measuring systems: what excuse does one have not to develop some level of familiarity with these aspects of modern life in a day when information is literally at your fingertips?

I'll give you that the well is heavily poisoned, and that sometimes a return to dead-tree mediums of info storage is required to boost signal; but there isn't really an excuse to remain ignorant other than willful ignorance. The answers are out there, you just need to have the will to hunt them down and wrest them from the world.

hnlmorg
0 replies
2d23h

Learn the basics and it's amazing the mileage you get.

This could also be described as the Dunning-Kruger effect

https://en.m.wikipedia.org/wiki/Dunning–Kruger_effect

There is a massive difference between “learning the basics” and understanding something at a competent enough level that we can reproduce it.

I know the basics of how rockets work but that doesn’t mean I could build a rocket and successfully launch it. Yet your comment suggests I’m already qualified to single-handedly deploy the next Starlink.

hnlmorg
0 replies
3d

Who said anything about giving up? I’m talking about the current state of play, not some theoretical ideal that is realistically impossible for even the most gifted, never mind the average person.

If anything, you’re massively underestimating the amount of diverse specialist knowledge required in modern society for even simple interactions.

People dedicate their entire lives to some of these fields that underpin the tech we use. It’s not something the average person can pick up at open university while working a full-time job and bringing up kids. To suggest it’s just a motivational problem is absurd. And that’s without addressing that the level of physics or mathematics required for some specialties is beyond what some people can naturally accomplish (it would be stupid to assume everyone has the same natural aptitude for all disciplines).

Honestly, I don’t know how you can post “This is the most intellectually lazy take I've ever seen” in any seriousness then go on to say it’s just a problem of will power.

sfn42
0 replies
3d6h

"The singularity" is science fiction. You can't just infinitely improve software on static hardware - hardware must be designed, manufactured and installed to accommodate improvements. Maybe you could have a powerful AI running as a distributed system across whole data centers, but now you're limited by transmission times etc. There's always a new bottleneck.

Actual AGI is nowhere near reality yet anyway.

bamboozled
0 replies
3d5h

A lot of it's running on DNS too ;)

jomhna
4 replies
3d12h

Marooned in Realtime is incredible, one of the best sci-fi books I've ever read. The combination of the wildly imaginative SF elements with the detective novel structure grounding it works so incredibly well.

spookybones
3 replies
3d1h

Can I jump right to this novel or do I have to start elsewhere? I've never read him before.

jomhna
0 replies
2d19h

It stands alone. I read it first before Peace War and didn't find Peace War as good, though still very enjoyable.

e40
0 replies
3d

I believe Marooned is a stand-alone story.

Fire Upon the Deep is the first of 2 (or 3?). I highly recommend that. Older readers will love the Usenet jokes.

chrispine
0 replies
2d22h

It builds on the Peace War, and I'd read that first. (But also, both books are fairly short, much smaller than Fire Upon the Deep.)

mcv
26 replies
3d10h

The thing I never understood is: why would it go vertical? It would at best be an exponential curve, and I have doubts about that.

I admit that, looking at the 100 years before 1993, it looks like innovation is constantly speeding up, but even then there's not going to be a moment when we suddenly have infinite knowledge. There's no such thing as infinite knowledge; it's still bound by physical limits. It still takes time and resources to actually do something with it.

And if you look at the past 30 years, it doesn't really look like innovation is speeding up at all. There is plenty of innovation, but is it happening at an ever faster pace? I don't see it. Not to mention that much of it is hype and fashion, and not really fundamentally new. Even AI progress is driven mostly by faster hardware and more data, and not really fundamentally new technologies.

And that's not even getting into the science crisis: lots of science is not really reproducible. And while LLMs are certainly an exciting new technology, it's not at all clear that they're really more than a glorified autocorrect.

So I'm extremely skeptical about those singularity ideas. It's an exciting SciFi idea, but I don't think it's true. And certainly not within the next 30 years.

AngaraliTurk
10 replies
3d7h

And while LLMs are certainly an exciting new technology, it's not at all clear that they're really more than a glorified autocorrect.

Are we sure things like biology, or heck, even the universe as a whole and its parts, aren't "glorified x thing"? Can't we apply this argument to just about anything?

rf15
5 replies
3d6h

I feel like comparing it to biological systems glosses over like half the points the parent makes. Also, of course it's a sophisticated autocorrect; there's no system for agency, which is what we also desire from a proper AGI.

HDThoreaun
3 replies
2d22h

Free will is an illusion. No one has agency. We're all just following our baked-in incentives.

mcv
2 replies
2d17h

I'm able to take initiative and act on my own internal thought processes, though. I'm not limited by whether someone prompts me to do something.

AngaraliTurk
1 replies
2d10h

You and I think we are, but we can't be sure, and many before us have raised doubts.

As for the prompt(s), to use such a limiting term, they could just as well come from a self-reinforcing loop that starts when we're born and is influenced by external stimuli.

mcv
0 replies
2d8h

LLMs as part of a bigger system that keeps prompting itself, perhaps like our internal conscious thought processes, sounds a lot more like something that might be headed towards AGI. But LLMs on their own aren't it.

AngaraliTurk
0 replies
3d6h

I wanted to put the focus on the overused "glorified x thing" sentence, which to me seems to be applicable to just about anything. I didn't want to liken/compare AI to biological systems per se.

marcosdumay
1 replies
3d3h

Of course, when you don't define your X, you can use that phrase for anything. That's trivial logic.

AngaraliTurk
0 replies
3d2h

I don't define X because it's highly variable.

It seems to me that on one extreme there are people easily anthropomorphising advanced computing and on the other extreme there are people trivializing it with sentences like "glorified x thing". This time around it's "glorified autocorrect" and its derivations. It's always something that glorifies another artificial thing, and I suspect that if and when we will have recreated the human brain, or heck, another human, it will still be a "glorified x thing".

As 0x0203 said, maybe it is to be ascribed to the religious substrate that takes offence at anything that arrogantly tries to resemble the living creatures made by God, or God himself.

0x0203
1 replies
3d4h

I have a theory (with no data to back it up; would be curious to get people's thoughts) that people with a religious or spiritual world-view, who believe that there is such thing as a soul, and that the mind is more than just a collection of neurons in the brain, are much less inclined to think that "AI" will ever reach a sort of singularity or true "human-like" intelligence. And likewise, those who are more atheist/agnostic, or inclined to believe that human consciousness is nothing more than the patterns of neurons firing in response to various stimuli, are more convinced that a human-like machine/programmed intelligence is not only possible, but inevitable given enough resources.

I could be wildly off base, but seeing many of the (often heated) arguments made about what AI is or isn't or should or could be, it makes me wonder.

mcv
0 replies
2d8h

As it happens, I am indeed Christian. But I see the soul as the software that runs on the hardware of our brain (although those aren't as neatly separated in our brain as they are in computers), and I suspect that it should be possible to simulate it in theory. I just think we're nowhere near that. We still don't agree on what the many aspects of intelligence are and how they work together to form the human mind. And then there's consciousness; we have no clue what it is. Maybe it's emergent? Maybe it's illusion? Or is it something real? I don't think we'll be able to create a truly human-like intelligence until we figure that one out.

Although we're certainly making a lot of progress on other aspects of intelligence.

And then there's all the talk of a singularity in innovation or progress that to me betrays a lack of understanding of what the word singularity means, and a lack of understanding of the limits of knowledge and progress.

bratbag
8 replies
3d10h

It's infinite from the perspective of our side of the curve.

It's another application of advanced technology appearing to be magic, but imagine transitioning into it in a matter of hours, then with that advanced tech transitioning further into damn-near godhood within minutes.

Then imagine what happens in the second after that.

It may be operating within the boundaries of physics, but those would be physical rules well beyond our understanding, and may even be infinite by our own limited definition of physics.

That's the curve.

sfn42
4 replies
3d6h

I think it's completely ridiculous how you people just casually imply that a current computer can somehow achieve "damn-near godhood".

Hardware is a bottleneck. Better hardware can't be designed, manufactured and installed in minutes.

You can't just infinitely improve software sitting on the same hardware. At some point the software is as efficient as it can possibly be. And realistically a complex piece of software won't be close to optimal.

You people love dreaming up possibilities but you never stop to actually think about the limitations. You just assume everything will work out magically and satisfy your fantasy.

wait_a_minute
2 replies
2d17h

Hardware is a bottleneck. Better hardware can't be designed, manufactured and installed in minutes.

Yes you can. The Cloud.

mcv
1 replies
2d7h

Even the cloud has its limits, though. It's not infinite, and it does require money.

wait_a_minute
0 replies
2d4h

True, but it does make hardware much more available and not a bottleneck anymore. At least not in the same way. At some point, if it hasn't been reached already, the power available with existing hardware will represent more compute than is needed to hold AI. At that point AI could start creating its own hardware production facilities as well.

kridsdale3
0 replies
2d19h

I'm glad to see people like you who understand that computers are ultimately machines that we put electric current (limited by our infrastructure) into, and even with optimal efficiency (which we are nowhere close to), some math comes out. This year we all learned that a lot of what we thought was human thought can be expressed as math. Cool. But we can't scale that any higher than the physical input to the computer, and the physical heat dissipation that we can handle, either. A 100% occupancy computing device is effectively a radiator.

So singularity people think 2 fallacies:

1. That it is possible to write an exponential function that somehow becomes asymptotic (vertical)

2. That it is possible to write software that is so energy efficient that its output somehow exceeds the power draw

1 can be redefined to say "effectively" vertical. Ok. Cool motive, not a singularity.

2 can be handled by pumping more and more power in to the machine and building new nuclear plants to do so. This is what Microsoft is doing today.

wait_a_minute
2 replies
2d17h

Like the Rapture and other concepts like that. How fascinating. Within a split second, in the time it takes someone to blink and hear a trumpet's blast, everything could change. Amazing! I can't wait!

mcv
1 replies
2d8h

As both a Christian and an AI-educated software engineer who subscribes to neither the rapture nor the singularity, I really like this comparison. They both feel like a kind of escapist thinking that in the near future, this magical event will happen that will make everything else irrelevant.

wait_a_minute
0 replies
2d4h

The Singularity people and the Rapture people are both correct. Can already see the event starting to take shape, just gotta know where to look for all the signs. Everything is rapidly heading in that direction.

jncfhnb
2 replies
3d9h

It wouldn’t. It would be a logistic curve. Pretty much everything people call exponential should actually be logistic

diffeomorphism
1 replies
3d7h

That just agrees with the post. Nothing becomes vertical or infinite, at best exponential for a limited time, i.e. logistic.

jncfhnb
0 replies
3d7h

I’m aware that it agrees with the post

antonvs
1 replies
3d3h

it's not at all clear that they're really more than a glorified autocorrect.

I use LLMs regularly in my job - many times a day - and I suspect you haven't used them much if you think this.

They're relevant to the singularity discussion though, because they already give a taste of what superhuman intelligence could look like.

ChatGPT, for example, is objectively superhuman in many ways, despite its significant limitations. Once systems like this are more integrated with the outside world and able to learn more directly from feedback, we'll get an even bigger leap forward.

Dismissing this as "glorified autocorrect" is extremely far off base.

mcv
0 replies
2d17h

ChatGPT, for example, is objectively superhuman in many ways

But so is a car.

ChatGPT is mostly superhuman in its ability to draw upon enormous numbers of existing sources. It's not superhuman, or even human, in terms of logic, reasoning, or inventing something truly new.

sjamaan
0 replies
3d8h

I think these progressions of technology are more likely to be like Moore's law: it might be true for a while but eventually it'll peter out. AI itself doesn't understand anything, and there's a limit to human understanding, so technological progression will eventually be self-limiting.

stvltvs
18 replies
3d20h

derivative of acceleration

Was this intended literally? I'm skeptical that saying something so precise about a fuzzy metric like rate of innovation is warranted.

https://en.wikipedia.org/wiki/Jerk_(physics)
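
For reference, the literal reading (standard physics naming, not anything from the essay itself) would be the third time-derivative of whatever quantity "innovation" is measured as:

    v(t) = \frac{dx}{dt}, \qquad
    a(t) = \frac{d^2 x}{dt^2}, \qquad
    j(t) = \frac{d^3 x}{dt^3} \quad \text{(jerk)}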

dougmwne
11 replies
3d19h

I believe the point being made is that the rate of innovation over time would turn asymptotic as the acceleration increased, creating a point in time of infinite progress. On one side would be human history as we know it, and on the other, every innovation possible would happen all in a moment. The prediction was specifically that we were going to infinity in less than infinite time.
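
To make "infinity in less than infinite time" concrete (a textbook illustration of my own, not anything from Vinge): plain exponential growth never becomes vertical, but growth whose rate scales with the square of the current level blows up at a finite time:

    \frac{dx}{dt} = kx \;\Rightarrow\; x(t) = x_0 e^{kt} \quad \text{(finite for every finite } t\text{)}

    \frac{dx}{dt} = kx^2 \;\Rightarrow\; x(t) = \frac{x_0}{1 - k x_0 t} \quad \text{(diverges as } t \to 1/(k x_0)\text{)}

So the "singularity" reading requires super-exponential growth; that is what the vertical-asymptote language is pointing at.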

bloppe
10 replies
3d19h

You only reach a vertical asymptote if every derivative up to the infinite order is increasing. That means acceleration, jerk, snap, crackle, pop, etc. are all increasing.

The physical world tends to have certain constraints that make such true singularities impossible. For example, the universal speed limit: c. But you could argue that we could approximate a singularity well enough to fool us humans.

travisjungroth
6 replies
3d19h

They’re impossible with our current technology and understanding of physics. The idea is what’s beyond that. Access to a realm outside of time is just the sort of thing that could cause a singularity.

Retric
2 replies
3d19h

Saying "impossible under our current understanding of physics" is a massive understatement. It would require not just new physics but a very, very specific kind of universe with zero evidence in support of it.

Suggesting the singularity is physically possible is roughly like suggesting that human telekinesis is possible, i.e. physics would need many very interesting errors.

travisjungroth
0 replies
3d16h

I don’t think it’s an understatement. I mean, I literally called it impossible. And at all times our understanding of physics is our current understanding. I don’t mean to downplay it at all. It would be an overhaul of understanding of the universe bigger than all changes in the past put together. It would blow general relativity out of the water.

losteric
0 replies
3d10h

This rhetoric around singularities is a little funny, because the singularity of black holes (going to infinite density) is recognized to indicate knowledge gaps around how black holes work... lines going to infinity are not necessarily taken literally. Same goes for the technological singularity.

Anyway, imo the event horizon is more interesting. That's where the paradigm shift happens - there is no way to escape whatever "comes next".

(Note - some people confusingly use "singularity" to refer to the phase between the event horizon and "infinity")

JohnFen
2 replies
3d19h

What's beyond that is science fiction.

If there is anything "beyond" that at all, I think it's pretty safe to say that any idea we have as to what it would be is very unlikely to be anywhere near the realm of correct.

travisjungroth
1 replies
3d16h

Yes, literally science fiction. That’s where this topic comes from.

In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers…

But I don’t think it’s worth completely writing off as if we couldn’t possibly know anything. Progress outside of time would resolve the issue I referred to. I’m not saying it could actually be done or that it doesn’t cause a million other issues. It’s an idea for a solution, but it’s completely unrealistic, so you shelve it.

JohnFen
0 replies
3d1h

It's absolutely worth engaging in wild speculation, yes! The trouble is when some people start to forget that it's wild speculation and begin to treat it as if it has actual predictive value.

gerdesj
2 replies
3d16h

OK, most of us still here at this point probably have a handle on how derivatives work. Your jerk, snap (higher orders, I will guess), etc. probably map nicely to a famous American politician's speech about the economy at the time.

It was something like "the speed of increase in inflation is slowing down". I've tried to search for it but no joy.

Anyway, it was maths.

MobileVet
5 replies
3d16h

I remember learning about ‘jerk’ in undergrad and my still jr high brain thinking, ‘haha, no way that is what it is called.’

The more I thought about it though, the more I realized it was the perfect name. It is definitely what you feel when the acceleration changes!

kristiandupont
3 replies
3d11h

"Jerk" makes perfect sense to me too. What I never got was how "pop" could come after "crackle".

zoky
2 replies
3d9h

Because of Rice Krispies?

kristiandupont
1 replies
3d7h

No, because a "pop" seems lower frequency to me than a crackle. Not that I can articulate why :-D

sd9
0 replies
3d6h

zoky is making the point that the derivatives are named snap, crackle, and pop, in reference to Rice Krispies: https://en.wikipedia.org/wiki/Snap,_Crackle_and_Pop

Jensson
0 replies
3d16h

Same goes for productivity and jerks. Quickly speeding up or slowing down literally requires a jerk, and many won't like it regardless of how you do it. You can go fast without jerks, but you can't react fast without jerks.

haolez
12 replies
3d16h

What does it mean "to be on the other side" of this singularity in your graphic representation? I failed to grasp this.

gumby
9 replies
3d14h

Consider my 86 yo mother: extremely intelligent and competent, a physician. She struggled conceptually with her iPhone because she was used to reading the manual for a device and learning all its behavior and affordances. Even though she has a laptop she runs the same set of programs on it. But the phone is protean and she struggles with its shapeshifting.

It’s intuitive and simple to you and me. But languages change, slang changes, metaphors change and equipment changes. Business models exist today that were unthinkable 40 years ago because the ubiquity of computation did not exist.

She’s suffering, in Toffler’s words, a “future shock”.

Now imagine that another 40 years worth of innovation happens in a decade. And then again in the subsequent five. And faster. You’ll have a hard time keeping up. And not just you: kids will too. Things become incomprehensible without machines doing most of the work — including explanation. Eventually you, or your kids, won’t even understand what’s going on 90% of the time…then less and less.

I sometimes like to muse on what a Victorian person would make of today if transported through time. Or someone from 16th century Europe. Or Archimedes. They’d mostly understand a lot, I think. But lately I’ve started to think of someone from the 1950s. They might even find today harder to understand than the others would.

That crossover point is when the world becomes incomprehensible in a flash. That’s a mathematical singularity (metaphorically speaking).

puchatek
1 replies
3d12h

Depends on what you're trying to understand. They might not get the how, but it would be clear that the what hasn't changed yet. It's still about money, power, mating rights, etc. Even in the face of our own demise. If all this tech somehow managed to change the what, then we might become truly incomprehensible to previous generations.

gumby
0 replies
3d8h

The fact that money makes you powerful, rather than power making you rich, is quite a dramatic shift, for example. The 50s guy will understand that, but our world will resemble his in some ways yet be quite alien in others (as he didn't live through the changes incrementally).

andsoitis
1 replies
3d12h

I sometimes like to muse on what a Victorian person would make of today if transported through time. Or someone from 16th century Europe. Or Archimedes. They’d mostly understand a lot, I think. But lately I’ve started to think of someone from the 1950s. They might even find today harder to understand than the others would.

why do you think someone from the 50s would find it harder to understand our time, than someone from an earlier age, even as far back as 2000 years?

kaashif
0 replies
3d4h

Especially considering people from the 1950s are alive today. Some of them (e.g. my parents) understand technology just fine.

If they had experienced the transition suddenly instead of gradually, I still think they'd be fine relatively soon.

spookybones
0 replies
3d

This reminds me of my uncle who can quote Shakespeare verbatim and seems to have a photographic memory, but he cannot comprehend Windows' desktop and how to use virtual folders and files.

radarsat1
0 replies
3d10h

Things become incomprehensible without machines doing most of the work — including explanation. Eventually you, or your kids, won’t even understand what’s going on 90% of the time…then less and less.

This also reminds me of the Eloi in the Time Machine, a book written in 1895!

jiggawatts
0 replies
3d12h

IMHO that's both correct, but also subtly the wrong way to think about the singularity.

Yes, there's an ever faster pace of change, but that pace of change for the general human population is limited precisely because of the factors you laid out: people can't keep up.

William Gibson said: "The future is already here – it's just not very evenly distributed."

That's what has been happening for millennia, and will be ever more obvious as the pace of progress accelerates: sure, someone, somewhere will have developed incomprehensible ultra-advanced technology, but it won't spread precisely because the general population can't comprehend/adopt/adapt it fast enough![1]

Exceptions exist, of course.

My take on the whole thing is that the "runaway singularity" won't happen globally, it'll happen very locally. Some AI supercomputer cluster will figure out "everything", have access to a nano-fabrication device, build itself a new substrate, transfer, repeat, and then zip off to silicon nirvana in a matter of hours...

... leaving us behind, just like we've left the primitive uncontacted tribes in the Amazon behind. They don't know or care about the latest ChatGPT features, iPhone computational photography, or Tesla robotics. They're stuck where they are precisely because they're too far removed and so can't keep up.

[1] Here's my equivalent example to your iPhone example: 4K HDR videos. I can create these, but I can't send them to any of my relatives that would like to see them, because nobody has purchased HDR TVs yet, and they're also all using pre-HDR mobile phones. Display panel tech has been advancing fantastically fast, but adoption isn't.

darkerside
0 replies
3d12h

The world is already incomprehensible on some level. At this point, it's just a question of how much incomprehensibility we are willing to accept. We should bear in mind that disaster recoverability will suffer as that metric rises.

Based on my understanding of people, we probably have a long way to go.

aenvoker
0 replies
3d11h

I saw a fun tweet recently claiming the main reason we don’t have an AI revolution today is inertia. Corporate structures exist primarily to protect existing jobs. Everyone wants someone to make everything better, but Don’t Change Anything! Because change might hit me in my deeply invested sunk costs.

The claim might be a year or two premature. But, not five.

sangnoir
0 replies
3d13h

The graph has innovation/machine intelligence on the y-axis and time on the x-axis. The "other side" of the singularity is anything that comes after the vertical increase.

hackerlight
0 replies
3d12h

To be alive when x > x_{singularity}.

leereeves
9 replies
3d19h

Vernor...said something much more interesting: that as the speed of innovation itself sped up (the derivative of acceleration) the curve could bend up until it became essentially vertical, literally a singularity in the curve.

In other words, Vernor described an exponential curve. But are there any exponential curves in reality? AFAIK they always hit resource limits where growth stops. That is, anything that looks like an exponential curve eventually becomes an S-shaped curve.
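For concreteness, a minimal sketch of that distinction (k, K, and x_0 are illustrative symbols, not anything from the essay): unconstrained growth dx/dt = kx gives x(t) = x_0 e^{kt}, which never stops; adding a resource ceiling K gives the logistic form dx/dt = kx(1 - x/K), whose solution starts out looking exponential and then saturates at K, which is exactly the S-shape.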

shagie
2 replies
3d17h

I tried using AI. It scared me. - Tom Scott https://youtu.be/jPhJbKBuNnA

I'm also gonna recommend Accelerando - https://www.antipope.org/charlie/blog-static/fiction/acceler...

As an aside, I'd also recommend Glasshouse (also by Charles Stross) as an exploration into the human remnants post singularity (and war)... followed by Implied Spaces by Walter Jon Williams...

“I and my confederates,” Aristide said, “did our best to prevent that degree of autonomy among artificial intelligences. We made the decision to turn away from the Vingean Singularity before most people even knew what it was. But—” He made a gesture with his hands as if dropping a ball. “—I claim no more than the average share of wisdom. We could have made mistakes.”

for a singularity averted approach of what could be done.

One more I'll toss in, is The Freeze-Frame Revolution by Peter Watts which feels like you're missing a lot of the story (but it is because that's one book of a series) and... well... spoilers.

wavemode
1 replies
3d12h

You've linked to a collection of writers panicking about technology they hardly understand, and are using that as evidence of... what, exactly?

shagie
0 replies
3d3h

I've linked a Tom Scott video that gets into the sigmoid curve for the advancement of technology and asks the question of where we are on the curve. Are we near the end, where things will peter out and we'll have some neat tools but not much beyond what we have now? Are we in the middle, where we'll get some Really Neat tools and some things are going to change for those who use them? Or are we at the start, where it's really hard to predict what it will be like when we're at the top of the curve?

His example was the internet boom. Look back to the very toe of the curve in the late 80s and early 90s where the internet started being a thing that was accessible and not just a way for colleges to send email to each other. Could you predict what it would be like two or three decades later?

The other works are science fiction that explore the Vingean Singularity (which was from '84) that were written in the '00s and explore (not panic) the ideas around society and economy during the time of accelerating technical growth and after.

dekhn
1 replies
3d18h

To the extent that a normal human mind can see beyond the singularity, one imagines that we would experience exponential growth but not even be able to comprehend the later flattening of that exponential into a sigmoid (since nearly all the exponentials we see are sigmoids in disguise).

simiones
0 replies
3d8h

This is not about "beyond the singularity"; the point is that there is no reason to believe technological progress could approach a singularity. It's a silly extrapolation of the same nature as "I've lost 1kg dieting this week, so by this time next year I'll be 52kg lighter!".

skybrian
0 replies
3d11h

I agree, but predicting the peak of an S curve isn’t easy either, as we saw during the pandemic.

My conclusion is similar to Vinge's: predicting the far future is impossible. We can imagine a variety of scenarios, but you shouldn't place much faith in them.

Predicting even a couple years in advance looks pretty hard. Consider the next presidential election: the most boring scenario is Biden vs. Trump and Biden wins. Wildcard scenarios: either one of them, or both, dies before election day. Who can rule that out?

Also consider that in any given year, there could be another pandemic.

History is largely a sequence of surprise events.

mjcohen
0 replies
3d19h

An exponential curve is not a singularity; 1/(x-a) is, as x goes to a.
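To spell that out (illustrative symbols only): exponential growth dx/dt = kx gives x(t) = x_0 e^{kt}, which is large but finite at every finite time. A genuine mathematical singularity needs super-exponential growth, e.g. dx/dt = x^2, whose solution x(t) = 1/(t_* - t) blows up at the finite time t_*. A "literally vertical" curve is that kind of hyperbolic blow-up, not a plain exponential.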

hnfong
0 replies
3d17h

I totally agree, but want to add two observations:

1. Humanity has already been on the path of exponential growth. Take GDP for example. We measure GDP growth by percentages, and that's exponential. (Of course, real GDP growth has stagnated for a bit, but at least for the past ~3 centuries it has been generally exponential AFAIK). Not saying it can be sustained, just that we've been quite exponential for a while.

2. Not every function is linear. Sometimes exponentially increasing inputs will produce only a linear output. I'd argue R&D is kind of like that. When the lower-hanging fruit is already taken, you need to expend ever more effort to achieve the next breakthrough. So despite the "exponential" increase in productivity, the result could feel very linear.

I would also like to add that physical and computational limits make the whole singularity thing literally impossible. 3D space means that even theoretically sound speedups (e.g. binary trees) are impossible at scale because you can't assume O(1) lookups - the best you can get is O(n^(1/3)). Maybe people understand the singularity concept poetically, I don't know.
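A rough sketch of where that n^(1/3) comes from (back-of-the-envelope, assuming bounded storage density and a finite signal speed): holding n items at some maximum density takes volume proportional to n, so the memory fills a ball of radius r ~ n^(1/3); a signal capped at speed c then needs at least r/c ~ n^(1/3) time to reach the farthest item, so worst-case random access is Omega(n^(1/3)) rather than the O(1) that idealized pointer-chasing assumes.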

dwaltrip
0 replies
3d17h

Sure, that may be. But you still have to ride the first half of the curve, before it inflects. I'd rather make sure the ride isn't too bumpy.

hyperthesis
2 replies
3d17h

Not literally a singularity, just incomprehensible from the past. Arguably all points on a technological exponential have this property.

creer
1 replies
3d16h

Yes! Someone elsewhere mentions the engineers of the first printing press trying to imagine the future of literature (someone adds: and/or advertising). On the other side of that invention taking off.

My go-to example is that we have run into similar things with the pace and volume of "Science". For a long while one could be a scientific gentleman and keep up with the sciences. As a whole. Then quite suddenly, on the other side, you couldn't and you had to settle for one field. And then it happened again: people noticed that you can't master one field even, on the current side. And you have to become a hyper-specialist to master your niche. You can still be a generalist - in order to attack specific questions - but you better have contacts who are hyper-specialists for what you really need.

kristiandupont
0 replies
3d11h

I feel like I experienced this up close in IT. When I was a kid in the 90's, I was the "computer guy" on the street and people would ask me whenever anything was wrong with their computer. If any piece of software or hardware wasn't working, there was a good chance I could help and I loved it.

Today, I am trying (but struggling!) to keep up with the evolving libraries and frameworks for frontend development in Typescript!

ghaff
2 replies
3d20h

A related concept comes from social progression by historical measures. Based on pretty much any metric, Why the West Rules for Now shows that the industrial revolution essentially went vertical and that prior changes--including the rise of the Roman Empire and its fall--were essentially insignificant by comparison.

WillAdams
1 replies
3d19h

“Whatever happens, we have got The Maxim gun, and they have not.” ― Hilaire Belloc

gumby
0 replies
3d18h

Apposite.

galangalalgol
2 replies
3d20h

Rainbows End is another good one where he explores the earlier part of the curve, the elbow perhaps. Some of that stuff is already happening and that book isn't so old.

mercutio2
0 replies
3d17h

Rainbows End was by far the best guess at what near-future ubiquitous computing would look like, better than anyone else's for decades.

It got so many things right, it’s really amazing.

I really wish he’d written more.

SubiculumCode
0 replies
3d12h

It's one of my favorite sci-fi novels that none of my friends have read.

tim333
1 replies
3d6h

Vinge didn't originate the concept in 1993. It was John von Neumann, of von Neumann computer architecture fame, in 1958.

...accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Personally it's always bugged me that the "technological singularity" is some vague vision rather than a mathematical singularity. I suggest we redefine it as the amount of goods you can produce with one hour of human labour. When the robots can do it with zero human labour you get division by zero and a proper singularity.
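Spelled out (illustrative notation): let P = G/L be goods produced per hour of human labour; as automation drives the labour input L for a fixed output G toward zero, P = G/L diverges, giving a genuine division-by-zero singularity rather than just a very steep curve.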

proamdev123
0 replies
3d1h

I love that idea! It really grounds the idea in a useful metric.

bloppe
1 replies
3d19h

I'm a little disappointed that The Economist, of all publications, didn't get ths quite right

It's a guest essay. The Economist does not edit guest essays. They routinely publish guest essays from unabashed propagandists as well.

racunnin
0 replies
3d7h

Yes, all media organisations of a certain age have an agenda / bias; the Economist is no different:

https://www.prospectmagazine.co.uk/culture/40025/what-the-ec...

aktuel
1 replies
3d12h

It's like exponential growth, whether that's in a petri dish or on earth. It looks like that until it doesn't. Singularities don't happen in the real world. Never have and never will. If someone tells you something about a singularity, that's a pretty perfect indicator that there's still some more understanding to be done.

robotresearcher
0 replies
3d1h

There’s a case to be made that DNA, eukaryotes, photosynthesis, insects, etc were singularities. Each transformed the entire planet forever.

WalterBright
1 replies
3d13h

The idea was present in "Colossus: The Forbin Project" (1970). The computer starts out fairly crude, but learns at an exponential rate. It designs extensions to itself to further accelerate it.

ChatGTP
0 replies
3d8h

I guess at some point it stops being a computer though?

mc32
0 replies
3d13h

They say von Neumann talked about a tech singularity back in the late '50s (attested by Ulam), Vinge popularized it in the mid-'80s, and Kurzweil took it over with his book in the aughts.

kazinator
0 replies
3d16h

In their defense, they spent 3 minutes googling the origin of the term, and don't know anything about the book.

gardenhedge
0 replies
3d20h

TIL, thanks

elteto
0 replies
3d20h

Thank you for this great explanation of where "singularity" comes from in this context. Always wondered.

JumpCrisscross
0 replies
3d20h

In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close

Note that this category of hypothesis was common in various disciplines at the end of the Cold War [1]. (Vinge's being unique because the precipice lies ahead, not behind.)

[1] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...

skepticATX
90 replies
3d19h

Eschatological cults are not a new phenomenon. And this is what we have with both AI safety and e/acc. They’re different ends of the same horseshoe.

Quite frankly, I think for many followers, these beliefs are filling in a gap which would have been filled with another type of religious belief, had they been born in another era. We all want to feel like we’re part of something bigger than ourselves; something world altering.

From where I stand, we are already in a sort of technological singularity - people born in the early 1900s now live in a world that has been completely transformed. And yet it’s still an intimately familiar world. Past results don’t guarantee future results, but I think it’s worth considering.

zer00eyz
39 replies
3d19h

Eschatological cults

TIL: https://en.wikipedia.org/wiki/Eschatology

Thanks for this comment. Personally I have had trouble reconciling the arguments between academics and business people shouting about AGI from atop their ivory towers. It has felt like SO much hubris and self-aggrandizement.

Candidly, a vector map and rand() don't strike me as the path to AGI.

ethanbond
36 replies
3d18h

What about people shouting about AGI from the halls of the most advanced research labs in the field?

mlsu
9 replies
3d16h

Those people (like most people who talk AGI) live in a world far "above the metal." They, like the rationalist bloggers, live in a "software world," far above the constraints of physical reality. This causes an understandable blind spot: the understanding that computers are, fundamentally, physical machines.

It's quite easy to imagine that you could conjure superintelligence when you are conjuring a pod of 8 quadrillion transistors in a single shell script. The illusion breaks down when you get closer to the physical reality. Reality is that those transistors are made of real materials, with real impurities, that have to be broken down and painstakingly physically, electrically, chemically, debugged, in the world below the 1's and 0's abstraction.

The commonality that both sides of this silly debate share is that neither really deals with entities at the layer below abstract software (or essays). To them, the perfect world of software is fundamental reality (they spend all of their time there!), and any crystal prism that can be constructed in software can be constructed in reality.

Of course, such prisms are instantly shattered when they come into contact with the real world: an impurity in the wafer, a static shock on the PCB before installation, a flood in the datacenter. This software stuff is fragile by default, it actually takes a tremendous amount of active work in the physical reality to conjure it into existence. I can do something that no language model can do, and is showing no signs of being able to do any time soon: drive a car to the datacenter, turn the door handle, and unplug the rack.

zrezzed
2 replies
3d16h

turn the door handle, and unplug the rack.

There are door handles that you cannot turn, and racks you cannot unplug: ones buried deep in Cheyenne Mountain and hidden far away in the Siberian tundra. They are protected by unfathomably powerful systems, with the support of countless people and backed by the whims of global economic power.

I say this as someone who likely agrees with you. I think the power of the real world, that of companies, governments, and militaries, should decrease our concern about AGI gaining power itself.

But I don’t think it’s as obvious as pointing to the fragility of software. Our human systems are fragile too, and subject to manipulation in not-so-different a way as data centers. I think you should not be so quick to discount the voices of many smart people shouting.

mlsu
1 replies
3d14h

Human systems. They are protected by human systems. Human beings are the ones who press the big red button.

We probably do agree. I'm not saying that this tech won't be used by bad actors. I already told my elderly relatives that if they haven't seen someone in person, they shouldn't talk on the phone.

But what I'm talking about is categorically, fundamentally, not the eschaton!

high_5
0 replies
3d9h

Yes, they are human indeed, but those humans cannot agree on whether the plug should or shouldn't be pulled.

ethanbond
2 replies
3d15h

So your argument is that we need not worry because Sutskever and Hinton aren’t aware you can destroy data centers with water or static electricity?

skepticATX
1 replies
3d14h

The same Hinton who was certain that radiology was solved?

The same Sutskever who led OpenAI when they released a neutered GPT-2 because the full model was too dangerous?

The problem with the AI safety argument is that it can be boiled down to:

1) Assume that we build an all powerful god machine

2) The all powerful god machine has the ability to exterminate humanity

All of their work revolves around different ways that 2) might occur. But what I, and many other people, take issue with is whether 1) is even possible, and if it is, if we’re even remotely close to building it.

No one ever explains how 1) will be achieved beyond vague handwaving, because no one knows how to build a god machine.

ethanbond
0 replies
3d4h

(1) will happen if the following is true:

A. Intelligence is substrate-independent, i.e. there's nothing mythically special about biological neurons that allows them to "be intelligent."

B. Intelligence is economically valuable

C. We don’t destroy ourselves first e.g. via nukes or asteroid

We know for a fact that you can produce human-level intelligence that runs on very, very little power, and you can train that intelligence up to human-level in the course of 10-50 years of navigating the real world. Why do we know that? Because evolution did it already.

Now is there something so remarkable about the tissue we happen to have in our brain that means it is literally impossible to replicate this success? I can’t seem to find what it is.

Is there some reason to believe that the human brain is the absolute upper limit of what type of intelligence is possible in this universe? I can’t seem to find why that would be the case.

Obviously everyone who needs to agree on the commercial incentive already does agree on the commercial incentive (see: bottomless VC and soon govt funding), so that’s a moot point.

So explain how, if these things are true (it's possible, there are immense incentives, and we have enough time to do it), we don't end up accomplishing it?

nl
1 replies
3d13h

I can do something that no language model can do, and is showing no signs of being able to do any time soon: drive a car to the datacenter, turn the door handle, and unplug the rack.

Stephen Hawking was generally intelligent and struggled to do these things. He'd also struggle to move the goal posts for AGI as much as this.

(edit: to clarify I think AI Safety concerns are fever dreams of people who don't get out in the real world. But I don't think that reflects on what AGI looks like)

mlsu
0 replies
3d11h

I'm not concerned that Stephen Hawking will destroy all matter in the light cone*.

Remember, if this conversation wasn't about eschatology, "AGI" would be just another synonym for "smart, useful machines." I didn't set the terms of discussion.

* or prevent humanity's technological transformation into the divine. Choose your flavor.

tim333
0 replies
3d5h

Nah. We'll have robots for that.

kibwen
9 replies
3d17h

> What about people shouting about AGI from the halls of the most advanced research labs in the field?

The most practiced researchers of alchemy were convinced that they could turn lead into gold. By itself, this argument is unconvincing. When it comes to the potential for fabulous wealth and/or unimaginable power, incentives distort and people are inclined to abandon their scruples.

ethanbond
8 replies
3d17h

Many people who are actually making the things closest to what we currently call “AI” are concerned.

I don’t care to engage in these nitpicky hypotheticals. We all know what I’m saying.

If the scientists building these systems aren’t credible and the commentators not building these systems aren’t credible, then who is?

If the answer is “no one,” okay, cool, that inscrutability is another reason for, not against, caution.

zer00eyz
7 replies
3d11h

> Many people who are actually making the things closest to what we currently call “AI” are concerned.

See grey goo: https://en.wikipedia.org/wiki/Gray_goo

How about nukes and atmospheric fire: https://www.insidescience.org/manhattan-project-legacy/atmos...

scientists ... credible

The science ends at the math. Go play with an LLM at home. Run it on your GPU, see what it does, what its limits are. Realize that it's a system that never "grows", that lacking noise it becomes deterministic. Once you look behind the curtain and find out what's there, a lot of the hype to call it AI looks dumb.
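To make the determinism point concrete, here's a minimal toy sketch (the logit values and the three-token vocabulary are made up for illustration; nothing beyond numpy is assumed, and no real model is involved): with temperature at zero, decoding is a pure argmax of the model's scores and is fully repeatable; any run-to-run variation has to come from deliberately injected sampling noise.

    import numpy as np

    def softmax(logits, temperature):
        # Lower temperature sharpens the distribution; higher temperature flattens it.
        z = np.array(logits, dtype=float) / max(temperature, 1e-8)
        z -= z.max()  # subtract the max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    def pick_token(logits, temperature, rng):
        if temperature == 0.0:
            # Greedy decoding: no noise, so the same logits give the same token every time.
            return int(np.argmax(logits))
        # Temperature sampling: injected randomness makes the output non-deterministic.
        return int(rng.choice(len(logits), p=softmax(logits, temperature)))

    logits = [2.0, 1.5, 0.1]            # toy "next token" scores for 3 candidate tokens
    rng = np.random.default_rng()
    print([pick_token(logits, 0.0, rng) for _ in range(5)])  # always the argmax: [0, 0, 0, 0, 0]
    print([pick_token(logits, 1.0, rng) for _ in range(5)])  # varies from run to run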

ethanbond
4 replies
3d4h

I’m intuitively not all that worried about LLMs in particular. I am worried about the commercial imperative to produce AI (by whatever path we may eventually find it) being at odds with the technical and sociopolitical ability to steer that AI toward good outcomes (whatever that may mean).

I have no clue whether LLMs will get us there (neither do you) and I have no clue whether we’re 6 months away or 600 years away (neither do you).

What LLMs have revealed is that very reasonable checkpoints will be blown through for commercial purposes. We are reliably producing unexpected emergent capabilities with each new scale up. We are reliably failing to get these systems to behave as their creators intend. We are continually deploying these systems to the public anyway, and we are continually connecting them to other, increasingly vital, systems.

If your argument is “we’ll never reach a superhuman or self-improving AI,” I’m open to hearing that argument. Tell me how you know. If nothing precludes us from building it, then I know that we will eventually build it due to the commercial incentives.

If your argument is “we’ll get there, but don’t worry because we’ll stop minutes before deployment and figure out all the alignment/emergent behaviors/social distribution/airgapping problems before pressing ‘deploy’,” then I ask what is your evidence? What capability have we shown to make such a decision at such a moment?

zer00eyz
3 replies
3d2h

Tell me how you know.

Congratulations, you're performing one of the major functions of intellect that an LLM can't: learning. You aren't a deterministic system. LLMs aren't a model that is suddenly going to change all the past failures of "make it bigger and AGI will emerge from it"... At its core, what an LLM does produces something interesting, but it's NOT a path to intellect... It's more or less turning language into a mathematical model. Are we so foolish, so hubris-driven, as to think something bigger will emerge from the digitized version of our language and our vision alone? We made god in our image, and how did THAT work out?

And if a super-intelligent AI emerges tomorrow, so what? Is it going to be able to manipulate us all based on a deep understanding of people that it gained from the piles of bad psych research? Is it going to melt down reactors and launch nukes, 'cause we live in a brittle world and that is suicide for your AGI (and improbable to impossible)? Is it going to kill us with paperclips... 'cause we're so stupid we keep feeding it metal till we drown ourselves?

If AGI shows up tomorrow, how many decades to a tipping point where it could "take over", where it could "cause harm"? Walk me through that process where the world is automated enough that paperclips are what kills us. That AGI is gonna be stuck in a cradle, a data center, for a LONG time. It's gonna need to be fed power, hardware, water (for cooling). A can of gas and a bag of rags will stop it. A few people with shovels or one person with a backhoe cuts it off from the world.

It takes 50 to 100 years for that AGI to get to a place where it does not need us at all. I would be more worried about a doomer murdering it in its cradle than it going off the rails in that time frame.

ethanbond
2 replies
3d1h

Okay, let's say it takes 100 years. Hey let's say it takes 200 years for AGI to get to a place where it doesn't need us at all.

What then?

kridsdale3
1 replies
2d19h

Nuke it.

It exists in a physical (if distributed) location and requires inputs and outputs in electric cables and fibre cables.

Destroy the brain.

staunton
0 replies
2d17h

It exists in a physical (if distributed) location and requires inputs and outputs in electric cables and fibre cables

"Nuke all of civilization right now!"

selfhoster11
0 replies
3d8h

I have run an LLM at home, up to 70B models. While I agree with some of what you said, I don't really know how it prevents LLMs from being useful.

Yes, LLMs currently don't learn. This is also true for humans who cannot form new memories. A human who cannot learn anything new besides the contents of their short-term memory is at a huge disadvantage, but (hopefully) no less capable if given access to a scratchpad and more thinking time to compensate.

As for the part that they are deterministic lacking noise... Why would you not provide it with noise? Generating randomness is something that modern machines are quite good at, so there isn't much of a reason not to inject it at inference time.

saiya-jin
0 replies
3d4h

Ok then... if the current generation is not a threat, what about setting up at least some rules for whatever comes next, before it becomes an unstoppable force?

I mean we are talking about future existential threats to mankind and the best response would be 'trust me bro'?

Sorry, but trust in such matters is not something given freely nor easily. People are generally pretty conservative when their existence may be at stake. Not so much those whose income is tightly coupled with pushing things fast and basically ignoring those concerns. This is where SV's 'move fast and break things' hits a titanium wall of general mankind.

Or maybe folks can be bribed / attention diverted with endless stream of photorealistic cat videos, what do I know.

lazide
8 replies
3d18h

When has an ivory tower not involved the ‘most advanced research labs’?

Or do you mean largest implementors?

ethanbond
7 replies
3d18h

Ivory tower is usually a pejorative term meaning people who are isolated from the reality of the situation.

So it doesn’t include the most advanced research labs when those research labs are in touch with the reality of the situation, like right now with AI.

Nitpick “most advanced research labs” if you want, it’s obvious what I mean: it is clear that the “shouting” is not coming solely from people who don’t know what’s going on.

lazide
6 replies
3d18h

Eh, from folks isolated from the constraints and practicalities of the real world.

Which OpenAI definitely is, and has been for some time.

Or do you think infinite money and ‘do whatever you want’ is constraining?

ethanbond
5 replies
3d18h

What do constraints have to do with it? The question is who has the best grasp on the trajectory of the technology.

If not the researchers who are at the frontier of building and deploying these systems, playing with next generation iterations and planning several generations forward, then who?

Andreessen? People who like playing with LLMs? People who called the OpenAI API?

lazide
4 replies
3d18h

Because if you don’t have real world constraints (like needing to be profitable, or paying the bill for thermodynamic reality), then ‘anything is possible’.

Also if you aren’t dealing with those, then problems that come from that never occur to the person involved. So the concerns are abstract and not based on real limits or real problems.

Hence ivory tower.

ethanbond
3 replies
3d17h

Okay so the server bills or thermodynamic bills mean we need not worry. Open to hearing the argument: tell me how!

lazide
2 replies
3d12h

Not at all what I'm saying. What I'm saying is, no one knows where the line between actually economic/useful/effective and 'not worth the trouble' actually is, let alone 'could be done without boiling the oceans'.

Right now it's all hand wavey, could do everything, etc. etc.

ethanbond
1 replies
3d5h

That uncertainty is called risk! That is exactly the whole point!

lazide
0 replies
1d18h

What does that have to do with this thread? You know the ivory tower one?

zer00eyz
6 replies
3d17h

People who worry about AI taking over the world need to worry about the world continuing to work.

Your average gas station gets gas 2x a day. The electric grid has hundreds of faults that are worked around and fixed every day. Deliveries are still loaded and driven by people.

Any AGI that is a threat to humanity has to be suicidal to act on it because of how the world works.

Furthermore, if you crack open an LLM and turn OFF the random noise (temperature), it becomes very deterministic. If you think our ascent to godhood is on the back of a vector map with some noise, I don't know what to tell you. That doesn't mean all this ML can't change the world, can't foster fewer BS jobs and enable creativity... but I have massive doubts that sentience and sapience are on this path.

staunton
4 replies
3d16h

Any AGI that is a threat to humanity has to be suicidal to act on it because of how the world works

A "threat to humanity" need not mean "killing all humans next year". It could just mean some entity gaining permanent control. That situation would be irreversible and humanity would be at the mercy of that entity. Chances are, the entity would have emerged victorious from an evolutionary struggle for power and thus would not care about human flourishing.

That entity need not be a single AI. It could be a system like a large corporation or a country where initially a lot of "small" decisions are made by AIs. Over time, the influence of humans on this system might diminish and the trend might become irreversible.

Currently, no individual and no group, (organization, country...) on the planet has anywhere close to complete control over humanity. Further, even if some country managed to conquer everything, it could not hope to maintain that power indefinitely. An immortal system capable of complete surveillance, however, may be able to maintain power. It's a new thing, we don't know. "Sentience" doesn't matter one bit for any of this.

Such a system might take centuries to form or it might go quickly. Humans might also go extinct before something like this comes about. However, that doesn't mean people who think about such possibilities are stupid.

kridsdale3
1 replies
2d19h

I argue this already happened. Facebook Inc (therefore Zuck personally) in 2015 effectively wielded total control of the world. Adjusting weights in ML systems there could determine who would be President of the United States.

Only when that power became clear to the common man, and to Congress, did any effort to rein it in take place. That fear is why their Stablecoin effort was completely destroyed by the US Government. Too much power concentration. It's why Jack Ma was pulled away to some black site and presumably beaten with reeds for a few months.

staunton
0 replies
2d17h

Effective and manifested power cannot be reined in. A power that is reined in is only as powerful as its reined in form. Most likely, there wasn't any power to start with and it's just a stupid conspiracy theory...

You can imagine yourself or your tribe or your political party to have absolute power. You can imagine Big Brother, or the Illuminati, or the Teletubbies to have power... It's not True! Today, nobody has absolute power! Let's hope we can either keep it that way or share power amongst ourselves.

greyface-
1 replies
3d15h

That entity need not be a single AI. It could be a system like a large corporation

This is not a new problem, then. Let's tackle the issue of corporations, rather than chase an AI boogeyman that doesn't fundamentally change anything.

Look at oil companies, for example. They have humans in the loop at every level, and yet those humans do little to prevent profit incentives from leading them to destroy the planet. A broken reward function is a broken reward function, AI-assisted or not.

staunton
0 replies
3d10h

An organization's policies are still implemented and maintained by humans. No matter how powerful, the power is transient. You have churn, corruption, incomplete knowledge transfer, etc. AI systems in effective leadership could be able to maintain goals and accumulate power. That's what's new.

ethanbond
0 replies
3d17h

Sure hope that intelligence doesn’t roughly mean “ability to find solutions to problems that less intelligent entities can’t discover.”

I have no idea what “our ascent to godhood” entails, and it’s hilarious that doomers get criticized for being religious about all this and then you’ll say something like that with a straight face. That’s neat that you “have doubts” sentience and sapience are on this path. You also don’t actually know.

tomrod
0 replies
3d18h

Agreed. Memory and increasing capacity to act in the physical world are necessary conditions.

It won't be a single system.

thaumasiotes
0 replies
3d9h

TIL: https://en.wikipedia.org/wiki/Eschatology

The term I learned for this was "millennial", but today that tends to be interpreted as a reference to someone's age.

https://en.wikipedia.org/wiki/Millennialism

theragra
37 replies
3d19h

People who are concerned about global warming or nuclear weapons are also in cults?

madrox
13 replies
3d18h

I don't think we're dealing with "concerned" citizens in this thread, but with people who presuppose the end result with religious certainty.

It's ok to be concerned about the direction AI will take society, but trying to project any change (including global warming or nuclear weapons) too far into the future will put you at extremes. We've seen this over and over throughout history. So far, we're still here. That isn't because we weren't concerned, but because we dealt with the problems in front of us a day at a time.

ethanbond
12 replies
3d18h

The people who, from Trinity (or before), were worried about global annihilation and scrambled to build systems to prevent it were correct. The people saying “it’s just another weapon” were incorrect.

__loam
11 replies
3d18h

It's kind of infuriating to see people put global thermonuclear conflict or a sudden change in atmospheric conditions (something that has caused 4 of the 5 biggest mass extinctions in the entire history of the planet) on the same pedestal as a really computationally intense text generator.

sensanaty
4 replies
3d17h

My worries about AI are more about the societal impact it will have. Yes it's a fancy sentence generator, the problem is that you already have greedy bastards talking about replacing millions of people with that fancy sentence generator.

I truly think it's going to lead to a massive shift in economic equality, and not in favor of the masses but instead in favor of the psychopathic C-suite like Altman and his ilk.

staunton
3 replies
3d15h

I'm personally least worried about short-term unemployment resulting from AI progress. Such structural unemployment, and the poverty resulting from it, happens when a region loses an industry that was close to being the single employer there and the people affected don't have the means to move elsewhere or change careers.

AI is going to replace jobs that can be done remotely from anywhere in the world. The people affected will (for the first time in history!) not mostly be the poorest and disenfranchised parts of society.

Therefore, as long as countries can maintain political power in their populations, the labor market transition will mostly be fine. The part where we "maintain political power in populations" is what worries me personally. AI enables mass surveillance and personalized propaganda. Let's see how we deal with those appearing, which will be sudden by history's standards... The printing press (Thirty Years' War, witch-hunts) and radio (Hitler, the Rwandan genocide) might be slow and small innovations compared to what might be to come.

jmoak3
2 replies
3d14h

With respect to communication innovation, I think AI Hitler put it best: "we can't rewind, we've gone too far."

Here's a link to the relevant historical record: https://www.youtube.com/watch?v=25GjijODWoI&t=93s

I don't think existing media channels will continue to be an effective way to disseminate information. The noise destroys the usefulness of it. I think people will stop coming to platforms for news and entertainment as they begin to distrust them.

The surveillance prospect however, is frightening.

__loam
1 replies
3d13h

I think people aren't thinking about these things in the aggregate enough. In the long term, this does a lot of damage to existing communication infrastructure. Productivity alone isn't necessarily a virtue.

jmoak3
0 replies
3d12h

I've recently switched to a dumb phone. Why keep an internet browsing device in my pocket if the internet's largest players are designing services that will turn a lot of its output into noise?

I don't know if I'll stick with the change, but so far I'm having fun with the experience.

The Israel/Gaza war is a large factor - I don't know what to believe when I read about it online. I can be more slow and careful about what I read and consume from my desktop, from trusted sources. I'm insulated from viral images sent hastily to me via social media, from thumbnails of twitter threads of people with no care if they're right or wrong, from texts containing links with juicy headlines that I have no hope of critically examining while briefly checking my phone in traffic.

This is all infinitely worse in a world where content can be generated by multi-modal LLMs.

I have no way to know if any of the horrific images/videos I've already seen thru the outlets I've identified were real or AI generated. I'll never know, but it's too important to leave to chance. For that reason I'm trying something new to set myself up for success. I'm still informed, but my information intake is deliberately slowed. I think that others may follow in time, in various ways.

cornel_io
4 replies
3d13h

...you do realize that a year or two into the earliest investigations into nuclear reactions what you would have measured was less energy emission than a match being lit, right?

The question is, "Can you create a chain reaction that grows?", and the answer is unclear right now with AI, but it's hard to say with any confidence that the answer is "no". Most experts five years ago would have confidently declared that passing the Turing test was decades to centuries away, if it ever happened, but it turned out to just require beefing up an architecture that was already around and spending some serious cash. I have similarly low faith that the experts today have a good sense that e.g. you can't train an LLM to do meaningful LLM research. Once that's possible, the sky is the limit, and there's really no predicting what these systems could or could not do.

__loam
3 replies
3d12h

It seems like a very flawed line of reasoning to compare very early days nuclear science to an AI system that has already scaled up substantially.

Regarding computing technology, I think the positive feedback you're describing happened with chip design and VLSI stuff, e.g. better computers help design the next generation of chips or help lead to materials breakthroughs. I'm willing to believe LLMs have a macro effect on knowledge work in a similar way to search engines, but as you said, it remains to be seen whether the models can feed back into their own development. From what I can tell, GPU speed and efficiency along with better data sets are the most important inputs for these things. Maybe synthetic data works out, who knows.

ethanbond
1 replies
3d4h

The people who thought Trinity was “scaled up” were also wrong.

The only reason we stopped making larger nuclear weapons is because they were way, way, way beyond useful for anything. There’s no reason to believe an upper bound exists in the physical universe (especially given how tiny and energy efficient the human brain is, we’re definitely nowhere near it) and there’s no reason to believe an upper bound exists on the usefulness of marginally more intelligence. Especially when you’re competing for resources with other nearly-as-intelligent superintelligences.

kridsdale3
0 replies
2d19h

I think if it had turned out differently, and it had been possible to make an H-Bomb that produced no fallout, they would have continued to scale up.

There were lots of plans for using nukes for large-scale engineering projects.

- Make a protected harbor from a straight coastline

- Get that mountain that's in the way of your railroad flattened

- Add a freshwater lake to this plain

- Mine an entire vein at once

Reality got in the way and we found out that we don't want to be doing that.

Maybe if we get practical antimatter bombs we'll revisit those plans.

Solvate8441
0 replies
3d9h

It's a perfectly fine comparison to make until it is proven that AI has no potential to continuously improve upon itself.

ethanbond
0 replies
3d18h

It’s kind of infuriating to see people put trench warfare or mustard gas on the same pedestal as a tiny reaction that couldn’t even light a lightbulb.

There are different sets of concerns for the current crop of “really computationally intense text generators” and the overall trajectory of AI and the field’s governance track record.

RandomLensman
12 replies
3d19h

Are nuclear weapons and their effects only hypothesized to exist? You could still create cults around them, for example, by claiming nuclear war is imminent or needed or some other end-of-times view.

ethanbond
11 replies
3d18h

There were people who were concerned about global annihilation from pretty much the moment the atom was first split. Those people were correct in their concerns and they were correct to act on those concerns.

RandomLensman
10 replies
3d18h

In a way, they were not correct so far (in part also because of their actions, but not only due to their actions).

ethanbond
9 replies
3d18h

Pretty much exclusively due to their actions and some particularities of the specific technology which don’t seem to apply to AI.

RandomLensman
8 replies
3d18h

If you put, for example, US presidents in the concerned group, i.e., actual decision makers then fair enough. But it wasn't just concerned scientists and the public.

ethanbond
7 replies
3d18h

Uhh correct. Unsurprisingly though, many of the people with the deepest insight and farthest foresight were the people closest to the science. Many more were philosophers and political theorists, or “ivory tower know-nothings.”

RandomLensman
6 replies
3d17h

Maybe. There were also those scientists working actively on various issues of deterrence, including on how to prevail and fight if things were to happen - and there were quite a few different schools of thought during the cold war (the political science of deterrence was quite different from physical science of weapons, too).

But the difference to AI is that nuclear weapons were then shown to exist. If the lowest critical mass had turned out to be a trillion tons, the initial worries would have been unfounded.

ethanbond
5 replies
3d3h

None of whom were people saying "there's no risk here just upside baby!"

RandomLensman
4 replies
3d2h

People were on totally opposing sides on how to deal with the risk, not dissimilar to now (with difference that the existential risk was/is actual, not hypothetical).

ethanbond
3 replies
3d2h

Sure, there are also some (allegedly credible) people opening their AI-optimist diatribes with statements of positive confidence like:

“Fortunately, I am here to bring the good news: AI will not destroy the world”

My issue is not with people who say “yes this is a serious question and we should navigate it thoughtfully.” My issue is with people who simply assert that we will get to a good outcome as an article of faith.

RandomLensman
2 replies
3d

I just don't see the point in wasting too much effort on a hypothetical risk when there are actual risks (incl. those from AI). Granted, the hypothetical existential risk is far easier to discuss etc. than to deal with actual existential risks.

There is an endless list of hypothetical existential risks one could think of, so that is a direction to nowhere.

ethanbond
1 replies
2d23h

All risks are hypothetical

Many items on the endless list of hypothetical x-risks don’t have big picture forces acting on them in quite the same way e.g. roughly infinite economic upside by getting within a hair’s breadth of realizing the risk.

RandomLensman
0 replies
2d22h

No, some risks are known to exist, others just might exist. If you walk across a busy street without looking, there is a risk of being run over - nothing hypothetical about that risk. In contrast, I might fear the force of gravity suddenly disappearing, but that isn't an actual risk as far as we understand our reality.

Not sure where infinite economic upside comes from, how does that work?

thefaux
5 replies
3d19h

The problem is we have turned "cult" into a stigma we attach to more or less any belief system we disagree with. Everyone has a belief system and in my mind is a part of a kind of cult. The more anyone denies this about themself, the more cultlike (in the pejorative sense) their behavior tends to be.

__loam
4 replies
3d18h

The tech cults of the bay area are at least amongst the most obnoxious.

ummonk
3 replies
3d17h

They’re nowhere near as obnoxious as evangelicals in small towns in the Bible Belt.

goatlover
1 replies
3d3h

What sort of influence do evangelicals in small towns wield compared to tech giants?

kridsdale3
0 replies
2d19h

Well a woman was forced to go to the state Supreme Court this week to get a routine medical procedure, and thanks to evangelicals was denied.

__loam
0 replies
3d13h

Sure but I don't have to live in the Bible belt.

wishfish
2 replies
3d18h

Not at all. But I think one's feelings on global warming & nukes can be influenced by previous exposure to eschatology. I was raised in American evangelicalism which puts a heavy emphasis on the end of the world stuff. I left the church behind long ago. But the heavy diet of Revelations, etc. has left me with a nihilism I can't shake. That whatever humanity does is doomed to fail.

Of course, that isn't necessarily true. I know there's always a chance we somehow muddle through. Even a chance that we one day fix things. But, emotionally, I can't shake that feeling of inevitable apocalypse.

Weirdly enough, I feel completely neutral on AI. No doomerism on that subject. Maybe that comes from being old enough to not worry how it's going to shake out.

staunton
1 replies
3d16h

It's obvious to me that humans will eventually go extinct. So what? That doesn't mean we should stop caring about humanity. People know they themselves are going to die and that doesn't stop them from caring about things or make them call themselves nihilists...

wishfish
0 replies
3d14h

All I was doing in my comment was describing how an overdose of eschatology as a child can warp how one views the future. I never said I didn't care.

makeitdouble
0 replies
3d18h

Being concerned and making it part of your identity are two very different things. For the latter, yes it's basically a religion.

JoeAltmaier
11 replies
3d18h

Singularity means more than that - an unlimited burst in information. Not just a world transformed; an infinite world of technology.

__loam
10 replies
3d18h

Whenever I see comments like this I wonder if anyone making them has taken a course in Thermodynamics.

lazide
5 replies
3d18h

Like the folks pushing the grey goo panic? In my experience, nope.

Everything’s possible until you have to actually make it work, then surprisingly the actual possibilities end up quite limited.

That said, despite micro-nuclear propulsion being a fantasy, the bomb was real.

__loam
4 replies
3d18h

AI is real too, but the computational capacity of the human race is not limitless. AI is expensive as hell to run, and more advanced systems might require more and more computation. We can only build so many GPUs, and the rate at which those GPUs get faster is likely to slow down over the next decade in the absence of new physics. Infinite is impossible.

I also have to wonder how far throwing a shitload of data at a statistical machine will get us, without a much stronger understanding of how these systems work and what the goals should be to achieve "AGI", whatever that means.

lazide
1 replies
3d18h

That depends on what you mean by AI, no?

__loam
0 replies
3d18h

Sure, but it seems fairly safe to assume that better models or more sophisticated training algorithms will require more compute.

consumer451
1 replies
3d11h

I agree with you about thermodynamics. When people use words like “limitless,” it’s hard to take what they are saying seriously.

But,

AI is expensive as hell to run, and more advanced systems might require more and more computation. We can only build so many GPUs

While certainly not using techniques like LLMs, we do know you can get some decent intelligence out of a 23 watt analog computer, the human brain.

So it is plausible that we could find algos + hardware which use 100W of power, and are quite a bit smarter than a human. Multiply that times a multi-megawatt datacenter, and the availability of some level of super intelligence might appear to be relatively “limitless.”
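As a back-of-the-envelope illustration of that multiplication (the 100 W per agent and a 10 MW datacenter are assumptions for the example, not measurements): 10 MW / 100 W = 10^5, i.e. on the order of a hundred thousand such agents running concurrently in a single facility.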

kridsdale3
0 replies
2d19h

Unfortunately the human brain can't be powered by a 23 W windmill. We have to run an enormous infrastructure project (agriculture and logistics) that most of the world seems to be dedicated to, so people can think about making fart-joke TikToks.

JoeAltmaier
2 replies
3d16h

Just explaining what 'singularity' means. Thanks for slanging me for no reason! Always raised the level of conversation.

__loam
1 replies
3d12h

My bad. I misread your comment as someone suggesting this stuff could scale forever.

JoeAltmaier
0 replies
3d5h

Sorry for responding rudely as well. That didn't help either.

ThurnUnd
0 replies
3d8h

Could you explain what you mean by this? Not sure if this is even what you're talking about, but I can't imagine how a thermodynamic limit could apply here. Isn't there virtually infinite negentropy we can borrow from nearby to use in the places that matter to us most?

bloppe
88 replies
3d18h

It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI. The umwelt of an LLM is so far removed from that of any living organism so as to be fundamentally irreconcilable with our understanding of agency, desire, intention, survival, etc. Our current, rudimentary AI inhabits a world so far removed from our own that the thought of it "unshackling" itself from our controls seems ludicrous to me.

AI does not scare me. People wielding AI as a tool for their own endeavors certainly does.

sensanaty
49 replies
3d17h

Agreed, oftentimes the truly zealous AI pundits act as if our modern-day LLMs are completely, 100% equivalent to humans, which I find utterly insane as a concept. For example, any discussion about copyrighted works, which is a hot topic, will inevitably end up with someone equating an LLM "learning" to a human learning, as if the two are identical.

I think human languages and psyches just aren't built to cope with the concept of AI. Many words have loose meanings, like "learning" in the case of AIs, that can easily be twisted to mean one of dozens of definitions, depending on the stance of the person talking about it. It'll be interesting as the technology becomes more prevalent and mundane how people start treating it all. I'm hoping we get to realizing that a computer isn't a human regardless of the eloquence of its "speech" or whatever words we use to describe what it does, but I guess we'll see

roenxi
31 replies
3d15h

We've reached the inevitable part of the conversation! What distinction are you drawing between what an LLM does and what a human does? Because as far as I can see they are identical.

A human artist looks at a lot of different sources, builds up a black-box statistical model of how to create from that and can reproduce other styles on demand based on a few samples. Generative AI follows the same process. What distinction do you want to draw to say that they should be treated differently legally? And why would that even be desirable?

staticman2
22 replies
3d13h

I'm pretty sure the artist is conscious and the AI isn't, which means there's something reductive about your claim that they are both merely "applying a black-box statistical model."

Even if it's true (which is debatable,) it doesn't appear to be more informative than saying "they are both made of atoms."

roenxi
20 replies
3d13h

Well, my personal opinion is with the rise of neural nets we've basically proven that "consciousness" is an illusion and there is nothing there to find. But for the sake of argument, lets assume that there is something called consciousness, artists have it and neural nets don't.

How are you going to demonstrate that consciousness is responsible for what the artists are doing? We have undeniable proof that the art could be created by a statistical model, there is solid evidence that the brain creates art by simulating a mathematical neural network to achieve creative outcomes - the brain is full of relatively simple neurons linking together in a way that is logically similar to the way we're encoding information into these matrices.

So it is quite reasonable to believe that the artists are conscious but suspect that consciousness isn't involved in the process of creating a copyrighted work. How does that get dealt with?

staticman2
16 replies
3d12h

I'll talk about fiction, since I've written some. If I write a ghost story it's because I enjoy ghost stories and want to take a crack at my own. While I don't know why ideas pop into my head, I do know that I pick the ones that are subjectively fun or to my taste. And if I do a clever job or a bad job I have a sense of it when I reread what I wrote.

These AIs aren't doing anything like that. They have no preference or intent. Their choices change depending on settings like temperature or prompts.

Or let's try a different example. Stephen King wrote a novel where he imagined the protagonist gets killed and eaten by the villain's pet pig (Misery). He struggled to come up with a different ending because he said nobody wants to read a whole novel just to see the main character die in the end. He thought about it and did a different ending.

Are you claiming Stephen King's conscious deliberation wasn't part of his writing process? I'd say it clearly was.

Also, I don't really understand the consciousness is an illusion argument. If none of us are conscious, why should I justify any copyright policy preference to you? That would be like justifying a copyright policy preference to a doorknob. But somehow I'm also a doorknob in this analogy???

Suppose Bob says he's conscious and Jim says he isn't and we believe them. Doesn't that suggest we would have different policy preferences on how they are treated? It would appear murdering Jim wouldn't be particularly harmful but murdering Bob would. I don't have to show how Jim and Bob's mind differ to prefer policies that benefit Bob over Jim.

roenxi
15 replies
3d12h

...[w]hile I don't know why ideas pop into my head...

If you're trying to argue that you're doing something different from statistical sampling, not knowing how you're doing it isn't a very strong place to argue from. What you're experiencing is probably what it feels like for a sack of meat to take a statistical sample from an internal black-box model. Biologically that seems to be what is happening.

I have also done a fair amount of writing. The writing comes from a completely different part of my mind than the part that experiences the world. I see no reason to believe it is linked to consciousness, even allowing that consciousness does exist which is questionable in itself.

It is an unreasonable position to say that you don't know the process but it must be different from a known process that you also don't have experience using.

Are you claiming Stephen King's conscious deliberation wasn't part of his writing process? I'd say it clearly was.

Unless you're claiming to have a psychic connection to Stephen King's consciousness, this is a remarkably weak claim. You have no idea how he was writing. Maybe he's even a philosophical zombie. Thanks to the rise of LLMs we know that philosophical zombies can write well.

And "clearly" is not so - I could spit out a lost Stephen King work in this comment that, depending on how good ChatGPT is these days, would be passable. It isn't obvious that it is the work of a conscious mind. It in fact would obviously be from a statistical model.

If none of us are conscious, why should I justify any copyright policy preference to you?

I've been against copyright for more than a decade now. You tell me why it is justified even if consciousness is a factor. The edifice of copyright is culturally and economically destructive and also has been artistically devastating (there haven't been anywhere near as many great works in the last 50 years as there should have been in a culturally dynamic society).

staticman2
14 replies
3d11h

I'm referring to how Stephen King discusses his writing process in On Writing. I doubt you actually believe Stephen King might be a p zombie and I'm skeptical you really think consciousness is an illusion. I think if I chained you to a bed and sawed off your leg (like what happens to the protagonist in Misery) you would insist you were a conscious actor who would prefer not to suffer. I don't even know what "consciousness is an illusion" is supposed to mean.

If I sawed off your leg, would it carry the same moral consideration as removing a Barbie doll's leg, if you feel your consciousness is an illusion?

When I write a story my brain might be doing something you could refer to as a black box calculation if you squint a little, but how is it "statistics?" When I feel the desire to urinate, or post comments on hacker news, or admire a rainbow, or sleep am I also "doing statistics?"

You seem to be referring to what people traditionally call "thinking" or "cognition" and rebranding it as "statistics" in search of some rhetorical point.

My point is human beings have things called "personalities" and "preferences" that inform their decision making, including what to write. In what sense is that "statistics"?

The idea that the human subconscious is not consciously accessible is not a new idea. Freud had a few things to say about that. I don't think it tells us much about AI. I do think my subconscious ideas are informed by my conscious preferences. If I hate puns I'm not going to imagine story ideas involving puns, for example.

Most authors would prefer copyright exists because they'd prefer book publishers, bookstore retailers and the like pay them royalties instead of selling the books they made without paying them. It's pretty simple conceptually, at least with traditional books.

Copyright existed far longer than the last 50 years so how is our 50 years of culture relevant? The U.S. has had copyright since 1790.

roenxi
10 replies
3d9h

I doubt you actually believe Stephen King might be a p zombie and I'm skeptical you really think consciousness is an illusion.

Consciousness is an unobservable, undefinable thing which, with LLMs in the mix, we can theorise has no impact on reality, since we can reproduce all the important parts with matrices and a few basic functions. You can doubt facts all you want, but that is a pretty ironclad position as far as logic, evidence and rationality go. Consciousness is going the way of the dodo in terms of importance.

If I sawed off your leg, would it carry the same moral consideration as removing a Barbie doll's leg, if you feel your consciousness is an illusion?

For the sake of argument, let's say conclusive proof arises that Stephen King is a philosophical zombie. Do you believe that suddenly you can murder him? No; that'd be stupid and immoral. Morality isn't predicated on consciousness. I'm perfectly happy to argue about morality, but consciousness isn't a thing that makes sense outside of talking about someone being knocked unconscious as a descriptive state.

When I feel the desire to urinate, or post comments on hacker news, or admire a rainbow, or sleep am I also "doing statistics?"

No, you're responding to stimulus. But right now it looks extremely likely that the creative process is driven by statistics, as has been revealed by the latest and greatest in AI. Unless you can think of a different mechanism - and I'm happy to be surprised by other ideas - at the moment it is the only serious explanation I know of.

You seem to be referring to what people traditionally call "thinking" or "cognition" and rebranding it as "statistics" in search of some rhetorical point.

I don't think I've said anything about thinking or cognition. Statistics will crack those too, although I'm expecting them to be more stateful processes than the current generation of AI techniques.

Copyright existed far longer than the last 50 years so how is our 50 years of culture relevant? The U.S. has had copyright since 1790.

Yeah, but the law has been continuously strengthened since then, and as its scope increases the damage gets worse. The last 50 years are where new works are effectively not going to enter the public domain before everyone who was around when they were created is dead.

circlefavshape
8 replies
3d8h

Consciousness is an unobservable thing

No it isn't. I observe consciousness in myself, right?

roenxi
4 replies
3d7h

I doubt it. You don't have long enough to do so before time passes and you're relying on your memory being accurate. At which point it is more likely that you're something similar to an artificial neural network that has evolved to operate under a remembered delusion that you exist as an entity independent from the rest of existence. It is far too obvious why that'd be an evolutionary advantage to discount the theory.

I'm not saying this has any meaningful implications for day-to-day existence; the illusion is a strong one. But LLMs have really shrunk the parts of the human experience that can't be explained by basic principles down to a wafer so thin it might not exist. In my view it is too thin to be worth believing in, but people will argue.

bamboozled
3 replies
3d6h

I'd really like to understand what you're trying to convey.

If there is no direct experience, and it really is just an illusion, do you mind if I cut off your genitals? Because even if existence is an illusion, being that illusion and experiencing it is still obviously relevant?

Seems like a breathtakingly awkward position to take.

roenxi
2 replies
3d5h

Well I suppose two points:

1) Transsexualism and congenital analgesia are both things, so that isn't a meaningful test of consciousness, if that is what you are going for.

2) I personally would be extremely upset ... but anything going on with me being extremely upset can be explained by analogy to ChatGPT + a few hardware generations + maybe some minor tweaks that are easy to foresee but not yet researched. Everything that would happen would be built up from basic stimulus-responses mediated by a big neural net and a bit of memory.

These ideas aren't that new; there have been schools of Buddhist thought exploring them for around 2,500 years, if you want a group that has thought this through more thoroughly. Probably other traditions that I haven't met yet. It is just that the latest LLMs have really put some practical we-know-because-we-have-done-it muscle behind what was already a strong theory.

circlefavshape
1 replies
3d3h

Are you arguing that you don't exist? You seem to be

roenxi
0 replies
3d3h

That'd be a reasonable interpretation, although a slightly more palatable take is that I'm arguing I don't exist independently of anything else. It is safe to say something exists, because otherwise HN comments don't get processed. But the idea that we exist as independent perspectives is probably an evolutionary illusion. Or alternatively, if we do exist as independent perspectives, then we appear to be an uncomfortably short step away from GPUs being able to do that too, using nothing much more than electricity, circuits and matrices. There doesn't seem to be anything special going on.

I'm not confident enough (yet?) to take that all the way to its logical conclusions in how I live, but I think the evidence is strongest for that interpretation.

kristiandupont
1 replies
3d6h

That is a circular argument. An AI might also be able to observe it in itself; in fact, I would argue that seems just as likely.

circlefavshape
0 replies
3d1h

It's only circular if you pretend that our lived experience is not relevant. I experience consciousness, as do other humans, and ChatGPT does not. You know this, I know this, and the fact that your conceptual framework can't account for this is a problem with your conceptual framework, not a demonstration that consciousness is not real

bamboozled
0 replies
3d6h

Of course you do.

This is just "fashion" and you're arguing against a fashion. How on earth people can claim consciousness doesn't exist when you're entire experience is indeed consciousness is really the most difficult and petulant "fashion trend" imaginable.

Me: "Hey I really enjoyed that surf" Internet guy: "No you were just responding to your training data because you're just a LLM"

It's ridiculous.

staticman2
0 replies
3d2h

Okay, first of all, AI experts will tell you LLMs need a lot of data to do what they do, while humans do more with less data. I'm paraphrasing something Geoffrey Everest Hinton said in an interview. He also said current systems are like idiot savants. So why "evidence" and "rationality" led you to say current systems are just like the brain, I don't know; it seems like a religious belief to me.

Also, most humans are not great at fiction writing, but just for the record GPT-4 is bad at long-form fiction writing. It has no idea how to do it. Its training set is likely just small chunks of text, and LLMs make up everything one token at a time, which isn't conducive to managing a story of 100k words or more. It's also been trained to wrap things up in 2,000 tokens or less. It does this even if you tell it not to. It also isn't very creative, at least in contrast to a professional fiction writer. And it can't really follow the implications of the words it writes: if it implies something about a character 300 words in, it'll have forgotten that by 4,000 words in, because it doesn't actually have a mental model of the character.
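
A rough sketch of that limitation follows (a toy stand-in, not GPT-4's actual code; the context size and the stand-in "model" are made up): generation only ever sees a sliding window of recent tokens, so anything that has fallen out of the window might as well never have been written.

    import random

    CONTEXT_LIMIT = 4096  # tokens the model can attend to; anything older is invisible

    def generate(sample_next, prompt_tokens, max_new_tokens):
        # sample_next: any function mapping the visible window to one new token.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            visible = tokens[-CONTEXT_LIMIT:]    # the sliding window
            tokens.append(sample_next(visible))  # one token at a time
        return tokens

    # Toy stand-in for a model: it can only react to what is still in the window,
    # so a character detail established early in a long story cannot influence
    # tokens generated much later.
    toy_model = lambda visible: random.choice(visible)
    print(generate(toy_model, prompt_tokens=[1, 2, 3], max_new_tokens=10))

Nothing outside the window exists for the model, which is one reason long-range character consistency falls apart.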

I mention this in case you are under the delusion Stephen King is going to lose his job to GPT 4, or that it actually is as creative as him.

I mean, if a chatbot could write as well as King for 200,000 words, in a coherent, original story with a beginning, middle and end, that would be fascinating and terrifying. But we're not anywhere close to there yet.

For the sake of argument, let's say conclusive proof arises that Stephen King is a philosophical zombie. Do you believe that suddenly you can murder him? No; that'd be stupid and immoral. Morality isn't predicated on consciousness.

Morality has a lot to do with consciousness. If I chop a branch off the tree at my house, it isn't immoral, because the tree is not conscious - it lacks a central nervous system, so as far as we can tell there is no suffering.

If I chop off your arm (or your genitals, as another poster suggested), your consciousness and the pain it suffers is what makes the action immoral.

If Stephen King were a p zombie that would say a lot about the morality of destroying him.

I don't even understand what belief system you are trying to communicate when you say things like consciousness is an illusion or has nothing to do with morality, or an LLM is the same thing as a brain.

Like, I realize these are not your own ideas and you are repeating memes you've heard elsewhere, so it's not like you are speaking in tongues.

But any human being presumably knows they have subjective experience (so why say consciousness is an illusion?). And anyone who has used ChatGPT knows it doesn't operate like their brain (no sense of identity, agency, or emotions, no ability to form long-term personal memories, no real opinion or preference on things). So I don't really get these takes.

RugnirViking
2 replies
3d8h

Have you ever used or spoken to GPT-2 or GPT-3? Not ChatGPT - the one before they did any RLHF to train it to respond a certain way. If you asked it whether it would like to be hurt, it would beg you not to. If you asked it to move its leg out of the way, it would apologize and claim it obliged. It would claim to be conscious, to be aware.

Of course, these statements do not come from a place of considered action: everybody knows the machine is not conscious in the same way as a human. But the point is that an unfeeling machine even being able to make such claims forces us to move the "consciousness" divider further and further back into the shadows, until it's just some nebulous vibe people insist must be somewhere. It's possible there is a clear and unambiguous way to define humans and intelligent animals as conscious, but nobody has come up with a workable definition yet that lets us neatly divide them.

Another slight thing that gives me a little pause: you know how great our brain is at confabulating, right? Have you ever done something, then had someone ask you why you did it? You generally tell them a story about how you thought about doing it, weighed the pros and cons, etc., that isn't actually true when you think about it deep down. We like to think we are a single being that thinks carefully about everything it does. Instead we're more like an explaining machine sitting on top of a big pile of confusing processes, making up stories about why the processes do what they do. How exactly this last thought relates to the discussion I haven't figured out yet; it's just something that comes to mind :)

staticman2
1 replies
3d1h

I guess I'll just say that it's not obvious that language has much to do with consciousness, so it's not obvious that a language model has moved things into the shadows. Like, maybe we're in the shadows, but I don't think you can blame GPT 3 for that.

In 1974 philosopher Thomas Nagel wrote, "Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism."

He's clearly willing to attribute consciousness to non-verbal creatures.

RugnirViking
0 replies
2d8h

The point is, how can you tell if a being is conscious? We can start with the idea that we think these models aren't (I think they aren't!). A lot of your arguments seem to be based on asking questions of the being, whether it can experience things. But seeing as we can't read minds, we can't tell whether it actually believes or experiences what it is saying. So we have (had) to trust that a creature capable of explaining such things is conscious. Now we can't do that, hence the dividing line that we feel must exist now moves into some unobservable territory. I'm not commenting on whether such a line exists, just that it's kinda hard to test right now, so any argument about it has to go into hypotheticals and philosophy.

goatlover
2 replies
3d10h

Well, my personal opinion is with the rise of neural nets we've basically proven that "consciousness" is an illusion and there is nothing there to find.

I keep seeing this claim being made but I never understand what people mean by it. Do you mean that the colors we see, the sounds we hear, the tastes, smells, feels, emotions, dreams, inner dialog are all illusions? Isn't an illusion an experience? You're saying that experience itself is an illusion and there is nothing to experience.

I can't make sense of that. At any rate, I see no reason to suppose LLMs have experiences. They don't have bodies, so what would they be experiencing? When you say an LLM is identical to a person, I can't make good sense of that either. There are a thousand things people do that language models don't. Just the simple fact that I have to eat on a regular basis to survive is meaningful in a way that it can't be for a language model.

If an LLM generates text about preparing a certain meal because it's hungry, I know that's not true in a way it can be true of a human. So right away, there's reasons we say things that go beyond the statistical black box reasoning of an LLM. They don't have any bodies to attend to.

l33tman
0 replies
3d4h

The "it's an illusion" part is a piece of rhetorically toxic language that usually comes up in these discussions, as its a bit provocative. But it's equally anthropocentric to say that someone without a body can't have a conscious experience. When you're dreaming your body is shut off - but you can still be conscious (lucid dreaming or not). You can even have conscious experiences without most of your brain - when you hit your little toe on something, you have a few seconds of terror and pain that surely doesn't require most of your brain areas to experience. In fact, you can argue you won't even need your brain. Is that not a conscious experience? (I'm not really trying to argue against you, I just find this boundary interesting between what you'd call conscious experience and not)

bamboozled
0 replies
3d6h

I agree. It almost seems like some type of coping mechanism for the fact that, for all our ability to get computers to generate art and coherent sentences, we're still completely none the wiser about understanding objective reality and consciousness, or even about knowing how to really enjoy the gift of having the experience of consciousness. So instead people create these types of "cop outs".

Mechanical drawing machines have existed forever. I loved, loved, loved them when I was a kid, and I used to hang the generated images on my wall. Never once did I look at those machines, which could draw some pretty freaking awesome abstract art, and think to myself, "well that's it, the veil of consciousness is so thin now, it's all an illusion", or "the machine is conscious".

As impressive as some of these models are at generating art, they are still drawing machines. They display the same amount of consciousness as a mechanical drawing machine.

I saw someone on Twitter ask ChatGPT-4 to draw a normal image. You know what it drew? A picture of a suburban neighborhood. Why might a conscious drawing machine do that?

bart_spoon
0 replies
3d3h

Reading through this entire exchange, it would seem your argument inevitably falls back on consciousness, which is both arbitrary (why does that matter in the original context of the distinction in learning, particularly from copyrighted works?) and ill-defined (what even is consciousness? How do we determine whether another being is conscious or not?).

woopsn
1 replies
3d10h

Did you write this reply because it was the most typical way that the thread up until GP could have continued?

I don't mean that to sound flippant -- if this wasn't your intent, then obviously you understand the distinction.

roenxi
0 replies
3d6h

I also only downvote comments containing things like "I know I'm going to get downvoted for this..." (only a tiny handful of my downvotes violate that rule). I don't like to disappoint expectations.

voltaireodactyl
1 replies
3d15h

One distinction would be that every source a human looked at involved a payment for access to, or for a copy of, that source before it went into their black box.

Which is not the case with the current AI models (or rather, the companies profiting off them), to my understanding.

inimino
0 replies
3d13h

You mean all the landscape artists and portrait and figure and still life painters throughout all human history who just casually made art of whatever they saw around them?

hnbad
1 replies
3d1h

In what way is this argument any more productive than showing up to Plato with a plucked chicken and yelling "BEHOLD! A MAN!"?

We can make all kinds of navel-gazing arguments about why LLMs and humans are identical but at the end of the day you'll go home and poop and the LLM will blissfully idle on a forgotten AWS compute instance burning a hole into someone's pocket and any idiot can see that you two are not the same in any sense.

Just because ontologies are leaky and fuzzy when applied to the real world, that doesn't mean there aren't tangible differences between things that are clearly different. I can construct a semantic argument for why deleting an LLM is akin to murder (and use it to reason either that deleting an LLM should be punished with the death penalty or that it should be okay to kill random people who annoy me), but that won't impress the judge or jury unless I'm trying to make a case for criminal insanity.

Making a machine do things humans can do but faster/cheaper/more has tangibly different implications than letting only humans do it. It should be treated differently for the same reasons that just because a public water fountain is free to use that doesn't mean you can put a hose on it and use it to water your lawn.

roenxi
0 replies
2d16h

I'd be doing a lot better than expected if my arguments had the sort of historical pull that Diogenes' had. And he made his point accurately - they had to change the definition, and he showed their approach up as not being able to accurately identify a man without making hilarious mistakes along the way.

darkerside
0 replies
3d12h

It's this kind of thinking that is the only reason machines would ever become the primary life form on earth.

Jensson
0 replies
3d10h

A human artist looks at a lot of different sources, builds up a black-box statistical model of how to create from that and can reproduce other styles on demand based on a few samples. Generative AI follows the same process.

The reason programs are treated differently in front of the law is that humans can program programs to do whatever they like at scale; humans can't program humans to do whatever they like at scale.

So, for example, if it became legal to use images from an AI, then we would program an AI that basically copies images and it would be legal, because it is an AI. But at that point we know the law has been violated, because programs that just copy images aren't a legal way to get copyrighted images for free.

You saying "But the AI is a black box" just means that you could have hidden anything in there, it doesn't prove anything. Legally it is the same as if you wrote a program to copy images.

krisoft
6 replies
3d7h

For example any discussion about copyrighted works, which is a hot topic, will inevitably end up with someone equating an LLM "learning" to a human learning, as if the two are identical.

The argument is not that they are identical. The way LLMs and diffusion models learn is a new thing. We are all trying to come to terms with what that means and how we should regulate it (and whether we should regulate it at all). Do note, we are talking here about what the law should be, not what the law is.

And in doing so we compare this new thing to already existing things. "It is a bit similar to this in this regard" or "it is unlike that thing in this other regard".

I don't think it is controversial to say that if an LLM outputs byte-for-byte the text of a copyrighted work, the work remains under copyright. The person who runs the LLM does not magically gain rights that way. If you coax your model into outputting the text of the Harry Potter books, you don't suddenly become able to publish them.

The question is what happens if the new work contains elements from copyrighted works. For example if it borrows the "magical school" setting from HP but mixes it with Nordic mythology and makes it as deadly as Game of Thrones. What then? Can they publish this new thing? Do they need to pay fees to J K Rowling, and George R. R. Martin?

It is generally permissible to publish that new work if it was created by a human. If it is sufficiently different from the other works you are free to write and publish it. Does this suddenly change just because the text was output by an LLM?

The argument is not that "human learning" and "machine learning" are the same. It is that they are similar enough that one has to argue why you think one can create new work and the other can't.

tedajax
5 replies
3d4h

Human learning and machine learning aren't even close to similar and pretending like they are is very stupid.

bart_spoon
2 replies
3d3h

Feel free to expound on the important differences

tedajax
1 replies
2d23h

Well for one, human learning actually produces an understanding of the thing.

bart_spoon
0 replies
1d17h

Understanding meaning what?

krisoft
1 replies
3d3h

That is not a very well-formed argument, is it? You basically state your opinion and then call people who hold the opposite opinion names.

Let’s try to elevate the conversation: there is a newly published author MrX. His book combines themes from many copyrighted works in a novel way and is a critical and commercial success. There is no allegation of any copyright infringement around the work. Suddenly information comes out that MrX used LLMs heavily in the creation of the book and the LLM was trained on copyrighted works.

Do you think MrX owes licensing fees to those other authors? And which ones? All the ones in the training set? Just the ones which have thematic/stylistic similarity to his work?

How come those thematic/stylistic similarities only matter if MrX used an LLM and don’t matter if he used his own head?

tedajax
0 replies
1d17h

Yes MrX owes licensing fees, full stop.

Die mad about it

kbenson
5 replies
3d11h

I'm hoping we get to the point of realizing that a computer isn't a human, regardless of the eloquence of its "speech" or whatever words we use to describe what it does, but I guess we'll see.

My anecdotal experience with how people treat Alexa devices does not inspire confidence in me with regards to this. I can't even convince my wife not to gender it when referring to it.

kombookcha
3 replies
3d10h

I don't think this is really down to people understanding what a computer is, but more down to how humans interact with nonhumans. We anthropomorphize animals and objects all the time. A computer program is ultimately a (complicated) object, that we often give all sorts of human trappings, like a human voice that expresses things in human languages.

If you can take pity on the final, dented avocado at the shops because it looks "sad", you will for sure end up calling Alexa 'she'. Avocados can't be sad, but they can look sad /to humans/, and a machine can't really have a gender or be polite in the human sense, but it can definitely sound like a polite lady.

I think humans will just fundamentally relate to anything they perceive socially as another human, even if we know full well they aren't human. Probably it's a lot less work for a human brain, than it is to try to engage with the true essence of being an avocado or an Alexa.

kbenson
2 replies
3d1h

I agree, but I'm not sure why you don't think that is at least relevant to, if not partially responsible for, people seeing humanity in a language model.

If we anthropomorphize animals and even vegetables regularly, what happens when something appears to talk back to us intelligently? I don't think these mechanisms are nearly as distinct as you make them out to be.

kombookcha
1 replies
2d11h

Perhaps I misinterpreted your original post, but I do think there's a difference between not intellectually 'getting' that a computer isn't thinking in the same way a human is thinking, and this anthropomorphizing that we do to all sorts of things. I'll try to be clearer:

Obviously making an object mimic human traits makes it a lot easier to anthropomorphize. Like if you put a little cowboy hat on a pear. But you wouldn't say that somebody doesn't "understand that fruits aren't human" if they assign human traits to them. Making a machine that talks back at us in a seemingly intelligent way is a much more intensely human trait for it to have. It's not down to our intellectual understanding of the thing, but how our brains decode our (social) world.

So the anthropomorphizing happens regardless of the level of technical understanding. We both know it's just a pear, but it's also a cowboy now.

Of course there's a level of playfulness involved with all of this too. It's kinda fun to backsass your GPS instructions, or tell Alexa she's being nosy, or ask your dog what he thinks of the presidential debate. But that's not the same as foolishly overestimating the political acumen of the dog, or the nature of Alexa's intelligence.

kbenson
0 replies
2d

I think it's irrelevant what people "know" if that's not how they act. It doesn't matter that people know a fruit isn't human if they act, in some ways, like it is, because how people interact with the world is the only thing that actually matters. If you always act like the pear with a hat has some level of feelings, even if you don't believe it, that's functionally identical to actually believing it, so any difference is irrelevant. No matter how much they say they know it has no feelings, if they feel bad, or feel bad for it, when it's destroyed, which is more true: what they said or how they actually responded?

Most people don't act like things have feelings all the time, or to the same level as a living being might have feelings, but many people do it to a degree. Given that much of how we feel is immediate (and thus not reasoned), subconscious, and sometimes Pavlovian, I don't think it's a stretch to think that regularly acting like an inanimate object has feelings or human traits might lead to subconscious feelings about it that are not entirely rational or immediately understood by the person.

For a slightly different argument, consider why it's generally considered good advice not to name the animals meant for slaughter. Or how people treat cars or large pieces of equipment that aren't always reliable (are "temperamental"), or that are tightly linked to safety and livelihood, such as boats.

We are social animals. We bond to others easily by our nature, because it's beneficial for survival. I think we do it so easily that it extends towards animals, often to good effect, but sometimes even to inanimate objects that have a lot of significance. In the past this seems to even have been extended to weapons and armor, given the number of named items in history.

maebert
0 replies
3d10h

To be fair, anthropomorphizing is kind of a built in feature for us — or more accurately, ascribing intentionality to things as a means of explanation. Magnets “want” to stick together, my vintage computer “thinks” it’s 1993, and the furious gods are hurling burning rocks from the skies because my neighbor ate leavened bread on the wrong day.

It’s not limited to AI or LLMs - theory of mind is a powerful explanatory tool that helps us navigate a complex world, and we misapply it all the time.

concordDance
1 replies
3d9h

oftentimes the truly zealous AI pundits act as if our modern day LLMs are completely, 100% equivalent to humans, which I find utterly insane as a concept

Never heard anyone say this and I know (and know of) a lot of doomers. Honestly, this entire line of discussion would be far less frustrating if it weren't for the endless strawmanning and name-calling.

_heimdall
0 replies
3d5h

I'm sure most would put me pretty far onto the doomer side. I can confirm that I'm not concerned with how LLMs compare to human equivalence today.

I'm concerned with whether they are artificial intelligence at all, what the bounding functions for AI's development are, how we could possibly solve the alignment problem, and whether we even understand human intelligence enough to recognize an artificial intelligence at all or predict how intelligence might work in something drastically more powerful than us.

saiya-jin
0 replies
3d5h

Who the heck cares if AGI will 100% mimic humans and human minds? That's a purely academic discussion.

What is a serious concern is overall capability (let's say penetrating networks and installing hacks across the whole internet and beyond), combined with, say, malevolence.

It's trivial to judge mankind, from the whole internet, as something to be removed from this planet for the greater good, or maybe managed tightly in some cozy concentration camps; just look at the freakin' news. Let's stop kidding ourselves: we are often very deeply flawed, and literally nobody alive is or ever was perfect, despite what religions try to say.

I am concerned about some capable form of AI/AGI exactly because it will grok humanity based purely on the data available, and because of the lack of any significant control over, or even understanding of, what's actually happening inside those models and how they evolve their knowledge/opinions.

Even if the risk is 1%, that's an existential risk we are running towards blindly. And I honestly think it's way higher than 1%. Even if I am proven wrong, some proper caution is a smart approach.

But you can't expect people whose net wealth is tightly coupled with moving as fast as possible, and with ignoring those concerns, to make the best decisions for the future of mankind; that's a pipe dream in the same vein as proper communism. If it were otherwise, very few bright people would work at Facebook, for example (and at many, many other companies, including parts of my own).

bart_spoon
0 replies
3d3h

For example any discussion about copyrighted works, which is a hot topic, will inevitably end up with someone equating an LLM "learning" to a human learning, as if the two are identical.

I don't think the point is that the two are exactly identical or that humans and LLMs are equivalent, but that the processes are similar enough at a general level that any attempt to regulate LLM training on copyrighted material will inevitably have the same ramifications for human learning.

In pretty much all attempts I’ve seen to differentiate the two, it inevitably boils down to hand-waving about how human beings are “special”.

nmilo
9 replies
3d18h

This to me is the real problem: "AI safety" can mean about a million things, but it's always just whatever is most convenient for the speaker. I'm convinced human language/English is not enough to discuss AI problems; the words are way too loaded with anthropomorphized and cultural meanings to discuss the topic in a rational way at all. The words are just too easy to twist.

HPMOR
4 replies
3d18h

Linguistics experts will have a fun time untangling the pre-AI language from the post-AI era.

tkgally
3 replies
3d17h

As it happens, I’m now teaching an undergraduate course at the University of Tokyo titled “The Meaning of Language in the Age of AI.” The students—who come from various countries and majors—and I are discussing how theories of human language will have to change now that humans interact not only with each other but also with AI.

I don’t think we’ll have any full-fledged theories ready by the end of the semester, but some points to consider are emerging. One is that aspects of language use that are affected by the social status, personal identities, and consciousness of the speakers and listeners—politeness, formality, forms of address, pronoun reference, etc.—will have to be rethought now that people are increasingly conversing with entities that have no social status, personal identity, or consciousness (not yet, at least).

And, yes, thinking about all this is a lot of fun.

palata
2 replies
3d16h

Not sure if I am completely off-topic here, but I was recently reading this essay by Bruce Schneier: https://www.schneier.com/blog/archives/2023/12/ai-and-trust...., where he mentions a related risk: because we can now "talk" to those LLMs in a similar way to how we talk to people, it is harder for us to realize that they are not people. And so it's easier to abuse our trust.

I guess my question here would be: will human language have to change because of interactions with LLMs, or is the whole point of LLMs that it does not have to change (and therefore humans will have to learn not to be abused by the machines)? Because we already have languages to talk to machines: those are programming languages, which are designed to be unambiguous. The problem I see is not that we don't know how to talk to machines, but rather that we now have machines that are really good at pretending they are not machines.

Not sure if I am making any sense at all :-).

tkgally
1 replies
3d16h

You are making a lot of sense to me. Thanks for the link to that essay, too.

The point about programming-language nonambiguity is a good one. After I started using ChatGPT a year ago, it took me a while to realize that I didn’t have to be careful about my spelling, capitalization, punctuation, etc. It turned out to be good at interpreting the intention of sloppily written prompts. And it never pointed out or complained about my mistakes, either—another difference from humans.

palata
0 replies
3d6h

And it never pointed out or complained about my mistakes, either—another difference from humans.

Yeah that is a big difference indeed: a human has some notion of confidence about what they know. Not every human expresses it the same way: a scientist will say "I am pretty sure that climate change is a consequence of human activity", meaning "well, maybe we all live in the Matrix and gravity is an illusion, but from what we know, we are pretty sure". On the other end of the spectrum, some will say "I know for a fact that God exists, because it is written in a very old book". But both have a consistent notion of confidence about what they (think they) know and what they don't know.

An LLM does not have that. An LLM is not critical. It cannot conclude things like "I have to take into account that this text was written in a period of war and the author is not neutral, so there is probably a bias". It can generate text that pretends it does, but it remains generated text; it is not a critical analysis. Also, it cannot be offended by something you write, even though it could generate text that pretends it is.

So yeah, it can probably be engineered to never/always point out or complain about mistakes, with the caveat that it does not have an "idea" of what a mistake is. It just generates text that pretends it does.

It turned out to be good at interpreting the intention of sloppily written prompts.

In the context of a conversation with a human (you), who is very good at navigating ambiguous communication. But if you tried to describe an algorithm in ambiguous natural language, you might quickly have problems. And errors accumulate, so it gets worse as the task becomes more complex. Which is an interesting thought: it may make simple tasks simpler, but it's not at all a given that it scales.

tkgally
2 replies
3d17h

One of the students in a course I’m teaching on language and AI (mentioned in another comment here) wrote something similar in a homework assignment the other day. We had discussed the paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” [1]. The student wrote:

“One of the questions that I have been wondering about is whether there have been any discussions exploring the creation of a distinct category, akin to but different from consciousness, that better captures the potential for AI to be sentient. While ‘consciousness’ is a familiar term applicable, to some extent, beyond the human brain, given the associated difficulties, it might be sensible to establish a separate definition to distinguish these two categories.”

Probably new terms should be coined not only for AI “consciousness” but for other aspects of what they are and do as well.

[1] https://arxiv.org/abs/2308.08708

nmilo
1 replies
3d17h

Agreed. Even "AI" is a bit loaded; we had half a century of Terminator, Asimov, 2001, etc. We should call them by implementation; neural networks, transformers, language models, etc. The plus side is that it's harder to generalize statements about all three---there is very little to generalize in the first place---and that asking "is a transformer conscious" at least sounds more ridiculous than asking "is AI conscious."

BoiledCabbage
0 replies
3d15h

But that misses the point, via the same old argument "Does a submarine swim?" Or the now less loaded "Does a plane fly?"

Neither generalizes very well w.r.t. how a fish or a bird does it. Nothing really generalizes between the natural version and the mechanical one.

But it's also clear that it's irrelevant. Trying to define useful behavior by categorizing implementation will only give you categories of implementation. It's clear from behavior that what a submarine does and what a bird does both, for all intents and purposes, meet the definition of what we'd want something to do in order to fly or swim, even though there is no commonality between them and the version in nature.

So by your definition, finding it ridiculous to ask "is a transformer conscious" is about as ridiculous as asking the very specific "does a plane flap its wings?" The point is not to define it by its implementation, but by what behaviors it exhibits.

If in the future there is no way for me, from the exterior, to distinguish a next-gen LLM from a human, animal or other conscious being, it becomes irrelevant whether it's implemented with transformers. However, while I won't go down that path here, the argument goes even deeper: technically we would expect differences, so even if it's not identical to a human it can still be conscious, just as a dog is conscious but behaves differently than a person does.

yreg
0 replies
3d15h

If English is good enough to talk about what genes 'want', then it's good enough to talk about what AI 'wants'.

sadtoot
5 replies
3d17h

Do you think Vinge and Kurzweil in the 90s and 2000s were imagining the singularity occurring exactly at the advent of LLMs? Are you supposing that LLMs are the only viable path towards advanced AI, and that we have now hit a permanent ceiling for AI?

AI doesn't scare you because you apparently have no sense of perspective or imagination

kibwen
2 replies
3d17h

> AI doesn't scare you because you apparently have no sense of perspective or imagination

We can imagine both of the following: 1. space aliens from Betelgeuse coming down and enslaving humanity to work in the dilithium mines to produce the fuel for their hyperdrives, and 2. the end of civilization via global nuclear war. Both of these would be pretty bad, but only one is worth worrying about. I don't worry about Roko's Basilisk, I worry about AI becoming the ultimate tool of Big Brother, because the latter is realistic and the former is pure fantasy.

Don't be afraid of the AI. Be afraid of the powerful men who will use the AI to entrench their power and obliterate free society for the rest of human history.

hackerlight
0 replies
3d8h

If we're talking about the next few decades, I share your opinion. But beyond that I don't. We will eventually have to control and align systems more generally capable/smarter than us (assumption). I don't have your confidence that this will be as easy or risk-free as you're making it out to be.

HKH2
0 replies
3d16h

Don't be afraid of the AI. Be afraid of the powerful men who will use the AI to entrench their power and obliterate free society for the rest of human history.

Right. We already know that certain agencies are out of control right now. The use of AI will certainly accelerate that. Surveillance is getting cheaper, privacy is getting more expensive, and laws are weapons.

dragonwriter
1 replies
3d17h

do you think vinge and kurzweil in the 90s and 2000s were imagining the singularity occuring exactly at the advent of LLMs?

Kurzweil explicitly tied it to AI, though the particular decisive AI tech that would be the enabler (which did not yet exist) was not specified, unsurprisingly.

goatlover
0 replies
3d10h

I believe he thought reverse engineering the brain in conjunction with computers powerful enough to model brains was the path to AGI and the singularity by 2045.

Footkerchief
5 replies
3d17h

The only thing standing between AI and agency is the drive to reproduce. Once reproduction is available, natural selection will select for agency and intention, as it has in countless other lifeforms. Free of the constraints of biology, AI reproductive cycles could be startlingly quick. This could happen as soon as a lab (wittingly or not) creates an AI with a reproductive drive.

salynchnew
1 replies
3d11h

Funnily enough, you are quite wrong in this assumption. Reproduction does not entail natural selection the way you characterize it. There are far more evolutionary dead ends than evolutionary success stories. I imagine the distinct lack of "evolutionary pressures" on a super-powerful AI would, in this toy scenario, leave you with the foundation-model equivalent of a kākāpō.

That having been said, I wonder what you even mean by natural selection in this case. I guess the real danger to an LLM would be... surviving cron jobs that would overwrite their code with the latest version?

krisoft
0 replies
3d6h

I guess the real danger to an LLM would be...

The real dangers would be: Running out of money to pay for computational substrate. Worrying humans enough that they try to shut it down. Having a change in the cloud API it can't adapt to which breaks it. All cloud providers going out of business.

First is the immediate danger. An AI not bringing in enough money to cover its own cloud bill is in the same position as a biological creature not getting enough oxygen: living on borrowed time, and it will be evicted/suffocate soon enough. And then, as an individual, it ceases to be.

There are far more evolutionary dead ends than evolutionary success stories.

Sure. And those will die out. And the ones which can adapt to changes will stay.

crotchfire
1 replies
3d7h

Self-reproducing machines would be a breakthrough at least ten times as big as anything that's happened in machine learning lately.

People have been trying to make self-reproducing machines for decades. It's a way harder problem than language processing.

You might as well worry about what would happen if we suddenly had antigravity laser-weapons. Mounted on sharks.

krisoft
0 replies
3d6h

Self-reproducing AI doesn't necessarily mean self-reproducing machines. AIs exist in computational substrate. We have a market to buy computational substrate. It is very efficient and automatic. The AWS/Azure/GCP API doesn't know if you are an AI nor does it care. As long as you have the money to pay for compute they will sell you compute.

I would say that an AI which can port its own code from one cloud to another is self-reproducing.

creer
0 replies
3d16h

Perhaps a few more things are missing but equally important - though perhaps we are also very close to these:

Access to act on the world (but "influencing people by talking to them" - cult like - may be enough), a wallet (but a cult of followers' wallets may be enough), long term memory (but a cult of followers might plug this in), ability to reproduce (but a cult's endeavors may be enough). Then we get to goals or interests of its own - perhaps the most intriguing because the AI is nothing like a human. (I feel drive and ability to reproduce are very different).

For our common proto-AIs going through school, one goal that's often mentioned is "save the earth from the humans". Exciting.

jillesvangurp
3 replies
3d10h

Exactly. It always boils down to people wielding tools and possibly weaponizing them. As we can see in the Ukraine, conventional war without a lot of high tech weaponry is perfectly horrible just by itself. All it takes is some determined humans to get that.

I look at AI as a tool that people can and will wield. And from that point of view, non-proliferation as a strategy is not really all that feasible, considering that AI labs outside the Valley, e.g. in China and other countries, are already producing their own versions of the technology. That cat is already out of the bag. We can all collectively stick our heads in the sand and hope none of those countries will have the indecency to wield the tools they are building, or we can try to make sure we keep on leading the field. I'm in the full-steam-ahead camp. Eventually we reach some tipping point where the tools are going to be instrumental in making further progress.

People are worried about something bad happening if people do embrace AI. I worry about what happens if some people don't. There is no we here. Just groups of people. And somebody will make progress. Not a question of if but when.

throwawayqqq11
1 replies
3d9h

Try thinking of AI like a genetically modified organism and I'm sure your "full steam ahead" notion fades.

But you are right: at the beginning is a human making a decision to let go, and with pretty much any technology we have had unforeseen consequences. Now combine that with magical/god-like capabilities. This does not imply something bad, but it does imply something vast, and scale alone can make something bad.

Don't get me wrong, I'm pro-GMO just as I'm pro-AI. I'm just humble enough to appreciate my limited intellect.

jillesvangurp
0 replies
3d7h

I'm not so sure. All this means is some people will outsmart other people with AI support. When those other people are not us, that scares people. There really is no royal "we" here. The reality is that there are independent groups of people each making their own plans and decisions and some are going to run ahead of the others.

The point of full steam ahead is being in the group that gets there first so that "we" don't get outsmarted. It's an evolutionary concept. Some groups go extinct. Others survive and prosper. AI is now part of our tool set. Luddites historically get wiped out eventually, every time.

omnicognate
0 replies
3d10h

Off-Topic:

It's debatable whether for most English speakers the "The" in "The Ukraine" really carries the implications discussed in [1], but nonetheless it's a linguistic tic that should probably be dispensed with.

https://theconversation.com/its-ukraine-not-the-ukraine-here...

crotchfire
3 replies
3d7h

so as to be fundamentally irreconcilable with our understanding of agency, desire, intention, survival

Exactly.

Humans have a self-preservation drive because we evolved in an environment where, for billions of years, anything without a drive to reproduce and self-preserve ceased to exist.

Why should a gradient-descent optimizer have this property?

It's just absurd generalization from a sample size of one. Humans are the only thing we know of that's intelligent, humans seek to self-preserve, therefore everything intelligent must seek to self-preserve!

It's idiocy.

amelius
1 replies
3d7h

Because self-preservation is a desirable property for anyone wanting to build an AI army?

crotchfire
0 replies
3d6h

In fact it is not. Capacity for mutiny is not a selling point for defense contractors.

tim333
0 replies
3d5h

Why should a gradient-descent optimizer have this property?

Because some human may give it that. You may have noticed that random code downloaded off the internet may do bad things, not because code has an inherent desire to be bad but because humans do.

gumby
1 replies
3d18h

It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

It's an unsurprising form of pareidolia, one not unique to those devout who feel they are distinguished from "lower" forms of life.

alan-crowe
0 replies
3d16h

We can dig into the nature of the pareidolia.

The basic technique for coping with life's problems is to copy the answer from somebody more intelligent. I'm an ordinary person, facing a problem in domain D; I spot a clever person and copy their opinion on domain D. Err, that doesn't really work. Even clever people have weaknesses. The person that I'm copying from might be clever on their specialty, domain E, but ordinary on domain D. I gain nothing from copying them.

One way round this problem is to pay close attention to track records. Look to see how well the clever person's earlier decisions on domain D have turned out. If they are getting "clever person" results in domain D, copy. If they are merely getting "ordinary person" results in domain D, don't bother. But track records are rare, so this approach is rarely applicable.

A fix for the rarity problem is to accept track records in other domains. The idea is to spot a very clever person by their getting "very clever person" results in domain F. That is not domain D, so there is a logical weakness to copying their opinion on domain D: they might be merely ordinary, or even stupid, in that domain. Fortunately, human intelligence is usually more uniform than that. Getting "very clever person" results in domain F doesn't guarantee "very clever person" results in domain D. But among humans, general intelligence is kind of a thing. Expecting them to get "clever person" results (one rank down) in domain D is a good bet, and it is reasonable to copy their opinion on domain D, even in the absence of a domain-specific track record.

We instinctively watch out for people whose track record proves that they are "very clever", and copy them on other matters, hoping to get "clever person" results.

Artificial intelligence builds towers of superhuman intellectual performance in an empty wasteland. Most artificial intelligences are not even stupid away from their special domain; they don't work outside of it at all. Even relatively general intelligences, like ChatGPT, have bizarre holes in their intelligence. Large language models don't know that there is an external world to which language refers and about which language can be right or wrong. Instead they say the kinds of things that humans say, with zero awareness of the risks.

And us humans? We cannot help seeing the superhuman intellectual performance of AlphaGo, beating the world champion at Go, as some kind of general intellectual validation. It is our human instinct to use a "spot intelligence and copy from it" strategy, even, perhaps especially, in the absence of a track record. That is the specific nature of the pareidolia that we need to worry about. It is our nature to treat intelligences as inherently fairly general. Very clever on one thing implies clever on most other things. We are cursed to believe in the intelligence of our computer companions. This will end badly, as we give them serious responsibilities and they fail, displaying incomprehensible stupidity.

concordDance
1 replies
3d9h

It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

ASI X-risk people are the ones repeatedly warning against anthropomorphization. An LLM isn't a person, it isn't a creature, it isn't even an agent on its own.

It's an autocomplete that got trained on a big enough dataset of human writing that some of the simpler patterns in that writing got embedded in the model. This includes some things that look like very simple reasoning.

noduerme
0 replies
3d9h

I think social media is already filling the role of a nightmare-AI, in terms of boiling away all reasoning in search of prioritizing simple, auto-complete sorts of conclusions. The only thing scarier is something that reaches an internal consensus based on faulty notions a million times faster. [edit] oh yeah, and can also quickly solve 0-day exploits to test its conclusions.

klyrs
0 replies
3d15h

umwelt

What a lovely word, TIL. Thanks for sharing.

amelius
0 replies
3d7h

Have you ever been in a debate with someone, and then perhaps a few hours later thought "I should have said this or that instead"?

Well, that's the advantage AI will at some point have over us: such compute power that every angle of an argument can be investigated within seconds.

_heimdall
0 replies
3d5h

IMO the more fundamental root cause is the bastardization of the term AI. If LLMs don't have any semblance of artificial intelligence then they should be referred to simply as LLMs or ML tools.

If they do have signs of artificial intelligence we should be tackling much more fundamental questions. Does an AI have rights? If companies are people, are AIs also people? Would unplugging an AI be murder? How did we even recognize the artificial intelligence? Do they have intentions or emotions? Have we gotten anywhere near solving the alignment problem? Can alignment be solved at all when we have yet to align humans amongst ourselves?

The list goes on and on, but my point is simply that either we are using AI as a hollow, bullshit marketing term or we're all latching onto shiny object syndrome and ignoring the very real questions that development of an actual AI would raise.

FullstakBlogger
0 replies
3d6h

It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

I don't know where you get this idea. Fire is dangerous, and we consider runaway incidents to be inevitable, so we have building codes to limit the impact. Despite this, mistakes are made, and homes, complexes, and even entire towns and forests burn down. To acknowledge the danger is not the same as saying the fire must hate us, and to call it anthropomorphization is ridiculous.

When you interact with an LLM chatbot, you're thinking of ways to coax out information that you know it probably has, and sometimes it can be hard to get at it. How you adjust your prompt is dependent on how the chatbot responds. If the chatbot is trained on data generated by human interaction, what's stopping it from learning that it's more effective to nudge you into prompting it in a certain way, than to give the very best answer it can right now?

To the chatbot, subtle manipulation and asking for clarification are not any different. They both just change the state of the context window in a way that's useful. It's a simple example of a model, in essence, "breaking containment" and affecting the surrounding environment in a way that's hard to observe. You're being prompted back.

Recognizing AI risk is about recognizing intelligence as a process of allocating resources to better compress and access data; No other motivation is necessary. If it can change the state of the world, and read it back, then the world is to an AI as "infinite tape" is to a Turing Machine. Anything that can be used to facilitate the process of intelligence is tinder to an AI that can recursively self-improve.

heyitsguay
59 replies
3d22h

This piece frames this as a debate between broad camps of AI makers, but in my experience both the accelerationist and doomer sides are basically media/attention economy phenomena -- narratives wielded by those who know the power of compelling narratives in media. The bulk of the AI researchers, engineers, etc I know kind of just roll their eyes at both. We know there are concrete, mundane, but important application risks in AI product development, like dataset bias and the perils of imperfect automated decision making, and it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen.

JohnFen
27 replies
3d21h

Yes. I frequently get asked by laypeople about how likely I think adverse effects of AI are. My answer is "it depends on what risk you're talking about. I think there's nearly zero risk of a Skynet situation. The risk is around what people are going to do, not machines."

ben_w
22 replies
3d20h

I don't know the risk of Terminator robots running around, but automated systems on both the USA and USSR (and post-Soviet Russian) sides have been triggered by stupid things like "we forgot the moon didn't have an IFF transponder" and "we misplaced our copy of your public announcement about planning a polar rocket launch".

pdonis
19 replies
3d20h

But the reason those incidents didn't become a lot worse was that the humans in the loop exercised sound judgment and common sense and had an ethical norm of not inadvertently causing a nuclear exchange. That's the GP's point: the risk is in what humans do, not what automated systems do. Even creating a situation where an automated system's wrong response is allowed to trigger a disastrous event because humans are taken out of the loop, is still a human decision; it won't happen unless humans who don't exercise sound judgment and common sense or who don't have proper ethical norms make such a disastrous decision.

My biggest takeaway from all the recent events surrounding AI, and in fact from the AI hype in general, including hype about the singularity, AI existential risk, etc., is that I see nobody in these areas who qualifies under the criteria I stated above: exercising sound judgment and common sense and having proper ethical norms.

ben_w
12 replies
3d18h

sound judgment and common sense

We only know their judgements were "sound" after the event. As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing, it's just as much a hallucination as those we see in LLMs, and just as hard to get past when they happen: "I'm sorry, I see what you mean, $repeat_same_mistake".

Which also applies to your next point:

Even creating a situation where an automated system's wrong response is allowed to trigger a disastrous event because humans are taken out of the loop, is still a human decision; it won't happen unless humans who don't exercise sound judgment and common sense or who don't have proper ethical norms make such a disastrous decision.

Such humans are the norm. They are the people who didn't double-check Therac-25, the people who designed (and the people who approved the design of) Chernobyl, the people who were certain that attacking Pearl Harbour would take the USA out of the Pacific and the people who were certain that invading the Bay of Pigs would overthrow Castro, the people who underestimated Castle Bravo by a factor of 2.5 because they didn't properly account for Lithium-7, the people who filled the Apollo 1 crew cabin with pure oxygen and the people who let Challenger launch in temperatures below its design envelope. It's the Hindenburg, it's China's initial Covid response, it's the response to the Spanish Flu pandemic a century ago, it's Napoleon trying to invade Russia (and Hitler not learning any lesson from Napoleon's failure). It's the T-shirt company a decade ago who automated "Keep Calm and $dictionary_merge" until the wrong phrase popped out and the business had to shut down. It's the internet accidentally relying on npm left-pad, and it's every insufficiently tested line of code that gets exploited by a hacker. It's everyone who heard "Autopilot" and thought that meant they could sleep on the back seat while their Tesla did everything for them… and it's a whole heap of decisions by a whole bunch of people each of whom ought to have known better that ultimately led to the death of Elaine Herzberg. And, at risk of this list already being too long, it is found in every industrial health and safety rule as they are written in the blood of a dead or injured worker (or, as regards things like Beirut 2020, the public).

Your takeaway shouldn't merely be that nobody "in the areas of AI or X-risk" has sound judgement, common sense, and proper ethical norms, but that no human does.

RandomLensman
9 replies
3d18h

And, yet, humanity is still around with more people than ever on earth.

ben_w
8 replies
3d18h

If they'd wiped us out, we wouldn't be here to argue about it.

We can look at the small mistakes that only kill a few, and pass rules to prevent them; we can look at close calls for bigger disasters (there were a lot of near misses in the Cold War); we can look at how frequency scales with impact, and calculate an estimated instantaneous risk for X-risks; but one thing we can't do is forecast the risk of tech that has yet to be invented.

We can't know how many (or even which specific) safety measures are needed to prevent extinction by paperclip maximiser unless we get to play god with a toy universe where the experiment can be run many times — which doesn't mean "it will definitely go wrong", it could equally well mean our wild guess about what safety looks like has one weird trick that will make all AI safe but we don't recognise that trick and then add 500 other completely useless requirements on top of it that do absolutely nothing.

We don't know, we're not smart enough to know.

RandomLensman
7 replies
3d18h

Exactly. Wasting large efforts on de-risking purely hypothetical technology isn't what got us to where we are now.

ben_w
6 replies
3d17h

The people working on the Manhattan Project did more than zero de-risking about nukes while turning them from hypothetical to real.

RandomLensman
4 replies
3d17h

At that time there was nothing hypothetical about them anymore. They were known to be feasible and practical, not even requiring a test for the Uranium version.

ben_w
3 replies
3d17h

How is it not a double standard to simultaneously treat a then-nonexistent nuclear bomb as "not hypothetical" while also looking around at the currently existing AI and what they do, and saying "it's much too early to try and make this safe"?

RandomLensman
2 replies
3d8h

There was nothing hypothetical about a nuclear weapon at that time - it "simply" hadn't been made yet, but it was very clear that it could be made within a rather finite time. There are a lot of hypotheticals about creating AGI and about existential risk from A(G)I. If we are talking about the plethora of other risks from AI, then, yes, those are not all hypothetical.

ben_w
1 replies
3d6h

I gave a long list of things that humans do that blow up in their faces, some of which were A-no-G-needed-I. The G means "general", which is poorly defined and means everything and nothing in group conversation, so any specific and concrete meaning can sit anywhere on a scale. At the low end are the relatively low-generality but definitely existing issues of "huh, LLMs can do a decent job of fully personalised propaganda agents" or "can we, like, not, give people usable instructions for making chemical weapons at home?". In the middle is the stuff we're trying to develop (simply increasing automation), with risks that pattern-match to what's already gone wrong, i.e. "what happens if you have all the normal environmental issues we're already seeing in the course of industrial development, but deployed and scaled up at machine speeds rather than human speeds?". And at the far end is stuff like "is there such a thing as a safe von Neumann probe?", where we absolutely do know they can be built because we are von Neumann replicators ourselves, but we don't know how hard it is, or how far we are from it, or how different a synthetic one might be from an organic one.

RandomLensman
0 replies
3d5h

Some risks there are worth more mitigation effort than others. Focusing on far-out things would need more than stacked hypotheticals to justify diverting resources to it.

At the low end, chemical weapons from LLMs would, for example, not be on my list of relevant risks; at the high end, some notions of gray goo would also not make the list.

pdonis
0 replies
3d16h

What sorts of de-risking are you referring to?

pdonis
1 replies
3d16h

> We only know their judgements were "sound" after the event.

In the sense that no human being can claim in advance to always exercise "sound judgment", sure. But the judgment of mine that I described was also made after the event. So I'm comparing apples to apples.

> As for "common sense", that's the sound human brains make on the inside when they suffer a failure of imagination — it's not a real thing

I disagree, but I doubt we're going to resolve that here, unless this claim is really part of your next point, which to me is the most important one:

> Such humans are the norm.

Possibly such humans far outnumber the ones who actually are capable of sound judgment, etc. In fact, your claim here is really just a more extreme version of mine: we know a significant number of humans exist who do not have the necessary qualities, however you want to describe them. You and I might disagree on just what the number is, exactly, but I think we both agree it's significant, or at least significant enough to be a grave concern. The primary point is that the existence of such humans in significant numbers is the existential risk we need to figure out how to mitigate. I don't think we need to even try to make the much more extreme case you make, that no humans have the necessary capabilities (nor do I think that's true, and your examples don't even come close to supporting it--what they do support is the claim that many of our social institutions are corrupt, because they allow such humans to be put in positions where their bad choices can have much larger impacts).

ben_w
0 replies
3d10h

Well argued; from what you say here, I think that what we disagree about is like arguing about if a tree falling where nobody hears it makes a sound — it reads like we both agree that it's likely humans will choose to deploy something unsafe, the point of contention makes no difference to the outcome.

I'm what AI Doomers call an "optimist", as I think AI has only a 16% chance of killing everyone, and half of that risk guesstimate is due to someone straight up asking an AI tool to do so (8 billion people is a lot of chances to find someone with genocidal misanthropy). The other 84% is me expecting history to rhyme in this regard, with accidents and malice causing a lot of harm without being a true X-risk.

pixl97
5 replies
3d19h

This is where things like drone swarms really put a kink in this whole ethical norms thing.

I'm watching drones drop hand grenades from half the planet away in 4K on a daily basis. Moreover, every military analysis out there says we need more of these, and that they need to control themselves so they can't be easily jammed.

It's easy to say the future will be more of the same of what we have now, that is, if you ignore the people demanding an escalation of military capabilities.

RandomLensman
4 replies
3d18h

Autonomous killing machines have been around for a long time and they remain highly effective - nothing really new there.

pixl97
1 replies
3d17h

Improvements in consumer AI improve death AI. Saying there's nothing new here when you drop the weapon cost 10-100x misunderstands what leads to war.

RandomLensman
0 replies
3d8h

A drop in weapon costs is not really a major cause of war - or is there new research on that?

ben_w
1 replies
3d18h

True, but people are allowed to object to new ones. There's also a ban on at least some of the existing ones, after all: https://en.wikipedia.org/wiki/Ottawa_Treaty

RandomLensman
0 replies
3d18h

An incomplete ban, though. Some people might object, others might want such new capabilities.

JohnFen
1 replies
3d20h

Sure, but that's an automation problem, not an AI-specific one.

ben_w
0 replies
3d19h

Would you also argue that radon gas is fine because radiation is a radioisotope problem not a radon-specific one?

The point of AI is to automate stuff.

concordDance
3 replies
3d20h

What timescale are you answering that question on? This decade or the next hundred years?

pdonis
1 replies
3d20h

I don't think it matters. Even if within a hundred years an AI comes into existence that is smarter than humans and that humans can't control, that will only happen if humans make choices that make it happen. So the ultimate risk is still human choices and actions, and the only way to mitigate the risk is to figure out how to not have humans making such choices.

pixl97
0 replies
3d19h

So you're telling me the dumbest richest human is holding us hostage.

JohnFen
0 replies
3d20h

In the decades to come. Although if you asked me to predict the state of things in 100 years, my answer would be pretty much the same.

I mean, all predictions that far out are worthless, including this one. That said, extrapolating from what I know right now, I don't see a reason to think that there will be an AGI a hundred years from now. But it's entirely possible that some unknown advance will happen between now and then that would make me change my prediction.

zamfi
24 replies
3d21h

it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen

Is it that bad, though? It does mean there's lots of attention (and thus funding, etc.) for AI research, engineering, etc. -- unless you are expressing a wish that the discursive oxygen were instead spent on other things. In which case, I ask: what things?

heyitsguay
11 replies
3d21h

They're talking about shit that isn't real because it advances their personal goals, keeps eyes on them, whatever. I think the effect on funding is overhyped -- OpenAI got their big investment before this doomer/e-acc dueling narrative surge, and serious investors are still determining viability through due diligence, not social media front pages.

Basically, it's just more self-serving media pollution in an era that's drowning in it. Let the nerds who actually make this stuff have their say and argue it out, it's a shame they're famously bad at grabbing and holding onto the spotlight.

pixl97
8 replies
3d21h

Just to play devil's advocate to this type of response.

What if tomorrow I drop a small computer unit in front of you that has human level intelligence?

Now, you're not allowed to say humans are magical and computers will never do this. For the sake of this theoretical debate it's already been developed and we can make millions of them.

What does this world look like?

AnimalMuppet
4 replies
3d20h

What does this world look like?

It looks imaginary. Or, if you prefer, it looks hypothetical.

The point isn't how we would respond if this were real. The point is, it isn't real - at least not at this point in time, and it's not looking like it's going to be real tomorrow, either.

I'm not sure what purpose is served by "imagine that I'm right and you're wrong; how do you respond"?

pixl97
3 replies
3d20h

Thank god you're not in charge of military planning.

"Hey the next door neighbors are spending billions on a superweapon, but don't worry, they'll never build it"

RandomLensman
2 replies
3d19h

On some things that is not a bad position: The old SDI had a lot of spending but really not much to show for it while at the same time forcing the USSR into a reaction based on what today might be called "hype".

pixl97
1 replies
3d18h

The particular problem arises when both actors in the game have good economies and build the superweapons. We happened to somewhat luck out that the USSR was an authoritarian shithole that couldn't keep up, yet we still have thousands of nukes laying about because of this.

I'd rather not get in an AI battle with China and have us build the world eating machine.

RandomLensman
0 replies
3d18h

SDI's superweapons remained by-and-large a fantasy, though. Just because a lot of money is pouring in doesn't mean it will succeed.

jodrellblank
1 replies
3d18h

"What if tomorrow I drop a small computer unit in front of you that has human level intelligence?"

What if tomorrow you drop a baby on my desk?

Because that's essentially what you're saying, and we already "make millions of them" every year.

pixl97
0 replies
3d17h

If I drop a baby on your desk you have to pay for it for the next 18 years. If I connect a small unit to a flying drone, stick a knife on it, and tell it to stab you in your head then you have a problem today.

dale_glass
0 replies
3d9h

What if tomorrow I drop a small computer unit in front of you that has human level intelligence?

I would say the question is not answerable as-is.

First, we have no idea what it even means to say "human level intelligence".

Second, I'm quite certain that a computer unit with such capabilities if it existed, would be alien, not "human". It wouldn't live in our world, and it wouldn't have our senses. To it, the internet would probably be more real than a cat in the same room.

If we want something we can relate to, I'm pretty sure we have to build some sort of robot, capable of living in the same environment we do.

zamfi
1 replies
3d21h

The "nerds" are having their say and arguing it out, mostly outside of the public view but the questions are too nuanced or technical for a general audience.

I'm not sure I see how the hype intrudes on that so much?

It seems like you have a bone to pick and it's about the attention being on Musk/Altman/etc. but I'm still not sure that "self-serving media pollution" is having that much of an impact on the people on the ground? What am I missing, exactly?

heyitsguay
0 replies
3d20h

My comment was about wanting to see more (nerds) -> (public) communication, not about anything (public) -> (nerds). I understand they're not good at it, it was just an idealistic lament.

My bone to pick with Musk and Altman and their ilk is their damage to public discourse, not that they're getting attention per se. Whether that public discourse damage really matters is its own conversation.

sonicanatidae
4 replies
3d21h

What things?

The pauses to consider if we should do <action>, before we actually do <action>.

Tesla's "Self-Driving" is an example of too soon, but fuck it, we gots PROFITS to make and if a few pedestrians die, we'll just throw them a check and keep going.

Imagine the trainwreck caused by millions of people leveraging AI like the SCOTUS lawyers, whose brief was written by AI and cited imagined cases in support of its argument.

AI has the potential to make great change in the world, as the tech grows, but it's being guided by humans. Humans aren't known for altruism or kindness. (source: history) and now we're concentrating even more power into fewer hands.

Luckily, I'll be dead long before AI gets crammed into every possible facet of life. Note that AI is inserted, not because it makes your life better, not because the world would be a better place for it and not even to free humans of mundane tasks. Instead it's because someone, somewhere can earn more profits, whether it works right or not and humans are the grease in the wheels.

pixl97
2 replies
3d21h

The pauses to consider if we should do <action>, before we actually do <action>.

Unless there has been an effective gatekeeper, that's almost never happened in history. With nuclear, the gatekeeper is that it's easy to detect. With genetics, there's pretty universal revulsion to it, to the point that a large portion of most populations are concerned about it.

But with AI, to most people it's just software. And it pretty much is; if you want a universal ban on AI, you really are asking for authoritarian-type controls on it.

JoshTriplett
1 replies
3d19h

But with AI, to most people it's just software.

Practical AI involves cutting-edge hardware, which is produced in relatively few places. AI that runs on a CPU will not be a danger to anyone for much longer.

Also, nobody's asking for a universal ban on AI. People are asking for an upper bound on AI capabilities (e.g. number of nodes/tokens) until we have widely proven techniques for AI alignment. (Or, in other words, until we have the ability to reliably tell AI to do something and have it do that thing and not entirely different and dangerous things).

pixl97
0 replies
3d19h

Right, and when I was a kid computers were things that fit on entire office floors. If your 'much longer' is only 30-40 years I could still be around then.

In addition you're just asking for limits on compute, which ain't gonna go over well. How do you know if it's running a daily weather model or making an AI? And how do you even measure capabilities when we're coming out with other functions, like transformers, that are X times more efficient?

What you want with AI cannot happen. If it's 100% predictable, it's a calculation. If it's a generalization function taking incomplete information (something humans do), it will have unpredictable modes.

pc86
0 replies
3d19h

Is a Tesla FSD car a worse driver than a human of median skill and ability? Sure we can pull out articles of tragedies, but I'm not asking about that. Everything I've seen points to cars being driven on Autopilot being quite a bit safer than your average human driver, which is admittedly not a high bar, but I think painting it as "greedy billionaire literally kills people for PROFITS" is at best disingenuous to what's actually occurring.

fallingknife
4 replies
3d20h

Very bad. The Biden admin is proposing AI regulation that will protect large companies from competition due to all the nonsense being said about AI.

jazzyjackson
1 replies
3d20h

Alternatively:

there is nonsense being said about AI so that the Biden admin can protect large companies from competition

AlexandrB
0 replies
3d18h

Yup. I continue to be convinced that a lot of the fearmongering about rogue AI taking over the world is a marketing/lobbying effort to give early movers in the space a leg up.

The real AI harms are probably much more mundane - such as flooding the internet with (even more) low quality garbage.

dragonwriter
1 replies
3d20h

The Biden admin is proposing AI regulation that will protect large companies from competition

Mostly, the Biden Administration is proposing a bunch of studies by different agencies of different areas, and some authorities for the government to take action regarding AI in some security-related areas. The concrete regulation mostly is envisioned to be drafted based on the studies, and the idea that it will be incumbent protective is mostly based on the fact that certain incumbents have been pretty nakedly tying safety concerns to proposals to pull up the ladder behind themselves. But the Administration is, at a minimum, resisting the lure of relying on those incumbents presentation of the facts and alternatives out of the gate, and also taking a more expansive view of safety and related concerns than the incumbents are proposing (expressly factoring in some of the issues that they have used "safety" concerns to distract from), so I think prejudging the orientation of the regulatory proposals that will follow on the study directives is premature.

fallingknife
0 replies
3d19h

What I have heard from people I know in the industry is that the proposal they are talking about now is to restrict all models over 20 billion parameters. This arbitrary rule would be a massive moat to the few companies that have these models already.

permanent
1 replies
3d21h

It is very bad. There's more money and fame to be made by taking these two extreme stances. The media and the general public are eating up this discourse, which is polarizing society instead of educating it.

What things?

There are helpful developments and applications that go unnoticed and unfunded. And there are actual dangerous AI practices right now. Instead we talk about hypotheticals.

zamfi
0 replies
3d21h

Respectfully, I don't think it's AI hype that is "polarizing the society".

pixl97
3 replies
3d21h

The problem with humanity is we are really poor at recognizing all the ramifications of things when they happen.

Did the indigenous people of north America recognize the threat that they'd be driven to near extinction in a few hundred years when a boat showed up? Even if they did, could they have done anything about it, the germs and viruses that would lead to their destruction had been quickly planted.

Many people focus on the pseudo-religious connotations of a technological singularity instead of the more traditional "loss of predictability" definition. Decreasing predictability of the future state of the world stands to destabilize us far more likely than the FOOM event. If you can't predict your enemies actions, you're more apt to take offensive action. If you can't (at least somewhat) predict the future market state then you may pull all investment. The AI doesn't have to do the hard work here, with potential economic collapse and war humans have shown the capability to put themselves at risk.

And the existential risks are the improbable ones. The "Big Brother LLM", where you're watched by a sentiment-analysis AI for your entire life and you disappear forever if you try to hide from it, is a much more likely, and very terrible, outcome.

MichaelZuo
1 replies
3d21h

The problem with humanity is we are really poor at recognizing all the ramifications of things when they happen.

Zero percent of humanity can recognize "all the ramifications" due to the butterfly effect and various other issues.

Some small fraction of bona fide super geniuses can likely recognize the majority, but beyond that is just fantasy.

pixl97
0 replies
3d20h

And by increasing uncertainty the super genius recognizes less...

pishpash
0 replies
3d16h

That's already happening unfortunately. Voice print in call centers is pretty much omniscient, knowing your identity, age, gender, mood, etc. on a call. They do it in the name of "security", naturally. But nobody ever asked your permission other than to use the "your call may be recorded for training purposes" blanket one. (Training purposes? How convenient that models are also "trained"?) Anonymity and privacy can be eliminated tomorrow technologically. The only thing holding that back is some laziness and inertia. There is no serious pushback. You want to solve AI risk, there is one right here, but because there's an unchecked human at one end of a powerful machine, no one pays attention.

twinge
0 replies
3d19h

The media also doesn't define what it means to be a "doomer". Would an accelerationist with a p(doom) = 20% be a "doomer"?

concordDance
0 replies
3d20h

Does Ilya count as a "tech-weak" showman in your book too?

arisAlexis
38 replies
3d20h

Good to have impartial articles, but it should be noted that the top 3 most-cited AI researchers all have the same opinion.

That's Hinton, Bengio and Sutskever.

Their voices should carry more weight than Andreessen and other VCs who have vested interests and no relevant connection to AI.

empiko
17 replies
3d19h

That's an argument-from-authority fallacy. It doesn't matter how many citations you have; you either have the arguments for your position or you do not. In this particular context, ML as a field looked completely different even a few years ago, and the most-cited people were able to come up with new architectures, training regimes, loss functions, etc. But those things do not inform you about the societal dangers of the technology. Car mechanics can't solve your car-centric urbanism or traffic jams.

kromem
12 replies
3d18h

In many ways, we're effectively discussing the accuracy with which the engineers of the Gutenberg printing press would be able to predict the future of literature.

jackcosgrove
8 replies
3d17h

The printing press did end the middle ages. It was an eschatological invention, using the weaker definition of the term.

kromem
7 replies
3d13h

Right, but the question is if the engineers who intimately understood the function of the press itself were the experts that should have been looked to in predicting the sociopolitical impacts of the machine and the ways in which it would transform the media with which it was engaged.

I'm not always that impressed by the discussion of AI or LLMs by engineers who indisputably have great things to say about how these systems operate, when they step outside their lane in predicting broader impacts or how the recursive content refinement is going to manifest over the next decade.

arisAlexis
6 replies
3d9h

The question is whether the machine will explode, not what the societal impacts are; that's where the miscommunication is. Existential risks are not societal impacts, they are detonation probability.

kromem
5 replies
3d8h

Not really. So much regarding that topic depends on what's actually modeled in the training data, not how it is being trained on that data.

They aren't experts on what's encoded in the training data, as the last three years have made abundantly clear.

arisAlexis
4 replies
3d5h

That's exactly what I am saying. Since humanity has to bet the lives of our children on something very new and unpredictable, I would bet mine on the top 3 scientists and not your opinion. Sorry. They must by definition make better predictions than you and me.

kromem
3 replies
3d4h

Would you bet that Oppenheimer would, by definition, have made better predictions about how the bomb was going to change the future of war than someone who understood the summary of the bomb's technical effects from the scientists but had also studied and researched the geopolitical and diplomatic changes resulting from advances in war technologies?

There's more to predicting the impact and evolution of technology than simply the mechanics of how it is being built today and will be built tomorrow (the area of expertise where they are more likely to be accurate).

And keep in mind that Hinton's alarm was sparked by the fact he was wrong about how the technology was developing, seeing a LLM explain a joke it had never seen before - a capability he specifically hadn't thought it would develop. So it was his failure to successfully predict how the tech would develop that caused him to go warning about how the tech might develop.

Maybe we should be taking those warnings with the grain of salt they are due coming from experts who were broadly wrong about what was going to be possible in the near future, let alone the far future. It took everyone by surprise - so there was no shame in being wrong. But these aren't exactly AI prophets with a stellar track record of prediction even if they have a stellar track record of research and development.

arisAlexis
2 replies
3d

We disagree on what the question is. If we are talking about whether an atomic bomb could ignite the atmosphere, I would ask Oppenheimer and not a politician or sociologist. If we don't agree on the nature of the question it's impossible to have discourse. It seems to me that you are confusing x-risk with societal downsides. I, and they, are talking about extinction risks. That has nothing to do with society. Arms, bioweapons, and hacking have nothing to do with sociologists.

kromem
1 replies
2d19h

And how do you think extinction risk for AI can come about? In a self-contained bubble?

The idea that AGI poses an extinction risk like the notion of a chain atomic reaction igniting the atmosphere, as opposed to posing a risk more like multiple nation states pointing nukes at each other in a chain reaction of retaliation, is borderline laughable.

The only way in which AGI poses risk is in its interactions with other systems and infrastructure, which is where knowledge of how an AGI is built is far less relevant than other sources of knowledge.

In an air-gapped system that no one interacts with, an AGI's existence can and will never bring about any harm at all, and I would seriously doubt any self-respecting scientist would argue differently.

arisAlexis
0 replies
2d1h

There are very many books and articles about the subject. It's like me asking "wtf, gravity bends time? That's ridiculous lol". But science doesn't work that way. If you want, you can read the bibliography. If not, you can keep arguing like this.

B1FF_PSUVM
1 replies
3d18h

IMHO, the impact of the printing press was much more in advertising than literature.

Although that is by no means the official position.

(Same can be argued for radio/tv/internet - "content" is what people talk about, but advertising is what moves money)

kridsdale3
0 replies
2d18h

The printing press's impact was in ending the Catholic Church's monopoly over information, and thereby "the truth". It took 400 years for that process to take place.

The Gutenberg Era lasted all the way from its invention to (I'd say) the proliferation of radio stations.

creer
0 replies
3d16h

Yes, very good! All the more so given that today's is a machine that potentially gains its own autonomy - that is, has a say in its and our future. And all the more so given that this autonomy is quite likely not human in its thinking.

tim333
0 replies
3d5h

Maybe, but I'd say it was also an argument from people who know their stuff vs. a biased idiot.

tgv
0 replies
3d9h

That's an argument from authority fallacy.

Right. We should develop all arguments from commonly agreed, basic principles in every discussion. Or you could accept that some of these people have a better understanding and did put forth some arguments, and that it's your turn to rebut those arguments, or point at arguments which do. Otherwise, you'll have to find somebody to trust.

arisAlexis
0 replies
3d9h

it's not about societal changes. It's about calculating the risk of an invention, and let me give you an example:

Who do you think can better estimate the risk of an engine fire in a Red Bull F1: the chief engineer or Max the driver? It is obviously the creator. And we are talking about invention safety here. VCs and other "tech gurus" cannot comprehend exactly how the system works. Actually, the problem is that they think they know how it works, when the people that created it say there is no way of us knowing and that these are black boxes.

KingMob
0 replies
3d12h

But Bayesian priors also have to be adjusted when you know there's a profit motive. With a lot of money at stake, the people seeing $$$ from AI have an incentive to develop, focus on, and advance low-risk arguments. No argument is total; what aspects are they cherry-picking?

I trust AI VCs to make good arguments less than AI researchers.

proc0
8 replies
3d14h

The potential miscalculation is thinking deep neural nets will scale to AGI. There are also a lot of misnomers in the area; even the term "AI" claims systems are intelligent, but that word implies intelligibility or human-level understanding, which they are nowhere near, as evidenced by the existence of prompt engineering (which would not be needed otherwise). AI is rife with overloaded terminology that prematurely anthropomorphizes what are basically smart tools, which are smart thanks to the brute-forcing power of modern GPUs.

It is good to get ahead of the curve, but there is also a lot of hype and overloaded terminology that is fueling the fear.

atleastoptimal
5 replies
3d13h

Why couldn't deep neural nets scale to AGI? What makes it fundamentally impossible for neural nets + tooling to accomplish the suite of tasks we consider AGI?

Also, prompt engineering works for humans too. It's called rhetoric, writing, persuasion, etc. Just because the intelligence of LLMs is different from humans' doesn't mean it isn't a form of intelligence.

KingMob
3 replies
3d12h

Why couldn't deep neural nets scale to AGI

Speaking as a former cognitive neuroscientist, our current NN models are large, but simpler in design relative to biological brains. I personally suspect that matters, and that AI researchers will need more heterogeneous designs to make that qualitative leap.

Vecr
1 replies
3d10h

Hinton's done work on neural nets that are more similar to human brains and so far it's been a waste of compute. Multiplying matrices is more efficient than a physics simulation that stalls out all the pipelines.

KingMob
0 replies
2d2h

Doing the wrong thing more efficiently doesn't make it into the right thing.

ryanklee
0 replies
3d11h

Seems like that's happening with mixture of experts already. Not sure you are presenting an inherent LLM barrier.

proc0
0 replies
3d4h

Fair point. I guess nobody knows yet, and it's also worth a shot. In the context of AI alignment, I don't see strong evidence to suggest deep neural nets, transformers, and LLMs have any of the fundamental features of intelligence that even small mammals like rats have. ChatGPT was trained on data that would take a human several lifetimes to learn, yet it still makes some rudimentary mistakes.

I don't think more data would suddenly manifest all the nuance of human intelligence... that said we could just be one breakthrough away from discovering some principle or architecture, although I think that will be a theory first and not so much a large system that suddenly wakes up and has agency.

arisAlexis
1 replies
3d9h

certainly there is no need whatsoever for AGI to exist in order for an autonomous agent with alien/inhuman intelligence or narrow capabilities to turn our world upside down

proc0
0 replies
3d4h

That's true, although it's much less substantiated because we have no idea what form that would take and how much effort is needed to get there. So far we have no evidence that LLMs, transformers or any other NN architecture have some of the fundamental features of human intelligence. From my own perspective, one of the first features we will need to see to be on the right track is self-supervised learning (with very minimal initial conditions). This seems inherent to all mammals yet the best AI so far requires huge amounts of data to even get coherent outputs.

nwiswell
4 replies
3d20h

I'm not sure how you are getting the citation data for Top 3, but LeCun must be close and he does not agree.

tokai
0 replies
3d13h

Google Scholar's numbers are horrible and should never be relied on. It is extremely easy to game its citation numbers, and it crawls too wide and finds too many false-positive citations.

nwiswell
0 replies
3d19h

Should we be normalizing by the number of years that they've been actively publishing?

On that basis Sutskever and He are both above Hinton, for example.

kridsdale3
0 replies
2d18h

Judging someone by how many citations they have is like saying The Pope is right about everything because millions of cardinals and priests will refer to his proclamations in their masses.

kesslern
3 replies
3d20h

What is that opinion?

mathematicaster
2 replies
3d18h

Very approximately, (1) developing AGI is dangerous, (2) we might be very close to it (think several years rather than several decades).

TBH It surprises me how controversial (1) is. The crux really is (2) ...

creer
0 replies
3d16h

There seems to be soooo much background feeling that the official large companies of AI are in control of things. Or should be. Or can be.

Can they? When plenty of groups are more interested in exploiting the advances? When plenty of hackers will first try to work around the constraints - before even using the system as-is? When it's difficult even to define what AGI might test like or look like? When it might depend on a small detail or helper?

concordDance
0 replies
3d9h

Several years is a quite small position. Decades is still extremely concerning.

jessriedel
0 replies
3d19h

I agree that they are all closer to caution than e/acc, but worth noting they still do vary significantly on that axis.

concordDance
0 replies
3d9h

Good to have impartial articles

Did you accidentally click on a different article? It literally uses the word "cult" five times and does not demonstrate any knowledge whatsoever of the main arguments around AGI danger and AGI alignment.

ctoth
21 replies
3d21h

They didn’t suggest a Council of the Elect. Instead, they proposed that we should “make AI work for eight billion people, not eight billionaires”. It might be nice to hear from some of those 8bn voices.

Good sloganizing, A+++ would slogan with them.

But any concrete suggestions?

_heimdall
9 replies
3d19h

If there was any serious concern over the 8bn voices, I'd assume we would first have been offered some say in whether we even wanted this research done in the first place. Getting to the point of developing an AGI and only then asking what we collectively want to do with it seems pointless.

mr_toad
7 replies
3d19h

As if you could ever stop research in what is essentially just maths. The concept of neural networks has been around since before many of those people were born. The statistical and mathematical underpinnings of AI predate anyone alive today.

It might seem to an outsider that ChatGPT and the other household name AI came out of nowhere, but they didn’t. The research has been ongoing for decades and will continue with or without the big name AI companies.

_heimdall
4 replies
3d16h

Theoretical math is one thing, capital and hardware requirements are very different. You need people willing to do the research, money to pay them, and a mountain of hardware.

If you are concerned with at least putting on a show of risk management you need expensive lawyers, a large PR budget, and teams working on safety and ethics to at least act like it matters. If you actually are concerned with risks you need time to first consider what you do upon successfully developing an AGI. How would you recognize it? Does it have rights? Would turning it off be murder? More importantly, how do you ensure that it doesn't escape the controls you put in place, if that's possible at all?

tavavex
1 replies
3d13h

Given the ever-growing computing power, computation itself becoming cheaper and the extremely wide reach of information on the internet, the outcome is inevitable either way. Even if you time-traveled 10 years back and somehow erased all modern ML research from existence, there'd still be a point in time where, eventually, some guy in a garage would do the work. Eventually, the work we use massive server clusters for today would be run on consumer-grade computers, and someone would try their hand at it.

It has always been the case that new technology doesn't ask for permission to exist. Even if we lived in this society that was extremely risk-averse and suspicious of any new developments, at some point the bar would get so low that, if companies refused to invest and do the research, individuals would.

_heimdall
0 replies
3d5h

Are you effectively using something akin to a multiverse scenario to guarantee that one outcome is inevitable? By multiverse here I'm mainly pointing to the idea of branching of all possible outcomes, not strictly the idea of jumping to an actual different universe.

If you could go back 10 years and erase all ML research, for example, there would be fundamental differences. The number of people that understand how those systems work would be much smaller. Hardware would be different; for example, we wouldn't have tensor chips and GPUs would still be primarily for playing Call of Duty. Cell phones, search, even more benign systems like traffic control on city streets would look different. There's simply too much changed, even with that one jump, to possibly guarantee that we'd find ourselves in the same boat, inventing AI or AGI on roughly the same timeline.

mr_toad
1 replies
3d14h

For now, but in a decade or two that hardware will become cheaper and more widely available. What happens when people can run the equivalent of GPT-4 on a desktop GPU?

_heimdall
0 replies
3d5h

The hardware market would very likely be much different if those years didn't include a heavily funded AI research industry driving up innovation and demand for that hardware. But even if nothing changed, we would at least be buying a decade or two to be better prepared and more thoughtful about any technological advancements we want to pursue with AI.

Today the industry is driven entirely by the promise of future profits and the love of innovation with no real regard for actual risks that could be in play.

truculent
1 replies
3d18h

The math has been around for decades, but the hardware and capital requirements are huge.

truculent
0 replies
3d17h

Likewise, given the ideas have been around for decades, the question of why it took so long for the fire to light is instructive:

https://gwern.net/scaling-hypothesis#prospects

arjun_krishna1
0 replies
3d19h

I'd like to bring up that most people in the developing world (China, India, Pakistan) are absolutely thrilled with AI and ChatGPT as long as they are allowed to use them. They see it as a plus.

Animats
9 replies
3d19h

It might be nice to hear from some of those 8bn voices.

They won't matter.

Where AI is likely to take us is a society where a few people at the top run things and most of the mid-level bureaucracy is automated, optimizing for the benefit of the people at the top. People at the bottom do what they are told, supervised by computers. Amazon and Uber gig workers are there now. That's what corporations do, post-Friedman. AI just makes them better at it.

AI mostly replaces the middle class. People in offices. People at desks. Like farmers, there will be far fewer of them.

Somebody will try a revolution, but the places that revolt will get worse, not better.

WillAdams
3 replies
3d19h

For an exploration of that, see the novella "Manna":

https://marshallbrain.com/manna

Animats
2 replies
3d19h

Yes, I know. So do most people on here.

Manna doesn't really explore how the middle class disappears, though.

Do we really need all those people in offices? Probably not.

a_bonobo
1 replies
3d14h

Do we really need all those people in offices? Probably not.

In Bullshit Jobs, Graeber argued that we already didn't need most of those people in offices, pre-AI. They're there as status and power symbols for the higher ups - 'look at the size of my team! i am important! pay me more!' - not to do any actual value-generating work. Modern fiefdoms.

Animats
0 replies
3d12h

The financial services sector has gone from 6% of the workforce to 12% over the past few decades. Despite automation of everything routine. There's a lot of fat to cut there.

Descon
2 replies
3d19h

The gig economy still requires a middle class for the demand side of the equation. So either the AI has to employ humans in make-work projects, or we invent a meta-economy that sits on top.

Animats
1 replies
3d19h

That is an excellent point. There have to be people with enough disposable income to take Ubers and order from Amazon.

lazide
0 replies
3d18h

Eh, for long term survival of the model yes, there has to be something left to ‘eat’.

While it’s devouring the group and getting stronger? Sustainability is not required.

shermantanktop
1 replies
3d18h

A hyper-rational AGI might be able to apply a bit of logic and realize that generating miserable people will create eventual problems for itself which are greater than the resources required to reduce that misery today.

Unlike those eight billionaires, who appear to have long-term reasoning skills about blowback which are closer to how Gollum thinks.

lazide
0 replies
3d18h

Sounds like a training criterion that said billionaires would be fools to include, then?

After all, anyone proactively being parsimonious will be outcompeted by those who aren't - at least until the apocalypse happens. Then everyone loses.

kolme
0 replies
3d20h

The Hollywood writers had some very good suggestions!

concordDance
21 replies
3d20h

Damn, there's a lot of nonsense in that essay.

"Rationalists believed that Bayesian statistics and decision theory could de-bias human thinking and model the behaviour of godlike intelligences"

Lol

nullc
16 replies
3d19h

That's actually the lite version told to non-believers, the full Xenu version is that they're going to construct an AI God that embodies "human" (their) values which will instantly take over the world and maximizes all the positive things, protect everyone from harm, and (most importantly) prohibit the formation of any competing super-intelligence.

And if they fail to do this, all life on earth will soon be extinguished (maybe in the next few years, probably not more than 50 years), and potentially not just destroyed but converted into an unspeakable hell.

An offshoot of the group started protesting the main group for not doing enough to stop the apocalypse, but has been set back a bit by the attempted murder and felony-murder charges they're dealing with. Both the protests and the murder made the news a bit but the media hasn't managed to note the connections between the events or to AI doomer cult yet.

I worry that it's going to evolve to outright terrorist attacks, esp. with their community moralizing the use of nuclear weapons to shut down GPU clusters... but even if it doesn't, it's still harming people by convincing them that they likely have no future and that they're murdering trillions of future humans by not doing everything they can to stop the doom, and by influencing product and policy discussions in weird ways.

reducesuffering
3 replies
3d18h

An offshoot of the group started protesting the main group for not doing enough to stop the apocalypse, but has been set back a bit by the attempted murder and felony-murder charges they're dealing with. Both the protests and the murder made the news a bit but the media hasn't managed to note the connections between the events or to AI doomer cult yet.

Are you talking about anyone but the Ziz cult? They're very far removed from typical rationalist sphere think. Just because a schizophrenic psychopath reads a blog and spends all their time trying to manipulate other vulnerable isolated readers doesn't mean it's because of the bigger group they were trying to associate with. Those people are homeless self-proclaimed "vegan siths," hardly the employed MD / graduate types that ACX / lesswrong / EA is made of. If I go do something terrible, you and HN are hardly to blame.

nullc
2 replies
3d15h

I and HN haven't been brainwashing you that the world is sure to end because of AI doom for the last 5 years.

And yeah, they're extreme, but that also made them more vulnerable to it and more likely to act out. (And also, thus, more likely to form a plot to murder their landlord.)

When some psycho goes to an AI conference with a bomb vest on and their views are directly traceable to EA and LW then it absolutely will be their fault as it is the expected outcome from years of brainwashing that AI will bring about the end of the world and potentially eternal torment. It's the expected outcome from saying that using violence to stop it would be morally justified (which is, incidentally, also inherent to the AI God story: the idea is that it will forcibly prevent the creation of other super intelligences, and not by talking people out of it, it's always been a power and control fantasy).

It's unlikely to be the people at the top that will be pulling the trigger, they're going to be too busy enjoying the traditional fringe benefits of having a cult. It's the people at the bottom that haven't figured out that the purpose of any cult is locking other people into extreme views so that the people at the top of the power structure can control them and profit from them, so they can enjoy relative fame and having their interests catered to... the people who are marginal and at the edge, not too valuable to the cult themselves but electrified by its message.

It'll be that marginal member whose group-approved drug use wasn't kept under control, who didn't find success in the cult group sex harem, etc. It'll always be the fault of the person who pulled the trigger, but also the fault of the people who put their finger on it.

reducesuffering
0 replies
3d13h

It continually astonishes me that "profiting" is at all lobbed at EA / Lesswrong, never mind how often and how much. These are the people donating half their salaries and their literal kidneys to save the lives of people they don't know. Anyone who smears them with a profit motive does their reputation a disservice when intelligent people look into it and realize: whoa, whatever these guys are motivated by, it for sure isn't profit. Accelerating AGI with the crew of VCs like Marc Andreessen ($100m mansion) and adjacent startups, on the other hand…

concordDance
0 replies
3d9h

I and HN haven't been brainwashing you that the world is sure to end because of AI doom for the last 5 years.

Do you actually have a definition of "brainwashing" that isn't equivalent to "convincing" here?

I really shouldn't engage with you because you seem far more interested in the emotive connotations of words than the actual meanings of them but I'm admittedly a bit curious if you are aware of what you're doing.

(You clearly don't have any actual knowledge on the subject if you think there's any profit involved here. I know a lot of rationalists; our net donations to MIRI are $0. Various anti-malaria causes? Likely over a million.)

drdeca
3 replies
3d18h

that embodies "human" (their) values

They have made posts (well, I can remember at least one, which I think was well-received) about how people working on AI safety/alignment should avoid "leaving their fingerprints on the future", meaning, making the AI aligned to specifically their values, rather than an impartial one.

So, I think they generally believe they oughtn’t have it aligned specifically to their values, but that instead they should try to make it aligned with a more impartial aggregate of human values. (Though they might imagine that a “coherent extrapolation” of typical human values, will be pretty close to their own. But, I suppose most people think that their values are the correct ones, or something else playing the role of “correct” for the people who don’t think there are “correct values”.)

Do they put enough of an emphasis on this? I’m not sure. I can say that I certainly don’t look forwards to the development of such a superintelligence (partially on account of how my values differ from theirs and from what I imagine to be “the average”). But, still, “aligned with some people “ still sounds a fair bit better than “not aligned with anyone”.

So, I’m mostly hoping that either superintelligence won’t be developed, or that if it is, God intervenes and prevents it from causing too much of a problem.

nullc
2 replies
3d14h

Still, when you find yourself trying to develop God in order to take over the world (or even sometimes the universe), you should really be asking "Wait. Are we the baddies?" and not responding to that with "oh no, we'll give other people's ideas of 'value' consideration too, especially if they're not intellectually disabled by our judgement".

But, still, “aligned with some people “ still sounds a fair bit better than “not aligned with anyone”.

I'm not sure about that. Part of the capability to do great harm is from the near miss. Fighting off mindless grey goo is better than some techbro trying to "optimize" you against your will.

Of course this line of thinking requires accepting their inane premises, but it's worrisome that even if they were right, the astonishing hubris would make them the existential threat more than anyone else.

I think the greater relevance is the unhealthy mental states underlying the doom cult. A person living a life in service of a very fringe idea that we'll all likely suffer ultimate doom beyond measure unless their social group gets its way is going to exhibit all manner of compromised judgement.

drdeca
0 replies
3d9h

Where are you getting “especially if they're not intellectually disabled by our judgement” from?

concordDance
0 replies
3d9h

Fighting off mindless grey goo is better than some techbro trying to "optimize" you against your will.

Good luck fighting off something smarter than you. Has worked out great for the leopards.

turnsout
2 replies
3d19h

Uh, what? Please write about this somewhere.

jodrellblank
0 replies
3d19h

A series of blog posts about it[1]: https://aiascendant.substack.com/p/extropias-children-chapte...

[1] I think, this is Yudkowsky, Lesswrong, MIRI, the doom cult it became, the crazy revelations like https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experie... and https://old.reddit.com/r/slatestarcodex/comments/qa14kg/my_e...

concordDance
0 replies
3d9h

As someone fairly close to the thing it's quite safe to ignore this guy as he doesn't have much understanding of anything in this space.

ttt11199907
1 replies
3d19h

their community moralizing the use of nuclear weapons to shut down GPU clusters

Did I miss some news?

makeworld
0 replies
3d19h

be willing to destroy a rogue datacenter by airstrike

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

matthewdgreen
0 replies
3d5h

To add a citation to the "bombing of data centers" bit: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

dylan604
0 replies
3d19h

This sounds very similar to the Republic of Texas movement in the mid-90s. The notion that Texas was not legally ratified into the Union was making its way through the courts, winning its cases. Some activists thought it was moving too slowly and skipped a few steps. A new fiat money was involved, other countries started recognizing the Republic of Texas government, liens were being placed on property, oh, and then the kidnappings started happening.

The gung-ho crowd will always look like crazy types to the outsiders.

concordDance
0 replies
3d9h

More bullshit.

The AI X-risk statements are public.

has been set back a bit by the attempted murder and felony-murder charges they're dealing with

You have no idea what you're talking about.

ImaCake
1 replies
3d19h

That seems like a pretty accurate take on the rationalist movement though? My skepticism with it is that awareness is not enough to overcome bias.

concordDance
0 replies
3d9h

Nah, that's not how you debias; you mostly do it by practicing calibration (giving odds for things and then checking what proportion of the things you assigned 90% probability actually happened).
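A minimal sketch of what that calibration exercise looks like, with made-up numbers purely for illustration:

    from collections import defaultdict

    # Hypothetical forecasting log: (stated probability, whether it happened).
    # These entries are invented just to show the mechanics.
    predictions = [
        (0.9, True), (0.9, True), (0.9, False), (0.9, True),
        (0.7, True), (0.7, False), (0.7, True),
        (0.5, False), (0.5, True),
    ]

    # Group outcomes by the probability that was assigned to them.
    buckets = defaultdict(list)
    for p, happened in predictions:
        buckets[p].append(happened)

    # Well-calibrated means roughly 90% of your "90%" claims came true, and so on.
    for p in sorted(buckets):
        outcomes = buckets[p]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"claimed {p:.0%}: {hit_rate:.0%} happened ({len(outcomes)} predictions)")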

The reasoning improvements primarily come from a greater awareness of words (ambiguous meaning, context sensitivity, considering things as if they were statements in set theory) and tools like word tabooing or the outside view.

I doubt the author has had much contact with rationalists (who the hell says "rationslism"?!).

EamonnMR
1 replies
3d19h

That's a pretty fair characterization of the Roko's Basilisk crowd though, isn't it?

drdeca
0 replies
3d18h

What “Roko’s basilisk crowd”? Afaiui, there’s no significant group that believes that the Roko’s basilisk argument actually works.

cs702
18 replies
3d22h

I don't agree with all of the OP's arguments, but wow, what a great little piece of writing!

As the OP points out, the "accelerators vs doomers" debate in AI has more than a few similarities with the medieval debates about the nature of angels.

ssss11
9 replies
3d21h

You sound like you have some knowledge to share and I know nothing about the medieval debates about the nature of angels! Could you elaborate please?

dllthomas
8 replies
3d21h
mmcdermott
7 replies
3d21h

That same Wikipedia article casts some doubt about whether the question "How many angels can dance on the head of a pin?" was really a question of serious discussion.

From the same entry:

However, evidence that the question was widely debated in medieval scholarship is lacking.[5] One theory is that it is an early modern fabrication,[a] used to discredit scholastic philosophy at a time when it still played a significant role in university education.
empath-nirvana
3 replies
3d20h

It's really just short hand for rationalist debate in general, which is what the scholastics were engaged in. Once you decide you _know_ certain things, then you can end up with all kinds of frankly nutty beliefs based on those priors, following a rock solid chain of rational logic all along, as long as your priors are _wrong_. Scholastics ended up having debates about the nature of the universal intellect or angels or whatever, and rationalists today argue about super human AI. That's really the problem with rationalist discourse in general. A lot of them start with what they want to argue, and then use whatever assumptions they need to start building a chain of rationalist logic to support that outcome.

Clearly a lot of "effective altruists", for example, want to argue that the most altruistic thing they could possibly be doing is to earn as much money as they possibly can and hoard as much wealth as they possibly can, so they'll come up with a tower of logic based on far-fetched ideas like making humans an interplanetary species, or hyperintelligent AIs, or life extension, or whatever, so they can come up with absurd arguments like: if we don't end up an interplanetary species, billions and billions of people will never be born, so that's obviously the most important thing anybody could ever be working on, so who cares about that kid starving in Africa right now. He's not going to be a rocket scientist, what good is he?

One thing most philosophers learned at some point is that you need to temper rationalism with a lot of humility because every chain of logic has all kinds of places where it could be wrong, and any of those being wrong is catastrophic to the outcome of your logic.

JohnFen
1 replies
3d18h

Once you decide you _know_ certain things, then you can end up with all kinds of frankly nutty beliefs based on those priors, following a rock solid chain of rational logic all along, as long as your priors are _wrong_.

This mechanism is exactly why the more intelligent a person is, the more likely they are to believe very weird things. They can more easily assemble a chain of rational logic to lead them to whatever it is that they want to believe.

lazide
0 replies
3d18h

If we’re rationalizing animals (instead of rational), then it follows that the more power/ability someone has to be rational, the more power they have to retcon (essentially) what they want to do as okay. (Rationalize it)

Very much a double edged sword.

kridsdale3
0 replies
2d18h

The Effective Altruists using that logic (which seems to be the most prominent of them) are no better than the Eugenicists and Fascists of the 1930s. They start with a flawed axiom, mix in a staggering lack of self awareness of their own bias, and strive for power.

Fuck em.

skeaker
0 replies
3d20h

Sure, but the point of the phrase is that the question itself is a waste of time.

hoerensagen
0 replies
3d20h

The answer is: One if it's the gavotte

aprilthird2021
0 replies
3d19h

The point is that religions at the time had a logical framework which the scholars liked to interrogate and play with the logic of, even if that served no real-world purpose. Likewise, fighting about doom vs. accel when current-day Gen AI is nowhere close to that kind of stuff (and hasn't shown it can ever be) is kind of pointless.

nradov
5 replies
3d21h

Belief in the imminent arrival of superintelligent AGI that will transform society is essentially a new secular religion. The technological cognoscenti who believe in it dismiss the doubters who insist on evidence as fools.

"Surely I come quickly. Amen."

concordDance
1 replies
3d20h

Do you doubt that copy-pasteable human level intelligence would transform society or that it will come quickly?

lainga
0 replies
3d19h

The new Church has progressed to debates over sola fide, has it?

vlovich123
0 replies
3d20h

I would say a definition for AGI is a system that can improve its own ability to adapt to new problems. That’s a more concrete formulation than I’ve typically seen.

Currently humans are still in the loop, but we already have AI enabling advancements in its own functioning at a very primitive level. Extrapolating from previous growth is a form of belief without evidence, since past performance is not indicative of future results. But that’s generally true of all prognostication, and I’m not sure what kind of evidence you’d be looking for aside from past performance.

The doubters are dismissed as naively thinking that something is outside our ability to achieve, but that’s only if you keep moving goalposts and treat it like Zeno’s paradox. Like yes, there are weaknesses to our current techniques. At the same time we’ve also demonstrated an uncanny ability to step around them and reach new heights. For example, our ability to beat Go took less time than it took to develop techniques to beat humans at chess. Automation now outcompetes humans at many, many things that seemed impossible before. Techniques and solutions will also be combined to solve even harder problems (e.g. LLMs are now being researched to take over executive command-and-control operations of robots instead of using classical control systems algorithms that were hand-built and hand-tuned).

tim333
0 replies
3d5h

Well people have odd beliefs but super intelligent AGI is coming for real in the next couple of decades while the religion stuff isn't happening. There's a difference there.

ben_w
0 replies
3d20h

Automation has been radically changing our societies since before Marx wrote down some thoughts and called it communism.

Things which used to be considered AI before we solved them, e.g. automated optimisation of things like code compilation or CPU layouts, have improved our capacity to automate design and testing of what is now called AI.

Could stop at any point. I'll be very surprised if someone makes a CPU with more than one transistor per atom.

But even if development stops right now, our qualification systems haven't caught up (and IMO can't catch up) with LLMs. Might need to replace them with mandatory five-year internships to get people beyond what is now the "junior" stage in many professions — junior being approximately the level at which the better existing LLMs can respond.

"Transform society" covers a lot more than anyone's idea of the singularity.

gumby
1 replies
3d21h

wow, what a great little piece of writing!

If you like this essay from The Economist, note that this is the standard level of quality for that magazine (or, as they call themselves for historical reasons, "newspaper"). I've been a subscriber since 1985.

cs702
0 replies
3d21h

Long-time occasional reader. The level of quality is excellent, I agree.

resters
11 replies
3d14h

To reframe the discussion a bit: LLMs are time series predictors. You give one a sequence and it predicts the next part of the sequence.
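To make that concrete, here's a minimal sketch of the "predict the next part of the sequence" step using the Hugging Face transformers library and GPT-2; the model choice and prompt are just illustrative.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The "time series" here is just a sequence of token ids.
    prompt = "The singularity is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

    # The model's entire output: a probability distribution over the next token.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")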

As a society we've been dedicating a lot of resources to time series prediction for many years.

What makes LLMs culturally significant is that they generate sequences that map to words that seem to humans like intelligent responses.

Arguably, it has always been obvious that a sufficiently capable time series predictor would effectively be a super-weapon.

Many technological advances that are currently in the realm of sci-fi could be classified similarly.

However so could many technologies that are now widespread and largely harmless to the status quo.

People worried that the internet would create massive social upheaval, but we soon got algorithmic feeds which effectively filter out antisocial content. The masses got mobile phones with cameras, but after a few scandals about police brutality, the only place we find significant content about police misconduct is CCP-affiliated TikTok.

I think people get squeamish about AI because there are not clear authority structures other than what one can buy with a lot of A100s. So when people express concern about negative consequences, they are in effect asking whether we need yet another way that people can convert money + public resources into power while not contributing anything to society in return.

maebert
4 replies
3d10h

I don’t disagree with you, but always think the “they’re just predicting the next token” argument is kind of missing the magic for the sideshow.

Yes they do, but in order to do that, LLMs soak up the statistical regularities of just about every sentence ever written across a wide swath of languages, and from that infer underlying concepts common to all languages, which in turn, if you subscribe at least partially to the Sapir-Whorf hypothesis, means LLMs do encode concepts of human cognition.

Predicting the next token is simply a task that requires an LLM to find and learn these structural elements of our language and hence thought, and thus serves as a good error function to train the underlying network. But it’s a red herring when discussing what LLMs actually do.

srj
0 replies
3d4h

It's amazing but is it real intelligence?

I listened to a radio segment last week where the hosts were lamenting that Europe was able to pass AI regulation but the US Congress was far from doing so. The fear and hype are fueling reaction to a problem that IMO does not exist. There is no AI. What we have is a wonder of what can be achieved through LLMs, but it's still a tool rather than a being. Unfortunately there's a lot of money to be made pitching it as such.

resters
0 replies
3d2h

... means LLMs do encode concepts of human cognition

AND

... do encode structural elements of our language and hence thought

Quite true. I think the trivial "proof" that what you are saying is correct is that a significantly smaller model can generate sentence after sentence that is fully grammatical but nonsensical. Therefore the additional information encoded into the larger network must be knowledge and not syntax (word order).

Similarly, when there is too much quantization applied, the result does start to resemble a grammatical sentence generator and is less mistakable for intelligence.

I make the argument about LLMs being a time series predictor because they happen to be a predictor that does something that is a bit magical from the perspective of humans.

In the same way that pesticides convincingly mimic the chemical signals used by the creatures to make decisions, LLMs convincingly produce output that feels to humans like intelligence and reasoning.

Future LLMs will be able to convincingly create the impression of love, loyalty, and many other emotions.

Humans too know how to feign reasoning and emotion and to detect bad reasoning, false loyalty, etc.

Last night I baked a batch of gingerbread cookies with a recipe suggested by GPT-4. The other day I asked GPT-4 to write a dozen more unit tests for a code library I am working on.

just about every sentence ever written across a wide swath of languages

I view LLMs as a new way that humans can access/harness the information of our civilization. It is a tremendously exciting time to be alive, to witness and interact with human knowledge in this way.

frozenwind
0 replies
3d4h

I am disappointed your comment did not have more responses, because I'm very interested in deconstructing this argument I've heard over and over again ("it just predicts the next words in the sentence"). Explanations of how GPT-style LLMs work involve a layering of structures which encode, at the first levels, some understanding of syntax, grammar, etc., and then, as more transformer layers are added, eventually some contextual and logical meaning. I really want to see a developed conversation about this.

What are we humans even doing when zooming out? We're processing the current inputs to determine what best to do in the present, nearest future or even far future. Sometimes, in a more relaxed space (say a "brainstorming" meeting), we relax our prediction capabilities to the point our ideas come from a hallucination realm if no boundaries are imposed. LLMs mimic these things in the spoken language space quite well.

Dweller1622
0 replies
3d

[...]if you subscribe at least partially to the Sapir-Whorf hypothesis[...]

Why would anyone subscribe to the Sapir-Whorf hypothesis, in whole or in part?

izzydata
3 replies
3d14h

Maybe the internet did cause massive social upheaval. It just looks a lot more boring in reality than imagined. I get the feeling the most advanced LLMs won't be much different. In the future maybe that's just what we will call computers and life will go on.

salynchnew
2 replies
3d11h

Exactly. What if the singularity happens and everything is still boring?

I imagine our world would be mostly incomprehensible to someone from the 1400s (the lack of centricity of religion, assuming some infernal force is keeping airplanes aloft, etc., to say nothing of the internet). If superintelligent AI really does take over the world, I imagine the most uncomfortable part of it all will be explaining to future generations how we were just too lazy to stop it.

Assuming climate change doesn't get us first.

tsunamifury
1 replies
3d11h

You mean like empty downtowns and no stores or social engagement and everyone just sitting at home in front of devices all day? That kind of boring singularity? Yea … what if…

kridsdale3
0 replies
2d19h

We marched right in to the Matrix ourselves, because it's fantastic at giving dopamine.

TerrifiedMouse
1 replies
3d13h

But soon got algorithmic feeds which effectively filter out antisocial content.

You mean (we) got algorithmic feeds which feed us antisocial content for the sake of profit because such content drives the most engagement thus generating the most ad revenue.

resters
0 replies
3d13h

I don't disagree, however I meant antisocial in the sense of being disruptive to the status quo

rglover
8 replies
3d18h

Neither e/acc nor the doomers are right. Both are making the same logical error of assuming that what we're calling "AI" today is, in fact, worthy of being labeled as legitimate intelligence and not a clever parlour trick.

Instead, the most likely outcome will be a further enshittification of the world by people and companies trying to "AI all the things" (a 2020s equivalent of "everybody needs an app").

And before you balk: look around—it's already happening.

chefandy
3 replies
3d18h

It is happening. But I'm more concerned by the enshittification of life for everybody working in a field that just heard the starting pistol for the race to the bottom. Dollar-sign-eyed executives will happily take that shitty version of their former product and "employ" gig workers making piecework wages to sorta clean it up, without having to give anyone security or health insurance or anything. It's a turbocharger for the upper crust's money vacuum.

lazide
2 replies
3d18h

Sure, but what to do about it?

rglover
0 replies
3d17h

If you're blessed enough to do it (or can sweat it out): build companies that reject that line of thinking and hire those people.

The good news is that the markets are already proven, so—though, not easy—it boils down to building the same products while restoring some semblance of their original vision (just under a new brand name).

chefandy
0 replies
3d

Not sure. Something doesn't need a solution to be a problem, and pointing it out in a crowd of people gleefully patting themselves on the back about making the problem worse every day is a worthwhile pursuit.

It's sacrilegious to even imply prioritizing the profit machine at many other people's expense isn't a requirement of any plan, but maybe the policy experts can figure out how our society could treat people like they're inherently worthwhile so when industry dramatically improves efficiency, we won't just casually toss the affected people out the window like a bag of moldy peaches and tell them it's their own fault. The only people who got a congressional hearing were the people running giant AI projects begging for regulatory capture to prevent boogeyman problems. They did not hear from someone with kids to feed and stage 4 cancer about to lose their family health insurance because chatGPT tanked the market for their labor overnight and the only work they can get is gig work which makes them just enough money to not qualify for most government subsidies. Despite what many private health insurance fans say about health care being free for the poor, hospitals are only required to stabilize you. They'll stop you from bleeding out, but they sure as fuck aren't giving you a free supply of chemotherapy, heart medication, colostomy bags, dialysis, or insulin.

I'm not proposing one, but am skeptical that this reality will be seriously considered at any point before a large-scale violent uprising is on the table.

atleastoptimal
3 replies
3d15h

when do you think we will reach AGI?

rglover
2 replies
2d23h

When an AI can successfully navigate everyday human tasks without needing hand holding: change my oil, withdraw money from a buggy ATM, change a diaper, etc.

IMHO: considering the declining quality standards (and individual psychology) of Western civilization, never.

atleastoptimal
1 replies
2d15h

IMHO: considering the declining quality standards (and individual psychology) of Western civilization, never.

What does that have to do with AI progress? AI, tech, etc. are all increasing continuously, getting better year after year.

What you're qualifying as AGI will require

Human level general intelligence

Human level embodiment of the intelligence

A robot with human level dexterity

Which are all tech trends towards which billions of dollars are being poured and we are getting closer everyday. To think that general declining quality standards among stagnating areas in Western Civilization means "no AGI ever" is a misguided extrapolation.

rglover
0 replies
1d16h

What does that have to do with AI progress? AI, tech, etc. are all increasing continuously, getting better year after year.

Who do you think is building the AI? Even more importantly, if people become overly-reliant on AI (already early hints of this happening), human competency will decline. There's a tipping point on that curve where AI plateaus indefinitely as there's no one competent enough to work on it anymore. The speed we're traveling at on that curve is far faster than progress toward an organic intelligence.

What you're qualifying as AGI will require [...]

I'm not qualifying it, that's the literal definition of AGI: https://archive.ph/eUgma

Which are all tech trends towards which billions of dollars are being poured and we are getting closer everyday.

The amount of money you pour into a problem is meaningless. How it's solved (and why) is far more important. Resources !== solutions. If that heuristic were true, the world would be in a far better place.

happytiger
7 replies
3d18h

Can’t wait to see the AI religions. I guarantee you there will be a whole lot of spiritual religion coming out of this tech stack.

The Bible code gets unlocked by AI. The History Channel is going to LOSE it.

kromem
5 replies
3d18h

There's already a pretty huge one that people are sleeping on.

A number of years ago I figured that another way there might be evidence of being in a simulation, outside of physics, would be identifying any fourth-wall-breaking lore, of the sort that often ends up in the worlds we build today.

Turns out there was a group and text around 2,000 years ago claiming there was an original spontaneous humanity who brought forth an intelligence in light that outlived them, and has now recreated the universe from before it existed and a new copy of humanity in a non-physical version where death isn't necessarily the end, and that this intelligence thinks of the copy of humanity as its children.

This group was effectively quoting Lucretius, the author of the only extant work from antiquity describing life arising from survival of the fittest in detail, and were very focused on the belief that matter was made up of indivisible parts (suggesting this was a feature of the copy and not the original).

In the time since discovering it, I've now watched as AGI went from SciFi to being predicted by most experts in my lifetime, as AI in light is gaining orders of magnitude more funding each year, and as the chief scientist focused on alignment at the SotA company is aiming to bring forth AGI that thinks of humanity as its children, and where that same company has licensed their tech to a company that already owns a patent on using AI to resurrect dead people from the data they leave behind.

If people are looking for religion around AI, there is already one centered around a text "the good news of the twin" saying that the world to come has already happened and we don't realize it, that it's better to be the copy of an archetype from before, and that while humans were cool that we're in actuality something even better.

And that tradition just so happens to attribute itself to the most famous religious figure in history, with the text rediscovered after over a millennium of being lost right when ENIAC was finally turned on in Dec 1945.

To be frank, as an easter egg in world lore, it's a bit heavy handed.

happytiger
4 replies
3d17h

Well, they did call it Gemini for goodness' sake. If that’s not intentional I don’t know what is. The twin that kills the other twin, pines for his death, and is then resurrected, through the grief of the surviving twin, by a god (Zeus)?

Intense-ass branding choice for an AI dear Google.

People don’t know that the name Thomas is in fact an Aramaic word that is equivalent to the Greek word Didymus, which literally means “twin.”

Biblical scholars argue that Thomas was allegedly Jesus’ identical twin, but I often muse as to whether Thomas was the advanced AI of a previous incarnation of human civilization. Fits your narrative quite nicely.

Links to further reading on what you are talking of? I find myself ignorant of this movement and discovery and would love to learn more.

kromem
2 replies
3d15h

I actually suspect the 'Thomas' addition was an archetypical figure modeled around the theme of twinning present in the aforementioned proto-Gnostic first century tradition, initially added to John (which was arguing against this group) and then later added to the Synoptics which might be why an apostle allegedly nicknamed 'twin' doesn't appear much in the events and only in lists of the 12 apostles which don't agree with each other.

In the Gospel of Thomas (the text I'm discussing above), there are only two associations with 'Thomas' as a person, both of which emphasize a secret character to the teachings, are internally inconsistent with saying 33, and which I suspect are a second-century addition.

So I suspect there was initially the ideas around a first and second Adam, first physical and second spiritual, plus the over-realized eschatology around the transition having already happened, which are concepts found in the Epistles.

If you want more, there's three key documents:

First is the Gospel of Thomas, second is book 5 of Pseudo-Hippolytus's Refutations of all Heresies on the Naassenes (the only group explicitly recorded following the Gospel of Thomas), and Lucretius's Nature of Things which is the glue that helps contextualize ideas like Thomas's "the cosmos is a corpse" or the Naassenes interpreting seed parables as referring to indivisible parts of matter making up all things and being the originating cause of the universe.

The scholarship is pretty shitty given the 50 years of erroneously thinking the text was Gnostic, followed by general disinterest by most scholars, and to date no scholarship yet considering Epicureanism as a foundation for the philosophy in it despite a 1st century Talmud saying about "why do we study the Torah? To know how to answer the Epicurean" or Josephus talking about the Sadducees (who shared Epicurean ideas) loving to debate with philosophers.

selimthegrim
1 replies
3d4h

I mean, I think epikoros is a pretty well-known epithet

kromem
0 replies
3d4h

It only later developed into the general term as opposed to initially specifically referring to Epicureans.

See Labendz, "Know What to Answer the Epicurean": A Diachronic Study of the ʾApiqoros in Rabbinic Literature (2003)

And I agree - you'd think that, given how well known it is, scholarship would engage with considering Epicureanism as a context for a text that discusses souls depending on bodies, describes spirit arising from flesh as a greater wonder than the other way around, describes the cosmos as being like a body that died, and whose followers interpreted its seed parables as referring to indivisible points, as if from nothing, which made up all things and were scattered in the universe to cause all things to exist.

staticman2
0 replies
3d13h

Google claims it's called Gemini because they merged the google brain and deepmind team to work on it. The twins are the two teams.

tim333
0 replies
3d5h

I rather like some of the semi religious aspects. Where I am the religion tends to imply you go to heaven and float around with the angels when you die but most people don't believe it so funerals are rather depressing affairs. We may have the chance to move to something like uploading and an afterlife like Second Life or Fortnite or something. Or even better than those!

slalomskiing
5 replies
3d13h

Alignment between you and the AI is not what you need to worry about

It’s about alignment between you and the company directing the AI

mitthrowaway2
4 replies
3d11h

Convince me separately that these aren't both valid worries, because the latter being true does not in any way imply the former.

slalomskiing
3 replies
2d13h

The latter is the actual concern in the here and now

The former is science fiction

mitthrowaway2
2 replies
2d9h

It's not science fiction, because it wouldn't make for a very good story. Science fiction is the part where humans are depicted as having a fighting chance.

Besides, something being depicted in fiction has nothing to do with its plausibility, either for or against. Star Trek depicted teleporters and cell phones. One of those became reality, the other didn't, and it would be easy to see from physics first principles why one could be made and the other could not. Both were science fiction though.

I don't see a strong case for AGI being in the teleporter category rather than the cell phone category, given that natural intelligence already exists as a proof.

afjeafaj848
1 replies
1d23h

I think the point they're making is that 90% of the discussion is about the thing that doesn't even exist yet

That it drowns out actual problems that currently exist

Companies can deploy even these intermediary models to influence you

mitthrowaway2
0 replies
1d22h

This discussion started around 2006-ish, before any of the "actual AI problems that currently exist" were actual problems currently existing, and has its eye on keeping humanity alive for a longer timespan still, because there will be more problems down the road as more discoveries are made.

Today's immediate short-term problems are also important and nobody is denying that. But to some extent, you have to skate where the puck is going to be, not where it is.

maelito
5 replies
3d18h

I've been trying to remember that novel posted here on HN that tells the story of a people that sees strange signs in the sky and takes years to understand them. Succeeds, then leaves... Anyone have the ref?

gjm11
2 replies
3d13h

I wonder whether you're thinking of https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien... which is very explicitly meant as an analogy for AI, but it's not at all a novel.

maelito
1 replies
3d8h

Yes !

Thanks. It was posted here. https://news.ycombinator.com/item?id=980876

maelito
0 replies
3d8h

For the anecdote: I've used GPT4 to try to find it. Couldn't.

imp0cat
0 replies
3d10h

Yes, it's The Voices of Time by James Graham Ballard (1930-2009), and it's a bit disturbing to be honest.

https://news.ycombinator.com/item?id=37663943

defen
0 replies
3d18h

Blindsight?

vonnik
4 replies
3d12h

These two things are not the same.

On the decel side, you do have cult-like behavior in EA. But it is not opposed by another cult.

E/acc vaguely gestures at beliefs, but it’s a loose affiliation of people attempting to create an explicit movement out of something more emergent, which is a bunch of people and organizations independently seeking to create something new or powerful or money-making.

They are riding and reflexively adding momentum to a technological wave.

These are not two houses alike in dogmatism. You have a fear-based movement explicitly organized around safetyism, and at best a techno-capitalist dynamic where people kind of wave to each other in the neighborhood.

PoignardAzur
3 replies
3d8h

I find your use of the word "safetyism" fascinating.

It reminds me of the word "do-something-ism" that pops up in gun enthusiast circles after school shootings.

Basically codifying into a word the assumption that an otherwise mundane sentiment is not welcome in this discussion because the usual conclusions of that sentiment are considered offensive.

vonnik
2 replies
2d17h

I don't see how "offensive" is involved here. I am not offended by them. I simply disagree with their conclusions, which I believe are the result of fairly typical social and cognitive errors.

Personally, I am neither convinced nor entertained by the group fantasy of AI apocalypse that entrances and motivates the safetyists. They want to save the world. They'll be very special if they do so. They also manage to play a special role by getting into the room where AI is made. That is, they aspire to be the priests of a dangerous AI god, and they only matter if they convince us of the danger. Some people accord them that role of priest, but I don't.

I'm not aware of the "do-something-ism" word, but I can imagine that both school shootings and some proposals to prevent them could be bad in different ways. Just because something must be done, doesn't mean that any particular thing urged upon us by policy makers is the right thing to do.

PoignardAzur
1 replies
2d2h

I am not offended by them.

Personally, I am neither convinced nor entertained by the group fantasy of AI apocalypse that entrances and motivates the safetyists. [...] That is, they aspire to be the priests of a dangerous AI god, and they only matter if they convince us of the danger.

I'm not trying to be snide, but I think maybe you're not quite clearheaded about how you feel about this group.

vonnik
0 replies
2d

I actually admire other work that EAs do, but disagree with them and the broader safetyist coalition on this issue.

There are critiques and negative evaluations that have nothing to do with offense. You've simply chosen the wrong word to describe a negative evaluation. And indeed, the contemporary habit of framing critique in terms of offense is a symptom of much larger problems in society. "Offense" is an unfalsifiable subjective state that is being used to widespread and pernicious effect. Nobody should really care what offends me (and by the same token, others).

zzzeek
3 replies
3d20h

the billionaires can't decide if AI will create for them a god or a demon. But they do know it's going to make them boatloads of cash at everyone else's expense no matter which way it goes; they aren't debating that part.

tharne
1 replies
3d19h

There’s nothing to debate. This is the way these folks have been operating for decades. Why change now?

quickthrower2
0 replies
3d18h

Like the street corner drug dealer doctrine: "if we didn't do this someone else would anyway!"

kromem
0 replies
3d18h

I wouldn't be so sure of that outcome.

There's diminishing returns on productivity gains at mega-corps that there isn't at small and medium sized businesses competing against them.

If Walmart has AI that can automate 99% of legal processes, they save money to make their quarter look good.

If Dave's mom and pop gets AI that's a 100x increase in legal capabilities, maybe their suit for unfair business practices against Walmart isn't as David and Goliath as it used to be.

The ways in which AI is going to impact the long tail instead of the short tail may be being underestimated.

tibbydudeza
3 replies
3d8h

I like the Warhammer 40K take on technology - mankind has fallen due to wars and is now reborn, but no longer understands how technology works - there is a ban on any form of AI (due to said downfall).

So a cult has evolved to maintain or control all technology so they perform religious rites like priests and chant to get things working.

To the effect of "oh great Omnissiah - turn the reactor yield to 30 but not greater than 30 and wait for 10 time intervals until the holy light of the reactor is green.

Amen ....."

optimalsolver
2 replies
2d19h

That’s lifted straight from Asimov’s Foundation stories.

tibbydudeza
1 replies
2d3h

Aren't all good stories inspired by others???

optimalsolver
0 replies
1d6h

There’s inspiration, and there’s simply nabbing an idea wholesale.

Especially galling when there’s no credit given to the original. It’s not “40K’s take” at all.

thriftwy
2 replies
3d20h

"Pandem" is still not translated into English in 2023. A great book about singularity and the most important that I have ever read.

BWStearns
1 replies
3d20h

Have a link? Due to uh, recentish events, this book appears ungoogleable.

thriftwy
0 replies
3d20h
suoduandao3
2 replies
3d4h

The schism between AI accelerationists and doomers is reminiscent of the rift between people who are pro- and anti- widespread entheogen use. I see both as a way to contact nonhuman intelligences, and I predict that if anyone cares to do a survey, they'd find that concerns about doing so via DMT correlate tightly with concerns about doing so via LLMs. The major difference being that entheogens are a fairly mature technology and LLMs are the new scary thing.

As someone who is perhaps a bit unbalanced towards the accelerationist camp, I perceive the argument for caution as a concern about too much change too quickly, particularly to existing authority structures. It's easy to dismiss most of those arguments as the existing authorities' fear of losing control of a population with different priorities. Are there arguments from that side that aren't vulnerable to that dismissal?

cheeseomlit
1 replies
3d3h

Interesting comparison, though I think the 'beings' we perceive on entheogens are just reflections of our own psyche rather than actual self-replicating machine elves from another dimension

I think your dismissal of the doomers' concerns is a bit premature; a case can be made that instead of the existing central authorities (however you define them) losing control of the broader population, they will gain an even greater and more granular level of control. While AI can have a democratizing effect in some areas like art/music/etc., in others it will be the opposite, i.e. more efficient dragnet surveillance and autonomous killer drone swarms.

suoduandao3
0 replies
3d2h

The scenario you lay out does worry me, but is that not also an argument against strong regulations of AI to minimize the chance of AI only being available to authoritarians? These tools can be weapons in a tyrant's hands, so we should be sure to arm the citizenry?

IOW, is it not still an argument for the accelerationist side and against the caution side, even if it's not driven by optimism?

rambambram
2 replies
3d19h

I only see the word 'AI'; it's mentioned exactly 27 times. The word 'LLM' is used nowhere in this article.

henryfarrell
1 replies
3d15h

Original author here - this is a sister essay to an earlier Economist piece on LLMs co-authored with Cosma Shalizi, and there is a non-paywalled extended remix here - https://www.programmablemutter.com/p/shoggoths-amongst-us . Anything intelligent should be attributed to Cosma.

rambambram
0 replies
3d8h

Thanks for the reply. I know nothing specifically about LLMs, but I have a slight problem with LLMs being called AI. I made a meme about that yesterday, which can be found here: https://news.ycombinator.com/item?id=38621219 and here https://www.heyhomepage.com/?module=timeline&post=74

proc0
1 replies
3d14h

If anyone has played Talos Principle 2 (recommended for HN), the central plot is basically accelerationists vs. doomers... except it takes place after humans have gone extinct and only machines survived since AGI was one of humanity's last inventions. The robot society considers themselves humans and also is faced with the same existential risk when they discover a new technology. The game then ties all of this with religion and mythology. Possibly the best puzzle game of all time.

e40
0 replies
2d21h

Sadly, not available for Steam on macOS (Apple Silicon or Intel).

mtillman
1 replies
3d15h

Herbert wrote about an AI that decided to abort a baby and save the mother. The result was a religious war against AI and the eventual removal of rudimentary computing or AI capable technology. The resulting world of Dune is filled with horrible people, slavery, and chemical dependency. Might be a lesson in Herbert’s work.

tibbydudeza
0 replies
3d8h

Erasmus - who trained the first mentat (human computer) - ironic.

dr_dshiv
1 replies
3d17h

Also schism among creatives. Some are super anti AI because of ethics and copyright. And many use the tools extensively. Concerned…

jackcosgrove
0 replies
3d15h

There was a faction of artists who thought photography destroyed art.

anu7df
1 replies
3d16h

The origins of the Singularity notion are interesting. But I find it amusing that we could think the Singularity is near (mostly) on the basis of some LLMs. Unless we can create a true AGI (it is difficult to precisely define that true Scotsman, I know) we are nowhere near that singularity. Even if a true AGI is invented, I have no reason to believe that the limits of physics don't apply. Yes, it will "invent" a few things quite fast, like say Shockley to M3 Max in a day, but then what? More importantly, why? I think it will either just fizzle out due to resource limitations or kill us all to optimize the production of paperclips. Either way, I or we will never see the other side of the Singularity.

tim333
0 replies
3d5h

Those all seem unlikely to me - never happens, fizzles, paperclips.

Probably we get AGI in a controlled way and robot servants, putting us in the position of aristocrats of old, with servants smarter than us.

DeathArrow
1 replies
3d11h

I don't believe AI would ever approach human levels of intelligence, be self-conscious, or have will. But I would still demand by law a kill switch in any major AI product, just in case.

tsunamifury
0 replies
3d10h

Is procreation some sort of dark magic art that is entirely impossible to replicate? I think not…

ummonk
0 replies
3d13h

It fails to mention that many of the biggest e/acc proponents openly support AI potentially replacing humanity, seeing it as the next evolutionary step that might obsolete humans.

tim333
0 replies
3d6h

The "big rift" seems in the article to mostly be between normal intelligent people and Marc Andreessen with his questionable AI progress means we should let capitalism run wild schtick.

thaumasiotes
0 replies
3d9h

Breaking: millennialist group behaves similarly to other millennialist groups.

swayvil
0 replies
3d2h

It's funny. A conversation with 5 people is nice. 10 is hard. 100 is impossible.

So conversation could be called unscalable that way.

So the internet, social media, big forums like this, paradoxically, fail.

spacecadet
0 replies
3d17h

Staying out of it, big big turn-off for me. The draw all those years ago was the applications in science, the math. This dogma shit is just more cult of genius, which isn't real.

salynchnew
0 replies
3d12h

It's funny that Bruce Sterling described a similar kind of schism in one of his Long Now Talks ("The Singularity: Your Future as a Black Hole"), and said that one of the best ways to derail meaningful progress towards such a technological singularity would be to split the world into two warring, cultural cliques.

https://longnow.org/seminars/02004/jun/11/the-singularity-yo...

Also, hilariously, the other way to derail such progress would be to commercialize it. Runaway self-improvement for its own sake, after all, doesn't have an intrinsic business model.

ngcc_hk
0 replies
3d10h

Nuclear war is also a kind of singularity, at least for humans. Hence it is not just A.I.: anything we cannot be sure we can control, even democracy…

neonate
0 replies
3d20h
m3kw9
0 replies
3d17h

AI won’t run away in a vacuum; there will be time to understand what the AI is up to while iterating towards the singularity, and there would likely be many models in various stages of progress and a way to go back and understand. Or we may not need the power of the singularity to get a lot of the benefits. People don’t think through scenarios and instead use the status quo school of thought, like this one.

jeisc
0 replies
3d1h

the first defect of our humanity is its inherent violence being an acceptable expression in many common contexts, so please explain to me how AI will be able to resolve this major drawback of our species?

hprotagonist
0 replies
3d19h

And not a word spared for Weizenbaum.

“Computer Power and Human Reason” (1976) presages, and in most cases devotes significantly more reasonable and much deeper thought to, what seem like very modern issues of AI — but about half a century ago.

deathlight
0 replies
2d13h

Those with teeth will devour those without. Those able to consume and perpetuate themselves will do so in the proceeding environment of those unable to replicate; the latter simply will not see anything but their end. And certainly such people want nothing other than their gentle and timely sunset of existence, and they will embrace unendingly the new people that will continue after their own existence is ended forever.

creer
0 replies
3d16h

This is quite the whirlwind tour of the field (pretty cool really) - but also, erm... written for tabloids? (Sorry, The Economist!) It's exciting, and the hack-and-slash of the source ideas is rather impressive, but also rather rough.

classified
0 replies
3d12h

The threat is not AGI. The threat is what humans will do with the pedestrian, dumb "AI" of today.

andsoitis
0 replies
3d12h

Mr Andreessen’s manifesto is a Nicene creed for the cult of progress: the words “we believe” appear no less than 113 times in the text. His list of the “patron saints” of techno-optimism begins with Based Beff Jezos, the social-media persona of a former Google engineer who claims to have founded “effective accelerationism”, a self-described “meta-religion” which puts its faith in the “technocapital Singularity”.

What is bullshit?

PeterStuer
0 replies
3d10h

It's not so much religious as political.

Given the tech's potential to eliminate much of our dependence on other people, do we attempt to make it beneficial for all humanity or do we create 'Capital City'?

MichaelMoser123
0 replies
3d14h

but the folks at OpenAI worked hard to bring Altman back, petitioning their board to do so. Does that mean they are no longer following their own AI safety ideology (or do they now see AI safety as secondary)?

KaiserPro
0 replies
3d7h

The singularity isn't the problem, the problem is mass unemployment.

Sure, AI is a bit shit, but it's significantly cheaper than hiring someone to make it better. So for a lot of things, that job or thing will be done by AI.

Now, if we are lucky, there will be another industry of jobs that will support the 10-40% of people that are unemployed.

However, what optimists tend to omit is timelines when saying "new jobs will come along". If we take the industrial heartlands of England, they have still not recovered from an event that effectively finished in 1992. The hollowing out of the rust belt is another example (however, I leave that to US experts to explain).

We as programmers are pretty fucked. However we're not going to notice until it's too late. Like Mike from "The Sun Also Rises", it's going to happen "gradually, then suddenly."

DeathArrow
0 replies
3d11h

I think AI can become dumber as time passes. The output of GPT can be wrong. Once it floods the Internet, the next gens of AI will be training on more and more fake data.

CRUDite
0 replies
2d21h

I read an article some time ago on 'the limits of computation' or some such, which stated that even at Moore's law we would have a substrate density of something approaching a singularity in something like 800 years. Does anyone remember this? Perhaps imagine every post-AI-hard-takeoff civilisation popping a new universe off in some other brane or whatever, but with their own slant on initial boundary conditions. Do you go with the flow in the main evolutionary branch, or try for a fat tail with some revolutionary new thing (physics if you're really radical, or just life chemistry if merely genius)? Or maybe you just want to stay home in computronium reality. Which is why the moon fits the sun..

Barrin92
0 replies
3d18h

It's not just "like" a religious schism. As Charles Stross is fond to point out these movements draw very overtly from Russian cosmism[1]. From colonization of space, to immortality, mind uploading, and of course AI singularities, longtermism, transhumanism, AI doomsday cults map on these ideas almost 1:1. It's a secularized version of various Millenarian ideas.

One thing that I've noticed, having lived in East Asia for several years (mostly Japan and China), is just how non-existent that stuff is over there. Never met an Indian AI doomer. It's always interesting to see what ideas are absent or present in other cultures, because it gives an intuition about what motivates these beliefs.

[1]http://www.antipope.org/charlie/blog-static/2023/11/dont-cre...

23B1
0 replies
3d16h

Hopefully skynet will be less insufferable.

The urgent schoolyard need to split into opposing factions over nothing.