We are all so majorly f*d.
The general public neither knows nor understands this limitation. At the same time, OpenAI is selling this as a tutor for your kids. Next it will be used to test those same kids.
Who is going to prevent this from being used to pick military targets (EU law has an exemption for military of course) or make surgery decisions?
This is just doomerism. Even though this model is slightly better than the previous one, using an LLM for high-risk tasks like healthcare or picking targets in military operations still feels very far away. I work in healthcare tech in a European country, and yes, we use AI for image recognition on x-rays, retinas, etc., but those are fundamentally different models from an LLM.
Using LLMs for picking military targets is just absurd. In the future someone might use some other variation of AI for this, but LLMs are not very effective at it.
AI is already being used for picking targets in warzones - https://theconversation.com/israel-accused-of-using-ai-to-ta....
LLMs will of course also be used, due to their convenience and superficial 'intelligence', and because of the deniability that comes from putting a technical substrate between soldier and civilian victim, as has happened for two decades with drones.
Note that the IDF explicitly denied that story:
https://www.idf.il/en/mini-sites/hamas-israel-war-24/all-art...
Probably this is due to confusion over what the term "AI" means. If you run some queries on a database and call yourself a "data scientist", and other people who call themselves data scientists do some AI, does that mean you're doing AI? For left-wing journalists who want to undermine the Israelis (the story originally appeared in the Guardian), it'd be easy to hear what you want to hear from your sources and conflate using data with using AI. This is the kind of blurring that happens all the time with apparently technical terms once they leave the tech world, and especially once they enter journalism.
AI is how it is marketed to the buyers. Either way, the system isn't a database or simple statistics. https://www.accessnow.org/publication/artificial-genocidal-i...
E.g., autonomous weapons like "smart shooter" employed in Hebron and Bethlehem: https://www.hrw.org/news/2023/06/06/palestinian-forum-highli...
If Israel did not take a sophisticated approach to their defense, they would have long been overrun by their neighbors and likely slaughtered to the point of extinction.
These sources are rife with bias. The first article literally says 'artificial genocidal intelligence'. The second immediately references the international crime of apartheid.
Genocide was the Arab world's openly stated objective for the Jews in 1948.
The international crime of apartheid was created by the United Nations. To those who understand the Middle East, that statement is as much an indictment of credibility as the former point about genocide.
The nuance matters. The history matters. I'd recommend watching PBS's lengthy series 'Israel and the Arabs'; it's entertaining and enlightening. Then spend another few hours on the topic and travel to the Middle East to see it yourself. After that, articles such as the ones you posted read quite differently than most might expect.
A euphemism for apartheid and oppression?
What's biased about terming autonomous weapons as "AI"? Or, sounding alarm over dystopian surveillance enabled by AI?
Like Ben Gurion terming Lehi "freedom fighters" as terrorists? And American Jewish intellectuals back then calling them fascists?
https://www.wetheblacksheep.com/p/i-changed-my-mind-on-zioni...
I will spend no more than two comments on this issue.
Most people have already made up their minds. There is little I can do about that, but perhaps someone else might see this and think twice.
Personally, I have spent many thousands of hours on this topic. I have Palestinian relatives and have visited the Middle East. I have Arab friends there, both Christian and Muslim, whom I would gladly protect with my life. I am neither Jewish nor Israeli.
There are countless reasons for me to support your side of this issue. However, I have not done so for a simple reason: I strive to remain fiercely objective.
As a final note, in my youth, I held views similar to the ones you propagate. This was for a simple reason—I had not taken the time to understand the complexities of the Middle East. Even now, I cannot claim to fully comprehend them. However, over time, one realizes that while every story has two sides, the context is crucial. The contextual depth required to grasp the regrettable necessity of Israeli actions in their neighborhood can take years or even decades of study to reconcile. I expect to change few minds on this topic. Ultimately, it is up to the voters to decide. There is overwhelming bipartisan support for Israel in one of the world's most divided congresses, and this support stems more from shared values than from arms sales.
I stand by my original comment. As I said, this will be my last on this topic. I hope this exchange proves useful to some.
Commendable. You'll appreciate this Israeli historian: https://www.youtube.com/watch?v=xj_HKw-UlUk (summary: https://archive.is/dOP7g). And this Israeli Prof, also an expert on Holocaust studies, being fiercely objective: https://www.mekomit.co.il/ps/134005/ (en: https://archive.is/Fjj6f)
Why even spend 2 comments?
The same level of depth as Supremacists who regrettably exterminated non-Aryans?
This is undeniable, but the underlying "shared values" here are not the ones you'd like us to think: https://www.bostonreview.net/articles/instruments-of-dehuman...
Like you say, there's the entire might of the US political and elite class behind you; it isn't some act of courage or rebellion, fwiw.
Propagate? Your final note sounds like a threat.
The nuance of Ben-Gvir presumably ...
The IDF explicitly denies a lot of things that turn out to be true.
Yeah, but the Guardian explicitly states a lot of things that turn out not to be true, too.
Given that the underlying premise of the story is bizarre (is the IDF really so short of manpower that it can't select its own targets?), and given that the sort of people who work at the Guardian openly loathe Israel, it makes more sense that the story is being misreported.
More likely, the IDF is committing a genocide and are finding innovative ways to create a large list of targets which grants them plausible deniability.
Just like… (checks notes)… oh yeah every government on the planet.
The "independent examinations" is doing a heavy lift there.
At most charitable, that means a person is reviewing all data points before approval.
At least charitable, that means a person is clicking approved after glancing at the values generated by the system.
The press release doesn't help clarify that one way or the other.
If you want to read thoughts by the guy who was in charge of building and operating the automated intelligence system, he wrote a book: https://www.amazon.com/Human-Machine-Team-Artificial-Intelli...
Why? There are many other types of AI or statistical methods that are easier, faster, and cheaper to use, not to mention better suited and far more accurate. Militaries have been employing statisticians since WWII to pick targets (and for all kinds of other things); this is just "current thing" times two, so it's being used to whip people into a frenzy.
It can do limited battlefield reasoning where a remote pilot has significant latency.
Call these LLMs stupid all you want, but on focused tasks they can reason decently enough. And better than any past tech.
That seems like something a special-purpose model would be a lot better and faster at. Why use something that needs text as input and output? It would be slow and unreliable. If you need reaction-time-dependent decisions, like collision avoidance or evasion, you can literally hard-wire those in circuits that are faster than any other option.
Because you can charge a lot more when adding hot, hyped features like LLMs instead of doing good engineering.
I don’t know for sure but I imagine getting blackballed by the defence department is not fun.
I guess the future is now then: https://www.theguardian.com/world/2023/dec/01/the-gospel-how...
Excerpt:
Nothing in this says they used an LLM.
I guess he must have hallucinated that it was about LLMs
But it does say that some sort of text-processing AI system is being used right now to decide who to kill, so it is quite hard to argue that LLMs specifically could never be used for it.
It is rather implausible to say that an LLM will never be used for this application; in the current hype environment, the only reason an LLM isn't already deployed to production is that someone actually tried to use it first.
I'm 100% on the side of Israel having the right to defend itself, but as I understand it, they are already using "AI" to pick targets, and they adjust the threshold each day to meet quotas. I have no doubt that some day they'll run somebody's messages through ChatGPT or similar and get the order: kill/do not kill.
'Quotas each day to find targets to kill'.
That's a brilliant and sustainable strategy. /s
It's absurd, but LLMs for military targeting are absolutely something that some companies are trying to sell, regardless of the many known failure modes.
https://www.bloomberg.com/news/newsletters/2023-07-05/the-us...
https://youtu.be/XEM5qz__HOU
I also work in healthtech, and nearly every vendor we've evaluated in the last 12 months has tacked ChatGPT onto its feature set as an "AI" improvement. Some of the newer startup vendors are entirely prompt engineering with a fancy UI. We've passed on most of these, but not all. And these companies have clients and real-world case studies. It's not just "not very far away"; it is actively here.
I don't know about European healthcare, but in the US there is this huge mess of unstructured text in EMRs and a lot of hope that LLMs can help 1) make it easier for doctors to enter data, and 2) make some sense out of the giant blobs of noisy text.
People are trying to sell this right now. Maybe it won't work and will just create more problems, errors, and work for medical professionals, but when did that ever stop hospital administrators from buying some shiny new technology without asking anyone?
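To make concrete what's being pitched: below is a minimal sketch of the "structure the note blob" feature these vendors sell, written against the OpenAI Python client. The field list, prompt, and model name are placeholders of my own, not anything from a real product, and nothing about it addresses the hallucination problem this thread is about.

    # Toy sketch: ask an LLM to pull structured fields out of a free-text clinical note.
    # Field list, prompt, and model name are illustrative placeholders only.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_fields(note: str) -> dict:
        """Ask the model for a JSON object with a fixed set of keys."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            response_format={"type": "json_object"},  # nudge the model toward valid JSON
            messages=[
                {"role": "system",
                 "content": "Extract medications, allergies, and diagnoses from the "
                            "clinical note. Reply with JSON only, using exactly those "
                            "three keys. Use null for anything not stated in the note."},
                {"role": "user", "content": note},
            ],
        )
        # The reply still has to be treated as untrusted: it can be wrong or invented.
        return json.loads(resp.choices[0].message.content)

    print(extract_fields("Pt reports PCN allergy. Started metformin 500mg BID for T2DM."))

The hard part is everything this sketch leaves out: verifying that whatever comes back was actually in the note.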
You'd be surprised.
Not to mention it's also used for military and intelligence "analysis".
When has immaturity and unfitness for purpose ever stopped companies from selling crap?
I use ChatGPT in particular to narrow down options when I do research, and it is very good at this. It wouldn't be far-fetched to feed it a map and traffic patterns and ask it to do some analysis of "what is the likeliest place to hit?" And then take it from there.
I hear these complaints and can't see how this is worse than the pre-AI situation. How is an AI "hallucination" different from human-generated works that are just plain wrong, or otherwise misleading?
Humans make mistakes all the time. Teachers certainly did back when I was in school. There's no fundamental qualitative difference here. And I don't even see any evidence that there's any difference in degree, either.
I've heard the same comparisons made with self-driving cars (i.e. that humans are fallible, and maybe even more error-prone).
But this misses the point. People trust the fallibility they know. That is, we largely understand human failure modes (errors in judgement, lapses in attention, etc) and feel like we are in control of them (and we are).
OTOH, when machines make mistakes, they are experienced as unpredictable and outside of our control. Additionally, our expectation of machines is that they are deterministic and not subject to mistakes. While we know bugs can exist, it's not the expectation. And, with the current generation of AI in particular, we are dealing with models that are generally probabilistic, which means there's not even the expectation that they are errorless.
And, I don't believe it's reasonable to expect people to give up control to AI of this quality, particularly in matters of safety or life and death; really anything that matters.
TL;DR: Most people don't want to gamble their lives on a statistic when the alternative is maintaining control.
What you say is true, and I agree, but that is the emotional human side of thinking. Purely logically, it would make sense to compare the two systems of control and use the one with fewer human casualties. Not saying it's gonna happen, just thinking that reason and logic should take precedence, no matter what side you are on.
It definitely seems like a matter of simple math. But, I'm not 100% sure it's always the most logical choice to defer to statistics.
By definition, stats operate at the macro level. So, for instance, I may be a safer driver than the AI average. Should I give up control? I suppose it's also a matter of degree, and there's the network effect to consider (i.e. even if I individually beat the average, I'm still on the road with others who don't).
So it gets a little more complicated and I'm also not sure the aversion to relinquishing control is strictly "emotional" (as in the irrational sense). There's something about the potential finality of a failure that goes along with autonomy and agency over one's own life. The idea that a machine could make a mistake that ends your life, and you never had a chance or say in that outcome is off-putting in ways that feel more rooted in rationality and survival than in emotion.
Expanding on this, human failures and machine failures are qualitatively different in ways that make our systems generally less resilient against the machine variety, even when dealing with a theoretically near-perfect implementation. Consider a bug in an otherwise perfect self-driving car routine that causes crashes under a highly specific scenario -- roads are essentially static structures, so you've effectively concentrated 100% of crashes into (for example) 1% of corridors. Practically speaking, those corridors would be forced into a state of perpetual closure.
This is all to say that randomly distributed failures are more tolerable than a relatively smaller number of concentrated failures. Human errors are rather nice by comparison because they're inconsistent in locality while still being otherwise predictable in macroscopic terms (e.g.: on any given day, there will always be far more rear-endings than head-on collisions). When it comes to machine networks, all it takes is one firmware update for both the type & locality of their failure modes to go into a wildly different direction.
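The concentration point is easy to see with a toy simulation (numbers entirely made up): the same 1,000 crashes spread roughly uniformly over 100 corridors versus piled into a single corridor by one shared defect.

    # Toy illustration with made-up numbers: the same total number of failures is far
    # more disruptive when a shared bug concentrates them than when human error
    # scatters them roughly uniformly.
    import random
    from collections import Counter

    random.seed(0)
    corridors = range(100)
    total_crashes = 1_000

    # Human-style failures: roughly uniform across corridors.
    human = Counter(random.choice(corridors) for _ in range(total_crashes))

    # Machine-style failures: one shared defect pins them all to a single corridor.
    machine = Counter({7: total_crashes})

    print("worst corridor, human-style: ", max(human.values()))    # on the order of 20
    print("worst corridor, machine-style:", max(machine.values()))  # 1000

Same totals, completely different operational picture: the first is background noise everywhere, the second closes one corridor permanently.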
Humans know when they've made a mistake, so there are ways to deal with that.
Computers are final. You don't want things to be final when your life's on the line.
You'll never make senior management with that attitude. At worst, "mistakes were made" and look a bit sad.
Because people know they make mistakes, aren't always 100% certain, and are capable of referring you to other people. Also because the mistakes LLMs make are entirely unlike the mistakes humans make: humans don't generate fake URLs citing entirely fake references, and humans don't apologize when corrected and then re-assert the same mistake. And because we know people aren't perfect, we don't expect them to be infallible; humans can break out of their script and work around the process that's been encoded in their computers.
But most people do expect computers to be infallible, and the marketing hype for LLMs is that they are going to replace all human intellectual labor. Huge numbers of people actually believe that. And if you could convince an LLM it was wrong (you can’t, not reliably), it has no way around the system it’s baked into.
All of these things are really really dangerous, and just blithely dismissing it as “humans make mistakes, too, lol” is really naive. Humans can decide not to drop a bomb or shoot a gun if they see that their target isn’t what they expect. AIs never will.
Probably the main difference is that humans fail at a smaller scale, with smaller effects, and build a reputation, whereas ChatGPT hallucinations can potentially affect everyone.
Yikes, mate, you've really misunderstood what's happening.
When a human fucks up, a human has fucked up. You can appeal to them, or to their boss, or to their CEO.
The way these crappy "AI" systems are being deployed, there is no one to appeal to and no process for unfucking things.
Yes, this is not exactly caused by AI; it's caused by sociopaths operating businesses and governments, but the extent to which this enables them and their terrible disdain for the world is horrifying.
This is already happening, of course. Cathy O'Neil wrote "Weapons of Math Destruction" in 2016, about how unreviewable software systems were screwing people, from denying poor people loans to harsher sentencing for minority groups, but Sam Altman and the new generation of AI grifters now want this to apply to everything.
"Sorry, computer says no."
Humans can be wrong, but they can't be wrong at as massive a scale, and they often have an override button where you can get them to look at something again.
When you have an AI deployed system and full automation you've got more opportunities for "I dunno, the AI says that you are unqualified for this job and there is no way around that."
We already see this with less novel forms of automation. There are great benefits here, but also the number of times people are just stymied completely by "computer says no" has exploded. Expect that to increase further.
Society has spent literal decades being convinced to put their trust in everything computers do. We're now at the point that, in general, that trust is there and isn't misplaced.
However, now that computers can plausibly do certain tasks that they couldn't before via LLMs, society has to learn that this is an area of computing that can't be trusted. That might be easy for more advanced users who already don't trust what corporations are doing with technology[0], but for most people this is going to be a tall order.
[0] https://i.imgur.com/6wbgy2L.jpeg
With humans there is a chance you get things right.
The Diamond Age.
That's what I find most offensive about the use of LLMs in education: it can readily produce something in the shape of a logical argument, without actually being correct.
I'm worried that a generation might learn that that's good enough.
A generation of consultants is already doing that: look at the ruckus around PwC etc. in Australia. Hell, look at the folks supposedly doing diligence on Enron. This is not new. People lie, fib, and prevaricate. The fact that the machines trained on our actions do the same thing should not come as a shock. If anything, it strikes me as the uncanny valley of truthiness.
I've had coworkers suggest a technical solution that was straight-up fabricated by an LLM and made no sense. More competent people realise this limitation of the models and can use them wisely. Unfortunately, I expect to see the former spread.
I spent a few hours last week crafting a piece of code for my coworker, and when I asked him to test it in the real environment, it turned out that the API he wanted to connect my code to was just a hallucination by ChatGPT.
OpenAI is. Their TOS says don't use it for that kind of shit.
https://openai.com/policies/usage-policies/
That's the license for the public service. Nothing prevents them from selling it as a separate package deal to an army.
I was in a counter-intelligence unit briefly, and a mathematician spoke to us about the work they were doing to pick targets, the idea being: if you can only take out one person, who would be the most disruptive? You have all these interconnected but mostly isolated terrorist cells that don't know about each other except through a few people who may not be high up in the command structure but who are critical for the continuing cohesive existence of that group of cells, its logistics, etc.
So the military was already using math to pick targets; this is just the next logical step, albeit a scary-as-hell one.
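For anyone curious what the most basic version of that math looks like: a toy sketch using networkx, where betweenness centrality (and the set of articulation points) flags the node holding two otherwise-isolated clusters together. The graph and names are invented for illustration; real systems are obviously far more elaborate.

    # Toy sketch of the "who is the critical connector" idea: betweenness centrality
    # scores how often a node lies on shortest paths between other nodes.
    # The graph below is entirely made up.
    import networkx as nx

    G = nx.Graph()
    # Two tightly knit clusters that don't know about each other...
    G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),
                      ("b1", "b2"), ("b2", "b3"), ("b1", "b3")])
    # ...connected only through a single go-between.
    G.add_edges_from([("a2", "courier"), ("courier", "b1")])

    scores = nx.betweenness_centrality(G)
    print(max(scores, key=scores.get))        # "courier": the node holding it all together
    print(sorted(nx.articulation_points(G)))  # nodes whose removal disconnects the graph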
In your scenario there were still individuals accountable for the decisions and their outcomes.
How are you supposed to say why a machine learning model produces different outputs from the same input? It's just a black box.
Surgeons don't need a text-based LLM to make decisions. They have a job to do and a dozen years of training in how to do it: 8 years of schooling and 4-6 years of internship and residency. The tech fantasy that everyone is using these for everything is a bubble thought. I agree with another comment: this is doomerism.
Surgeons are using robots that are far beyond fly by wire though, to the point that you could argue they're instructing the robots rather than controlling them.
If any regulator acts it will be the EU. The action, if it comes, will of course be very late, possibly years from now, when the horse has long left the stable.
My only hope for the EU government is that they put an AI in charge and it accidentally becomes sentient...
Right now insurance companies make those decisions based on how your life affects the profit/loss statement at the end of the quarter. (In the USA).
So it can’t really be worse if there’s just a RNG in a box. It may be better.
While this is clearly a problem and a challenge to address, the thing that never gets mentioned with this line of criticism is the obvious: a large number of real-life teachers make mistakes ALL the time. They harbor wrong or outdated opinions, or they're just flat-out wrong about things.
Next it's going to teach them the Earth is flat and there are aliens behind the moon.
I get a good chuckle every morning when the "C3.ai" ad rolls on NPR.
"Hallucination-free," indeed.
Would love to know what actual, contractual guarantees they place around that.
It seems the US and China are trying to reach an agreement on the use of AI to pick military targets these days.
People aren't dumb. They'll catch on pretty quick that this thing is BS'ing them.
https://pessimistsarchive.org/
It is being used to pick military targets, with very little oversight.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
Israel is already doing exactly that: using AI to identify potential targets based on their networks of connections, giving these potential targets a cursory human screening, then OK-ing the bombing of their entire families, since they have put such faith in this identification process (and/or just don't care) that these are considered high-value targets where "collateral damage" is accepted.
When AI is in charge of controlling weapons, you get this: https://www.accessnow.org/publication/artificial-genocidal-i...
Why would the military use ChatGPT or depend in any way on OpenAI's policy? Wouldn't they just roll their own?