OpenAI deletes ban on using ChatGPT for "military and warfare"

wand3r
70 replies
1d20h

OpenAI is speedrunning the Google playbook of abandoning founding principles. Impressive that they could get this big this fast and go full mask-off so abruptly. Google made it about 17 years before it removed "Don't be Evil".

I really do think this will be the company (or at least the technology) to unseat Google. Ironic that Google unseated Microsoft, and now it looks like Microsoft will take the throne back.

rightbyte
16 replies
1d20h

Being a warmonger is the new cool. No appeasement and whatever. Sama is just being down with the kids.

lebean
9 replies
1d19h

Finding applications for defense = warmongering?

enraged_camel
6 replies
1d18h

"Defense" doesn't mean much. Department of Defense regularly conducts offensive wars.

manquer
5 replies
1d18h

For most of its history, from 1789 to 1947, it was the War Department.

It was just rebranded, not changed, some 75 years ago.

philwelch
4 replies
1d10h

Common misconception. The Department of Defense governs all branches of the military. The Department of War only governed the Army while the Department of the Navy was a separate Cabinet-level department.

manquer
3 replies
1d5h

It also included the Air Force; it was not just the Army. The Navy was separate, but everything else was under the War Department.

philwelch
2 replies
23h54m

The Air Force was not a separate branch before the Department of Defense was formed; it was part of the Army. And the Marine Corps was (and is) governed by the Department of the Navy (which is currently a sub-department of Defense).

manquer
1 replies
22h48m

Everything was part of the Army back then, that is the point.

The War Department was the "Army", yes, but the Army then is not what we think of as the army today: it literally included the entire air force, which wasn't some small auxiliary unit; both world wars used tens of thousands of fighter planes.

The War Department was a department for war, not just a name for what we know as the Army today.

philwelch
0 replies
16h29m

Everything was part of the army back then

…except the Navy and Marine Corps, which was the original point. A large part of the Second World War was fought by the naval services, despite those services being outside the Department of War. During the war, overall coordination of the military was not carried out by the War Department but rather via the Joint Chiefs of Staff and through ad hoc high-level coordination between Army and Naval command. In particular, command in the Pacific theater was split between General Douglas MacArthur and Admiral Chester Nimitz (with the ground operations under Nimitz being primarily carried out by Marines). The difficulties caused by this approach were the primary motivation for the reorganization of the American military into a unified Department of Defense.

I am well aware that the Army Air Forces were a very large part of the Army during the Second World War. However, the Navy and Marines also had tens of thousands of airplanes, none of which were under the control of the War Department or the Army Air Forces. It’s a little misleading to claim the War Department controlled the “entire air force” when they only controlled the Army Air Forces and not naval aviation.

The War Department controlled the Army and the Army Air Forces, which were part of the Army at the time, so it’s just as correct and a lot quicker to say that the War Department was only in charge of the Army. It wasn’t in charge of the Navy and Marines, and it wasn’t even in charge of fighting wars because we needed the Navy and Marines to help fight wars and they were under a different department. Which, again, was the reason for forming the Department of Defense in the first place.

When the Air Force became an independent service during the postwar military reorganizations, they actually tried (and failed) to take over naval aviation; to this day the United States Navy has a larger air force than most countries.

rightbyte
0 replies
1d18h

"Defense"? Really? But, no, not really, no. I tried to be funny and relate to the zeitgeist.

Another example is the Occulus guy, that pivoted into levering something VR I guess to make stuff to kill other people since Zuckerberg crushed him.

gopher_space
0 replies
1d18h

Depends on your religion and point of view. For Christians, absolutely.

EasyMark
4 replies
1d12h

What do you plan on doing when China and/or Russia and/or Iran come knocking on your door?

teloli
2 replies
1d2h

So far, it’s always been the other way round though.

deelly
1 replies
4h13m

You mean that it's the USA that indirectly attacks China, Russia, and Iran?

dragonelite
0 replies
3h44m

Pretty much, though not the US directly: it's the US hiring mercs in the form of Ukrainian proxies recently, and in the recent past it was ISIS and co. in the Middle East, to attack Iran and China's Belt and Road plans.

rightbyte
0 replies
1d8h

Not being in the house?

If the elites can't play nicely with each other it is not my problem. I don't trust them.

Log_out_
0 replies
1d17h

We had a whole mindset that basically rolled out the red carpet for dictators, disguised as appeasement with apps. Including praise for turning oneself into an anti-democratic psyops zombie. A correction of this nonsense was overdue. And in the moment of weakness, it was also revealed how alone the West really was with its values. The idealists are out there in the trenches, getting shot in the street, because peaceful cowards are willing to sacrifice everything and everyone for indefensible nimbyism.

huijzer
16 replies
1d20h

I'm probably gonna get downvoted for this, but I find allowing the technology to be used for all kinds of different things more "open" than arbitrary restrictions. Yes, even "military and warfare" are pretty arbitrary terms because defensive systems or certain questionnaire research, for instance, could be considered "military and warfare".

huijzer
11 replies
1d19h

Take our PhD project, for example: we're doing machine learning on special forces selection in the Netherlands (for details, see [1]). The aim is basically just to reduce costs for the military and disappointment for the recruits. Furthermore, we hope to learn more about how very capable individuals can be detected early. This is a topic that is useful for many more situations than just the military.

[1]: https://osf.io/preprints/psyarxiv/s6j3r
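
For a rough idea of what this kind of pipeline can look like, here is a minimal sketch with made-up feature names and synthetic data; the actual methodology is in the preprint above:

```python
# Minimal sketch of a selection-outcome classifier. Feature names and
# data are hypothetical/synthetic; see the preprint for the real study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # recruits

# Hypothetical early-stage measurements, taken before the expensive
# selection program begins.
X = np.column_stack([
    rng.normal(50, 10, n),   # e.g. endurance test score
    rng.normal(100, 15, n),  # e.g. cognitive test score
    rng.normal(5, 2, n),     # e.g. grit questionnaire score
])
# Synthetic pass/fail labels, loosely tied to the features.
y = (X @ np.array([0.03, 0.02, 0.3]) + rng.normal(0, 1, n)) > 5.5

model = GradientBoostingClassifier()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")  # how predictive the early data is
```

The payoff is in the AUC: if early, cheap measurements predict the outcome well, fewer recruits have to be put through the full, expensive program only to wash out.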

chucky
3 replies
1d18h

This is a topic that is useful for many more situations than just the military.

Fair enough, but that's not who funded your research (according to your own disclosure, the military paid for it).

If this topic is so useful for "more situations", why didn't those "many more situations" fund it? Will you be conducting research into how this topic will have non-military usages, or is that just something you tell yourself to sleep better at night while the military pays for more research that "is useful for many more situations than just the military"?

calamari4065
1 replies
1d18h

Most modern technology is derived from military research.

Do you feel bad every time you use a microwave? It was originally a military radar. Without military funding, it would not exist in its modern form. Nor would basically all radio communication technologies, satellites, spaceflight. You get the idea.

ponderings
0 replies
1d12h

Yeah, I feel terrible about it. We as a species apparently can't punch a dent in a pack of butter unless it is for greed or murder. We chose to be like that. Seriously, wtf???

I would prefer it if we developed competitive qualities. Trying to stop doesn't make sense; we have to outgrow it.

ncocacola
0 replies
1d18h

Isn't that how the Internet was invented?

vkou
1 replies
1d17h

How difficult would it be to repurpose the kinds of models you are working on to, instead, say, perform early detection and selection of problem people for internment/liquidation?

huijzer
0 replies
19h7m

I have talked a lot with military people and actually have much confidence in their morals. Yes, they will make many mistakes, but in general there are lots of checks and balances, and they're not evil people. Also note that they are run by the government, which makes them very risk averse.

In general, all technology can be reused. Maybe someday killer robots walk around with a Rust codebase. Should Rust not have been developed?

samstave
1 replies
1d18h

>>...we hope to learn more about how very capable individuals can be detected early

The new "Gifted And Talented Education" GATE for the modern era.**

We were evaluated for GATE in like 4th grade? Using such AI human behavioral heuristics against the minor population in 3.2.1.... Contract.

fuzzfactor
0 replies
1d1h

>>...we hope to learn more about how very capable individuals can be detected early

So they can be sidelined before they have a chance to disrupt anything.

Especially those individuals having any unique abilities in excess of what AI could substitute for.

tmccrary55
0 replies
1d16h

GattAIca here we come, baby!

throwup238
0 replies
1d19h

Your research will be assimilated by the Killbot Evolution Research Program. Your country thanks you.

For your service, here is a limited edition digital flag pin emoji:

heresie-dabord
0 replies
1d19h

we hope to learn more about how very capable individuals can be detected early

Following which, you will need to learn how to defend against all the internal threats to any such system. ^_^

rndmwlk
1 replies
1d18h

It's only arbitrary if you make it arbitrary. A strict ban on "military and warfare" may prevent some relatively innocuous projects from reaching fruition, but I find that to be an insignificantly small and well-worthwhile cost to pay considering the flip side.

wavemode
0 replies
1d18h

I understand the idealism, but the realistic alternative is that the US government abstains while other governments use the technology freely. Not sure how that's a better scenario, in a practical sense.

It's probably why OpenAI decided to remove the restriction.

wand3r
0 replies
1d18h

It's kind of wild that you can't get it to do anything even PG-13 for "safety", but it's going to be used in military technology. I have no value judgement on the decision, but it seems incongruous with their mission.

Also, for a company that proved it has no governance, I'm surprised they didn't quietly do it anyway and wait until it was discovered.

superfrank
0 replies
1d19h

I'm going to keep this vague to preserve some anonymity, but where I work our product is used by a group that falls under at least one branch of at least one military, and we have a team working on a new feature that uses ChatGPT under the hood, but the feature is completely innocuous.

The best comparison I can give would be if we were talking about health care and our product was used to schedule nurses shifts or book operating rooms.

glitchc
16 replies
1d19h

I actually think OpenAI is on an accelerated path because it knows its days are numbered.

If the tech were truly superior, we would see Apple, Google, and Meta rushing to license it. Yet they're not; instead, they're all building their own versions. There's no secret sauce left in building an LLM. It's all public knowledge. And while ChatGPT has an edge right now, it's not a substantial one.

londons_explore
3 replies
1d10h

ChatGPT has been ahead the whole time, and according to the lmsys leaderboard it's not by a small margin. Nobody else has yet beaten what OpenAI released in March last year.

Google tried to beat it with Gemini Ultra, but besides being unreleased, the stats in the paper don't inspire much confidence that it will beat gpt-4-0314.

scarface_74
0 replies
1d9h

It doesn’t matter how good the technology is. It matters how well they can productize it.

dr_kiszonka
0 replies
15h56m

Gemini Ultra is nowhere close to GPT-4 for anything I tried.

berniedurfee
0 replies
4h15m

What if you took away 3/4 of the training data, though? If the NYT et al. win their case, training data won't be free anymore.

Content owners will decide on the price and even who to license to.

Content owners will be asking exorbitant amounts for licensing fees and will likely strike exclusive deals with LLM owners.

Maybe Microsoft actually bought GitHub for the content?

Anon1096
3 replies
1d13h

You seem to be very, very conveniently ignoring the fact that Microsoft spent $10 billion to basically gain exclusive use of OpenAI's tech... exactly what you're saying Apple, Google, and Meta should be doing.

glitchc
2 replies
1d10h

Microsoft funded the R&D; their return on investment was not guaranteed. It's apples to oranges.

spott
1 replies
1d

Microsoft invested $10B after GPT-4 came out… their initial investment of $1B might have been for R&D, but their $10B investment was for tech that they knew about.

xnx
0 replies
13h5m

Isn't Microsoft's $10B investment likely more of a "pay as you go" rented integration of GPT-4 into Bing and a few other places rather than a $10B wire transfer into OpenAI's bank account?

boringuser2
2 replies
1d15h

Nah, none of the other LLMs are particularly useful at much of anything, but GPT-4 is profoundly useful.

potsandpans
1 replies
1d14h

I've seen a lot of things come from GPT-4, none of them "profound."

Could you share something that you see and think is "profound"?

boringuser2
0 replies
1d12h

I said profoundly "useful".

GPT-4 helps me write reams of code every day at my job.

bombcar
1 replies
1d15h

Apple is apparently throwing largish sums of money around to license all its training data, so it may be that OpenAI ends up exploding because of the "move fast and break things" attitude so common in SV.

ponderings
0 replies
1d12h

With enough training anyone can be a belly dancer.

berniedurfee
1 replies
4h21m

I also think that profit for LLM-based businesses without massive data troves is now solidly capped.

Content owners are moving quickly to monetize their hoards of data. The days of free training data are over. If you don't own it already, you probably can't afford it.

I think the interesting side effect is that LLMs will end up as bifurcated as the internet. Each only being trained on some subset of content based on which subset the LLM builder chooses or is able to license for training.

LLM agents will all be hamstrung and biased in various ways based on fragmented training sets.

There will be no singularity.

There will be many LLM agents that learned to think based on the information their creators could afford or chose to provide it.

These agents will have biased, inaccurate and incomplete world views, yet will be very confident they know everything. How very human!

pegasus
0 replies
2h40m

If inference costs keep going down, I expect pirate LLMs trained on pirated and much more complete text libraries will proliferate.

yieldcrv
0 replies
1d18h

synthetic datasets ftw! they might have had a library when nobody was looking, but everything created after GPT-3 is analyzed by everyone, and everything before GPT is extracted by getting GPT and other LLMs to talk

oh noes not the terms of service!

dontupvoteme
6 replies
1d19h

I cannot figure out why they documented such a thing; it's too amateurish. Just follow standard operating procedure: let the military-industrial-complex friends and allies of America use the service and keep it under the rug.

I normally have a very dim view of OpenAI/Altman, but in this case I wonder if it is something akin to a warrant canary, except for 5th-generation warfare?

Altman does seem to have a bit of Chomsky in him, so it's not impossible.

ziddoap
2 replies
1d19h

SOP matches 126 meanings and MIC matches 123 on acronymfinder.

For the ignorant (me), what do you mean? I'm guessing SOP is "standard operating procedure" given the military context of this thread, but MIC? No clue.

It's really helpful to just spell out the words, at least the first time.

hipadev23
0 replies
1d19h

military-industrial complex

dontupvoteme
0 replies
1d18h

Sorry: standard operating procedure and military-industrial complex.

The topic sometimes puts one into acronym mode :)

(Re: acronymfinder: funny enough, though 90% of AI is shit, this bit falls into the 10% where it's pretty good at explaining acronyms. I use it often enough with MBA types; the depths of its hidden context are the untapped goldmines.)

samstave
1 replies
1d18h

warrant canary

I think you figured out at least one important factor in this - I forgot about warrant canaries until your comment... So thanks for resurrecting that.

What we should then ask for is an "OpenAI Public Sector Contract Details Dashboard", meaning they should be required to show all the "open" AI being built on their systems if they want us to have faith in safe, responsible, humane alignment.

--

@dylan604:

Timelines are much more compressed now

The smart hockey stick :-)

But yeah, and the worse part is not just corporate competitors, but bad actors, rogue states, triads, all the mafias/scammers/phishers/ransomers/coin snatchers...

They all benefit at an unprecedented, equal rate at this point, given the broad spectrum, capability, and cost-effectiveness of the effectively unregulated, not-yet-aligned/guard-railed AIs available now, in development, and in the near term.

It's got to be an amazing spot if you're a top-notch cybercrime person on your A-game right now. That, and white-hat AI pentesting and next-level security seem to be behind schedule.

---

Also, I posted this regarding national security status for cloud provider's infra: [0]

In the increasingly interconnected global economy, the reliance on Cloud Services raises questions about the national security implications of data centers. As these critical economic infrastructure sites, often strategically located underground, underwater, or in remote-cold locales, play a pivotal role, considerations arise regarding the role of military forces in safeguarding their security.

While physical security measures and location obscurity provide some protection, the integration of AI into various aspects of daily life and the pervasive influence of cloud-based technologies on devices, as evident in CES GPT-enabled products, further accentuates the importance of these infrastructure sites.

Notably, instances such as the seizure of a college thesis mapping communication lines in the U.S. underscore the sensitivity of disclosing key communications infrastructure.

Companies like AWS, running data centers for the Department of Defense (DoD) and Intelligence Community (IC), demonstrate close collaboration between private entities and defense agencies. The question remains: are major cloud service providers actively involved in a national security strategy to protect the private internet infrastructure that underpins the global economy, or does the responsibility solely rest with individual companies?

[0] https://news.ycombinator.com/item?id=38975443

dontupvoteme
0 replies
1d17h

You ask good questions, mate. Best of luck; I suspect these topics will get less and less traction, more and more rapidly.

(TBH I also forgot about warrant canaries for a good while. I just threw the question out to myself: "What if Altman actually doesn't WANT to be a bad guy? What might he have done to signal his Borgification?")

stcredzero
0 replies
1d17h

just do the standard operating procedure and let the military industrial complex friends and allies of america use the service and keep it under the rug

An AI with the capability to be autonomous in any environment, able to plan and execute plans well enough to defeat human opponents, is exactly what the AI doomer POV is rightly afraid of.

charcircuit
3 replies
1d20h

Google made it about 17 years before it removed "Don't be Evil".

It wasn't removed: https://abc.xyz/investor/google-code-of-conduct/

krunck
1 replies
1d19h

The last line of the CoC document:

"And remember... don’t be evil, and if you see something that you think isn’t right – speak up!"

This is Google telling ME not to be evil, not Google telling itself not to be evil. Big difference. It sounds more like "if you see something, say something" snitch culture. That's evil.

charcircuit
0 replies
1d19h

This document is written where the first person is a Google employee.

ignoramous
0 replies
1d19h

"Evil," says Google CEO Eric Schmidt, "is what Sergey says is evil." https://archive.is/6XL7e

stcredzero
0 replies
1d18h

OpenAI is speedrunning the Google playbook of abandoning founding principles. Impressive that they could get this big this fast and go full mask-off so abruptly. Google made it about 17 years before it removed "Don't be Evil".

It's inevitable that dangerous real-time AIs capable of formulating plans are going to be developed for military purposes.

The "OODA Loop" is fundamental to combat. (Observe, Orient, Decide, Act) Having a tighter and more potent (in the sense of fast and accurate processing) OODA loop is a fundamental advantage. The economics and game theory of military combat is going to result in units which have potent OODA loops which can overwhelm a human being's OODA loop. Once that happens, competition between different sides will result in an arms race going far above that level of capability.

Once the above happens, it's disturbingly likely that instrumental goals will arise in such entities which are dangerous not only to human beings on the wrong side, but to human beings on any side.

https://en.wikipedia.org/wiki/OODA_loop
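
As a toy illustration of the loop-speed argument, here is a minimal sketch (the cycle times are invented for the example):

```python
# Toy model of competing OODA loops (cycle times invented): whoever
# completes more observe-orient-decide-act cycles per unit time acts
# on fresher information and gets more actions in.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    cycle_time_s: float  # seconds per full OODA cycle

    def cycles(self, horizon_s: float) -> int:
        """Completed decision cycles within the time horizon."""
        return int(horizon_s / self.cycle_time_s)

human = Agent("human operator", cycle_time_s=2.0)
machine = Agent("automated system", cycle_time_s=0.05)

for agent in (human, machine):
    print(f"{agent.name}: {agent.cycles(60.0)} decision cycles per minute")
# The faster loop gets 40x as many chances to react to the opponent's
# last move -- the "tighter OODA loop" advantage.
```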

siva7
0 replies
1d20h

Somehow sad, but the more I think about it, the more I'm certain that Google lost the race, having watched the events closely from the very beginning.

octacat
0 replies
1d20h

OpenAI’s mission is to ensure that artificial general intelligence (AGI) is developed safely and responsibly.

Oof. It is not AGI, so it does not count (sarcasm).

jart
0 replies
1d17h

That's because Google never went mask-off. When Google started, only nerds cared about the Internet. As the Internet became more successful, non-nerds started demanding that the company do for them all the evil things non-nerd society has always done, and Google got blamed for it every single time. Given that the Internet had been thoroughly colonized by those people before OpenAI came along, is it any surprise that they'd demand OpenAI enact the exact same tyranny from day one?

inopinatus
0 replies
1d19h

I’m no fan of either firm but the hyperbole is unwarranted. The substance here is plainly a normalisation of contract language to focus on the activity rather than the actor.

inglor_cz
0 replies
1d5h

Or they just adapt their policies to the real world. It is easier to be a theoretical pacifist if basically nothing happens. But the last 2 years have been a shitshow, and the Western world has been forcefully reminded of the fact that military might actually serves some positive purpose, too.

dylan604
0 replies
1d19h

Any artificial limitations OpenAI places upon itself will absolutely not be adopted by competitors. Google did not see that back in their day. Timelines are much more compressed now.

2OEH8eoCRo0
0 replies
1d17h

Right. Because an evil company would abide by its terms/principles. Once they removed that pesky "don't be evil" barrier, it was open season on being super evil.

They removed it because it's stupid and meaningless.

bhouston
25 replies
1d21h

How long until OpenAI's ChatGPT is astroturfing all debates on social media? Maybe in a year or two most posts to Reddit will just be ChatGPT talking to itself on hot-button issues (Israel-Palestine, Republican-Democrat, etc.). Basically stuff like this but on steroids, because ChatGPT makes it way cheaper to automate thousands of accounts:

* https://www.cnn.com/2019/05/06/tech/facebook-groups-russia-f...

* https://www.voaafrica.com/a/israeli-firm-meddled-in-african-...

I sort of suspect AI-driven accounts are already present on social media, but I don't have proof.

TriangleEdge
7 replies
1d21h

but I don't have proof.

Turing test achieved. I don't know if the internet will lose its appeal because of this. Could be that in the future, to use an online service, you'll need to upload a human UUID.

JohnFen
3 replies
1d21h

Could be that in the future, to use an online service, you'll need to upload a human UUID.

Nothing would make the internet lose appeal to me faster than having to do something like that.

klyrs
2 replies
1d21h

Me too. And I can't help but think, this would be a net benefit to humanity.

JohnFen
1 replies
1d21h

Maybe? But it would mean that I couldn't use the internet anymore. Which might also be a net benefit to humanity.

klyrs
0 replies
1d20h

Yep, I'm saying that we'd be better off if we spent less time on this, and more time making community in meatspace. If the enshittification of the internet is what gets us there, well, that's the hero we deserve.

TheCaptain4815
1 replies
1d21h

Wouldn't stop much. Human UUIDs would be sold on the black market to spammers and blackhats.

"Need $500? Rent out your UUID for marketing!"

bhouston
0 replies
1d21h

Well, at least those UUIDs could be blocked permanently, sort of like a Spamhaus setup. Although it would be very dystopian if you rented out your UUID because you are poor and then ended up being blocked from everything. Sounds like Black Mirror.

quonn
0 replies
1d21h

Could still be copy-pasted. How about a brain implant that captures and outputs thoughts with a signed hash? Not that I would like to see that future.
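
Mechanically, the signing part is easy today; binding the key to a unique, live human is the unsolved part. A minimal sketch using an assumed Ed25519 keypair (via the `cryptography` package):

```python
# Minimal sketch: signing a message so a verifier can check it came from
# the holder of a particular key. Binding that key to a unique, live
# human is the unsolved part, not the cryptography.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays with the "human"
public_key = private_key.public_key()       # registered with the service

message = b"this post was written by a person"
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

Of course, nothing here stops the key holder from copy-pasting or renting out the key; the signature only proves possession of the key, not humanity.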

FactKnower69
4 replies
1d21h

in 2013, Reddit community managers cheerfully announced that Eglin Air Force Base, home to the 7th Special Forces Group (Airborne)'s Psychological Operations team, was the "most Reddit addicted city" https://web.archive.org/web/20150113041912/http://www.reddit...

all debates on social media have already been astroturfed to hell and back by professional posters for many years, but LLMs are certainly going to function as a force multiplier

grandmczeb
1 replies
1d20h

All of the top 3 cities are places with a low official population and a large working population: Eglin's official population is 2.8k, but it has 80k workers. It's the "most Reddit addicted" city because of an obvious statistical artifact.
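
The arithmetic makes the artifact obvious; a quick sketch (the daily visitor count is hypothetical):

```python
# Why a per-official-resident ranking misleads when daytime workers
# vastly outnumber residents. The visitor count is hypothetical.
official_population = 2_800   # Eglin's official residents
daytime_population = 80_000   # residents plus on-base workers
reddit_visitors = 5_000       # hypothetical daily visitors from the area

print(f"{reddit_visitors / official_population:.2f} per official resident")    # ~1.79
print(f"{reddit_visitors / daytime_population:.3f} per person actually there")  # ~0.063
```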

aifooh7Keew6xoo
0 replies
1d7h

yeah most "working cities" host air force munitions directorates actively engaged in researching psychological warfare on social networks too

https://scholar.google.com/citations?view_op=view_citation&h...

if you want some entertaining reading go ahead and browse through Eduardo Pasiliao's research at your working city there:

https://scholar.google.com/citations?user=Caw-nkAAAAAJ&hl=en

I'll summarize it for you: computational propaganda with an emphasis on discerning and disrupting social network structure.

hengheng
0 replies
1d21h

Reminds me, I haven't seen a video of a dog greeting a returning soldier in ages. I was convinced that they were never-ending.

2OEH8eoCRo0
0 replies
1d20h

Eglin Air Force Base, home to the 7th Special Forces Group (Airborne)'s Psychological Operations team

Which unit?

https://en.wikipedia.org/wiki/Eglin_Air_Force_Base

Garrison for:

https://en.wikipedia.org/wiki/7th_Special_Forces_Group_(Unit...

Which is Army and part of:

https://en.wikipedia.org/wiki/1st_Special_Forces_Command_(Ai...

Whose psychological operations unit is based out of North Carolina. Doesn't track with Eglin.

I wonder if that's a fluke or exit node for a large number of Unclass networks that a lot of bored LCpl Schmuckatellis are using.

fakedang
3 replies
1d21h

Why waste billions of kilojoules of energy running AI systems for that, when you can get legions of dirt-cheap technical labor in the developing world who'll do it for you for far less and at massive scale, with better acerbic language?

moritzwarhier
0 replies
1d21h

I think part of the problem is that LLMs seem to be quite effective at producing messages that serve ulterior motives, catch attention, reinforce emotions, etc.

The GPT-4 release documentation has some examples of this in its addendum. ChatGPT also seems to be good at writing advertisements. Without the strong guardrails, I wouldn't bet on one or two persons instructing a GPT-4-scale model performing worse at manipulating debates than 10 or 100 humans without AI.

kjkjadksj
0 replies
1d21h

It's not about saving money. Doing it like this means you just created a new private contractor environment to invest in.

Exoristos
0 replies
1d19h

Well, ChatGPT's English is very, very good.

panick21_
1 replies
1d21h

Let's be real, ChatGPT is overqualified for that task.

bhouston
0 replies
1d20h

Yeah, for Reddit and Twitter randos who pop up to lambast you when you talk about a controversial topic, a self-hosted Mistral LLM would work great.

weweweoo
0 replies
1d21h

Not just social media, but traditional media as well. As an example, British tabloid 'The Mirror' is using AI to write some of its news articles, and apparently nobody bothers to proofread them before release.

https://www.mirror.co.uk/news/world-news/vladimir-putin-adds...

This piece of "journalism", released a couple of days ago, claims Finland is in the process of joining NATO, when it already joined nearly a year ago. This is obviously caused by use of an LLM with training data limited to the time before Finland was accepted. At least at the end of the article they mention AI was utilized and include an email address where you can complain about factual errors.

mechanical_bear
0 replies
1d21h

It absolutely is. I know of independent researchers doing some side-project work on various social media platforms utilizing ChatGPT for responses and measuring engagement.

imjonse
0 replies
1d21h

It may become another good reason to leave social mass media and allow smaller, or actual-friends-only, communities to spring up.

WhackyIdeas
0 replies
1d21h

Nice deflection at the end there, but I sniff military AI.

Klathmon
0 replies
1d21h

That idea has a name, the "Dead Internet Theory"

https://en.wikipedia.org/wiki/Dead_Internet_theory

BiteCode_dev
0 replies
1d21h

Already seen in the wild from colleagues.

wolverine876
16 replies
1d21h

How will the employees respond? People embrace powerlessness these days, but all they need to do is act. Do they want to work on, potentially, some of the most destructive technology in history?

pojzon
3 replies
1d20h

It's Los Alamos all over again.

But this time the atomic bomb will be able to decide by itself whether to incinerate the human race.

wolverine876
2 replies
1d19h

It's very different.

First, Los Alamos was a project of a democratic government, serving the people of the US and its allies. OpenAI is a business that serves itself.

During WWII, the US was in an existential war with the Nazis, who were also trying to develop nuclear weapons. If the Nazis had built the bomb first, we might be living in a very dark world now. (Obviously, it also helped defeat Japan.) On the other hand, there are threats if an enemy develops capabilities that provide large advantages.

At least part of the answer, I think, is that the US government needs to take over development. It already houses two of the world's most cutting-edge technology organizations, the US military and NASA, and there is plenty more (NIST, NIH, nuclear power, etc.); the idea that this is somehow beyond the US government is a conservative trope and obviously false. We don't allow private organizations to develop military weapons (such as missiles and nukes) or bioweapons on their own recognizance; this is no different.

dylan604
1 replies
1d18h

That same democratic government interned its own citizens in camps because their heritage was of the same nationality as the opponent. Democratic governments are not the perfect thing you seem to make them out to be. There are plenty of other examples from pretty much any democratic government.

wolverine876
0 replies
1d

Democratic governments are not the perfect thing you seem to make them out to be.

? where did I say that?

atemerev
3 replies
1d21h

Of course. People in Los Alamos were (and are) enthusiastic to work on nuclear weapons. There is no shortage of people enjoying building such things.

tibbydudeza
1 replies
1d20h

The Manhattan Project was set in motion by a letter from known pacifist Albert Einstein to President Roosevelt about his fears of the Nazis developing an atomic weapon first.

I would say it was a good thing that did not happen.

atemerev
0 replies
1d20h

I fully agree, and the same reasoning applies to the military use of AI.

azinman2
0 replies
1d21h

The world is complicated, and lots of not-nice things are happening.

weweweoo
2 replies
1d21h

As long as there are people in Russia and China who are willing to work on such tech, it's actually ethical for Americans to work on the technology.

Effectively, it's the military power of the US and its allies that prevents people like Vladimir Putin from killing potentially millions of people in their neighbouring countries. Whatever faults the US has, it's still infinitely better than Russia. I say this as a citizen of a country that shares a long border with Russia.

wolverine876
0 replies
1d19h

As long as there are people in Russia and China who are willing to work on such tech, it's actually ethical for Americans to work on the technology.

While that carries weight, 'the other person is doing it' has long been an excuse for bad behavior. The essential goals are freedom, peace, and prosperity; dealing with Russia and China is a means to an end, not the goal. If developing AI doesn't achieve the goal, we are failing.

karaterobot
0 replies
1d20h

I agree with the second paragraph. The first paragraph is a thornier issue to me. If AI is potentially destructive in an existential sense, then working to get there faster just so you can be the one to destroy the world by accident is not part of my ethical model. I put existential AI risk at a low but non-zero chance, like OpenAI should/does/did/hard to say anymore.

wnevets
0 replies
1d20h

How will the employees respond?

By doing what they've been doing, they won't hurt their stocks.

wahnfrieden
0 replies
1d21h

It is telling that the current team recently displayed extreme levels of worker solidarity and public organizing around leadership changes they desired, and that their response to this is crickets.

paxys
0 replies
1d20h

They will respond the same way employees of Microsoft, Amazon, Google, and the like did when those companies started picking up military contracts: kick up a minor fuss and then continue working. And those that do quit over it will be replaced overnight.

dist-epoch
0 replies
1d20h

Surely some employees are ideologically Three Percenters.

JohnFen
0 replies
1d19h

I think that the employees of OpenAI (generally speaking) made a pretty loud statement during that board fiasco that their interest is whatever maximizes their personal financial return.

quonn
15 replies
1d21h

One problem is that many industrial companies (almost anyone doing engines, vehicles, or airplanes) are likely to make at least some military products, too. It may be as simple, for example, as wanting an LLM assistant in a CAD tool for developing an engine that may get used in a ship, some of which may be military. And the infrastructure and software are often shared, or at least developed to be applied across the company.

I think this is where this is coming from.

It would be useful to clarify the rules and ban commercial AI products from direct automatic control of weapons, or from indirect control as part of a feedback loop in any system involving weapons.

strangattractor
12 replies
1d20h

We cannot even ban the development of nuclear weapons, much less a technology that could be developed for peaceful purposes and then switched to Terminator mode. Have you seen how drones are being used in the Russia-Ukraine war? How long did it take for drone tech to go from light shows in Dubai [1] to dropping grenades into Russian tanks [2]?

[1] https://www.youtube.com/watch?v=XJSzltMFd58

[2] https://www.youtube.com/watch?v=ZYEoiuDNY3U

int_19h
10 replies
1d19h

The real crazy stuff isn't the grenade-bombing drones; it's FPV: https://www.youtube.com/watch?v=Pe5RvttOs-E. They have pretty much replaced guided anti-tank missiles for both sides because of how much cheaper they are, and because of the ability to easily fly around and hit where the armor is thinnest, or even inside the vehicle, before exploding; they can also fly inside trenches, bunkers, and other emplacements.

A DJI FPV ($1k retail) supposedly has about a 1-in-3 chance of taking out a moving tank that is not a sitting duck (i.e. hatches closed etc.), and it does not expose the operator to danger in the process. For comparison, a single Javelin is $240k. FPV drones are cheap enough that they're routinely used even against minor targets such as unarmored small cars and even individual soldiers; even after accounting for failures before a successful hit, it still ends up costing the enemy a lot more in $$$ than was spent on all the drones.
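
Back-of-the-envelope, using the figures above (the 1-in-3 hit rate is the estimate quoted, not an official statistic):

```python
# Expected cost per tank kill, using the figures quoted above. The
# 1-in-3 hit probability is an estimate, not an official stat.
drone_cost = 1_000       # DJI FPV retail, USD
hit_probability = 1 / 3
javelin_cost = 240_000   # single Javelin, USD (price quoted above)

cost_per_kill = drone_cost / hit_probability
print(f"~${cost_per_kill:,.0f} expected drone cost per kill")
print(f"${javelin_cost:,} for one Javelin")
print(f"ratio: {javelin_cost / cost_per_kill:.0f}x")  # ~80x
```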

manquer
5 replies
1d17h

The cost of the drone is not the only cost. You still need tank-busting munitions and, crucially, the connectivity that Starlink or other satellites provide.

Starlink is cheap for its capabilities, but it would still be a startup cost of billions of dollars if you had to do it all yourself.

int_19h
4 replies
1d17h

The tank-busting munitions that they typically use are conventional RPG warheads, as seen in the video I linked. They work great provided that they hit a weakly armored spot, and they're even cheaper than the drone itself.

And no, you don't need Internet connectivity for those things. You need to be within radio range, but this can still easily mean 1-2 km away even on stock equipment, and more with better antennas etc.

manquer
3 replies
1d16h

Conventional RPG warheads still cost money, a lot of money if you want them to reliably explode.

The Javelin has a warhead capacity of 9 kg; for the same bang, so to speak (armor penetration), you would need drones that can hoist 9 kg, and those don't retail anywhere near $1,000.

That doesn't mean that cheap drones are not an effective means of weapons delivery; they are, as this war is showing. They are akin to sidearms: when penetration is not a factor they are useful, but they are hardly a replacement for heavy guided munitions like the Javelin.

MANPADS are by no means perfect; the current generation is 30 years old, and they could be made much smarter and a lot cheaper than $250,000. However, a $1,000 drone is not going to replace them yet.

philwelch
0 replies
1d10h

MANPADS means Man-Portable Air Defense System. Javelin is an anti-tank weapon, not an air defense weapon.

And the main advantage for Javelin over an FPV drone is that a Javelin is fire-and-forget; you don’t need to steer the missile yourself. It will attack the weaker top armor automatically.

int_19h
0 replies
22h49m

My source is the people in Ukraine to whom I donate and who are directly involved in drone purchases for frontline units: https://dzygaspaw.com/. They do buy <$1K FPV drones, and HEAT RPG-7 shots is one of the typical things that are strapped to them for anti-armor role.

A single-stage RPG-7 HEAT grenade weighs 2.5 kg, of which the actual explosive charge is only 700 g, and much of the rest is the powder charge that propels it in normal use, but is completely unnecessary for drone applications. And, as already noted above, you do not need the same absolute armor penetration capacity for these things, because the whole point is to attack from the side where armor is the thinnest and any exposed internal components are the easiest to disable. Judging by the videos of successful use, the most common technique is to attack the engine compartment directly from above.

Now, the DJI FPV, which does retail for $1K (and can be had for less when buying in bulk), is certainly quite capable of lifting a 1 kg warhead. But it should also be noted that these days, in most cases, Ukrainians are using money more efficiently by assembling their own custom-tuned FPV drones from components and specially manufactured HEAT charges, which knocks the price down to ~$600 for the same lift capacity with better speed and range. They are still roughly the same size as the DJI or only slightly larger, as can be readily seen in numerous videos on YouTube and Telegram showcasing their use. Here's one example of a locally manufactured FPV drone: https://neboperemogy.fund/dron-hrim/ - the lift capacity for this one is 2 kg, sufficient for heavier tandem HEAT charges, and it costs ~$750.

MeImCounting
0 replies
23h40m

Correct, FPV drones are essentially useless against aircraft. I don't think anyone is using conventional RPG warheads or FPV drones against aircraft.

"The Javelin has a warhead capacity of 9 kg; for the same bang, so to speak (armor penetration), you would need drones that can hoist 9 kg"

This is false, because a Javelin round contains propellant and guidance as well as the payload. In reality, a shaped charge capable of penetrating 200 mm of steel weighs about 1 kilogram, which is well within the capabilities of many drones under and around the $1,000 price range.

Many tanks and IFVs have been taken out on both sides by quadcopters carrying shaped charges. Plenty of penetration can be achieved by relatively lightweight shaped charges.

genman
1 replies
1d19h

Your cost for the Javelin is off by an order of magnitude, but the point still stands: drones are much more cost effective.

int_19h
0 replies
1d17h

My estimate was a bit off because it was a market price for an export unit. But according to US procurement documentation, a single Javelin costs American taxpayers $197,884 to send, so it's the same order of magnitude.

trhway
0 replies
15h42m

The real crazy stuff isn't the grenade-bombing drones; it's FPV

well, FPV is quickly becoming yesterday stuff - the EM warfare makes direct video connection infeasible, at least near the target. So, the most recent drones can guide themselves toward the target using computer vision - the operator brings them closer to the target's position, points-and-clicks at the target on screen, and the drone does the rest on its own.

BobaFloutist
0 replies
1d19h

I mean, if you think about it, an individual soldier is far more expensive/valuable than $1k, even in much cheaper countries.

ericfrazier
0 replies
1d20h

Not fast enough.

octacat
0 replies
1d20h

One problem is that many industry companies do not declare that they develop things for "the good of humanity". Otherwise it is yet more "virtue signalling".

dontupvoteme
0 replies
1d19h

I think this is where this is coming from.

I think a recent war and a recent bombing campaign may imply otherwise.

climatekid
13 replies
1d22h

The reality is that AI is going to be used to write really, really boring reports.

Not everything is a spy movie

paxys
1 replies
1d21h

Information warfare is a thing. There is no better propaganda machine than a reasonably intelligent AI.

derekp7
0 replies
1d20h

Ah, yes, to expand on this: you know how some countries employ a large number of people to engage on social media platforms? They have to put in enough good content to build up their rank, and then use that high ranking to subtly put out propaganda, which gets more visibility due to their user status. But that takes a lot of effort and manpower.

Now take an LLM: feed it questions or discussions from sites, have it jump in with what appears to be meaningful content, collect a bunch of "karma", then gradually start putting out the propaganda. It would be hard to fight.

kurthr
1 replies
1d20h

Luckily, that same LLM can summarize that really really boring report... and, if you ask it to, it'll make it exciting, as well. Maybe too exciting...?!

k8svet
0 replies
1d20h

"Please summarize these docs, highlighting the reasons to attack Mars while downplaying any mentioned downsides and costs"

Or, you know, it just hallucinating and people not checking it. But that would be as silly as lawyers citing non-existent AI-hallucinated legal cases.

bugglebeetle
1 replies
1d21h

I love that whenever one of these threads shows up, someone always appears to suggest that banality and evil are entirely separate from one another, despite the entire history of the 20th century.

jstummbillig
0 replies
1d21h

I don't think that's what parent did?

Manuel_D
1 replies
1d21h

I've also read they're using AI to declassify materials. Humans still make the high-level decisions; language models tackle the boring work of redacting text and whatnot.

ttyprintk
0 replies
1d13h

And FOIA requests; getting a jump on the eventual bloom.

wahnfrieden
0 replies
1d22h

AI has also been used to select bombing targets for several years now.

It's used for operational efficiency: to select and bomb targets faster and in greater numbers than human analysts are able to.

Not everything is boring paperwork

(Source: https://www.livemint.com/ai/israelhamas-war-how-ai-helps-isr... where AI achieved 730x improvement in bombing target selection rate and >300x greater rate of resulting bombs)

milkglass
0 replies
1d22h

Clippy has entered the chat.

madeofpalk
0 replies
1d21h

From CNET today

> Today’s Mortgage Rates for Jan. 12, 2024: Rates Cool Off for Homeseekers

https://www.cnet.com/personal-finance/mortgages/todays-rates...

And yesterday

> Mortgage Rates for Jan. 11, 2024: Major Mortgage Rates Are Mixed Over the Last Week

https://www.cnet.com/personal-finance/mortgages/todays-rates...

And the day before

> Current Mortgage Interest Rates on Jan. 10, 2024: Rates Move Upward Over the Last Week

https://www.cnet.com/personal-finance/mortgages/todays-rates...

You get the idea.

EricMausler
0 replies
1d21h

It's also going to be used to read those really boring reports

0xdeadbeefbabe
0 replies
1d21h

At best it improves the chow in the mess hall.

smeeth
12 replies
1d22h

Imagine you are OpenAI. AI is going to be used for "Military and Warfare" whether you want it to be or not. Do you:

A) opt-out of participating to absolve yourself of future sins or

B) create the systems yourself, ensuring you will have a say in the ethical rules engineered into the weapons?

If you actually give a shit about ethics and safety (as opposed to the appearance thereof) the only logical choice is B.

resolutebat
7 replies
1d22h

By the same logic, chemists in the USA should work on nerve gas, because if they don't North Korea will?

nradov
2 replies
1d21h

That is not valid logic. The USA ratified the Chemical Weapons Convention in 1997, and there are various Acts of Congress which make most work on nerve gas a federal felony. There are no such legal prohibitions on AI development.

FactKnower69
1 replies
1d21h

We are debating ethics and morality surrounding a rapidly evolving field, not regurgitating trivia about the arbitrary legal status quo in the country you live in. Think for a moment about the various events in human history perpetrated by a government which considered those actions perfectly legal, then come back with something to contribute to the discussion beyond a pathetic, thought-terminating appeal to authority.

cscurmudgeon
0 replies
1d20h

1. The initial "pathetic" thought-terminator was the comparison to nerve gas.

2. Nerve gas is not strategic. A better comparison is nukes in WW2.

3. Nerve gas has no other uses, unlike AI.

4. Nerve gas can only be used to hurt, unlike AI.

5. If AI in the military is so dangerous, should the US just sit and do nothing while China/Russia deploy it fully? What is your suggestion here, specifically?

FpUser
2 replies
1d21h

If said nerve gas were a decisive weapon capable of giving one side an absolute advantage, chemists in the USA, or any other country for that matter, would absolutely work on it.

sebastiennight
1 replies
1d20h

This is terrible logic and we (the international community) have banned several kinds of terrible weapons to avoid this kind of lose-lose escalation logic.

creato
0 replies
1d16h

The only reason the US or any other country gave up chemical weapons is that they are nearly useless anyway.

There are plenty of other weapons (such as mines) that the "international community" has "banned" but that are very useful in a war. Any country that doesn't or can't expect the US to come to its rescue ignores such bans and still manufactures them in great quantities.

daveguy
0 replies
1d21h

That's not the same logic at all.

The OP's choice was between protesting and participating with influence toward safer outcomes. Your choice is between protesting and participating without influence toward safer outcomes.

Also, the AI participant would be OpenAI either way, whereas your inadequate alternative is "participate with the US, or NK will participate." Also not the same.

So, wrong on two counts.

janice1999
1 replies
1d22h

assuring you will have a say

Suppliers don't get to pick which house the missile lands on.

tdeck
0 replies
1d20h

"Once the rockets are up, who cares where they come down? That's not my department" says Werner Von Braun.

poisonborz
0 replies
1d21h

If you really know about supplier networks, government, and the military, you know this is a losing game that is better not played.

Frummy
0 replies
1d21h

Imagine you are Microsoft. Two decades ago the state regulated you. Now you get the opportunity to have them eat from your hand. Who cares about ethics and safety?

tibbydudeza
10 replies
1d20h

Seeing combat footage of FPV suicide drones in the Ukraine war and how effective they are, it seems sort of inevitable that AI would be used as a selling point for this.

rightbyte
9 replies
1d20h

They are manually aimed, right?

tibbydudeza
6 replies
1d19h

For the moment, not yet, but I suspect there are folks working on intelligent targeting, rather than relying on human operators who need to be in close proximity, or on GPS coordinates.

rightbyte
2 replies
1d18h

It is probably "easy" to make them find targets, but I guess the part where they have to tell friend from foe and from civilians is the hard part? Otherwise it's just some kind of mine.

tibbydudeza
0 replies
1d2h

Just look for "Z" or old Soviet CCCP patch on the uniform.

Arch485
0 replies
1d18h

I would posit that making some AI-controlled weaponry on par with humans at target identification/recognition is not too hard, and definitely doable with today's tech. Making one that's _better_ than humans though, that's the tricky part.

Another issue is full autonomy: even human soldiers/pilots/etc. are not fully autonomous - they will ask command if there are civilians or friendly units in the AO, as they don't always have enough information to make that decision themselves. To achieve that with machines (not necessarily AI), you either need a fully integrated system (i.e. humans are obsolete for military use), or you need an efficient and functional human-machine interface.

So I don't expect we'll be seeing fully autonomous AI weaponry anytime soon, despite it being technologically possible. AI-_assisted_ weaponry, however, probably already exists.

philwelch
2 replies
1d9h

A kamikaze drone that guides itself? That’s called a missile, we invented those decades ago.

tibbydudeza
1 replies
1d2h

I am referring to FPV kamikaze drones, which are far easier to use than a missile and cost way less.

philwelch
0 replies
23h55m

FPV is a system for a human operator to guide it. If it doesn’t have a human operator but it does have an autonomous system to guide it to the target, that’s just a guided missile.

arkush
0 replies
1d8h

They are manually aimed, right?

January 6, 2024:

"Defence Intelligence of Ukraine shares footage of the targeting of two Russian Pantsir-S1 air defence systems. Looks like loitering munition was used. As said, today in the Belgorod region of Russia."

Source tweet:

https://twitter.com/bayraktar_1love/status/17437042635308319...

Original source is the Main Directorate of Intelligence's Telegram channel (in Ukrainian):

https://t.me/DIUkraine/3288

Notice yellow rectangles that are visible around the targets in the video.

It seems that AI aiming was used in the final parts of the approach trajectories, after loss of communications with the drones.

FirmwareBurner
0 replies
1d19h

For now

ToucanLoucan
8 replies
1d22h

I mean it makes sense right, when you really think about it: money.

tracerbulletx
7 replies
1d22h

You also don't really have a choice but to play ball with the national security establishment.

curtis3389
3 replies
1d22h

If Hobby Lobby can be Christian, any business can be Buddhist.

RcouF1uZ4gsC
2 replies
1d21h

Even Buddhist countries can have an aggressive military - see Myanmar.

wahnfrieden
0 replies
1d21h

Nissim Amon is an Israeli Zen master and meditation teacher. He served in the Israeli Defense Forces under the Nahal Brigade and fought in the Lebanon War. [...] In 2023, during the 2023 Israeli invasion of the Gaza Strip in response to the 7 October Hamas attack, he published a video teaching Israeli troops how to shoot with an emphasis on breathing and relaxing while being "cool, without compassion or mercy".

( https://en.wikipedia.org/wiki/Nissim_Amon & translation of the original message from Amon: https://sites.google.com/view/nissimamontranslation )

curtis3389
0 replies
1d20h

It doesn't matter if you're being hypocritical. It's already nonsensical that a corporation can have a "sincerely held belief", so you might as well exploit the existing corruption and say "we're a sincerely Buddhist business and can't help with killing".

ToucanLoucan
2 replies
1d22h

Tell that to Edward Snowden, Lindsay Mills, Julian Assange, Chelsea Manning... so many. Some complicated figures in their own right, all of whom took principled stands against such an apparatus; most paid dearly for doing so, and many continue to pay dearly.

It's possible. It just won't make you rich, which is, I suspect, the real problem.

jakderrida
1 replies
1d22h

all of whom took principled stands against such apparatus.

Yeah, and look what happened to them.

ToucanLoucan
0 replies
1d22h

Principled stances aren't often a path to prosperity. They do, however, afford you the luxury of not actively contributing to mass murder.

ParetoOptimal
8 replies
1d21h

So what collective responsibility, if any, do those using GPT-4 daily and helping improve it have when OpenAI-powered drones start being used and accruing civilian casualties?

klyrs
2 replies
1d21h

GPT is trained on my shitposts. Am I included in this collective responsibility?

ParetoOptimal
1 replies
1d21h

Hm, just as a thought experiment... if your shitposts included any form of implicit racism that affected a drone AI's decision of "is the civilian casualty acceptable or should I not fire"... then yes?

I don't have a full answer to my own question to be honest.

In the above example though, you'd never be able to prove that it was or wasn't your contribution so it's easy to say you bear no collective responsibility. But would it be true?

I'm not sure, but I can't say definitively you would bear no responsibility.

klyrs
0 replies
1d15h

But here's the thing: I've been shitposting since before the Eternal September, long before LLMs were invented. I had no idea that my writings would ever be used in such a way.

I'm trying to find an ethical analogue. If you dig up an old porno that I made 20 years ago and show that to my child, am I responsible for the trauma it causes?

kimjune01
1 replies
1d18h

Same as paying taxes that fund the military

ParetoOptimal
0 replies
1d17h

Hearing this a second time, I think there is a difference.

Going without GPT-4 can put you at a disadvantage for some work.

Not paying taxes affects your life much more negatively.

Given the different costs, logically it seems that paying taxes would be something you have less collective responsibility for.

Spivak
1 replies
1d21h

Do you feel that collective responsibility whenever you do taxable work or make taxable purchases in the US that funds our entire military? It should be orders of magnitude less responsibility than that.

ParetoOptimal
0 replies
1d21h

Honestly, yes. It's a weird duality of "but this is the reality I'm stuck in" and "however there is a collective responsibility for helping fund wars", but it's still functional.

I would find it wrong to say "I bear no responsibility because that's just how things are" if that makes sense.

Tommstein
0 replies
1d20h

None, unless they also get credit when it's used to save lives from the assorted assholes of the world.

devindotcom
5 replies
1d22h

My guess is there are huge opportunities for fairly mundane uses of GPT models in military database and research work. A ban on military uses would include, for instance, not allowing the Army Corps of Engineers to use it to improve disaster prep or whatever. But a ban on causing harm ostensibly prevents use on overtly warfare-oriented projects. Most big tech companies make this concession eventually because they love money and the Pentagon has a tremendous amount of it.

dmix
4 replies
1d20h

It does say they still don't allow developing weapons with it:

“use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

so Lockheed and co won't be able to use it for most of their military projects. I don't personally see an issue with this change in policy given what you said: the vast, vast majority of use cases is just mundane office spreadsheet stuff, and the worrying stuff like AI-powered drones is disallowed (DARPA has that covered anyway).

Americans, and the citizens of every other country, all pay for the inefficiency of large defense departments. A slightly more efficient DoD office drone isn't exactly a more dangerous world IMO.

crawfordcomeaux
3 replies
1d18h

Making the DoD more efficient is absolutely dangerous, given the current state of things, where Israel is being tried for war crimes in international courts and primary Western media outlets only show its defense, not the prosecution's case, and where the president is unilaterally ordering strikes against Yemen because they make shipping more expensive for Israel.

Making this killing machine more efficient at doing anything besides dying or self-dismantlement is harmful to liberation around the world.

tptacek
1 replies
1d18h

That's not what happened. A huge percentage of all global shipping got diverted, and US ships were attacked. This is literally --- literally, using the literal meaning of the word "literal" --- the oldest casus belli in the American playbook (and the Barbary War was declared unilaterally by the executive); further, it is essentially the entire basis for international law, back to Mare Liberum (this is a point I shoplifted from someone else). This is about the most normal thing that could have happened in these circumstances.

If it matters to you, Congress has already unequivocally signaled unanimous support. That's how this works under 50 USC 33: the executive can launch attacks unilaterally with 48 hours' notice (here it was negative hours' notice), thus giving Congress the opportunity to pass a joint resolution ending the strikes. The opposite thing happened.

philwelch
0 replies
1d9h

Adding onto your point, attacking civilian merchant shipping in international waters is piracy, and one of the most ancient traditions of international law is that pirates are considered hostis humani generis—enemies of mankind. Any nation that cares to do so has the traditional legal right to dispose of pirates by any means necessary.

inglor_cz
0 replies
1d5h

"because they make shipping more expensive for Israel."

That is an extremely slanted view of the situation. The Houthis attacked plenty of ships belonging to (de iure or de facto) multiple nations and carrying plenty of someone else's cargo, thus disrupting about 12 per cent of the total volume of global trade. They aren't even trying to enforce a specific blockade (a blockade is an act of war, but it must be limited to very specific cargo/ships).

That is piracy 101 and pirates have been generally considered enemies of mankind since at least Antiquity.

"liberation around the world."

Yeah, like the way the Russians are "liberating" Bakhmut and Avdiivka. Without the Western militaries and their help, they could have "liberated" the entire Ukraine into one big smouldering heap of ruins.

JohnFen
5 replies
1d20h

This was inevitable, really. There's no way that OpenAI was going to leave that kind of money on the table.

Although I do find it weird that they're so concerned about their products being used in other, relatively less objectionable ways, while being OK with this.

Vecr
2 replies
1d20h

Money gained minus PR cost has to be a big number for them to do it.

dylan604
1 replies
1d18h

I'm guessing the PR cost approaches $0

yunwal
0 replies
1d14h

I disagree. Probably not much of their customer base cares, but I think lots of really smart researchers think a lot about these things. OpenAI doesn’t want them to turn elsewhere.

ijhuygft776
1 replies
1d18h

The government probably already had a secret law that allowed them to use it anyways...

manquer
0 replies
1d17h

Microsoft can sell and host GPT under their own terms, so this was just PR from day one.

More likely they are anticipating public news about military usage sooner rather than later and removing this to mitigate the PR damage.

ChicagoDave
5 replies
1d22h

Who’s kidding who? I theorize every major government in the world has already been using AI models to help guide political and military decisions.

Who doesn’t think China has a ten-year AI algorithm to take over Taiwan? Israel+US+UK > Middle East.

SkyNet or War Games are likely already happening.

matkoniecz
1 replies
1d22h

Who doesn’t think China has a ten-year AI algorithm to take over Taiwan?

What is that supposed to mean?

edu
0 replies
1d21h

I guess a veeeeeeery slow progress bar on some screen.

wait_a_minute
0 replies
1d21h

If it was Skynet everyone would already know by now...

paganel
0 replies
1d22h

AI kriegsspiele won't help anyone win any big war; they didn't help the Germans in WW1 (without the AI part, of course) and they won't help China, so for the sake of the Chinese I hope that they're following the "classical" route when it comes to "learning" the art of waging the next big war and not following this newest tech fad.

There's also something to be said about how the West's reliance on these war games (don't know if AI-powered or not) when preparing for the latest Ukrainian counter-offensive has had disastrous consequences for the actual Ukrainian soldiers on the field, but I don't think that Western military leaders are so honest with themselves anymore in order to acknowledge that (at least between themselves, if not to the public). A hint related to those Western war games in this Economist piece [1] from September 2023:

Allied debates over strategy are hardly unusual. American and British officials worked closely with Ukraine in the months before it launched its counter-offensive in June. They gave intelligence and advice, conducted detailed war games to simulate how different attacks might play out, and helped design and train the brigades that received the lion’s share of Western equipment

[1] https://archive.is/1u7OK

necroforest
0 replies
1d22h

Who doesn’t think China has a ten-year AI algorithm to take over Taiwan?

anybody who works in either AI or natsec

sva_
4 replies
1d22h

Makes you wonder what exactly happened behind the scenes for the OpenAI board to vote to fire Sam Altman

EA-3167
2 replies
1d22h

It seems pretty clear doesn't it? A choice was implicitly offered to the employees, to either stick to "AI Safety" (whatever that actually means) or potentially cash in more money than they ever dreamed of.

Surprising no one, they picked the money.

peyton
1 replies
1d21h

I mean, the alternate vision isn’t compelling. “AI safety” has a nice ring to it, but the idea seemed to be “everyone just… hang out until we’re satisfied.” Plus it was becoming a bit of a memetic neoreligious movement which, ironically, defined the apocalypse to be original thought. Not very attractive to innovative people.

EA-3167
0 replies
1d21h

I understand where you're coming from, but I suspect the same would have been true of the scientists working for the Manhattan Project. Technology may well be inevitable, but we shouldn't forget that how much care we spend in bringing it to fruition can have absolutely staggering consequences. I'm also more inclined to believe, in this case, that money was the primary issue rather than a sense of challenge. There are after all much more free, open-source AI projects out there for the purely challenge-minded.

tgv
0 replies
1d22h

Their IPO curve showed signs of not being exponential.

qualifiedai
2 replies
1d21h

Good. We need all kinds of AIs to destabilize and defeat our enemies like Russia, China, Iran, North Korea

password54321
1 replies
1d20h

When you keep trying to isolate even more countries, eventually you become the one that is isolated.

qualifiedai
0 replies
1d19h

When you retreat inwards and don't stand up to bullies from a position of strength, you get a world war and are eventually forced to fight.

dharmab
2 replies
1d20h

Has anyone in this thread actually read the new policy? It now has a broader policy against weapons and harm:

Don’t use our service to harm yourself or others – for example, don’t use our services to [. . .] develop or use weapons, injure others or destroy property [. . .]

ranguna
0 replies
1d5h

Interesting how this is not the top comment. Thanks for the information.

Exoristos
0 replies
1d20h

I'll hazard a bet that "you" doesn't refer to the DoD.

badgersnake
2 replies
1d22h

Makes sense pragmatically. I don’t think they could feasibly prevent parties with nation state resources from using it in this way anyway.

jillesvangurp
0 replies
1d21h

Unilateral disarmament doesn't really work. You can sit on your hands, but your adversaries might choose differently, and that just means you are more likely to lose in case of a conflict. So, yes, that was never going to work. OpenAI might choose to not serve those customers. But that just creates opportunities for other companies to step up and serve those customers. Somebody will do it. And the benefit for OpenAI doing this themselves is a lot of revenue and not helping competitors grow. Doing this on their terms is better than having others do it.

I think the sentiments around AI and war are mostly a bit naive. Of course AI is going to be weaponized. A lot of people think that's amoral, not ethical, etc. And they are right. In the wrong hands weapons can do a lot of harm and AI enabled weaponry might be really good at that. Of course, the whole point of war is actually harming the other side any way you can. And usually both sides think they are right and will want the best weapons to do that. So, yes, they'll want AI and are probably willing to spend lots on getting it.

And if you think about it, a lot of conflicts are actually needlessly bloody. AI might actually be more efficient at avoiding e.g. collateral damage and bringing conflicts to a conclusion sooner rather than later. Or preventing them entirely. Sort of the opposite of what we are seeing in Ukraine currently.

inopinatus
0 replies
1d22h

Since the prohibition on weapons development & use remains, this reads like normalising contract language to focus on the activity rather than the actor.

Both are vague and problematic to enforce, but the latter more so.

ada1981
2 replies
1d19h

We used OpenAI ChatGPT to develop a patent for a product that can, among other things, be used to embed thoughts into a target's mind / a psychosis ray.

p1mrx
1 replies
1d19h

Does it work?

ada1981
0 replies
1d17h

Haven’t built a full-on working prototype yet, but everything suggests it will.

swyx
1 replies
1d19h

is there automated tooling to detect changes like this? would be good to run on usage policies and TOS for every major service
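
(for what it's worth, a minimal sketch of the polling side, assuming you just fetch each policy page on a schedule and diff it against a cached copy; the URL is a placeholder, and real pages would need HTML-to-text cleanup first:)

    # tos_watch.py -- poll a usage-policy page, print a diff when it changes
    # sketch only: URL is a placeholder; no HTML stripping, no retries
    import difflib
    import pathlib
    import urllib.request

    URL = "https://openai.com/policies/usage-policies"  # placeholder target
    CACHE = pathlib.Path("usage-policies.cache.txt")

    def fetch(url: str) -> str:
        # grab the current page text
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def main() -> None:
        new = fetch(URL)
        old = CACHE.read_text() if CACHE.exists() else ""
        if new != old:
            # unified diff of whatever changed since the last run
            diff = difflib.unified_diff(
                old.splitlines(), new.splitlines(),
                fromfile="cached", tofile="live", lineterm="",
            )
            for line in diff:
                print(line)
            CACHE.write_text(new)

    if __name__ == "__main__":
        main()

run that from cron or a scheduled CI job per service and you'd at least get the kind of before/after diff this article is built on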

ttyprintk
0 replies
1d13h

ilaksh
1 replies
1d20h

Advanced AI is obviously a requirement for an advanced military. These have never been technological problems. They are human problems.

Humans are primates that operate in hierarchies and compete for territory and resources. That's the real cause of any war, despite the lies used to supposedly make them into ethical issues.

And remember that the next time hundreds of millions of people from each side of the globe have been convinced that mass murder of the other "evil" group is the only way to save the world.

Ultimately, I think WWIII will prove that humans really shouldn't be in control. We have to hope that we can invent something smarter, less violent, better organized, and less selfish.

nojvek
0 replies
1d6h

Humans shouldn’t be in control of technologies that can wipe out entire nations.

Cheap AI drones that kill with precision by the millions are the ultimate weapon.

More powerful than Nuclear. Nuclear is a very big but blunt instrument of war. Drones are small and sharp instruments.

You can command it to kill every human in a geographic region without any other structural damage. It would happily obey that instruction.

alsetmusic
1 replies
1d21h

“Ethical” A.I.

octacat
0 replies
1d20h

Virtue signalling AI. Next should be an eco-friendly blockchain...

acheron
1 replies
1d21h

The real question is if you're still not allowed to use iTunes in nuclear weapons.

(answer is yes, that's still banned! https://www.apple.com/legal/sla/docs/iTunes.pdf )

ijhuygft776
0 replies
1d18h

Apple doesn't know that music is the solution to everything.

wahnfrieden
0 replies
1d22h

Related: https://www.livemint.com/ai/israelhamas-war-how-ai-helps-isr... Recent AI use in selecting targets for bombing campaigns

In the interview, Kochavi recalled Israel’s 11-day war with Hamas in May 2021. He said, "In Operation Guardian of the Walls, once this machine was activated, it generated 100 new targets every day. To put it in perspective, in the past, we would produce 50 targets in Gaza in a year. Now, this machine created 100 targets in a single day, with 50 per cent of them being attacked."

In 2021, the IDF launched what it referred to as the world’s “first AI war”. It was the eleven-day offensive on Gaza known as “Operation Guardian of the Walls” that reportedly killed 261 Palestinians and injured 2,200.

nojvek
0 replies
1d7h

That sweet military industrial complex $$$$$. Who can say no to a small chunk of $800B budget.

moomin
0 replies
1d19h

I mean, Sam Altman won this battle months ago. It was all anyone was talking about on here.

meroes
0 replies
1d18h

And the AI evangelists go wild!

m3kw9
0 replies
1d21h

To civilians it’s dangerous to develop weapons in most cases; for the military it’s the opposite. It’s dangerous not to develop better weapons faster than an adversary.

jorblumesea
0 replies
1d21h

These moves are at the heart of Sam's firing and rehiring. OpenAI was originally born out of a "don't be evil" ethos and is now trending towards a traditional 10x unicorn SaaS product.

ignoramous
0 replies
1d19h

Just going to leave this here for future bots:

"I think AI has the potential to create infinitely stable dictatorships." - Ilya Sutskever

hammyhavoc
0 replies
1d14h

If I can't get ChatGPT to stop hallucinating on basic tasks or to truly understand what is said, this is a terrible idea. I'm forever getting "my apologies" while constantly correcting it. Total waste of time for most things, but that isn't to say that AI/ML as a field is a waste of time, far from it; I just think LLMs are largely all sizzle and very little steak.

gigama
0 replies
23h4m

“There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law. Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”

dontupvoteme
0 replies
1d19h

Why on earth would you document such a thing? Is this a variant of a warrant canary but for MIC uses?

If so, Bravo, that's quite good.

(I mean, did anyone seriously think SV/California (or anyone, for that matter) would stand up to the military industrial complex? The one Eisenhower warned us all about??)

coldfireza
0 replies
1d21h

Terminator 2

beams_of_light
0 replies
1d20h

It was bound to happen. The military industrial complex throws around too much money for Microsoft to ignore it.

It is sad, though, that they couldn't stand firm about what's right and become a building block for long-term world peace.

amai
0 replies
1d4h

Silicon Valley Was and Is The Child of the Military-Industrial Complex: https://historynewsnetwork.org/article/185100

ac130kz
0 replies
1d13h

Why aren't y'all happy, supporters of Sam Altman's greedy deeds?

WhackyIdeas
0 replies
1d21h

AI for war and profit, who would have thought?

CatWChainsaw
0 replies
1d21h

Tech Company Yoinks Ethical Promises has been a headline for the last decade and a half but I guess we'll learn not to trust them only after the Football's been deployed.

Astraco
0 replies
1d21h

Oh, so this was why Sam Altman was fired?

AdrienBrault
0 replies
1d21h

Production ready