'Lavender': The AI machine directing Israel's bombing in Gaza

Mgtyalx
19 replies
3h15m

@dang Please consider that this is an important and well-sourced article regarding military use of AI and machine learning and shouldn't disappear because some users find it upsetting.

d--b
15 replies
3h13m

Should have the ability to turn off comments for these.

supposemaybe
8 replies
3h12m

Why? Hurty words too hurty for people?

edu
4 replies
3h11m

It's more about being able to have a civilized conversation on some topics.

pc86
2 replies
3h10m

Plenty of us are capable of having civilized conversation on these topics.

If you can't, you should be banned. The problem will work itself out over time.

edu
1 replies
3h3m

I was talking in general, not about myself.

I agree with you, these conversations should be had. But unfortunately a small, but committed, minority can (and often will) turn the comments on sensitive topics into a toxic cesspool.

threatofrain
0 replies
1h21m

Why not just treat it as a way for undesirable guests to reveal themselves? Sounds like HN never wanted these guests and doesn't have the administrative attention to be watching all the time.

infamouscow
0 replies
3h4m

Civilized conversation is limited by the emotional stability of those having it.

People have it so easy now: they've grown up and spent their entire lives in total comfort, without even the slightest hint of adversarial interaction. So when they encounter it, they overreact and panic at the slightest bit of scrutiny rather than behaving like reasonable adults.

d--b
2 replies
3h10m

They tend to remove posts causing flame in comments

supposemaybe
0 replies
2h58m

It’s the most important thing going on in the world. And geeks shouldn’t be thought of as people who will sit and think how cool the death-machine AI is that Israel has developed, which chooses how and when 30K children die… geeks make this tech, profit from it, and lurk about HN, and when it comes to facing the reality of their creations they want to close the conversation down, flag comments, and evade the hurty real-world reality of it. Sad. And pathetic. I’m not saying you personally are.

mistermann
0 replies
2h44m

They tend to remove posts causing flame in comments

It can be fun to consider the precise and comprehensive truth value of such statements (or, the very nature of "reality" for extra fun) using strict, set-theory-based non-binary logic.

It can also be not fun. Or sometimes even dangerous.

pc86
4 replies
3h11m

HN exists for us to comment on articles. The majority of comments are from folks who didn't even read the article (and that's fine).

Turning off comments makes as much sense as just posting the heading and no link or attribution.

d--b
3 replies
3h8m

Well, this post is surely going to get removed because of flaming in comments, so, which is better, post with no comments, or no post at all?

xpe
0 replies
2h46m

so, which is better, post with no comments, or no post at all?

The false choice dilemma is dead. Long live the false choice dilemma!

xpe
0 replies
2h55m

Well, this post is surely going to get removed because of flaming in comments

This is one prediction of many possible outcomes.

Independent of the probability of a negative downstream outcome:

1. It is preferable to correct the unwelcome behavior itself, not the acceptable events simply preceding it (that are non-causal). For example, we denounce when a bully punches a kid, not that the kid stood his ground.*

2. We don't want to create a self-fulfilling prophecy in the form of self-censorship.

* I'm not dogmatic on this. There are interesting situations with blurry lines. For example, consider defensive driving, where it is rational to anticipate risky behavior from other drivers and proactively guard against it, rather than waiting for an accident to happen.

pc86
0 replies
3h4m

Having civil conversation and aggressively banning those who can't be adults?

mistermann
0 replies
3h11m

The goal of that being?

Garvi
2 replies
2h12m

I'm betting the author of this comment knows full well that dang has been doing this for all US- and Israel-critical topics for years, and that this comment has zero chance of affecting anything. The very definition of lip service. I hope you feel better in your continuous support for HN.

Also, the topic is locked and no one at this point gives a damn what is said in here... I'm actually curious if I'm getting "disciplined" for this. No one besides the people that already posted, and maybe one or two other people, will ever see this.

skilled
1 replies
1h53m

That is not the case at all.

See this comment from dang:

https://news.ycombinator.com/item?id=39435024

There are more comments like this from him, you can find them using Algolia.

HN is not acting in bad faith whatsoever.

This story in particular “qualifies” for what would be interesting to HN readers while taking into account the sensitivity of the subject.

I fully expect the discussion to be allowed and the flag lifted, but HN mod team is very small and it might take a while - it quite literally always does.

mdekkers
0 replies
1h44m

Agreed. Also take into account how this and a few mirror discussions are rapidly degrading into “x are bad” political discussions, which are just not that interesting here.

dw_arthur
17 replies
3h59m

Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.

The world should not forget this.

prpl
5 replies
3h54m

So the entire family and neighbors family.

Sure would be convenient if Hamas were 6% of the population.

myth_drannon
1 replies
15m

So why didn't it happen? 40,000 operatives × 30 family members each would mean the entire Gaza population gone in a matter of weeks.

mulmen
0 replies
12m

Assuming no operatives are related.

gryzzly
1 replies
24m

convenient how, you mean?

bregma
0 replies
19m

The result would be plenty of fresh unoccupied land to settle on. Just a little bit of cleanup required.

pelorat
0 replies
18m

40% or something voted for them, and I'm pretty sure all of those are considered targets now.

dariosalvi78
5 replies
3h53m

Palestinians are definitely not going to forget this.

supposemaybe
3 replies
3h21m

Um, I think you mean any reasonable-minded human that walks the planet.

acdha
1 replies
37m

Yes, but specifically the Palestinian impact is why it’s such a terrible policy for Israel, unless you assume their goal is perpetual war. Most people do not want to kill other people, but each innocent killed like this leaves behind friends, family, and neighbors who will want vengeance, and some fraction of them will decide they need to resort to violence because the other mechanisms aren’t being used. Watching this happen has been incredibly depressing, as you can pretty much mathematically predict a revenge period measured in decades.

KingMob
0 replies
16m

This assumes they're going to leave enough people alive to even enact vengeance. If they murder everyone, then there's no need to worry about any Gazan revenge; there will be no Gazans.

flir
0 replies
1h40m

I think he means the cycle of violence will continue.

Which is what I kinda assume Hamas wanted in the first place.

tjpnz
0 replies
3h50m

I would extend that to the wider region.

TheGeminon
1 replies
22m

With 37,000 Palestinians marked as suspected militants, it would mean they expected up to 555,000 - 740,000 civilian casualties.
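That range seems to follow from simply multiplying the article's reported allowance out; a rough sketch of the arithmetic (assuming, as a worst case rather than a reported outcome, that the 15-20 civilians permitted per junior target applies to every one of the 37,000 marked individuals):

```python
# Rough upper-bound arithmetic implied by the article's figures.
# Assumption: the 15-20 civilian allowance per junior target applies
# to all 37,000 marked individuals (worst case, not a reported outcome).
marked_targets = 37_000
civilians_low, civilians_high = 15, 20

low = marked_targets * civilians_low    # 555,000
high = marked_targets * civilians_high  # 740,000
print(f"Implied civilian casualties: {low:,} - {high:,}")
```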

mulmen
0 replies
13m

How did you arrive at these numbers?

NovemberWhiskey
1 replies
17m

The law of armed conflict acknowledges that civilian deaths are inevitable, and only prohibits attacks that are directed at civilians, rather than those directed at combatants with expected civilian casualties as collateral damage.

The legal question is whether the civilian casualties are proportional to the concrete military value of the target.

A question that's worth considering is whether, when considering proportionality, all civilians (as defined by law) are made equal in a moral sense.

For example, the category "civilian" includes munitions workers or those otherwise offering support to combatants on the one hand, and young children on the other. It also includes members of the civil population who are actually involved in hostilities without being a formal part of an armed force.

The law of armed conflict doesn't distinguish these, though I think people might well distinguish, on a moral level, between casualties amongst young children, munitions workers, and informal combatants.

bluish29
0 replies
6m

For example, the category "civilian" includes munitions workers or those otherwise offering support to combatants on the one hand, and young children on the other. It also includes members of the civil population who are actually involved in hostilities without being a formal part of an armed force.

I wonder if you would say the same about the other side, where every male or female above 18 years is required to serve in the military and in the reserves afterwards? [1]

By your argument would you say that all of these are legitimate targets?

[1] https://en.m.wikipedia.org/wiki/Conscription_in_Israel

morkalork
0 replies
31m

"Our system is 90% accurate if you don't count the 15-20 innocent people taken out for each hit". I know they're measuring the accuracy of target identification but that's laughable when used in this context.

For 100 targets, 90 are 'correct', but add 20 civilians per target and you get 90 correctly identified militants out of roughly 2,100 people killed: about 4% real accuracy.

Say you use a model that's only 50% accurate and limit yourself to 10 civilians per target, and you're at 50 out of 1,100, or about 4.5%!
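A minimal sketch of that back-of-the-envelope calculation (my own framing; the per-strike figures come from the article, the rest is just arithmetic):

```python
# Toy calculation: of everyone killed, what fraction were correctly
# identified targets, given a model accuracy and a civilian toll per strike?
def real_accuracy(model_accuracy: float, civilians_per_strike: int, strikes: int = 100) -> float:
    correct_targets = strikes * model_accuracy
    total_killed = strikes + strikes * civilians_per_strike  # struck targets + civilians
    return correct_targets / total_killed

print(real_accuracy(0.90, 20))  # ~0.043 -> about 4%
print(real_accuracy(0.50, 10))  # ~0.045 -> about 4.5%
```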

wantlotsofcurry
15 replies
2h25m

Upsetting how quickly the other thread was flagged and downranked.

hn_throwaway_99
6 replies
2h13m

As someone who sees both sides of this, and as someone who didn't understand this for some time, it's important to understand that one reason a story is likely to get flagged is because users think it's highly unlikely to lead to productive discussion. It doesn't mean it's a bad story, or even unworthy of discussion, but many types of stories seem to, pretty predictably, lead to a cesspool of comments where it's clear most folks have no desire to listen to opposing points of view.

FWIW, I found this to be a really interesting story that I didn't previously know about, so I hope it stays up, and this is a story I'd be willing to vouch for.

throwaway74432
1 replies
26m

it's highly unlikely to lead to productive discussion.

I guess all you have to do, if you want to suppress information about something, is to ensure that its comments always devolve into unproductive discussions. Funny, I once read about this as a tactic for controlling information flow in online communities...

axlee
0 replies
5m

If only we had a word for this behaviour, named after some Nordic folklore creature perhaps?

spxneo
0 replies
24m

Flagging is voting to censor a particular view. It could have legit uses, like spam or toxic comments, but it's just as easy to use it to censor narratives that aren't aligned with, or clash with, the voter's.

I'm not sure what other tools exist, other than a block button like X has.

pphysch
0 replies
2h3m

Successful flagging doesn't (just) disable comments, it disables discovery/access.

For a high quality piece of tech-related investigative journalism like this, flagging is simply censorship.

dfxm12
0 replies
1h33m

If one doesn't want to engage, the hide button isn't too far from the flag button. It's important that people have the option to speak freely and openly about this topic, since so many places shut down any conversation that shows sympathy for Palestinians and/or doesn't paint Israel as unequivocally morally good. This is one of the reasons Israel has been able to get away with this behavior for so long.

Considering what regularly doesn't get flagged on this site related to AI, conflict, etc., this topic seems to fit in.

consumer451
0 replies
2h3m

There is a system in place for flagging specific comments by users.

Admins can, and do, prune entire branches of comments off of posts.

These two methods would take a bit more work than just banishing the topic entirely, but with topics like the first time that "AI" kill lists are publicized, maybe exceptions should be made.

segasaturn
3 replies
2h22m

I don't understand why it was flagged. Obviously it is a sensitive topic, but AI being used to kill people is very clearly an HN-worthy topic.

calibas
1 replies
2h16m

It was flagged because someone doesn't want people seeing this.

It's also currently dropping rank on the front page, despite being heavily upvoted.

luketaylor
0 replies
2h5m

Now removed from the front page even without being labeled as flagged.

nemo44x
0 replies
2h14m

Yeah, you'd hope that a higher-level conversation about the use of technology in war, pros/cons, etc. could supersede personal political beliefs about this particular conflict. We don't need people's moral judgements on who is right or wrong in this particular case, but it would be neat to hear people's thoughts on utilizing information technology as a weapon of war.

dguest
1 replies
2h22m

Let's see how long it takes this time! I'd give it 50% odds of lasting 12 minutes.

Edit: Flagged after less than 9 minutes, I overestimated!

dfxm12
0 replies
2h18m

It was flagged in 9, but is now back. Get your comments in while you can!

thomastjeffery
0 replies
1h54m

I don't take any issue with people flagging a post, so long as an actual person makes the ultimate decision on whether to keep it up.

This is in contrast to how I feel about a statistical model flagging people to be murdered. That's not even remotely OK, even if the decision to actually carry out the murder ultimately goes through a person. Using a statistical model to choose targets is incredibly naive, and practically guarantees that perverse incentives will drive decision-making.

dang
0 replies
28m

This is a typical phenomenon when a topic is divisive, and the current topic is perhaps the most divisive HN has ever seen.

hunglee2
11 replies
3h15m

AI generated kill lists are sadly inevitable. Had hoped we'd get a few more years before we'd actually see it being deployed. Lots to think about here

xdennis
4 replies
3h4m

I don't know about kill lists, but AI weapons kinda make sense.

No weapons are nice, but if the good guys don't develop AI weapons, the bad guys will.

From what I gather, many US engineers are morally opposed to them. But if China develops them and gets into a war with the US, will Americans be happy to lose knowing that they have the moral high ground?

skidd0
1 replies
2h53m

Right, just like if the good guys don't develop a novel coronavirus in a lab, the bad guys will and unleash it on the world!

Development of tools of death is not a good guy/bad guy thing. The "bad guys" think the "good guys" are bad.

I think "killing" is bad, no matter who develops the tools.

shepherdjerred
0 replies
1h59m

There are certainly times when killing is justified. Defeating the Axis in WW2 is a great example of this.

hirsin
0 replies
23m

This assumes that AI-based weaponry provides value. The case in point here is showing that the only value it provides is a flimsy justification for civilian casualties. We... don't need more of that in the US, nor would it provide a "good guy" any legitimate value.

atlantic
0 replies
6m

"Good guys" and "bad guys". Where did you learn your ethics, the Cartoon Network?

throwaway74432
1 replies
36m

They're great because the accountability for fuckups goes on the system, not on the people using the system. "Oops, the system had a bug" doesn't kill careers like "Oops, I made a bad call."

krunck
0 replies
24m

AIs that generate kill lists that kill the innocent should themselves be put on a kill list.

Edit: And the humans who approved the list should be held accountable, of course.

uxp100
0 replies
3h0m

Depending on your definition of AI they’ve probably been around for a while.

This does seem to be a big step more “AI” than previous systems I’ve heard described though.

shmatt
0 replies
15m

How do you think people are chosen to visit a secret CIA prison, or chosen to get a 12 hour interrogation every time they enter the US?

KingMob
0 replies
5m

Can't wait to be killed by drone strike when a GPT hallucinates my name.

BitwiseFool
0 replies
3h5m

Such things have been around for at least a decade. It didn't start with the same kind of AI that's being talked about recently, but there is a large automated scoring component: "Targets are often chosen based on metadata."

https://en.wikipedia.org/wiki/Disposition_Matrix

skilled
10 replies
3h5m

The Guardian also has this story on its front page; they were given details about it pre-publication:

https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...

And, personally, I think that stories like this are of public interest - while I won’t ask for it directly, I hope the flag is removed and the discussion can happen.

JeremyNT
4 replies
2h25m

This is the now-flagged to death HN thread on the Guardian version [0]

I would hope they can be unflagged and merged, this appears to be an important story about a novel use of technology.

[0] https://news.ycombinator.com/item?id=39917727

dang
2 replies
41m

Yes. We've merged that thread hither.

dang
0 replies
33m

Thanks, we'll merge that one too.

ok123456
0 replies
1h25m

It's flagged now, too. It's all so tiring.

tguvot
2 replies
31m

The Guardian references 972 as the source for the report; it's not as if it's "the Guardian's" article.

tguvot
0 replies
7m

972 is a leftist "blog magazine" with questionably sourced material. While there might be some truth to the core claim of an automated system (which the IDF confirmed exists), the rest of the claims are probably the outcome of a game of "broken telephone". But everybody will use it as a statement of undeniable fact in order to turn the discussion, as usual, into "genocidal Israel indiscriminately killing civilians in droves and performing ethnic cleansing and other countless war crimes", and to downvote into oblivion everybody who disagrees with it.

tsujamin
1 replies
2h51m

Pretty disappointing that the Guardian article also got quickly flagged after HN submission.

bowsamic
0 replies
14m

The flagging on this site is pretty crazy recently

abvdasker
8 replies
3h6m

Accepting technological barbarism is a choice. Among engineers there should be a broad refusal to work on such systems and a blacklist for those who do.

snird
4 replies
2h12m

The other option here is carpet bombing à la Dresden, which would have resulted in >400,000 casualties at best.

Why is it barbarism? If it makes the war more efficient and more targeted, it is preferred.

justin66
1 replies
1h34m

The other option here is carpet bombing à la Dresden

Right. Because there are always just two options when you're designing a strategy.

kjkjadksj
0 replies
1m

You act like people individually have the agency to make sweeping changes to how the world works.

talldayo
0 replies
2h2m

Why is it barbarism?

Because the civilian death toll far outweighs the militant casualties?

hobs
0 replies
1h50m

More targeted with half a million dead? Sounds like you forgot to take your not crazy pills.

treyd
0 replies
31m

It sure would be nice if this industry had the tiniest shred of collective consciousness and realized our capacity to exert some level of control over what gets built and what doesn't.

kjkjadksj
0 replies
2m

The people working on these understand the alternative looks like a WWII bombing campaign with greater loss of life

golergka
0 replies
5m

Not everyone sees the world as you do. Given this article and other information I know about this system, I would be honoured to work on it and take a significant pay cut, as it actively makes the world a better and safer place.

malfist
6 replies
4h6m

There is no justification for killing noncombatants, even if AI told you you could.

spuz
1 replies
3h42m

The use of AI and the authorisation to kill civilians are unrelated parts of this story. Nowhere does it mention that the AI is being used to justify killing of civilians.

rany_
0 replies
3h10m

Yeah, because they need to spell out what they're trying to have you infer.

XorNot
1 replies
3h54m

This is not what the article is about, and not what AI was being used for.

rany_
0 replies
3h39m

Read between the lines, they're trying to blame their AI for the civilian casualties.

hugodan
0 replies
3h59m

There is no justification for killing.

basil-rash
0 replies
3h59m

Wild that this is still a controversial statement on HN, which is otherwise rather forward thinking.

factorialboy
6 replies
3h58m

Can we please discuss the merits of this article — role of AI in future conflicts — without taking sides on any of the ongoing wars?

harimau777
1 replies
3h20m

The issue as I see it is that the tools available don't just determine how a given war is fought, they also determine whether it is fought at all.

If Israel wasn't able to use tools like this, then it probably wouldn't be viable for them to identify much of Hamas (that's kind of the point of guerilla warfare). Since that would make it difficult to fight a war efficiently, they would be more likely to engage in diplomacy.

raxxorraxor
0 replies
2h59m

Very doubtful. There is no room for any diplomacy after such an attack. It would be fought with more primitive weapons and the side with more bombs would prevail.

mempko
0 replies
1h6m

Why not both? Taking a side does not mean you are clouded in judgement on this point.

gizmo686
0 replies
3h31m

I'm not sure that is possible. The nature and limitations of current AI technology means that it is almost impossible to talk about it without coming to certain conclusions about the party using it.

To put it bluntly, using AI to decide on targets for lethal operations is unconscionable given the current and foreseeable state of the technology.

Come back to me when it can be trusted to answer mortgage eligibility questions without engaging in what would be blatantly illegal discrimination if not laundered through a computer algorithm.

basil-rash
0 replies
3h55m

No, probably not. When the topic at hand is the selection criteria used to justify the killing of tens of thousands of civilians, your stance on whether the ones killing tens of thousands of civilians are justified in doing so is rather intrinsic.

CubsFan1060
0 replies
3h55m

I am going to bet the answer to your question is "No"

Quanttek
6 replies
3h4m

Years ago, scholars (such as Didier Bigo) had already raised concerns about the targeting of individuals merely based on (indirect) association with a "terrorist" or "criminal". Originally used in the context of surveillance (see the Snowden revelations), such systems would target anyone who was, e.g., fewer than 3 steps away from an identified individual, thereby removing any sense of due process or targeted surveillance. Now, such AI systems are being used to actually kill people - instead of just surveilling them.

IHL actually prohibits the killing of persons who are not combatants or "fighters" of an armed group. Only those who have the "continuous function" to "directly participate in hostilities"[1] may be targeted for attack at any time. Everyone else is a civilian that can only be directly targeted when and for as long as they directly participate in hostilities, such as by taking up arms, planning military operations, laying down mines, etc.

That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack - all the others must be arrested and/or tried. Otherwise, the allowed list of civilian targets gets so wide that in any regular war, pretty much any civilian could get targeted, such as the bank employee whose company has provided loans to the armed forces.

Lavender is so scary because it enables Israel's mass targeting of people who are protected against attack by international law, providing a flimsy (political but not legal) justification for their association with terrorists.

[1]: https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990...
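To illustrate why association-based targeting sweeps so wide (a toy sketch with entirely made-up data, not anything to do with how Lavender actually works): in any reasonably connected social graph, the set of people within a few hops of a single "seed" individual grows explosively.

```python
# Toy illustration of k-hop "association" flagging on a social graph.
# The graph and names are hypothetical; the point is how quickly the
# flagged set grows to include ordinary contacts.
from collections import deque

def within_k_hops(graph: dict, seed: str, k: int) -> set:
    """Everyone reachable from `seed` in at most k edges (breadth-first search)."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

graph = {
    "seed": ["brother", "shopkeeper", "taxi_driver"],
    "brother": ["seed", "coworker"],
    "shopkeeper": ["seed", "customer_1", "customer_2"],
    "taxi_driver": ["seed", "passenger_1"],
}
print(within_k_hops(graph, "seed", 3))  # nearly the whole (tiny) graph is "associated"
```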

CommieBobDole
2 replies
2h44m

It's also interesting (and I guess typical for end-users of software) how quickly and easily something like this goes from "Here's a tool you can use as an information input when deciding who to target" to "I dunno, computer says these are the people we need to kill, let's get to it".

In the Guardian article, an IDF spokesperson says it exists and is only used as the former, and I'm sure that's what was intended and maybe even what the higher-ups think, but I suspect it's become the latter.

solarpunk
0 replies
5m

20 second turnaround from target acquisition to strikes seems to guarantee it's become the latter.

surfingdino
1 replies
15m

It always starts with making a list of targets that meet given criteria. Once you have the list, its use changes from categorisation to demonisation -> surveillance -> denial of rights -> deportations -> killing. Early use of computers by the Germans during WW2 included the making and processing of lists of people who were to be sent to concentration camps. The only difference today is that we are able to capture more data and process it faster at scale.

kjkjadksj
0 replies
17m

War is messy; I don’t think there was ever one with the surgical precision these courts expect. AI flagging targets sure beats the cold impunity of the strategic air campaigns during WWII and Vietnam by far.

mckirk
5 replies
2h14m

“You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs]”

At that point I had to scroll back up to check whether this was just a really twisted April Fools' joke.

xyzelement
3 replies
1h48m

What part of this upsets you vs a baseline understanding of reality?

There's often a criticism of the US military doctrine that our weapons are great but are often way more expensive than the thing we shoot them at (as exemplified in our engagement with the Houthis in the Red Sea.)

If anything, the quote you pulled sounds like its talking about highly precise weaponry, and it seems to me that the way to minimize the overall death in a war is to use your precise weapons to take out the most impactful enemy.

Which part of this is different than how you see the world so that reading this quote threw you?

jakupovic
1 replies
1h37m

I'll answer for the previous post. The most disturbing part is that the stated main criterion is being male, and that their models have a 10% error rate.

xyzelement
0 replies
32m

I don't think you're parsing the article correctly.

There is no allegation that the main criterion for the algorithm is "being male."

The allegation is that the human double-checking of the algorithm confirms the target is male (as opposed to a woman or child).

anigbrowl
0 replies
1h0m

Civilians aren't strategic targets like military decision-makers, but describing them as 'unimportant' is a sign of moral vacuity.

tokai
0 replies
1h46m

It's rich when the argument for the system is that the targeting is the bottleneck.

chinchilla2020
5 replies
3h8m

972mag is not considered a legitimate journalism outlet. They create clickbait articles with a political goal.

There is zero evidence of some AI bot directing the Israeli military. This is pure conspiracy nonsense

tguvot
1 replies
2h5m

The Guardian references 972 as the source for the report.

mempko
0 replies
15m

The guardian also talked to other sources.

luaybs
0 replies
3h6m

Where is YOUR evidence for such accusations?

barbazoo
5 replies
2h7m

Getting all these reports about atrocities, I wonder if the conflict in the area has grown more brutal over the decades or if this is just business as usual. I'm in my late 30s; growing up in the EU, the conflict in the region was always present. I don't remember hearing the kind of stories that come to light these days though: indiscriminate killings, food and water being targeted, aid workers being killed. I get that it's hard to know what's real and what's not, and that we live in the age of information, but I'm curious how, on a high level, the conflict is developing. Does anyone have a good source that deals with that?

myth_drannon
1 replies
7m

ISIS's way of operating surely influenced other terrorist organizations. Hamas atrocities do look like copycats of ISIS (burning alive, beheadings), with deliberate usage of social media to satisfy their Muslim supporters and Western left/woke narcissists.

But that only looks more brutal to those who get their news from Facebook. It was always brutal being attacked by Arab/Muslim mobs. The Farhud in Iraq (1941) and the Arab riots of the 1920s in Palestine and other places had the same hallmarks, with beheadings, rapes and burnings.

Nothing new.

f38zf5vdt
0 replies
4m

The left just can't get enough of those woke beheadings!

xk_id
0 replies
1h12m

The weaponisation of online media for manipulating the perception of global audiences about the conflict, has definitely ramped up recently. For example, the official Twitter account of Israel’s Ministry of Foreign Affairs has posted videos of muslim preachers appearing to denounce lgbt culture during public service in Palestinian mosques. Hamas themselves are denying their involvement in the 2023 massacre and accusing Israel of staging the graphic footage that was disseminated. This greatly polarises the debates on social media and it’s much more common now to see people who are deeply invested emotionally in the narrative of either side.

kjkjadksj
0 replies
6m

When the US dropped napalm indiscriminately over the Vietnamese jungle, or absolutely leveled Dresden in one bombing run, or unleashed nuclear hellfire over Japan, they probably killed a lot of journalists and doctors and food workers as well. Interestingly, Western media did not beat itself into a frenzy over it at the time. It's easy to get cynical about it all, seeing how easily narratives are manufactured and controlled to serve political ends.

dfxm12
0 replies
1h48m

Most of the mainstream media has historically glossed over the atrocities, but it is impossible to ignore them today because of what we see live on the scene thanks to smaller outlets having a broader reach and social media.

It's mostly business as usual. The technology makes the brutality more efficient, though:

Describing human personnel as a “bottleneck” that limits the army’s capacity during a military operation, the commander laments: “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”

...

By adding a name from the Lavender-generated lists to the Where’s Daddy? home tracking system, A. explained, the marked person would be placed under ongoing surveillance, and could be attacked as soon as they set foot in their home, collapsing the house on everyone inside.

“Let’s say you calculate [that there is one] Hamas [operative] plus 10 [civilians in the house],” A. said. “Usually, these 10 will be women and children. So absurdly, it turns out that most of the people you killed were women and children.”

Using Google search, you can find news articles from previous years. You'll find older articles about Israel killing aid workers, for example. This is from 2018: https://www.theguardian.com/global-development/2018/aug/24/i...

The interesting thing about how this conflict is developing is that this story is full of quotes from Israeli intelligence. Most plainly say what they're doing. Western outlets may put a positive spin on it (because our governments generally support Israel), but the Israeli military themselves are making their intentions clear: https://news.yahoo.com/israeli-minister-admits-military-carr...

arminiusreturns
5 replies
2h22m

“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

A_D_E_P_T
3 replies
2h4m

It’s much easier to bomb a family’s home.

Okay, how is this not a war crime?

There are ~2M civilians who live in Gaza, and many of them don't have access to food, water, medicine, or safe shelter. Some of those unfortunates live above, or below, Hamas operatives and their families.

"Oh, sorry, lol." "It was unintentional, lmao, seriously." "Our doctrine states that we can kill X civilians for every hostile operative, so don't worry about it."

The war in Gaza is unlike Ukraine -- where Ukrainian and Russian villagers can move away from the front, either towards Russia or westwards into Galicia -- and where nobody's flattening major population centers. In Gaza, anybody can evidently be killed at any time, for any reason or for no reason at all. The Israeli "strategy" makes the Ukrainians and Russians look like paragons of restraint and civility.

mardifoufs
1 replies
1h35m

Because it's Israel. It's also why no western country has ever really officially condemned Israel no matter what they do. They are on "our side" so it's okay. And those civilians kind of deserved it anyways or something, and we can just trust every single word the IDF says and use them as an actual source to pretend the IDF isn't into mass civilian murder.

The only thing that made this time a bit different is the crazy, almost hard-to-believe switch from the Ukrainian conflict and how it was seen and portrayed... to Western countries staying completely silent when, again, it's our side doing it. Well, it wasn't hard to believe, but it just made it a lot more blatant.

Israel doesn't really care though since israeli officers routinely go on public tirades that amount to mask-off allusions to genocide ("wipe Gaza" "level the city to the ground" "make it unliveable"), with again 0 consequences at all. Even Russia at least tries to not have Russian military officers just say the quiet part out loud.

cmrdporcupine
0 replies
1h17m

Even when our own citizens are killed, they don't get condemned.

E.g. the IDF targeted and killed a Canadian UN peacekeeper in 2006 (because he got too squeaky) and the Canadian gov't barely lodged a protest.

https://www.cbc.ca/news/canada/ottawa/un-officer-reported-is...

pphysch
0 replies
1h57m

The problem isn't determining whether Israel is committing warcrimes or genocide, it's the fact that it's a rogue, supremacist state that sees itself above international law, and is bolstered in that position by its unregistered-foreign-agent minions in Washington, a UNSC permanent member.

tiahura
0 replies
2h11m

Watching i24 news is a little unsettling. They run bits with interrogators announcing how productive torture has been, and make jokes about how it would be much easier if lemons just gave up their juice without being squeezed.

2devnull
5 replies
3h51m

Probably going to be flame city in this thread, but I think it's worth asking: is it possible that, even with collateral damage (killing women and children because of hallucinations), AI-based killing technology is actually more ethical and safer than warfare that doesn't use AI? But AI is really just another name for math, so maybe it's not a useful conversation. Militaries use advanced tech and that's nothing new.

janice1999
1 replies
3h46m

AI based killing technology is actually more ethical and safer than warfare that doesn’t use AI

No. It's just a tool. People still configure the parameters and ultimately make decisions. Likewise modern missile do not make conflicts more or less ethical just because they require advanced physics.

harimau777
0 replies
3h26m

The people mentioned in the article say that they spent about 20 seconds on each target and basically just rubber stamped them. In that case, I don't think people are ultimately making the decisions in a traditional sense.

r00fus
0 replies
2h6m

No the AI was the scapegoat for IDF deciding to "target" low-level enemies, then bombing them with bunker-buster 2000lb bombs that leveled entire buildings and city blocks around those targets.

The AI did something, but the IDF used it to justify effectively committing a genocide.

mikrl
0 replies
1h50m

I think the concern is that the AI is making life or death judgements against people. Some may of course be lawful combatants under the rules that govern such things, but the fact that an AI is drawing these conclusions that humans act on is the shocking part.

I doubt an artillery system using machine learning to correct its trajectory and get better accuracy would be controversial, since the AI in that case is just controlling the path of a shell that an operator has determined needs to hit a target decided upon by humans.

harimau777
0 replies
3h28m

I think that depends on what the alternative is. It seems to me that the problem is that there's no way for Israel to wipe out Hamas without massive collateral damage. However, instead of giving up on wiping out Hamas, they just decided that they are OK with the collateral damage.

supposemaybe
4 replies
2h9m

My question is:

How far does the AI system go… is it behind the AI decision to starve the population of Gaza?

And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?

How far does the AI system go?

Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” Or “I was just following AI’s orders!”

There’s so much about this death machine AI I would like to know.

diggan
2 replies
2h1m

How far does the AI system go… is it behind the AI decision to starve the population of Gaza?

No, the point of this program seems to be to find targets for assassination, removing the human bottleneck. I don't think bigger strategic decisions like starving the population of Gaza was bottlenecked in the same way as finding/deciding on bombing targets is.

is it also behind the decision to kill the aid workers who are trying to feed the starving?

It would seem like this program gives whoever is responsible for the actual bombing a list of targets to choose from, so supposedly a human was behind that decision, but aided by a computer. Then it turns out (according to the article at least) that the responsible parties mostly rubber-stamped those lists without further verification.

can an AI commit a war crime?

No, war crimes are about making individuals responsible for their choices, not about making programs responsible for their output. At least currently.

The users/makers of the AI surely could be held in violation of laws of war though, depending on what they are doing/did.

dfxm12
1 replies
1h14m

No, the point of this program seems to be to find targets for assassination, removing the human bottleneck.

There is also another AI system that tracks when these target get home.

Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

I think "assassination" colloquially means to pinpoint and kill one individual target. I don't mean to say you are implying this, but I do want to make it clear to other readers that according to the article, they are going for max collateral damage, in terms of human life and infrastructure.

“The only question was, is it possible to attack the building in terms of collateral damage? Because we usually carried out the attacks with dumb bombs, and that meant literally destroying the whole house on top of its occupants. But even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”

diggan
0 replies
1h9m

Yeah, I wasn't 100% sure of using the "assassination" wording in my comment, but after thinking about it I felt the most neutral approach was to use the same wording they use in the article itself, in order to not add my own subjective opinion about this whole saga.

In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

I'd agree with you that once you decide it's worth to kill 100 civilians for one target, it's really hard to call it "assassination" at that point...

thomastjeffery
0 replies
1h15m

Also, can an AI commit a war crime?

"An AI" doesn't exist. What is being labeled "AI" here is a statistical model. A model can't do anything; it can only be used to sift data.

No matter where in the chain of actions you put a model, you can't offset human responsibility to that model. If you try, reasonable people will (hopefully) call you out on your bullshit.

There’s so much about this death machine AI I would like to know.

The death machine here is Israel's military. That's a group of people who don't get to hide behind the facade of "an AI told me". It's a group of people who need to be held responsible for naively using a statistical model to choose who they murder next.

randysalami
4 replies
3h3m

I wonder how accurate this technology really is, or if they care so little for the results and instead more for the optics of being seen as advanced. On one hand, it's scary to think this technology exists, but on the other, it might just be a pile of junk since the output is so biased. What's even scarier is that it's proof that people in power don't care about "correct"; they care about having a justification to confirm their biases. It's always been the case, but it's even more damning that this extends to AI. Previously, you were limited by how many humans can lie, but now you're limited by how fast your magic black box runs.

stevenwoo
2 replies
2h20m

It's unconfirmed who authorized it, but the food charity workers recently killed by Israeli bombing had a security person (death confirmed by family in the UK) who was unarmed but whose job was to clear the way by telling Israeli authorities where the charity team was going to be, so the chain of command knew who they were. One is naturally led to ask: who would authorize a targeted killing in this situation? The after photos show the missile went right through the roof of the car, ironically next to the food charity's visible logo on top of the car. The Israeli defense minister now claims it was a mistake, although if they had hit a real target it might have been acceptable under their rules of engagement, with 15-100 unrelated collateral deaths according to the investigation.

shmatt
0 replies
12m

War zones aren't as quiet and organized as you would imagine. More so when one side is disguised as regular civilians. All war zones also have people killed by friendly fire. I would assume friendly fire > killing Western charity workers > killing civilians, in order of importance to the military.

Yet still, even though it's the most important, friendly fire still happens.

skidd0
0 replies
2h56m

I think optics of being advanced aren't the main goal. Some form of "justification", no matter how flimsy, especially if it's hard to audit how the "AI" came to its conclusions, is the goal. Now anyone is a target. Similar to cops in the US "smelling weed" or dogs "signaling". It provides the means to justify any search, or in this case, any kill. The machine grinds away.

oliwarner
4 replies
2h52m

HN has a serious problem if factual technology stories cannot exist here because some people don't like the truth.

This should be advertised. The true price of AI is people using computers to make decisions no decent person would. It's not a feature, it's a war crime.

spxneo
1 replies
31m

I'm not sure why it's such a shock to many to see the censorship on HN. This isn't a public square.

We are subject to the whims and political views of whoever is aligned with, runs, manages, or holds a stake in YC, and to their policies and values.

oliwarner
0 replies
3m

I'm not shocked, I said it was a problem.

I think it takes a tiny number of flags to nuke a post, independent of its upvotes, so strong negative community opinions are always quick to kill things.

To restore it, mods have to step in, get involved, pick a "side".

I think the flagging criteria needs overhauling so popular, flagged posts only get taken down at the behest of a moderator. But that does mean divisive topics stay up longer.

For the nothing it's worth, I don't see this post as divisive. It's uncovering something ugly and partisan in nature, but a debate about whether or not an AI should be allowed to make these decisions needn't be partisan at all.

bitcharmer
1 replies
2h41m

This is not new and dang and the others are absolutely fine with posts getting gang-flagged in a matter of minutes. Just shows how impartial they are.

jakupovic
0 replies
1h29m

Complicit is the word you're looking for.

nerfbatplz
4 replies
3h6m

Already deleted, that was quick.

If we can’t trust AI to drive a car, how the hell can we trust it to pick who lives and who dies?

xdennis
1 replies
2h57m

That's a valid point, but a terrible example because AI cars are legal in many places.

oliwarner
0 replies
2h49m

And they are illegal [in many places] because we haven't had the right conversations. We need to codify solutions to the trolley problem so decisions in bad circumstances align with what we expect.

rabite
0 replies
1h30m

In all fairness, driving a car is a lot more complicated and full of dangerous edge cases than dropping objects or shooting anyone within a geofence.

OscarTheGrinch
0 replies
33m

"AI" in this case is probably mostly Oct 6 cell phone locations.

It is obvious that Israel has loosened their targeting requirements, this story points to their internal justifications. The first step in ending this conflict must be to reimpose these standards of self restraint.

goethes_kind
4 replies
2h3m

Israel's evil keeps taking me by surprise. I guess when people go down the path of dehumanization there are truly no limits to what they are ready to do.

But what is even sadder is that the supposedly morally superior western world is entirely bribed and blackmailed to stand behind Israel. And then you have countries like Germany where you get thrown in jail for being upset at Israel.

gryzzly
1 replies
23m

what do you mean "bribed and blackmailed"?

luketaylor
0 replies
7m

On AIPAC in the US:

1. “How the Israel lobby moved to quash rising dissent in Congress against Israel’s apartheid regime”

2. “Top Pro-Israel Group Offered Ocasio-Cortez $100,000 Campaign Cash”

3. “Senate Candidate in Michigan Says He Was Offered $20 Million to Challenge Tlaib”

[1]: https://theintercept.com/2023/11/27/israel-democrats-aipac-b...

[2]: https://www.huffpost.com/entry/ocasio-cortez-aipac-offer-con...

[3]: https://www.nytimes.com/2023/11/22/us/politics/hill-harper-r...

HDThoreaun
1 replies
47m

It's been pretty clear to me for a while now that Israel's long-term plan for the Palestinians is to expel them all. Starvation isn't a requirement for that, but it is probably the path of least resistance. I will say that it's happening a lot faster than I expected though; Israel is definitely taking advantage of the situation here.

KingMob
0 replies
14m

But in lieu of expulsion, it seems they're ok with starvation and mass murder as alternatives.

anjel
4 replies
2h9m

A rather opinionated site with no about page.

hindsightbias
3 replies
4h6m

"Because of the system, the targets never end."

The future is now.

ourguile
1 replies
3h58m

The purpose of a system is what it does. :(

supposemaybe
0 replies
3h48m

The AI system runs on the blood of Gaza children.

prpl
0 replies
3h31m

Endless scrolling feed

diyseguy
3 replies
3h13m

The new political excuse for genocide: wasn't me, the AI did it.

mistermann
1 replies
3h10m

Continuously throw enough plot twists and general stimulation at people and they'll never have the time to consider whether they're living in a simulation.

jakupovic
0 replies
1h36m

Interesting, how do we prove we don't live in a simulation or do we care enough to know?

supposemaybe
0 replies
3h9m

Or in the words of Shaggy…

“Saw you blowing up the children…”

“It wasn’t me.”

throw7
2 replies
2h12m

Why is this flagged?

Our premier AI geniuses were all squawking to Congress about the dangers of AI, and here we see that "they essentially treated the outputs of the AI machine “as if it were a human decision.”"

Sounds like you want to censor information that could hurt your bottom line.

jessepasley
0 replies
1h44m

It shows Israel in a bad light.

93po
0 replies
1h19m

HN, both its community and the moderators, flag posts that generate a lot of conflict in the comments. The comments on this are especially bad by HN standards, and therefore the flagging is in line with how the site is openly operated.

I am pro Palestine and not simping for Israel. I think visibility on Israel's actions matter, but HN is also very clearly not the appropriate website for a lot of politically involved news.

spxneo
2 replies
17m

The most disturbing part for me (going beyond Israel/Palestine conflict) is that modern war is scary:

- Weaponized financial trojan horses like crypto

- Weaponized chemical warfare through addictions

- Drone swarm attacks in Ukraine

- AI social-media-engineered outrage to change the public's perception

- Impartial, jingoistic mainstream war propaganda

- Censorship and manipulation of neutral views as immoral

- Weaponized AI software

Looks like a major escalation towards a total war of sorts.

surfingdino
0 replies
12m

War has always been scary. We are busy inventing new ways of killing each other and there is no sign of stopping.

bawolff
0 replies
6m

I'm sorry, you think this is new?

War is terrible. War has always been terrible. It was almost certainly worse in the past, but it still sucks now. Most of the things you mention were way worse 100 years ago.

Sure, AI didn't write the propaganda; humans did instead. The effect was the same.

smt88
2 replies
3h56m

I know many people won't read past the headline, but please try to.

This is the second paragraph:

"In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict."

yonisto
1 replies
15m

What can one do when Hamas has embedded itself in the civilian population? Why don't they come out and meet the Israeli army on the battlefield? This is no different from chemotherapy: in order for the body to survive, some healthy cells will die together with the cancerous ones. It is much better than the carpet bombing used by other nations.

KingMob
0 replies
9m

What can one do when criminals have embedded themselves in the civilian population? Why don't they come out and meet the police on the battlefield?

We wouldn't tolerate a SWAT team blowing up a hospital if the mafia had taken over the basement, I have no idea why you think this is acceptable.

It is much better than the carpet bombing used by other nations.

It is exactly like the carpet bombing used by other nations.

shmatt
2 replies
3h56m

I suggest everyone listen to the current season of the Serial podcast.

processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.

This is really no different than how the world was working in 2001 and choosing who to send to Gitmo and other more secretive prisons, or bombing their location

More than anything else it feels like just like in the corporate world, the engineers in the army are overselling the AI buzzword to do exactly what they were doing before it existed

If you use your paypal account to send money to an account identified as ISIS, you're going to get a visit from a 3 letter organization really quick. This sounds exactly like that from what the users are testifying to. Any decision to bomb or not bomb a location wasn't up to the AI, but to humans

shmatt
0 replies
8m

Australia, Canada, Denmark, France, Germany, and Norway were heavily involved in the war on terror, bombing Afghanistan but also arresting "suspected" people of their own.

fullstick
2 replies
2h12m

The name of Lavender makes this so surreal to me for some reason. I'm of the opinion that algorithms shouldn't determine who lives and dies, but it's so common even outside of war.

nemo44x
1 replies
2h0m

I think the algorithm, in this case, makes a suggestion and then a human evaluates it. The article claims they've only looked at the sex of the target (kill if male) but also claims 90% effectiveness. I'm curious whether 90% is a good number or not. War will always have collateral damage, but if technology can help limit that beyond what only a human could do, then I'd say it's a net positive. I think the massive efficiencies the algorithm brings to picking targets are a bit frightening (nowhere to run or hide now), but there's no real turning back.

People thought this way about the machine gun, the armored tank, the atom bomb. But once the genie is out there's no putting it back in.

As an aside, I think this is a good example of how humans and AI will work together to bring efficiency to whatever tasks need to be accomplished. There's a lot of fear of AI taking jobs, but I think it was Peter Thiel who said years ago that future AI would work side by side humans to accomplish tasks. Here we are.
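For a sense of scale on that 90% (a back-of-the-envelope sketch using the article's own 37,000 figure and claimed 10% error rate, not any official statistic):

```python
# Base-rate sketch: even a "90% accurate" marker applied at this scale
# implies thousands of misidentified people. Figures are the article's
# claims; only the arithmetic is mine.
marked = 37_000
error_rate = 0.10

misidentified = int(marked * error_rate)
print(f"~{misidentified:,} people wrongly marked")  # ~3,700
```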

tokai
0 replies
1h54m

During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing
binarymax
0 replies
32m

I really want to support this, but the website is pretty bad. Blinding colors, poor and sparse information, and links to shop/donate without a notion as to what or who the org is.

supposemaybe
1 replies
3h42m

Is the AI the one deciding to let all the children of Gaza starve? I’d like to know how far this death machine goes.

rowanseymour
1 replies
3h52m

As bad as this story makes the Israelis sound, it still reads like ass-covering to make it sound like they were at least trying to kill militants. It's been clear from the start that they've been targeting journalists, medical staff and anyone involved in aid distribution, with the goal of rendering life in Gaza impossible.

rozap
0 replies
21m

Yeah, this really seems like more of a weapon of propaganda directed at Israelis. If they didn't want people to know about it, we probably wouldn't know about it. The fact that we're talking about it is probably not an accident, and I guess the play here would be to convince Israelis that the army is technologically advanced and they know what they're doing, so don't question it. But AI or not, they were going to commit genocide and violate every international humanitarian law on the book. For the people that still believe the genocide is justified, though, I think this probably improves the optics.

realo
1 replies
2h3m

How is this not a genocide?

How are those "acceptable" collateral deaths not war crimes?

Stevvo
0 replies
18m

It is and they are.

photochemsyn
1 replies
3h5m

The difference between previously revealed 'Gospel' and this 'Lavender' is revealed here:

"The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list."

It's one thing to use these systems to mine data on human populations for who might be in the market for a new laptop, so they can be targeted with advertisements - it's quite different to target people with bombs and drones based on this technology.

r00fus
0 replies
2h41m

The link between targeting - whether for advertisements or for death - is quite disturbing.

Both use personal metadata, and both can horribly get it wrong.

ceejayoz
0 replies
3h56m

Shades of https://www.nytimes.com/2012/05/29/world/obamas-leadership-i....

It is also because Mr. Obama embraced a disputed method for counting civilian casualties that did little to box him in. It in effect counts all military-age males in a strike zone as combatants, according to several administration officials, unless there is explicit intelligence posthumously proving them innocent.

Counterterrorism officials insist this approach is one of simple logic: people in an area of known terrorist activity, or found with a top Qaeda operative, are probably up to no good. “Al Qaeda is an insular, paranoid organization — innocent neighbors don’t hitchhike rides in the back of trucks headed for the border with guns and bombs,” said one official, who requested anonymity to speak about what is still a classified program.
jarenmf
1 replies
2h2m

Damn, some people really don't want anyone to see this

jauntywundrkind
0 replies
1h48m

So frustrating how easy it is for those of a certain zeal to wipe off mention of that which they find inconvenient.

There could hardly be a more pertinent issue for tech right now. Just sweepingly wild shit that we should be grappling with.

ulnarkressty
0 replies
7m

As a backer on the original Oculus kickstarter, I have such a sinking feeling in my stomach every time this comes up. My money went to enable Luckey to achieve this and I hate myself for it.

giantg2
1 replies
25m

"Lavender learns to identify characteristics of known Hamas and PIJ operatives, whose information was fed to the machine as training data, and then to locate these same characteristics — also called “features” — among the general population, the sources explained. An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination."

Hamas combatants like fried chicken, beer, and women. I also like these things. I can't possibly see anything wrong with this system...
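
The joke points at a real failure mode: if the "incriminating features" are also common in the general population, a simple count-the-features score will flag large numbers of uninvolved people. A toy sketch of that mechanism follows; the features, weights, and threshold are invented for illustration, not taken from the article.

```python
# Toy feature-count scorer -- the features, weights, and threshold here are
# invented for illustration, not taken from the article.

SUSPICIOUS_FEATURES = {
    "in_group_chat_with_flagged_person": 2,
    "changed_phone_recently": 1,
    "changed_address_recently": 1,
    "male": 1,
}

def score(person_features: set) -> int:
    """Sum the weights of whatever listed 'features' a person happens to share."""
    return sum(w for f, w in SUSPICIOUS_FEATURES.items() if f in person_features)

THRESHOLD = 3  # arbitrary cut-off

# An uninvolved person can cross the threshold purely by coincidence:
bystander = {"male", "changed_phone_recently", "in_group_chat_with_flagged_person"}
print(score(bystander), score(bystander) >= THRESHOLD)  # 4 True
```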

amarcheschi
0 replies
11m

This literally looks like any abhorrent AI "predicting" system such as the ones we've heard a ton about in the past, with the same mistakes (I wonder if they're really mistakes, bugs, or, ahem... features).

d--b
1 replies
3h17m

`public bool isSomehowAssociatedWithHamas() { return true; }`

AI

Yeah, yeah guidelines and all.

stevenwoo
0 replies
3h13m

It’s slightly more complicated: a) looks like a male, b) lives here, c) send an unguided munition if fewer than 15 or 100 other non-targets are present, depending upon the value of the target.
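
A rough sketch of the rule as paraphrased above. The male check and the 15/100 collateral thresholds come from that paraphrase; the names, types, and structure are invented for illustration only.

```python
# Sketch of the decision rule as paraphrased in the comment above. The male
# check and the 15 / 100 collateral thresholds come from that paraphrase;
# the names and structure are invented for illustration.

from dataclasses import dataclass

@dataclass
class Candidate:
    appears_male: bool
    located_at_known_address: bool
    high_value: bool
    estimated_noncombatants_nearby: int

def authorize_unguided_strike(c: Candidate) -> bool:
    if not (c.appears_male and c.located_at_known_address):
        return False
    collateral_limit = 100 if c.high_value else 15
    return c.estimated_noncombatants_nearby < collateral_limit

# A low-value candidate with 14 estimated noncombatants nearby is approved:
print(authorize_unguided_strike(Candidate(True, True, False, 14)))  # True
```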

tombert
0 replies
3h11m

Had a minor panic; I got to a final stage of an interview for a company called "Lavender AI". They were doing email automation stuff, but seeing the noun "Lavender" and "AI" in combination with "bombing" made me think that they might have been part of something horrible.

ETA:

I wonder if this is going to ruin their SEO...it might be worth a rebrand.

tokai
0 replies
2h4m

While humans select these features at first, the commander continues, over time the machine will come to identify features on its own. This, he says, can enable militaries to create “tens of thousands of targets,”

So overfitting or hallucinations as a feature. Scary.

tivert
0 replies
3h2m

The VCs promised a utopia of flying cars and abundance, but all we got was more inequality and these AI death machines.

supposemaybe
0 replies
2h17m

Lavender: One person’s flower, another person’s AI death machine.

skilled
0 replies
21m

I am more curious about the “compute” of an AI system like this. It must be extremely complicated to do real-time video feed auditing and classification of targets, etc.

How is this even possible to do without having the system make a lot of mistakes? As much AI talk as there is on HN these days, I would have expected to see an article about this kind of military-grade capability.

Are there any resources I can look at? Maybe someone here can talk about it from experience.

rvcdbn
0 replies
3h7m

Anyone who knowingly developed this should be tried and held personally responsible.

rich_sasha
0 replies
1h2m

I don't like anything about this war, but in a way, I think concerns of AI in warfare are, at this stage, overblown. I'm more concerned about the humans doing the shooting.

Let's face it, in any war, civilians are really screwed. It's true here, it was true in Afghanistan or Vietnam or WWII. They get shot at, they get bombed, by accident or not, they get displaced. Milosevic in Serbia didn't need an AI to commit genocide.

The real issue to me is what the belligerents are OK with. If they are ok killing people on flimsy intelligence, I don't see much difference between perfunctory human analysis and a crappy AI. Are we saying that somehow Hamas gets some brownie points for not using an AI?

resource_waste
0 replies
19m

I'm probably pro-Israel because I'm a realpolitik American who wants what's in America's best interest. (But I don't feel strongly either way.)

Just watched someone get their post deleted for criticizing Israel's online PR/astroturfing.

Israel's ability to shape online discussion has left a bad taste in my mouth. Trust is insanely low, I think the US should get a real military base in Israel in exchange for our effort. If the US gets nothing for their support, I'd be disgusted.

notduncansmith
0 replies
3h47m

“This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
nahuel0x
0 replies
29m

Using the latest advances in technology and computing to plan and execute ethnic cleansing and genocide. Sound familiar? If not, check out "IBM and the Holocaust".

mzs
0 replies
16m

… normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. …

mrs6969
0 replies
11m

No human being should accept this. If it is happening to the Palestinian people, it can happen to any other country in the world. Israel is committing genocide in front of the world. Fifty years from now, some people will say they are sorry for this one while committing another genocide.

Be ready to be targeted by AI, from another state, in another war.

mistermann
0 replies
3h2m

"This will get flagged to death in minutes as what happens to all mentions of israel atrocities here" (now dead)

It may be worth noting that there is at least one notification service out there to draw attention to such posts. Joel Spolsky even mentioned such a service existing back when Stack Overflow was first being built.

Human coordination is arguably the most powerful force in existence, especially when coordinating to do certain things.

Also interesting: it would seem(!) that once an article is flagged, it isn't taken down but simply disappears from the articles list. This is quite interesting in a wide variety of ways if you think about it from a global cause and effect perspective, and other perspectives[1]!

Luckily, we can rest assured that all is probably well.

[1] https://plato.stanford.edu/entries/perception-problem/

mirekrusin
0 replies
3m

Why would active military officers tell a story about careless use of tech and bombing families with children while they sleep? Feels like propaganda more than anything else.

It's plausible they use AI. It's also plausible they don't that much.

It's plausible it has high false positive rate. It's also plausible it has multiple layers of crosschecks and has very high accuracy.

It's plausible it is used in rush without any doublechecks at all. It's also plausible it's used after other intelligence as verification only.

It's plausible that targets are easier to locate at home. It's also plausible they're not, i.e. it may be easier to locate them around listed, known operation buildings, in tracked vehicles, while a known, tracked mobile phone is in use, etc.

It's plausible that half a dozen active officers want to share this information. It's also plausible that only a narrow group of people has access to it. It's plausible they would not engage in activity that could be classified as treason. It's also plausible most personnel simply don't know the origin of orders up the chain, just the immediate link.

It's plausible it's real information. It's also plausible it's fake, or even an AI-generated fake.

me_again
0 replies
3h6m

"zero-error policy" as described here is a remarkable euphemism. You might hope that the policy is not to make any errors. In fact the policy is not to acknowledge that errors can occur!

majikaja
0 replies
55m

Will America fight at Israel's bidding if it starts a war with Iran, thus opening a new front alongside the war against Russia?

majikaja
0 replies
2h30m

They forgot to tune the AI to only kill non-whites

kazmer_ak
0 replies
2h8m

Turns out, it, too, was just 1000 dudes in India watching camera footage and clicking things.

dhanna
0 replies
1h55m

The use of these AI systems is the biggest evidence of the genocidal rules of engagement from the Israelis.

contemporary343
0 replies
53m

I’m really not sure why this got flagged. It seemed like a well sourced and technology-focused article. Independent of this particular conflict, such automated decision making has long been viewed as inevitable. If even a small fraction of what is being reported is accurate it is extraordinarily disturbing.

camdenlock
0 replies
1h31m

Our core values are a commitment to equity
algem
0 replies
2h2m

This is a horrific use of AI.

aaomidi
0 replies
1h20m

I wonder if the WCK assassinations were related to this.

Stevvo
0 replies
29m

First time I've really felt like I'm living in a dystopian science fiction.

NickC25
0 replies
2h3m

This shouldn't be flagged.

FerretFred
0 replies
2h12m

Next step is for similar AI systems to decide when to start a war, or not ...

DarkByte
0 replies
4h1m

What a laughable attempt at deflecting the blame for their blatant genocide onto "an AI system" when social media is full of statements of intent and videos of those intentions being acted on.

Their evil is not in their military's use of the weapon system itself but in their intentional labelling of civilians and aid workers as threats for that AI system to act upon.

Also relieving the person pulling the trigger of any responsibility because an "AI system" is involved is laughable. War criminals.

Ancapistani
0 replies
3h6m

the system makes what are regarded as “errors” in approximately 10 percent of cases

This statement means little without knowing the accuracy of a human doing the same job.

Without that information this is an indictment of military operational procedures, not of AI.