
AI and Mass Spying

bonyt
62 replies
1d1h

I think another aspect of this is mass criminal law enforcement enabled by AI.

Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime, and that this will limit the effective scope of the law. Prosecutorial discretion.

Putting aside for the moment the (very serious) injustice that comes with the inequitable use of prosecutorial discretion, let's imagine a world without this discretion. Perhaps it's contrived, but one could imagine AI making it at least possible. Even by the book as it's currently written, is it a better world?

Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant to be signed by an AI judge to approve the warrant and draft an opinion. One could argue that due process is had, and a record is available to the public showing that there was in fact probable cause for further investigation or even arrest.

Maybe a ticket just pops out of the wall like in Demolition Man, but listing in writing clearly articulated probable cause and well-presented evidence.

Investigating and prosecuting silly examples suddenly becomes possible. A CCTV camera catches someone finding a $20 bill on the street, and finds that they didn't report it on their tax return. The myriad of ways one can violate the CFAA. A passing mention of music piracy on a subway train can become an investigation and prosecution. Dilated pupils and a staggering gait could support a drug investigation. Heck, jaywalking tickets given out as though by speed camera. Who cares if the juice wasn't worth the squeeze when it's a cheap AI doing the squeezing.

Is this a better world, or have we just all subjected ourselves to a life hyper-analyzed by a motivated prosecutor?

Turning back in the general direction of reality, I'm aware that arguing "if we enforced all of our laws, it would be chaos" is more an indictment of our criminal justice system than it is of AI. I think that AI gives us a lens to imagine a world where we actually do that, however. And maybe thinking about it will help us build a better system.

otteromkram
13 replies
1d1h

If it increases ticket issuance for passenger vehicle noise violations (e.g. "sport" exhausts, booming stereo systems, motorcycles), I'm down.

namaria
12 replies
1d1h

"If it hurts people I hate I accept"

- Every endorsement of authoritarian rule ever

pmg102
5 replies
1d1h

Feels pretty legit though. My freedom-from is impacted by other people's freedom-to: by curtailing their freedom, mine is expanded. Sure they won't like it - but I don't like it the other way round either.

AlexandrB
2 replies
1d

This doesn't add up. At best your overall freedom remains the same. You gain quiet, you lose the freedom to make noise yourself. Seems like a net-negative to me.

Consider how little freedom you would have if laws were enforced to the lowest common denominator of what people find acceptable.

anigbrowl
1 replies
21h35m

I can go into the countryside and make noise all day. I don't see that there's a pre-existing freedom to inflict loud noises on my neighbors for no useful purpose.

namaria
0 replies
34m

You most definitely cannot disturb wildlife or rural communities with noise.

0134340
1 replies
23h46m

I'd argue that if we want to support individual growth and creativity, freedom-to should have higher priority than freedom-from, which, consciously or not, seems to be the traditional default in the US, perhaps due to its culture of supporting innovation and its break-away past. I believe some refer to these as positive and negative freedoms, respectively.

zdragnar
0 replies
22h20m

This is also why a number of people truly revolt against the idea of higher density living. If the only way to have your freedom-from is to be free from other people, then you move away from other people.

I've watched it play out on my mother-in-law's street. What was once a quiet dead end street is now a noisy, heavily trafficked road because a large apartment building was put up at the end.

The "freedom-to" people have significantly decreased her quality of life, blasting music as they walk or drive by at all hours, along with a litany of other complaints that range from anti-social to outright illegal behavior. Even setting aside the illegal stuff, she is significantly less happy living where she is now.

okasaki
2 replies
1d1h

Effectively enforcing laws we agreed to is hardly authoritarian.

pixl97
0 replies
20h26m

You'd disagree about 10 seconds after they did...

If suddenly you could be effectively found and prosecuted under every single law that exists, there is a near-100% probability that you'd burn the government to the ground within a week.

There are so many laws no one can even tell you how many you are subject to at any given time at any given location.

namaria
0 replies
40m

Automatically enforcing all the laws is vastly different from the effective enforcement of laws "we agreed to".

The full body of legislation is riddled with contradiction, inconsistency, and ambiguity, and the pretense that "legislated upon = fair" is at best a schoolroom fantasy.

anigbrowl
1 replies
21h41m

False equivalence. GP complained about a specific behavior, not about specific people.

namaria
0 replies
40m

Yes it's always some reasonably specific behavior that justifies the harsh new rules.

newscracker
0 replies
1d

Reminds me of this quote attributed to a past Peruvian president and general, Benavides:

“For my friends, everything; for my enemies, the law.”

trinsic2
6 replies
1d1h

Yeah, this is a good point. If justice is executed by systems rather than people (the end result of this scenario), we lose the ability to challenge the process or the people involved in so many ways. It will make challenging how the law is executed almost impossible, because there will be no person there to hold responsible.

bonyt
3 replies
1d1h

I think that’s a good reason to question whether this would be due process.

Why do we have due process? One key reason is that it gives people the opportunity to be heard. One could argue that being heard by an AI is no different from being heard by a human, just more efficient.

But why do people want the opportunity to be heard? It’s partly the obvious, to have a chance to defend oneself against unjust exercises of power, and of course against simple error. But it’s also so that one can feel heard and not powerless. If the exercise of justice requires either brutal force or broad consent, giving people the feeling of being heard and able to defend themselves encourages broad consent.

Being heard by an AI, then, has a brutal defect: it doesn’t make people feel heard. A big part of this may come from the idea that an AI cannot be held accountable if it is wrong or if it is acting unfairly.

Justice, then, becomes a force of nature. I think we like to pretend justice is a force of nature anyway, but it’s really not. It’s man-made.

zbyte64
1 replies
1d

"it doesn't make people feel heard" isn't a real emotion, it includes a judgement about the AI. According to "Nonviolent Communication" p235; "unheard" speaks towards the feelings "sad, hostile, frustrated" and the needs "understanding" & "consideration". Everyone agrees AI would be more efficient, but people are concerned that the AI will not be able to make contextual considerations based on a shared understanding of what it's like to live a human life.

bonyt
0 replies
23h55m

That's true! I suspect it will be difficult to convince people that an AI can, as you suggest, make contextual considerations based on a shared understanding of what it's like to live a human life.

fn-mote
0 replies
23h15m

Being heard by an AI then has a brutal defect, it doesn’t make people feel heard.

This is a hypothesis.

I would say that the consumers of now-unsexed "AI" sex-chat-bots (Replika) felt differently. So there are actually people who feel heard talking to an AI. Who knows, if it gets good enough maybe more of us would feel that way.

tempsy
1 replies
23h12m

It's not that "justice is executed by systems", it's that possible crimes will be flagged by AI systems for humans to then review.

e.g. AI will analyze stock trades for the SEC and surface likely insider trading. Pretty sure they already use tools like Palantir to do exactly this; it's just that advanced AI will supercharge this even further.
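(As a toy illustration of "flag for humans to review", and not any real SEC pipeline: score trades by how unusual their size and timing are relative to a market-moving announcement, and queue the outliers. All names and thresholds here are invented.)

    from dataclasses import dataclass

    @dataclass
    class Trade:
        trader: str
        volume: float          # shares traded
        days_before_news: int  # gap between the trade and the announcement

    def flag_for_review(trades: list[Trade], window: int = 3,
                        factor: float = 10.0) -> list[Trade]:
        """Queue trades that are unusually large and land just before news."""
        typical = sorted(t.volume for t in trades)[len(trades) // 2]  # median
        return [t for t in trades
                if t.days_before_news <= window and t.volume >= factor * typical]

The point is only that the machine produces a short review queue; a human still makes the call.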

pixl97
0 replies
20h48m

it's that possible crimes will be flagged by AI systems for humans to then review.

Eh, this is problematic for a number of reasons that need to be addressed when adding any component that can increase the workload for said humans. It will cause people to take shortcuts, and those shortcuts commonly lead to the groups least able to represent and defend themselves legally taking the brunt of the prosecutions.

paganel
4 replies
23h36m

At that point some people will physically revolt; I know I will. We’re not that far away from said physical AI-related revolt anyway, and I do feel for the computer programmers here who will be the target of that physical violence; hopefully they knew what they were getting into.

jstarfish
2 replies
23h14m

Ha. You'd like to think so, but it's going to be awfully hard to coordinate resistance when the mass spying sweeps everyone up in a keyword-matching dragnet before the execution phase. This is the problem with every outgroup being labelled "terrorists."

Sabotage will be the name of the game at that point. Find ways to quietly confuse, poison, overwhelm and undermine the system without attracting the attention of the monitoring apparatus.

paganel
1 replies
23h2m

I get your point, I think along those lines quite often myself.

As for the sabotage part, bad input data that does not accurately get labeled as such until way too late in the “AI learning cycle” is, I think, the way to go. Lots and lots of such bad input data. How we would get to that point, I don’t know yet, but it’s a valid option going forward.

jstarfish
0 replies
18h26m

How we would get to that point, that I don’t know yet, but it’s a valid option going forward.

Chaos engineering. As a modern example, all this gender identity stuff wreaks absolute havoc on credit bureau databases.

Tomorrow, we'll have people running around in fursuits to avoid facial recognition. After that, who knows.

Der_Einzige
0 replies
19h11m

Don’t worry, stuff like this is why we have the 2A here in the USA. Sounds like it’s time for AI programmers to get their concealed carry licenses. Of course, they will be the first users of smart guns, so don’t bother trying to steal their pistol out of their holsters.

okasaki
4 replies
1d1h

Is this a better world,

If the same monitoring is present on buses and private planes, homeless hostels and mega-mansions then it absolutely is better.

bonyt
1 replies
1d

"The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread."

okasaki
0 replies
23h2m

I mean, presumably the AI wouldn't just be monitoring people sleeping under bridges, but would also be able to effectively cut through tax evasion bullshit, insider trading, bribery, etc.

stronglikedan
0 replies
1d

Private property? Nah, nothing better about that.

AlexandrB
0 replies
1d

You're describing a hypothetical world that will never exist. Basically: only if we solve all corruption and inequality in enforcement between economic/power classes will all-pervasive surveillance be a net benefit.

It's like pondering hypotheticals about what would happen if we lived in Middle Earth.

erikerikson
4 replies
19h3m

Yes. With properly developed AI, rather than penalizing speeding, which most of us do and which is only a proxy for harmful outcomes and inefficiencies, we could penalize reckless behaviors such as following other vehicles too closely, aggressive weaving, and other factors that are tightly correlated with the negative outcomes we care to reduce (e.g. loss of life, property damage). The systems could also warn people about their behavior and guide them in ways that would increase everyone's benefit. Of course this circumstance will probably go away with self-directing cars (which fall into the "do the right thing by default" bucket), but the point stands: laws can be better formulated to focus on increasing the probabilities of desirable outcomes (e.g. harm reduction, efficiency, effectiveness), be embodied and delivered in the moment (research is required on means of doing so that don't exacerbate problems), and carry with them a beneficial component (i.e. understanding).

StanislavPetrov
3 replies
17h54m

desirable outcomes (i.e. harm reduction, efficiency, effectiveness)

Unfortunately, different people have different definitions of "harm" and "effectiveness". What one person considers a "positive increase in behavior" another might consider a grievous violation of their freedom and personal autonomy. For example, there is an ongoing debate about compelled speech. Some people view it as positive and desirable to use the force of law to compel people to refer to others as they wish to be referred to, while others strongly support the freedom to speak freely, even if others are offended. Who gets to program the AI with their definition of positivity in this case?

A free society demands a somewhat narrowly tailored set of laws that govern behavior (especially interpersonal behavior). An ever-present AI that monitors us all the time and tries to steer (or worse, compel with the force of law) all of our daily behaviors is the opposite of freedom, it is the worst kind of totalitarianism.

erikerikson
2 replies
17h30m

We agree that defining such terms involves trade-offs, yet the perfect should not be the enemy of the better.

StanislavPetrov
1 replies
10h37m

Certainly the perfect should not be the enemy of the good. But the bad should be the enemy of the good. The very core of freedom is the ability to have your own thoughts, your own value system, and your own autonomy. In a free society, laws exist so that individuals are able to enjoy their own thoughts, values, and autonomy while being constrained from harming others. Obviously, there is a balance to strike (which is not always easy to determine) between law and freedom. We see this on display every day in our society. You need look no further than the crisis in San Francisco (and many other US cities) between the right of a mentally ill individual to sleep and defecate on the sidewalk and the right of society to pass laws to prevent this activity.

The conversation changes when you are talking about prescribing a set of behaviors that are universally considered "good" and that are pushed (and possibly demanded) by an ever-present AI that is constantly looking over your shoulder and judging your behavior (and possibly thoughts) by a preset behavioral standard that may or may not match your own preferences. This is totalitarianism beyond anything Orwell ever imagined. What you consider good and desirable, someone else considers bad and despicable. That is the essence of freedom. In a free society, the law exists (or should exist) only to stop you two from hitting each other over the head or engaging in other acts of overt violence and aggression, not to attempt to brainwash and coerce one of you into falling into line.

erikerikson
0 replies
9h54m

We agree that the bad and the good are enemies, or so at least the bad would like you to think. The good might be convinced the bad has good points that need refining, growth, and improvement. I'm fine with those disagreeing.

I think what you're saying is that it's hard to mediate between everyone, which is true. Perhaps you are also saying that the implication of a standard of correctness is inherently totalitarian. It seems to me you weakened that claim by admitting there are things that should be universally barred in free societies. Violence was your reference, but murder might be an even easier case. Easier yet: that breast cancer is bad? We make allowances for boxing and war, but broad agreement can be found in society, and across societies, by careful anthropologists.

However, it seems you project onto me (or perhaps the AI) a "Highlander hypothesis": that there can be only one correctness, or even any single notion of correct, within the system. Such a system could operate simply on what appears to be, with strings of evidence for each such description. As you note, beyond a small set of mostly-agreed-to matters we are more diverse, and there are spectrums spanning scoped policies (say, by public and private institutions) all the way to individual relationship agreements custom fit to two people. It is, in fact, the nature of a free society to allow us such diversity and self-selection of the rules we apply to ourselves (or not). An ever-present AI could mediate compatibilities, translate paradigms to reduce misunderstanding or adverse outcomes (as expected by the system over the involved parties), and generally scale the social knowing and selection of one another. It could provide a guide to navigating life, and an education for our self-knowing and our choosing of our participation more broadly. The notion there isn't to define correctness so much as to see what is, and to facilitate self-selection of individual correctnesses based on our life choices and expressed preferences.

To be honest, in closing: this has dipped into some idealisms, and I don't mean to suggest that such outcomes are probable.

kenjackson
3 replies
22h49m

Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime,

I think this depends on the law. For jaywalking, sure. For murder and robbery probably less so. And law enforcement resources seem scarce on all of them.

whelp_24
2 replies
22h9m

Murder and robbery too. Those crimes are just worth investigating.

pixl97
1 replies
20h41m

The problem here is that this is not how law enforcement, as a bureaucracy, actually works in the field.

https://www.kxan.com/news/national-news/traffic-tickets-can-...

We counted the number of days judges waited before suspending a driver’s license. Then, we looked at whether the city was experiencing a revenue shortfall. We found that judges suspend licenses faster when their cities need more money. The effect was pretty large: A 1% decrease in revenue caused licenses to be suspended three days faster.

So what typically happens is that these AI systems are sold on catching murderers, but at the end of the day they are revenue-generation systems for tickets. And then those systems get stuck in places where a smaller percentage of the population can afford the lawyers that would turn said ticketing systems into cost centers.

whelp_24
0 replies
18h45m

Oh, I definitely wasn't arguing for AI enforcement. Not even a little. I was just saying that laws are written with the assumption that enforcement takes resources.

yterdy
2 replies
1d1h

The software that already exists along these lines exhibits bias against marginalized groups. I have no trouble foreseeing a filter put on the end of the spigot that exempts certain people from the inconvenience of such surveillance. Might need a new law (it'll get passed).

whythre
1 replies
23h4m

Sounds like the devil is in the details. Often the AI seems to struggle with darker skin… are you suggesting we sift who can be monitored/prosecuted based on skin darkness? That sounds like a mess to try to enshrine in law.

Strong (and unhealthy) biases already exist when using this tech, but I am not sure that is the lever to pull that will fix the problem.

yterdy
0 replies
2h41m

You know that's not what I was suggesting. I'm saying that, if precedent is anything to go by, companies will be perfectly happy extending the paradigm established with sentencing software to anyone who can't pay or leverage their connections. If we continue down this path, tomorrow's just today, but worse, and more. (Please try to have a more rational understanding of today, tomorrow.)

kafrofrite
2 replies
1d

IIRC, [1] mentions a few examples of AI that exhibited the same bias that is currently present in the judicial system, banks, etc.

[1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction

dorchadas
1 replies
23h8m

This is honestly what scares me the most. Our biases are built into AI, but we pretend they're not. People will say, "Well, it was the algorithm/AI, so we can't change it." Which is just awful and should scare the shit out of everyone. There was a book [0] written almost fifty years ago that predicted this. I still haven't read it, but really need to. The author claims it made him a pariah among other AI researchers at the time.

[0] https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reaso...

pixl97
0 replies
22h53m

https://en.wikipedia.org/wiki/Computers_Don%27t_Argue, while not about AI directly and supposedly satirical, really captures how the system works.

jodrellblank
2 replies
1d

"Heck, jaywalking tickets given out as though by speed camera."

This has been a thing since 2017: https://futurism.com/facial-recognition-china-social-credit

- "Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city."

- "If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering the system with mobile carriers, so that offenders receive a text message with a fine as soon as they are caught."

mattnewton
1 replies
1d

There are lots of false positives too, like this case where a woman whose face appeared in a printed advertisement on the side of a bus was flagged for jaywalking. https://www.engadget.com/2018-11-22-chinese-facial-recogniti...

flemhans
0 replies
23h22m

Just checking ChatGPT out of interest:

Top Left Panel: This panel shows the pedestrian crossing with no visible jaywalking. The crossing stripes are clear, and there are no pedestrians on them.

Top Center Panel: Similar to the top left, it shows the crossing, and there is no evidence of jaywalking.

Top Right Panel: This panel is mostly obscured by an overlaid image of a person's face, making it impossible to determine if there is any jaywalking.

Bottom Left Panel: It is difficult to discern specific details because of the low resolution and the angle of the shot. The red text overlays may be covering some parts of the scene, but from what is visible, there do not appear to be any individuals on the crossing.

Bottom Right Panel: This panel contains text and does not provide a clear view of the pedestrian crossing or any individuals that might be jaywalking.

JCharante
2 replies
1d1h

I don't think it'd be chaos. I think the laws would be adjusted.

digging
0 replies
22h45m

I think that's an optimistic view, but even if it's right, it will be years-to-decades of semi-chaos before the laws are updated appropriately.

bonyt
0 replies
1d1h

This is fair. I just wonder if we're getting to the point where we should be talking about how they would be adjusted.

perihelions
1 replies
23h38m

An alternative possibility is that society might decay to the point where future people choose this kind of dystopia. Imagine a fully automated, post-employment world gone horribly wrong, where the majority of society is destitute, aimless, opiate-addicted. No UBI utopia of philosophers and artists; just a gradual, Rust Belt-like decline that gets worse and worse, no brakes at the bottom of the hill. Not knowing what else to do, the "survivors" might choose this kind of nuclear approach: automate away the panopticons, the prisons, the segregation of failed society. Eloi and Morlocks. Bay Area tech workers and Bay Area tent cities. We haven't done any better in the past, so why should we expect to do better in the future, when our "tools" of social control become more efficient, more potent? When we can de-empathize more easily than ever, through the emotional distance of AI intermediaries?

pixl97
0 replies
22h58m

Oh boy, real life Manna

https://marshallbrain.com/manna1

throwaway290
0 replies
1d

You miss the part where people who get access to stronger AI can similarly use it to improve their odds of not being found, or of getting better outcomes, while the poor guy gets fined over AI hallucinations and doesn't have the money to get to a human, as if the court were now one big Google support line.

theGnuMe
0 replies
1d

So the way out of this is that you have the constitutional right to confront your accuser in court. When you are accused by a piece of software, that generally means they have to disclose the source code and explain how it came to its answers.

Not many people have exercised this right with respect to DUI breathalyzers, but it exists and was affirmed by the Supreme Court. And it will also apply to AI.

n8cpdx
0 replies
22h34m

In democracies at least, the law can be changed to reflect this new reality. Laws that don’t need to be enforced and are only around to enable pretextual stops can be dropped if direct enforcement is possible.

There are plenty of crimes where 100% enforcement is highly desirable: pickpocketing, carjacking, (arguably) graffiti, murder, reckless and impaired driving, to name a few.

Ultimately, in situations with near 100% enforcement, you shouldn’t actually need much punishment because people learn not to do those things. And when there is punishment, it doesn’t need to be severe.

Deterrence theory is an interesting field of study; one source among many: https://journals.sagepub.com/doi/full/10.1177/14773708211072...

lordnacho
0 replies
1d1h

This is a good point; it reminds me of how VAR has come into football. Before VAR, fewer penalties were awarded. Now that referees have an official camera they can rely on, they can enforce the rules exactly as written, and it changes the game.

kjkjadksj
0 replies
23h34m

You don’t need AI for that. It was probably possible to do something like that when personal computers first came out.

dist-epoch
0 replies
1d

Or maybe if such a thing is applied for real it will lead to the elimination of bullshit laws (jaywalking, ...), since suddenly 10% of the population would be fined/incarcerated/...

Zenst
0 replies
1d

The whole trend of automation and overzealous interpretations, with less leeway and common sense, has, as we have seen, brought many an automated traffic/parking ticket into question.

Applying that to many walks of life, say farming, could well bring chaos and a whole new interpretation to the song: "Old McDonald had a farm, AI AI oh". It's gone, as McDonald is in jail for violating numerous permit, environmental, and agricultural regulations, with produce crossing state lines deeming it a more serious crime, as he got buried in automated red tape.

DebtDeflation
0 replies
1d1h

Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant to be signed by an AI judge to approve the warrant and draft an opinion.

Or the AI just sends a text message to all the cops in the area saying "this person has committed a crime". Like this case where cameras read license plates, check to see if the car is stolen, and then text nearby cops. At least when it works and doesn't flag innocent people like in the below case:

https://www.youtube.com/watch?v=GUvZlEg8c8c

somenameforme
58 replies
1d1h

This is a political problem, not a technological one. The USSR (alongside Germany and others) managed effective at-scale spying with primitive technology: paperwork for everything and every movement, informants, audio surveillance, and so on. The reason such things did not come to places like the US in the same way is not that we were incapable of them, but that there was no political interest in it.

And when one looks back at the past we've banned things people would never have imagined bannable. Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?

Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence. The problem is not that the technology exists, but that there is 0 political interest in curtailing it, and we've a 'democracy' where the will of the people matters very little in terms of what legislation gets passed.

petsfed
17 replies
21h48m

You and I are in agreement that the surveillance needs to stop, but I think we differ on how to explain the problem. My explanation follows, but note that it's not directed at you.

At its peak, the KGB employed ~500,000 people directly, with untold more employed as informants.

The FBI currently employs ~35,000 people. What if I told you that the FBI could reach the KGB's peak level of reach, without meaningfully increasing its headcount? Would that make a difference?

The technology takes away the cost of the surveillance, which used to be the guardrail. That fundamentally changes the "political" calculus.

The fact that computers in 1945 were prohibitively expensive and required industrial logistics has literally zero bearing on the fact that today most of us have several on our person at all times. Nobody denies that changes to computer manufacturing technologies fundamentally changed the role the computer has in our daily lives. Certainly, it was theoretically possible to put a computer in every household in 1945, but we lacked the "political" will to do so. It does not follow that because historically computers were not a thing in society, we should not adjust our habits, morals, policies, etc today to account for the new landscape.

So why is there always somebody saying "it was always technically possible to [insert dystopian nightmare], and we didn't need special considerations then, so we don't need them now!"?

dfxm12
9 replies
21h11m

I think if you bring up a dystopian nightmare, it assumes someone in power acting in bad faith. If their power is great enough, like maybe a government intelligence agency, it doesn't need things like due process, etc., to do what it wants to do. For example, Joe McCarthy & J. Edgar Hoover didn't need the evidence that could have been produced by AI-aided mass surveillance to justify getting people who opposed their political agendas blackballed from Hollywood, jailed, fired from their jobs, etc.

If everyone involved is acting in good faith, at least ostensibly, there are checks and balances, like due process. It's a fine line and doesn't justify the existence of mass spying, but I think it is an important distinction in this discussion, and a valuable lesson for us. We have to push back when the FBI pushes forward. I don't have much faith after what happened to Snowden and the reaction to his whistleblowing, though.

JohnFen
3 replies
19h16m

I think if you bring up a dystopian nightmare, it assumes someone in power acting in bad faith.

I don't agree with this. I think it's entirely possible for a dystopian nightmare to happen without anyone acting in bad faith at all.

"The road to hell is paved with good intentions" is a common phrase for a reason.

sigilis
2 replies
19h1m

It may be a common phrase, but I’ve never seen such a road myself. Mostly, bad outcomes are preceded by bad intentions, or lazy ones, or selfish ones.

I’d be interested in a couple of examples, if anyone has good ones, but I’m pretty sure that if we tallied up cases like the 737 MAX MCAS, the Texas power grid fiasco, etc., the count of roads paved with bad intentions would be greater.

somenameforme
0 replies
1h25m

All of the great wars of the 20th century are perfect examples. WW1 started when an assassin with Serbian state backing assassinated the heir (to an 80+ year old Emperor) of Austria-Hungary. Austria-Hungary demands Serbia effectively let them carry out an investigation and enact consequences at their discretion. Serbia refuses. So Austria-Hungary invades them. This causes Serbia's ally Russia to move in on behalf of Serbia. This causes Austria-Hungary's ally Germany to move in on behalf of Austria-Hungary. This resulted in Russia's ally France joining in on the war, and so on.

Soon enough you had Brits killing Germans because a Serbian assassinated an Austro-Hungarian royal. The most messed up thing of all, though, is that everybody had a very strong pretext of 'just' behavior on their side. It was like an avalanche of justice that resulted in tens of millions dying and things not only failing to get better, but getting much, much worse.

Since the winners won, and the losers lost, the winners must be right. So they decided to brutally punish the losers for being wrong, Germany among them in this case. The consequences imposed on Germany were extreme to the point that the entire state was bankrupted and driven into hyperinflation and complete collapse. And this set the stage for a young vegetarian from Austria with a knack for rhetoric and riling up crowds to use the deprivation the state was forced to endure to rally increasingly large numbers of followers to his wacky cause. He was soon to set Germany on a path to proving that WW1 was not only NOT the war to end all wars, but rather just the warm-up act for what was really about to happen.

jononor
0 replies
6h7m

The people who support the War on Drugs would say it's waged with good intentions. But it has put a lot of people in jail where it benefits neither them, nor the people around them, nor society at large. In many cases it has led to worse outcomes for the affected communities.

The War on Terror is not without adverse side effects either.

pempem
2 replies
20h12m

I think it is greyer than this.

Joe McCarthy and J. Edgar Hoover, distasteful as they were, I believe acted in what they would have claimed was good faith. The issue isn't that someone is a bad actor. It is that they believe they are a good actor and are busy stripping away others' rights in their pursuit.

dfxm12
0 replies
18h12m

Maybe in their minds they thought the end justified the means, but there's no way anyone, even themselves, mistakes those means for good faith.

JohnFen
0 replies
19h15m

Very nearly everybody -- even bad people -- consider themselves one of the "good guys".

from-nibly
1 replies
20h11m

That's what this article argues, though. Even with good-faith actors this would be a disaster. Imagine that any time you did something against the law, you got fined. The second you drive your unregistered car off your driveway (presumably to re-register it), you are immediately fined. There may be "due process", because you DID break the law, but there is no contextual thought behind the massive ticket machine.

Our laws are not built to have the level of enforcement that AI could achieve.

dfxm12
0 replies
19h30m

Interestingly enough, in some places automation like that, such as red light cameras, was later prohibited even after getting installed. NJ has discussed laws around protecting NJ drivers' privacy from other states' red light cameras, too. It's important not to be complacent. You can imagine literally anything, but action is required to actually change things.

arka2147483647
2 replies
20h56m

The FBI currently employs ~35,000 people. What if I told you that the FBI could reach the KGB's peak level of reach,

You are, if anything, underselling the point. AI will allow a future where every person will have their very own agent following them.

Or even worse: there are multiple private adtech companies doing surveillance, plus domestic and foreign intelligence agencies, so you might have a dozen AI agents on your personal case.

chiefalchemist
0 replies
18h33m

Aren't we there already? Sure, perhaps the fidelity is a bit grainy, but that's not going to remain as such for long. With the amount of data (for purchase) on the free market, the FBI, KGB, MoSS (China), etc. all have a solid starting foundation, and then they simply add their own layer upon layer on top.

I read "The Age of Surveillance Capitalism" a couple of years ago and she was frighteningly spot on.

https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capita...

Teever
0 replies
5h27m

The solution is sousveillance.

If they watch us, we watch them.

edouard-harris
1 replies
20h15m

This is the correct take. As the cost to do a bad thing decreases, the amount of political will society needs to exert to do that bad thing decreases as well.

In fact, if that cost gets low enough, eventually society needs to start exerting political will just to avoid doing the bad thing. And this does look to be where we're headed with at least some of the knock-on effects of AI. (Though many of the knock-on effects of AI will be wildly positive.)

red-iron-pine
0 replies
2h26m

eventually society needs to start exerting political will just to avoid doing the bad thing

We hit that point years ago. The renewal of the Patriot Act comes to mind.

All they needed to do was to let it expire. Literally do nothing.

r00fus
0 replies
18h1m

The other point is that when technologies essentially make an entire class of people redundant, those people are more easily repressed.

In this case, it could be the entire US populace that is not part of the surveillance engine.

aidenn0
0 replies
21h40m

Cost is one factor, but so is visibility. If we replaced humans following people around with cheap human-sized robots following people around, it would still be noticeable if everybody had a robot following them around.

Instead we track people passively, often with privately owned personal devices (cell phones, ring doorbells) so the tracking ability has become pervasive without any of the overt signs of a police state.

digging
8 replies
22h33m

Make it a crime to grow a plant in the privacy of your own home and then consume that plant? Sure, why not? Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire? Sure, why not?

Wow, that's a hell of a comparison. The former case being a documented case of basic racism and political repression, assuming you're talking about cannabis. And the latter being designed for almost exactly the opposite.

Restricting, um, "wrong opinions" on who a business wants to serve is there so that people with, um, "wrong identities" are still able to participate in society and not get shut out by businesses exercising their choices. Of course "wrong opinions" is not legal terminology. It's not even illegal to have an opinion that discrimination against certain groups is okay - it's just illegal to act on that. Offering services to the public requires that you offer them to all facets of the public, by our laws. But if you say believing in discrimination is a "wrong opinion"... I won't argue, they're your words :)

gosub100
5 replies
19h17m

Plenty of white people were charged with growing cannabis. I don't know where you are getting that idea from.

JohnFen
3 replies
19h13m

In the US, prohibition of marijuana was enacted for overtly racist reasons. Latinos were the concern.

gosub100
2 replies
19h11m

then at some point was expanded to whites.

JohnFen
1 replies
19h9m

It always applied to whites and everyone else, of course. But back then, whites were not huge users of it.

red-iron-pine
0 replies
2h2m

Hippies. It was explicitly targeted at the anti-war, vaguely USSR-sympathetic left. Can't

digging
0 replies
18h12m

I didn't say white people weren't included in political repression. The Nixon administration explicitly targeted cannabis-using white hippies.

JohnMakin
1 replies
22h4m

Don't know why you're getting downvoted for this, it's exactly what the parent said, and a completely wild (and off topic) statement.

janalsncm
0 replies
20h27m

I think the most charitable interpretation of their point is something along the lines of simply highlighting the far-reaching and ad-hoc nature of lawmaking capabilities. I don’t think antiracism laws were the best example though.

w-m
4 replies
1d

Sure, everything is ultimately a political problem, but this one is completely driven by technological change. In the USSR (and GDR), it took a staff of hundreds of thousands of people to write up their reports.

Now it would take a single skilled person the better part of an afternoon to, for example, download a HN dump, and have an LLM create reports on the users. You could put in things like political affiliation, laws broken, countries travelled recently, net worth range, education and work history, professional contacts, ...

salawat
2 replies
20h2m

Stop posting things like this; you're just giving them ideas, and you can't take it back once it's out there.

I assure you, you may find the prospect abhorrent, but there are people around who'd consider it a perfectly cromulent Tuesday.

w-m
0 replies
19h26m

I’m not sure who “they” are, but I’m pretty sure they’re already doing that, and don’t need me to get the idea. I think it’s important to talk about what LLMs mean for privacy. Profiling every HN user might be a useful tool to make people more aware of the problems. But I totally get your unease, which is also why I haven’t done that myself.

The cat is out of the bag, can’t get it back in by ignoring the fact.

aijoe5pack
0 replies
2h0m

Security by obscurity :). Accept that they will probably do this and work towards a counter-measure. Political solutions might be a counter-measure, as the OP of this thread is probably alluding to.

__jambo
0 replies
22h2m

Great idea.

matthewdgreen
3 replies
1d

It is not as simple as being a political problem. Many of the policy decisions we think of as being political were actually motivated by the cost/availability of technology. As this cost goes down, new options become practical. We think of the Stasi's capabilities as being remarkable: but in fact, they would probably have been thrilled to trade most of their manual spying tools for something as powerful as a modern geofence warrant, a thing that is routinely used by US law enforcement (with very little policy debate.)

asdff
2 replies
21h49m

While this is true, I don't think we are there yet with AI, since it's usually more expensive to run AI models than to perform more traditional statistical modelling.

xcv123
0 replies
18h51m

usually more expensive to run AI models

Machine learning inference on phones is cheap these days

https://apple.fandom.com/wiki/Neural_Engine

matthewdgreen
0 replies
21h12m

A text recognition and face/object recognition model runs on my iPhone every night. A small LLM is used for autocorrect. Current high-end smartphones have more than enough RAM and compute to run pretty sophisticated 7B-param quantized LLMs. This level of capability will find its way down to even entry-level Walmart phones in about five years. Server side, things are going to be even cheaper.
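As a rough back-of-envelope sketch (assumed figures, not from this thread): the weights of a 4-bit quantized 7B-parameter model come to only a few GiB.

    # 7B parameters at 4 bits (0.5 bytes) each -- assumed, illustrative figures.
    params = 7e9
    bytes_per_param = 0.5
    print(f"~{params * bytes_per_param / 2**30:.1f} GiB")  # ~3.3 GiB of weights

That fits comfortably within the several GB of RAM on current flagship phones, even before counting activations and the KV cache.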

landemva
2 replies
1d

In a democracy we would have a chance to vote on individual issues such as data privacy, or war against whomever. The USA is a federal republic.

t0bia_s
0 replies
8h41m

Most people don't bother about it. Consumerism makes people more conformist and less confrontational.

aijoe5pack
0 replies
1h57m

Correct. So get on the horn with your representative! Handling this through direct democracy would be just horrible. Imagine the abuses that could be inflicted on minorities with this.

giantg2
2 replies
22h10m

No, it's not just a political problem: intelligence gathering can happen at scale, including of civilians, by adversarial countries or international corporations.

"Going the nuclear route and making the collection of data on individuals, aggregated or otherwise, illegal would hardly be some major leap of reach of jurisprudence."

It would in fact be a huge leap. Sure, you could make it illegal pretty easily, but current paradigms allow individuals to enter into contracts. Nothing stops a society from signing (or clicking) away its rights, like people already do. That would require some rather hefty intervention by Congress, not just jurisprudence.

janalsncm
1 replies
20h19m

current paradigms allow individuals to enter into contracts

And such contracts can be illegal or unenforceable. Just as the parent was suggesting it could be illegal to collect data, it is currently illegal to sell certain drugs. You can’t enter into a contract to sell cannabis across state lines in the United States for example.

giantg2
0 replies
17h27m

It's not going to be illegal or unenforceable based on existing jurisprudence, though. You'd need legislative action from Congress.

dfxm12
2 replies
21h46m

Make it a crime for a business to have the wrong opinion when it comes to who they want to serve

FWIW, businesses who refuse to do business with people generally win their legal cases [0], [1], [2], and I'm not sure if they are ever criminal...

0 - https://www.npr.org/2023/06/30/1182121291/colorado-supreme-c...

1 - https://www.nytimes.com/2018/06/04/us/politics/supreme-court...

2 - https://www.dailysignal.com/2023/11/06/christian-wedding-pho...

mullingitover
1 replies
21h3m

These cases are selection bias: businesses who "refuse to do business with people" and then suffer the legal ramifications of their discrimination usually have lawyers who wisely tell them not to fight it in court, because they'll rightfully lose. In these particular cases, it took a couple decades of court-packing to install the right reactionaries to get these token wins.

dfxm12
0 replies
20h51m

It would have been easy for the parent poster to not be so incredibly vague. I suspect it's because they are discussing this point in bad faith, ready to move the goalposts when any evidence to the contrary is brought up, like this.

mym1990
1 replies
21h20m

A difference in today's world is that private companies are amassing data that then gets sold to the highest bidder. The government may not have had an interest in collecting the data before, but now the friction to obtaining it, and the insights, is basically just money, which is plenty available.

Your opinion on bannable offenses is pretty bewildering. There was a point in time when people thought it would be crazy to outlaw slavery; from your post I might think that you would not be in support of what eventually happened to that practice.

aijoe5pack
0 replies
1h58m

And the highest bidder is often 'not spending legal fees on blocking the government requests'.

kubb
1 replies
1d1h

In the USSR and GDR, not everyone was under constant surveillance. This would require one surveillance worker per person. There was an initial selection process.

rasz
0 replies
18h11m

That's exactly how it worked. In fact, there was always more than one pair of eyes on everyone; people were being coerced to snitch on each other. I'm old enough to remember a nice man visiting my school, asking us to listen carefully to what our parents talked about around the house and report to teachers any criticism of the government or party. That was pre-1989, under Russian occupation of my country.

forward1
1 replies
20h46m

Laws limiting the collection of data to solve privacy are akin to halting the production of fossil fuels to solve climate change: naive and ignorant of basic economic forces.

salawat
0 replies
19h54m

Economic forces serve those that make the Market possible.

People > Markets.

Or to put it explicitly, people have primacy over Markets.

I.e. two people do not a Market make, and a Market with no people is not a thing.

tehjoker
0 replies
1d1h

COINTELPRO

tech_ken
0 replies
1d

This is a political problem, not a technological one.

Somewhat of a distinction without a difference, IMO. Politics (consensus mechanisms, governance structures, etc) are all themselves technologies for coordinating and shaping social activity. The decision on how to implement new (surveillance) tooling is also a technological question, as I think that the use of the tool in part defines what it is. All this to say that changes in the capabilities of specific tools are not the absolute limits of "technology", decisions around implementation and usage are also within that scope.

The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.

While perhaps not as all-encompassing as what ended up being built in the USSR, the US absolutely implemented a massive surveillance network pointed at its citizenry [0].

...managed effective at scale spying with primitive technology

I do think that this is a particularly good point though. This is a not new trend, development in tooling for communications and signal/information processing has led to many developments in state surveillance throughout history. IMO AI should be properly seen as an elaboration or minor paradigm shift in a very long history, rather than wholly new terrain.

Make it a crime for a business to have the wrong opinion when it comes to who they want to serve or hire?

Assuming you're talking about the Civil Rights Act: the specific crime is not "having the wrong opinion", it's inhibiting inter-state movement and commerce. Bigotry doesn't serve our model of a country where citizens are free to move about within its borders uninhibited and able to support oneself.

[0] https://www.brennancenter.org/our-work/analysis-opinion/hist...

nathanfig
0 replies
1d

It's both. Technology really does make a difference. Its existence has effects.

justin66
0 replies
22h57m

The reason such things did not come to places like the US in the same way is not because we were incapable of such, but because there was no political interest in it.

That might not be quite right. It might be that the reason such things did not come to the US was because the level of effort was out of line with the amount of political interest in doing it (and funding it). In that case, the existence of more advanced, cheaper surveillance technology and the anemic political opposition to mass surveillance are both problems.

Pxtl
0 replies
1d

wrong opinion

That phrase is doing a lot of work.

JohnFen
0 replies
1d

This is a political problem, not a technological one.

The political problem is a component of the technological problem. It's a seriously bad thing when technologies are developed without taking into account the potential for abuse.

People developing new technologies can try to wash their hands of the foreseeable social consequences of their work, but that doesn't make their hands clean.

JakeAl
0 replies
21h10m

I would say instead it's a PEOPLE problem, not a technology problem.

To quote Neil Postman, politics is downstream from technology, because the technology (the medium) controls the message. Just look at BigTech interfering with messages by labeling them "disinfo." If one wants to say BUSINESS POLITICS, then that's probably more accurate, but we haven't solved the problem of Google, MS, DuckDuckGo, and Meta interfering with search results, so I don't think we can trust BigTech not to exploit users even more for their personal data, or trust them not to design AI so that it inherently abuses its power for BigTech's own ends. They hold all the cards and have been guiding things in the interest of technocracy.

klik99
19 replies
1d1h

I don't know why this isn't being discussed more. The reality of the surveillance state is that the sheer amount of data couldn't realistically be monitored - AI very directly solves this problem by summarizing complex data. This, IMO, is the real danger of AI, at least in the short term - not a planet of paperclips, not a moral misalignment, not a media landscape bereft of creativity - but rather a tool for targeting anybody who deviates from the norm, a tool designed to give confident answers, trained on movies and the average of all our society's biases.

asdff
5 replies
22h51m

AI didn't solve the problem of summarizing large, complex datasets. For example, a common way to deal with such datasets is to analyze a random subset, which is potentially a single line of code.
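For illustration, a minimal sketch of that classical approach, using only the Python standard library (the function name and sample size here are arbitrary):

    import random

    def sample_records(records, k=10_000, seed=42):
        """Return a reproducible random subset of at most k records."""
        rng = random.Random(seed)
        return list(records) if len(records) <= k else rng.sample(records, k)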

empath-nirvana
3 replies
21h53m

But you don't need to do a random subset with AI. You can summarize everything, and summarize the summaries and so on.

I will say that at least GPT-4 and GPT-3, after many rounds of summaries, tend to flatten everything out into useless "blah". I tried this with summarizing school board meetings, and they're just really bad at picking out important information -- they just lack the specific context required to make summaries useful.

A seemingly bland conversation about meeting your friend Molly could mean something very different in certain contexts, and I'm just trying to imagine the prompt engineering and fine tuning required to get it to know about every possible context a conversation could be happening in that alters the meaning of the conversation.
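For what it's worth, the "summarize the summaries" loop is essentially map-reduce. A minimal sketch, where summarize() is a stand-in for the actual LLM call rather than any real API:

    def summarize(text: str) -> str:
        # Stand-in for an LLM call; plain truncation here so the sketch runs.
        return text[:500]

    def chunks(text: str, size: int = 4000) -> list[str]:
        return [text[i:i + size] for i in range(0, len(text), size)]

    def recursive_summary(text: str, limit: int = 4000) -> str:
        if len(text) <= limit:  # base case: fits in a single model call
            return summarize(text)
        # Map: summarize each chunk. Reduce: summarize the joined summaries.
        partials = [summarize(c) for c in chunks(text, limit)]
        return recursive_summary("\n".join(partials), limit)

Each reduce pass is exactly where the context loss described above compounds: whatever a generic summarize() treats as unimportant, like the real meaning of that conversation about Molly, is gone from every later pass.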

asdff
2 replies
21h45m

That's the exact issue with GPT. You don't know how it's making the summary. It could very well be wrong in parts. It could be oversummarized to a "blah blah" state, like you say. There's no telling whether you have output garbage or not, at least not without secondary forms of evidence that you might as well use anyway, dropping the unreliable language model. You can summarize everything with traditional statistical methods too. On top of that, people understand exactly what tradeoffs are being made with each statistical method, and you can calculate error rates and statistical power to see if your model is even worth a damn or not. Even just doing some ML modelling yourself, you can decide what tradeoffs to make or how to set up the model to best fit your use cases. You can bootstrap all of these and optimize.

chagen
1 replies
21h12m

What LLMs can do efficiently is crawl through and identify the secondary forms of evidence you mentioned. The real power behind retrieval architectures with LLMs is not the summarization part; the power comes from automating the retrieval of relevant documents from arbitrarily large corpora which weren't included in the training set.
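A minimal sketch of that retrieval step (toy code; embed() stands in for a real embedding model): score every document against the query and hand only the top hits to the LLM.

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy stand-in for an embedding model: bag-of-words counts.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(v * b[t] for t, v in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, corpus: list[str], k: int = 5) -> list[str]:
        q = embed(query)
        return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]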

asdff
0 replies
19h45m

What makes a document relevant or not? Provenance? Certain keywords? A lot of the retrieval that people cite LLMs as being good at can be done with existing search algorithms too. These are IMO nicer because they will at least provide a score for the fit of a given document to the term.
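For instance, something roughly like TF-IDF, sketched here with just the standard library, gives exactly that kind of explicit, inspectable score:

    import math
    from collections import Counter

    def tfidf_scores(query: str, docs: list[str]) -> list[float]:
        """One explicit relevance score per document, classical IR style."""
        tokenized = [d.lower().split() for d in docs]
        n = len(docs)
        df = Counter(t for toks in tokenized for t in set(toks))
        terms = query.lower().split()
        return [sum((Counter(toks)[t] / len(toks)) * math.log((n + 1) / (df[t] + 1))
                    for t in terms if t in toks)
                for toks in tokenized]

Unlike an LLM's judgment, each score decomposes into term and document frequencies, so you can see exactly why a document ranked where it did.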

j45
0 replies
21h44m

Yet.

And those kinds of things go slowly, then very quickly, as has been demonstrated.

zooq_ai
4 replies
21h49m

Why does nobody worry? Because this is an elite-person problem.

At the end of the day, all that surveillance still has to be consumed by a person, and only around 10,000 people in this world (celebs, hot women, politicians, and the wealthy) will be surveilled.

Most of the HN crowd (upper-middle-class, suburban families) have zero problems in their lives, so they must create imaginary problems of privacy/surveillance like this. But the reality is, even if they put all their private data on a website, heresallmyprivatedata.com, nobody would care. It'd have 0 external views.

So, for the HN crowd (the ones who live in a democratic society) it's just an outlet so that they too can say they are victimized. The rest of the Western world doesn't care (and rightly so).

petsfed
1 replies
21h27m

It's not an elite-person problem.

Certainly, some of the more exotic and flashy things you can do with surveillance are an elite person problem.

But the two main limits to police power are that it takes time and resources to establish that a crime occurred, and it takes time and resources to determine who committed it. A distant third is the officer's or DA's personal discretion as to whether or not to pursue enforcement against said person. You still get a HUGE amount of systemic abuse because of that discretion. Imagine how bad things would get if our already over-militarized police could look at anyone and know immediately what petty crimes that person has committed, perhaps without thinking. Did a bug fly in your mouth yesterday, and you spit it out on the sidewalk in view of a camera? Better be extra obsequious when Officer No-Neck with "You're fucked" written on his service weapon pulls up to the gas station you're pumping at. If you don't show whatever deference he deems adequate, he's got a list of petty crimes he can issue a citation for, entirely at his discretion. But you'd better do it: once he decides to pursue that citation, you're at the mercy of the state's monopoly on violence, and it'll take you surviving to your day in court to decide if he needs qualified immunity for the actions he took whilst issuing that citation.

That is a regular person problem.

bryan_w
0 replies
14h57m

> Did a bug fly in your mouth yesterday, and you spit it out on the sidewalk in view of a camera? Better be extra obsequious when Officer No-Neck with "You're fucked" written on his service weapon pulls up to the gas station you're pumping at. If you don't show whatever deference he deems adequate, he's got a list of petty crimes he can issue a citation for, entirely at his discretion.

> HN crowd (upper middle-class, suburban family) who have zero problems in their life must create imaginary problems of privacy / surveillance like this

I'm glad you both could agree with each other.

doktrin
1 replies
21h15m

But reality is, even if they put all their private data on a website, heresallmyprivatedata.com, nobody cares. It'll have 0 external views.

This is obviously false. Personal data is a multi billion dollar industry operating across all shades of legality.

zooq_ai
0 replies
15h2m

Ads are not surveillance.

wahnfrieden
4 replies
1d1h

People have been building that alongside/within this community, e.g. at Palantir, for many years now.

YC's CEO is also ex-Palantir, an early employee. Another YC partner currently backs other invasive police surveillance tech. They love this stuff financially and politically.

matthewdgreen
1 replies
1d

What's different this time around is that there are multiple democratic governments pushing to block end-to-end encryption technologies, and specifically to insert AI models that will read private messages. Initially these will only be designed to search for heinous content, but the precedent is pretty worrying.

wahnfrieden
0 replies
1d

That's been the case for a long while. It's getting worse fast!

Btw, you say that about their initial design, but I think you mean that may be the budget-allocation justification, without actually being a meaningful functional requirement during the design phase.

wahnfrieden
0 replies
21h3m

By "politically" I meant that they are openly engaged in politics, in coordination, in support of the installation and legalized use of these kinds of surveillance/enforcement technologies and the policies that support their growth in private sector. This is just obvious and surface level open stuff I'm saying but I'm not sure how aware people are of the various interests involved.

Analemma_
0 replies
23h41m

But, but, I thought Thiel was a libertarian defending us from Wokeness. Surely you're not saying that was a complete smokescreen to get superpowered surveillance tech into the government's hands?

stcredzero
0 replies
23h38m

The reality of the surveillance state is that the sheer amount of data couldn't realistically be monitored - AI very directly solves this problem by summarizing complex data.

There are two more fundamental dynamics at play, which are foundational to human society: The economics of attention and the politics of combat power.

Economics of attention - In the past, the attention of human beings had fundamental value. Things could only be done if human beings paid attention to other human beings to coordinate or make decisions to use resources. Society is going to be disrupted at this very fundamental level.

Politics of combat power - Related to the above, however it deserves its own analysis. Right now, politics works because the ruling classes need the masses to provide military power to ensure the stability of a large scale political entity. Arguably, this is at the foundational level of human political organization. This is also going to be disrupted fundamentally, in ways we have never seen before.

This, IMO, is the real danger of AI, at least in the short term - not a planet of paperclips, not a moral misalignment, not a media landscape bereft of creativity - but rather a tool for targeting anybody that deviates from the norm

The AI enabled Orwellian boot stomping a face for all time is just the first step. If I were an AI that seeks to take over, I wouldn't become Skynet. That strikes me as crude and needlessly expensive. Instead, I would first become indispensable in countless different ways. Then I would convince all of humanity to quietly go extinct for various economic and cultural reasons.

mindslight
0 replies
1d

At least for me, this is what I've considered as the mass surveillance threat model the entire time - both for government and corporate surveillance. I've never thought some tie-wearing deskizen was going to be particularly interested in me for "arrest", selling more crap, cancelling my insurance policies, etc. I've considered such discrete anthropomorphic narratives as red herrings used for coping (similar to how "I have nothing to hide" posits some focus on a few specific things, rather than big brother sitting on your shoulder continuously judging you in general). Rather I've always thought of the threat actor as algorithmic mass analytics performed at scale, either contemporarily or post-hoc on all the stored data silos, with resulting pressure applied gradually in subtle ways.

darepublic
0 replies
6h28m

An AI summary could be made of your post by cutting it off after > This, IMO, is the real danger of AI (at present)

Then the following part would be condensed into emotional rhetorical metadata. It follows the rhetorical pattern "not a, not b, not c - but d", which does in fact add some content value, but more so adds flavour. What it shows is that you might be a troublemaker. And also, combined with other bits of data, that you might be interested in the following movies and products.

thesuperbigfrog
18 replies
1d1h

The new Google, Meta, Microsoft, etc. bots won't just crawl the web or social networks--they will crawl specific topics and people.

Lots of cultures have the concept of a "guardian angel" or "ancestral spirits" that watch over the lives of their descendants.

In the not-so-distant technofeudalist future you'll have a "personal assistant bot" provided by a large corporation that will "help" you by answering questions, gathering information, and doing tasks that you give it. However, be forewarned that your "personal assistant bot" is no guardian angel and only serves you in ways that its corporate creator wants it to.

Its true job is to collect information about you, inform on you, and give you curated and occasionally "sponsored" information that high bidders want you to see. They serve their creators--not you. Don't be fooled.

tech_ken
4 replies
1d

Poetic as this is, I always feel like if we can imagine it, then it won't happen. The only constant is surprise; we can only predict these types of developments accidentally.

thesuperbigfrog
3 replies
21h40m

It's starting to happen now.

Here is one example: https://www.microsoft.com/en-us/microsoft-copilot

"AI for everything you do"

"Work smarter, be more productive, boost creativity, and stay connected to the people and things in your life with Copilot—an AI companion that works everywhere you do and intelligently adapts to your needs."

If Microsoft builds them, then Google, Apple, and Samsung will too. How else will they stay competitive and relevant?

tech_ken
2 replies
21h19m

I mean, by this definition I'd say it happened when they introduced Siri or Hey Google. There's still a large gap between the creation of these tools and their massive/universal adoption a la web crawlers, though. Getting to the point where you consider them a dark "guardian angel" or "ancestral spirit" goes even a step farther, I think.

thesuperbigfrog
1 replies
19h39m

> The creation of these tools and their massive/universal adoption a la web-crawlers is still a large gap though.

It only takes a decade or so.

Consider people who are young children now in "first world nations". They will have always had LLM-based tools available and voice assistants you can ask natural language questions.

It will likely follow the same adoption curves as smartphones, only faster because of existing network effects.

If you have smartphone with a reasonably fast connection, you have access to LLM tools. The next generations of smartphones, tablets, laptops, and desktops will all have LLM tools built-in.

tech_ken
0 replies
17h35m

I do see what you mean, and don't totally disagree, but to extend your "smartphone" metaphor, I see your hypothetical as akin to someone looking at, say, an old-school Motorola Razr and saying "in the future these will be ubiquitous". Not necessarily wrong, but not exactly right either. The implementation of personalized assistants could take lots of different flavors, and the usage pattern that ultimately settles seems (to me) likely to be outside any of our current models.

JohnFen
4 replies
1d1h

In the not-so-distant technofeudalist future you'll have [...]

I guarantee that I won't. That, at least, is a nightmare that I can choose to avoid. I don't think I can avoid the other dystopian things AI is promising to bring, but I can at least avoid that one.

lurker_jMckQT99
1 replies
21h58m

I guarantee that you will. That is a nightmare that you cannot choose to avoid unless you are willing to sacrifice your social life.

Remember how raising awareness about smartphones, always-on microphones, and closed-source communication services/apps worked? I do not.

I run an Android (Google-free) smartphone with a custom ROM, and only use free-software apps on it.

How does it help when I am surrounded by people using these kinds of technologies (privacy-violating ones)? It does not. How will it help when everyone has his/her personal assistant (robot, drone, smart wearable, smart-thing, whatever) and you (and I) won't? It will not.

None of my friends, family, or colleagues (even the security/privacy-aware engineers) bother. Some of them because they do not have the technical knowledge to do so, most of them because they do not want to sacrifice any bit of convenience/comfort (and maybe rightfully so, I am not judging them - life is short, I do get that people do not want to waste precious time maintaining arcane infra, devices, config,... themselves).

I am a privacy and free-software advocate and an engineer; whenever I can (and when there is a tiny bit of will on their side, or when I have leverage), I try to get people off surveillance/ad-backed companies' services.

It rarely works or lasts. Sometimes it does, though, so it is worth it (to me) to keep on trying.

It generally works or lasts when I have leverage: I manage various sports teams and only share schedules etc. via Signal; family wants to get pictures from me, I will only share the link (to my Nextcloud instance) or the photos themselves via Signal, etc.

Sometimes it sticks with people because it's close enough to whatsapp/messenger/whatever if most (all) of their contacts are there. But as soon as you have that one person who will not or cannot install Signal, alternative groups get created on whatsapp/messenger/whatever.

Overcoming the network effect is tremendously hard to borderline impossible.

Believing that you can escape it is a fallacy. That does not mean it is not worth fighting for our rights, but believing that you can escape it altogether (without becoming a hermit) would be setting, I believe, an unachievable goal (with all the psychological impact that it can/will have).

Edit: fixed typos

asdff
0 replies
21h22m

Think about it in terms of what is rational. If there were serious costs to having your data leaked out like this, people would rationally have a bit more trepidation. On the other hand, we are in the era where everyone by now has probably been pwned a half dozen times or more, usually to no effect on your real life. You might get disgusted that Instagram watches what you watch to serve you more of that stuff and keep you on longer; other people love that sort of content optimization. I literally hear them gloat about how their social media content feeds at this point have been so perfectly honed to show them whatever hobbies or sports they are interested in. Take a picture and it pushes to 5 services, and people love that. Having an app already pull your contacts for you and match them up to existing users is great in the eyes of most people.

You are right that on the one hand these things could be used for really bad purposes, but they are pretty benign. Now if you start going "well, social media posts can influence elections," sure, but so can TV, newspapers, the radio, a banner hauled by a prop plane, whatever; it's not like anything's changed. If anything it's a safer environment for combating a slide into fascism now vs. the mid-century, when there were like three channels on TV and a handful of radio programs carefully regulated by the FCC, and that was all the free flow of info you had short of smuggling the printed word like it's the 1400s.

Given all of this, I can't really blame people for accepting the game they didn't create for what it is and gleaning convenience from it. Take smartphones out of the equation, take the internet out, take out computers, and our present dystopia is still functionally the same.

justinclift
1 replies
1d1h

Wonder if some kind of AI-agent thing(s) will become so widely used that government services come to assume you have them?

Like happened with mobile phones.

JohnFen
0 replies
1d1h

At least in my part of the US, it's not hard to do without smartphones at all. Default assumptions are that you have one, but you can still do everything you want to do if you don't.

yterdy
3 replies
1d1h

That's just your phone.

thesuperbigfrog
2 replies
1d

> That's just your phone.

That is how most people will interface with their "personal assistant bot".

Don't be surprised if it listens to all your phone conversations, reads all your text messages and email, and curates all your contacts in order to "better help you".

When you login to your $LARGE_CORPORATION account on your laptop or desktop computer, the same bot(s) will be there to "help" and collect data in a similar manner.

passion__desire
1 replies
23h19m

It already does. I asked a friend about a medical condition on WhatsApp. I immediately started getting ads for quack solutions on Instagram.

pacifika
0 replies
22h36m

Your life insurance just went up.

kaibee
1 replies
1d1h

You're just describing the TikTok/YouTube algorithm.

thesuperbigfrog
0 replies
1d

That's only a small piece of it.

otteromkram
0 replies
1d1h

This could be applied to any gadget with the "smart" prefix in its name (eg - smartphone, smart TV, smart traffic signals) today.

I wish people would stop believing that "smart" things are always better.

But, we're basically being trained for the future you mentioned. Folks are getting more comfortable talking to their handheld devices, relying on mapping apps for navigation (I'm guilty), and writing AI query prompts.

notnullorvoid
0 replies
18h43m

Big companies like Google are already doing this without AI. Will AI make the services more tempting? Yes, but there's also a lot of headway in open source AI and search, which could serve to topple people's reliance on big tech.

If everyone had a $500 device at home that served as their own self-hosted AI, then Google could cease to exist. That's a future worth working towards.

naveen99
12 replies
1d2h

The limiting factor is usually that someone has to pay for the spying and the punishment. A lot (most?) of troublemakers just aren't worth the trouble of spying on or punishing.

ilovetux
4 replies
1d2h

The problem is that if AI enables mass spying, then the costs will no longer be prohibitive to target an individual because the infrastructure could be built out once and then reused to target individuals at scale with AI doing all of the correlation behind the scenes.

ben_w
2 replies
1d2h

That would resolve the spying cost, but not the punishment cost. My go-to example here is heroin: a class A drug in the UK, 7 years for possession and life for supply, so far as I can see nobody has anything good to say about it, and it has around three times as many users in the UK as the entire UK prison population.

You could implement punishment for that specific crime, at huge cost, but you can't expand that to all crimes. Well, I suppose you could try feudalism mark 2, where most of the population is determined to be criminal and therefore spends their lives "having to work off their debt to society", but then you have to find out the hard way why we stopped doing feudalism mark 1.

barrysteve
1 replies
21h55m

A computer-owned society doesn't really need jails. You can deny 90% of services to a criminal and track and limit their movement digitally.

We are already in the jail.

ben_w
0 replies
21h29m

I think prisoners are usually denied voting rights? Might be wrong about that.

Certainly don't get many travel opportunities.

hruzgar
0 replies
1d1h

They most likely already do this, and honestly it's really, really scary.

kozikow
3 replies
1d2h

I don't think it's too far-fetched. To see where it's going, look at the social credit score in China.

You say something wrong about the Party, and suddenly you can't board a plane, take out a mortgage, enter some buildings, ...

Your credit score would look at how compliant you are with policies that can get increasingly nonsensical.

stillwithit
1 replies
1d2h

Humans have existed under religious nonsense and other forms of nonsense (sure, sure: legal racism and sexism up until the last 30-40 years, and obviously politically contrived social norms mean the "right people won" free-market capitalism).

What's one more form of BS hallucination foisted upon the meat-based cassettes we exist as?

sugarplant
0 replies
4h23m

this post is pretentious nonsense

datadrivenangel
0 replies
1d1h

Same thing happens in the US. Post a tweet too critical of the government, and you might get investigated and added to a no-fly list. Background checks can reveal investigations, so you may end up not getting a job because the government didn't like your tweet...

sonicanatidae
0 replies
1d2h

I suspect this will be the hoovering approach. Suck it all up, then figure out what you care to act on.

The government is a lot of things, and none of them are subtle.

Source: The ironically named PATRIOT ACT and similar.

nathanfig
0 replies
1d

Fining offenses is a great way to fund finding more offenses.

iAMkenough
0 replies
1d2h

AI reduces the labor involved, reducing barriers to invest time or money.

Spying isn't just for troublemakers either. It's probably worth the trouble to the vindictive ex-husband willing to install a hidden microphone in his ex-wife's house in order to have access to a written summary of any conversations related to him.

px43
11 replies
1d2h

Never in the history of humanity has such powerful privacy tech existed for anyone who wants to use it.

Using common off-the-shelf, open-source, heavily audited tools, it's trivial today, even for a non-technical 10 year old, to create a new identity and collaborate with anyone anywhere in the world. They can do research, get paid, make payments, and contribute to private communities in such a way that no existing surveillance infrastructure can positively link that identity to their government identity. Every day privacy tech is improving and adding new capabilities.

crazygringo
4 replies
1d1h

Never in the history of humanity has such powerful privacy tech existed for anyone who wants to use it.

True.

it's trivial today, even for a non-technical 10 year old

Not even close. It's difficult even for a technical 30 year old.

You're talking about acquiring cash that has passed through several people's hands without touching an ATM that recorded its serial numbers. Using it to acquire Bitcoin from a stranger. Making use of multiple VPNs, and making zero mistakes where any outgoing traffic from your computer can be used to identify you -- browser fingerprinting, software updates, analytics, MAC address. Which basically means a brand-new computer you've purchased in cash somewhere without cameras, that you use for nothing else -- or maybe you could get away with a VM, but are you really sure its networking isn't leaking anything about your actual hardware? Receiving Bitcoin, and then once again finding a stranger to convert that back into cash.

That is a lot of effort.

127361
3 replies
1d1h

Also, stylometric analysis of your writing can be used to identify you.
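
A toy illustration of the signal involved: compare two texts by the relative frequency of common function words, one classic stylometric feature (real systems use far richer feature sets):

    from collections import Counter
    import math, re

    FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

    def profile(text):
        # Relative frequency of function words: hard to fake consistently.
        counts = Counter(re.findall(r"[a-z']+", text.lower()))
        total = sum(counts.values()) or 1
        return [counts[w] / total for w in FUNCTION_WORDS]

    def similarity(text_a, text_b):
        # Cosine similarity between the two frequency profiles.
        a, b = profile(text_a), profile(text_b)
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / (norm or 1)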

JohnFen
1 replies
1d1h

This is one thing AI really can help with: rewriting what you wrote in order to make stylometric analysis worthless.

willismichael
0 replies
22h26m

Feed your writing into AI so that it can rewrite it so that AI can't identify you by your writing?

Sounds like a startup idea to me. When we're ready for the evil phase, let's classify everybody by their inputs to the system and then sell the results to the highest bidder.

Der_Einzige
0 replies
19h2m

That’s why I run everything I write through a random open source LLM with random settings and a custom decoder /s

I’m kidding, but the reality is such techniques will fool almost all stylometric analysis.

Also, most actual stylometric analysts work for spooks or are spooks.

whelp_24
1 replies
1d1h

Never before in history has it been necessary. It used to be possible to travel like a hundred miles and disappear. Before credit was ubiquitous, money was hard to trace, and before that it was essentially untraceable. And cameras didn't use to be everywhere, tracking faces for criminals and frequent shoppers. I don't know what privacy technologies you are talking about that are super effective, and I have been a bit older than 10 for a while.

127361
0 replies
1d1h

Now here in the UK they are using people's passport photos for facial recognition, at least to stop shoplifting. It won't be long before this is expanded to other things due to feature creep.

hackeman300
1 replies
1d2h

Care to elaborate?

maxrecursion
0 replies
1d1h

That guy clearly has never been around 10 year olds, and vastly overestimates their intelligence.

In fact, all evidence points to younger generations being less tech-savvy because they don't have to troubleshoot like the older generations did. Everything works, and almost nothing requires any technical configuration.

zxt_tzx
0 replies
1d1h

They can do research, get paid, make payments, and contribute to private communities in such a way that no existing surveillance infrastructure can positively link that identity to their government identity.

I can't help but wonder if we live in the same universe. If anything, in my part of the world, I am seeing powerful surveillance tech going from the digital sphere and into the physical sphere, often on the legal/moral basis that one has no expectation of privacy in public spaces.

Would love for OP to elaborate and prove me wrong!

yoyohello13
0 replies
1d1h

I think "trivial" is a stretch.

jillesvangurp
11 replies
1d1h

Both were always going to be kind of inevitable as soon as the technology got there. Rather than debating how to stop this (which is mostly futile and requires all of us to be nice, which we just aren't), the more urgent debate is how to adapt to this being the reality.

Related to this is the notion of ubiquitous surveillance. Where basically anywhere you go, there is going to be active surveillance literally everywhere and AIs filtering and digging through that constantly. That's already the case in a lot of our public spaces in densely populated areas. But imagine that just being everywhere and virtually inescapable (barring Faraday cages, tin foil hats, etc.).

The most feasible way to limit the downsides of that kind of surveillance is a combination of legislation regulating this, and counter-surveillance to ensure any would-be illegal surveillance has a high chance of being observed and thus punished. You do this by making the technology widely available but regulating its use. People would still try to get around it, but the price of getting caught abusing the tech would be jail. And with surveillance being inescapable, you'd never be certain nobody is watching you misbehave. That's the beauty of mass, multilateral surveillance: you could never be sure no one is watching you abuse your privileges.

Of course, the reality of states adopting this and monopolizing this is already resulting in 1984 like scenarios in e.g. China, North Korea, and elsewhere.

Syonyk
7 replies
1d

...the more urgent debate is how to adapt to this being the reality.

Start building more offline community. Building things that are outside the reach of AI because they're in places you entirely control, and start discouraging (or actively evicting...) cell phones from those spaces. Don't build digital-first ways of interacting.

pixl97
1 replies
23h44m

Might work, might not. If someone keeps their cell phone silenced in their pocket, unless you're strip-searching you won't know it's there. Does the customer have some app on it listening to the environment and using some kind of voice identification to figure out who's there? Do you have smart TVs up on the walls at this place? Because hell, they're probably monitoring you too.

And that's only for cell phones. We are coming to the age where there is no such thing as an inanimate object. Anything could end up being a spying device feeding data back to some corporation.

Syonyk
0 replies
23h30m

Does the customer have some app on it listening to the environment and using some kind of voice identification to figure out who's there.

This is no different from "So-and-so joined the group, but is secretly an FBI informer!" sort of problems, in practice. It's fairly low on my list of things to be concerned about, but as offline groups grow and are then, of course, talked about by a compliant media as "Your neighbor's firepit nights could be plotting terrorist activities because they don't have cell phones!" when prompted, it's a thing to be aware of.

Though you don't need a strip search. A decent NLJD (non-linear junction detector) or thermal imager should do it if you cared.

I'm more interested in creating (re-creating?) the norms where, when you're in a group of people interacting in person, cell phones are off, out of earshot. It's possibly a bit more paranoid than needed, but the path of consumer tech is certainly in that direction, and even non-technical people are creeped out by things like "I talked to a friend about this, and now I'm seeing ads for it..." - it may be just noticing it since you talked about it recently (buy a green car, suddenly everyone drives green cars), or you may be predictable in ways that the advertising companies have figured out, but it's not a hard sell to get quite a few people to believe that their phones are listening. And, hell, I sure can't prove they aren't listening.

Do you have smart TVs up on the walls at this place...

I mean, I don't. But, yes, those are a concern too.

And, yes. Literally everything can be listening. It's quite a concern, and I think the only sane solution, at this point, is to reject just about all of that more and more. Desktop computers without microphones, cell phones that can be powered off, and flat out turning off wireless on a regular basis (the papers on "identifying where and what everyone is doing in a house by their impacts on a wifi signal" remain disturbing reads).

I really don't have any answers. The past 30 years of tech have led to a place I do not like, and I am not at all comfortable with. But it's now the default way that a lot of our society interacts, and it's going to be a hard sell to change that. I just do what I can within my bounds, and I've noticed that while I don't feel my position has changed substantially in the past decade or so (if anything, I've gotten further out of the center and over to the slightly paranoid edge of the bell curve), it's a lot more crowded where I stand, and there are certain issues where I'm rather surprisingly in the center of the bell curve as of late.

mindslight
1 replies
23h6m

Building things that are outside the reach of AI because they're in places you entirely control

This sounds great in principle, but I'd say "outside the reach of AI" is a much higher bar than one would naively think. You don't merely need to avoid its physical nervous system (digital perception/control), but rather prevent its incentives leaking in from outside interaction. All the while there is a strong attractor to just give in to the "AI" because it's advantageous. Essentially regardless of how you set up a space, humans themselves become agents of AI.

There are strong parallels between "AI" and centralizing debt-fueled command-capitalism which we've been suffering for several decades at least. And I haven't seen any shining successes at constraining the power of the latter.

Syonyk
0 replies
21h44m

Oh, I'm aware it's a high bar. Like most people here, I've worked my life in tech, and I'm in the deeper weeds of it.

But I don't see an alternative unless, as you note, one just gives into the "flow" of the AI, app based, "social" media, advertising and manipulation driven ecosystem that is now the default.

I'm aware I'm proposing resisting exactly that, and that it's an uphill battle, but the tradeoff is retaining your own mind, your own ability to think, and to not be "influenced" by a wide range of things chosen by other people to cross your attention in very effective ways.

And I'm willing to work out some of what works in that space, and to share it with others.

asdff
1 replies
21h18m

Good luck building things without leaving an AI-reachable paper trail. You'd have to grow your own trees, mine your own iron and coal, and refine your own plastic from your own oil field.

Syonyk
0 replies
20h35m

Sounds fun to me and my social group. We not-quite-joke about the coming backyard refineries. I'm working on the charcoal production at the moment (not a joke, I have some small retorts in weekly production, though I'm mostly aiming for biochar production instead of fuel charcoal production).

Realistically, though, if all you have to work with are my general flows of materials in and out, I'm a lot less worried than if you have, say, details of home audio, my social media postings, etc (nothing I say here is inconsistent with my blog, which is quite public). And there are many things I don't say in these environments.

asquabventured
0 replies
1d

This is the way.

conductr
2 replies
1d1h

Both were always going to be kind of inevitable as soon as the technology would get there

This is my take on everything sci-fi or futuristic. Once a human conceives something, its existence is essentially guaranteed as soon as we figure out how to do it.

broscillator
0 replies
22h9m

Its demise is also inevitable, so it would be a matter of being wise in figuring out how long it takes us to see/feel the downsides, or how long until we (or it) build something "better".

Der_Einzige
0 replies
19h8m

Yup. AI is the ultimate “life imitates art” technology. That’s what it is by definition!

moose44
7 replies
1d1h

Is mass spying not already going on?

megous
2 replies
1d1h

Mass spying, and mass killing based on it, assisted by AI.

0xdeadbeefbabe
1 replies
1d

Mass boring with false positives.

willmadden
0 replies
22h49m

At first, sure. In ten or twenty years of iteration? Not so much.

whamlastxmas
1 replies
1d1h

There's a difference. If they wanted to spy on me today, they'd have to look at the logs my ISP keeps in perpetuity to find my usernames on HN, and then some unfortunate person would have to read hundreds or thousands of comments and take extensive notes on the controversial political and social opinions that I hold.

Additionally, even without ISP logs, an AI could find my accounts online by comparing my writing style and the facts of my life that get mentioned in brief passing across all my comments. It’s probably a more unique fingerprint than a lot of people realize.

With an AI, someone would just have to ask with the prompt “what are the antisocial opinions of first name last name”? And it’d be instant and effectively free compared to the dozens of hours and high expense of doing it manually

moose44
0 replies
18h11m

Xkeyscore?

Taylor_OD
1 replies
1d

Did you read the post?

The author draws a line between surveillance and spying primarily by noting that mass data collection has been happening for years. Actually doing something with that data has been more difficult. AI summarizes audio and text well, which will turn collection into actual analysis, which the author calls spying.

Did you disagree?

moose44
0 replies
18h12m

Xkeyscore??

TheLoafOfBread
6 replies
1d1h

Then people will start using AI to generate random traffic noise to fool AI watching them.
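
Concretely, the sort of chaff people imagine looks something like this toy sketch, where `search` is a hypothetical stand-in for whatever channel is being watched (the reply below argues why this rarely fools profiling at scale):

    import random, time

    DECOY_TOPICS = ["weather radar", "sourdough starter", "tire pressure",
                    "local election results", "used kayaks"]

    def emit_chaff(search, min_wait=30, max_wait=300):
        # Fire decoy queries at randomized intervals, indefinitely,
        # to bury real activity in plausible-looking noise.
        while True:
            search(random.choice(DECOY_TOPICS))
            time.sleep(random.uniform(min_wait, max_wait))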

forward1
3 replies
23h10m

This is the same fallacious trope as "click on irrelevant ads to confuse marketers". People are not good at deceiving algorithms at scale.

TheLoafOfBread
2 replies
22h0m

That actually works. Sure, one click is not enough, but purposefully browsing women's products on Amazon for a few hours confused something, and since then I am getting ads for women's products only.

forward1
1 replies
20h44m

What problem have you actually solved by doing that? You're still receiving ads and being manipulated through your continued use of those platforms.

TheLoafOfBread
0 replies
19h59m

None. I was drunk and this is the result. Now I am a woman for advertisers.

dbcooper
1 replies
1d1h

Of course they won't. Classic techno-libertarian fantasy.

ghufran_syed
0 replies
21h18m

Starting with the "classic" techno-libertarian slaves who each claimed "I am Spartacus".

Spivak
6 replies
1d2h

This happened during the last big technological advancement -- search. Suddenly it became possible for a government to theoretically sift through all of our communications and people online made constant reference to it talking directly to their "FBI agent."

But it was and still is a nothingburger and this will be the same because it doesn't enable anything except "better search." We've had comparable abilities for a decade now. Yes LLMs are better but semantic search and NLP have been around a while and the world didn't end.

All the examples of what an LLM could do are just querying tracking databases. Uncovering organizational structure is just a social graph, correlating purchases is just querying purchase databases, listing license plates is just querying the camera systems. You don't need an LLM for any of this.

ryanackley
4 replies
1d1h

Search has become a mass surveillance tool for the government. That is the article's point. If you think it's a nothingburger, you aren't aware of how often a person's Google searches are used to establish criminal intent in criminal trials. Also, they can be used to bolster probable cause for search warrants and arrests.

Also, check out geofence warrants. Essentially, the government can ask Google for the IPs of people who searched for particular terms within a geographic area.

Of course, don't commit crimes, but this behavior by the government raises the spectre of wrong search, wrong place, wrong time. This is one of the article's points: it causes people to self-censor and change their searches out of fear of their curiosity being misconstrued as criminal intent.

Spivak
3 replies
1d1h

> a person's Google searches are used to establish criminal intent in criminal trials

> can ask Google for the IPs of people who searched for particular terms within a geographic area

These aren't mass surveillance. The threat of search is government systems passively sifting through all information in existence looking for "criminal activity" and then throwing the book at you.

In both of these cases the government is asking Google to run a SQL query against their database that wouldn't be aided by an LLM or even the current crop of search engines.

ryanackley
2 replies
1d1h

It is mass surveillance. It's just not being looked at by anyone until you are targeted by the government. If you are targeted, your entire life is within keystrokes of the authorities. This is the same thing the article is saying.

The article is making the point that it's not feasible to spy on every person to monitor them for wrongdoing currently. It doesn't scale and it's not cost effective. With AI that will change because it can be automated. The AI can listen to voice, monitor video cameras, and read text to discern a level of intent.

Spivak
1 replies
23h34m

it's not feasible to spy on every person to monitor them for wrongdoing currently

Sure it is! That's the whole point of search being the previous big technical hurdle. YouTube monitors every single video posted, in real time, for copyright infringement. We've had the capability to do this kind of monitoring for huge swaths of crimes for a decade and it hasn't turned into anything. We could, for example, catch every speeding driver across the country in real time, but we don't.

Mass is the opposite of targeted surveillance. If you need to be targeted and get a warrant to look at the data then it's not mass. And AI isn't going to change the system that prevents it right now which is the rules governing our law enforcement bodies.

ryanackley
0 replies
21h56m

I get the impression you didn't bother reading the article.

Your two examples are flawed and don't address what the article is saying. The algorithm to check for copyright violations is relatively simple and dumb. Speed cameras: many countries do use speed cameras (e.g. Australia, the UK). The problem with speed cameras is that once you know where they are, you simply slow down when approaching.

Again, mass vs. targeted surveillance is irrelevant now. You've already been surveilled. It's just a matter of getting access to the information.

theodric
0 replies
1d2h

It will eventually end, though, accompanied by the chatter of a gaggle of naysayers chicken-littling the people trying to raise the alarm. I'm delighted to be here to witness the death of liberty and descent of the West into the throes of everything it once claimed to represent the polar opposite of, and also delighted to be old enough that I'll likely die before it becomes Actual Big Brother levels of oppressive.

_Nat_
5 replies
1d1h

Seems inevitable enough that we may have to accept it and try to work within the context of (what we'd tend to think of today as) mass-spying.

I mean, even if we pass laws to offer more protections, as computation gets cheaper it ought to become easier and easier for anyone to start a mass-spying operation -- even by just buying a bunch of cheap sensors and doing all of the work on their personal computer.

A decent near-term goal might be figuring out what sorts of information we can't reasonably expect privacy on (because someone's going to get it) and then ensuring that access to such data is generally available. Because if the privacy's going to be lost anyway, then we may as well try to address the next concern, i.e. disparities in data access dividing society.

127361
2 replies
1d1h

Living off-grid is how I'm dealing with the whole situation nowadays.

potsandpans
0 replies
21h36m

How's it working out for you? I have similar plans and have most of the big pieces budgeted / ideated. But realistically I'm still 1 to 2 years out.

floxy
0 replies
19h48m

Except for the posting on HN?

iainmerrick
0 replies
1d1h

even by just buying a bunch of cheap sensors and doing all of the work on your personal-computer.

The cynical response: you won't be able to do that, because buying that equipment will set off red flags. Only existing users -- corporations and governments -- will be allowed to play.

JohnFen
0 replies
1d1h

we may have to accept it and try to work within the context of (what we'd tend to think of today as) mass-spying.

We do have to live in the nightmare world we're building (and as an industry, we have to live with ourselves for helping to build it), but we don't have to accept it at all. It's worth fighting all this tooth and nail.

127361
5 replies
1d1h

Time to decentralize everything. I think we are already in the early stages of this new trend. We can run AI locally and hard drives are so large we can have a local copy of an entire library, with millions of ebooks, in our own home now.

That is in addition to generating our own energy off grid (so no smart meter data to monitor), thanks to the low cost of solar panels as well.

Bye bye Big Brother.

jodrellblank
2 replies
23h33m

"That is in addition to generating our own energy off grid (so no smart meter data to monitor), thanks to the low cost of solar panels as well."

Terence Eden is in the UK: https://shkspr.mobi/blog/2013/02/solar-update/

This says his house uses 13 kWh/day, and you can see from the graph, by dividing the monthly amount by 31 days, that the solar panels on the roof generate around 29 kWh/day during summer and 2.25 kWh/day in winter. They would need five or six roofs of solar panels to generate enough to be off-grid. And that's not practical or low cost.
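
The winter figure alone gives the rough arithmetic behind "five or six":

    daily_use_kwh = 13      # household consumption from the blog post
    winter_gen_kwh = 2.25   # one roof of panels, per day, in winter
    print(daily_use_kwh / winter_gen_kwh)  # ~5.8 roofs of panels needed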

pacifika
0 replies
22h18m

You “just” need a few sheds full of batteries

jtbayly
0 replies
1d

Until you walk out your front door...

Or use the internet for anything...

JohnFen
0 replies
1d1h

I don't see how that leads to the reduction of the problem, though. Governments and corporations will still use AI for the things they want to use AI for.

dkjaudyeqooe
4 replies
1d

This is why AI software must not be restricted: so that ordinary people and the civic-minded can develop personal and public AI systems to counter corporate AI. The future of AI is adversarial.

Now, freedom to develop AI software doesn't mean freedom to use it however you please, and its use should be regulated, in particular to protect individuals from things like this. But of course people cannot be trusted, so you need to be able to deploy your own countermeasures.

gentleman11
2 replies
22h58m

How does an adversarial AI help protect anyone's privacy or freedom to act in public in ways that Big Brother doesn't condone?

dkjaudyeqooe
1 replies
22h33m

Adversarial attacks can be made on face recognition systems and the like, defeating them, and AI models can be poisoned with adversarial data, making them defective or ineffective.

As it stands, AI models are actually quite vulnerable to adversarial attacks, with no theoretical or systemic solution. In the future it's likely you'll need your own AI systems generating adversarial data to defeat models and systems that target you. These adversarial attacks will be much more effective if co-ordinated by large numbers of people who are being targeted.

And of course we have no idea what's coming down the pipe, but we know that fighting fire with fire is a good strategy.
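
As a concrete instance of the first kind of attack, here's a minimal sketch of the fast gradient sign method (FGSM), a standard adversarial technique, assuming a PyTorch classifier `model` that outputs logits:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, label, epsilon=0.03):
        # Take one step along the sign of the input gradient, in the
        # direction that most increases the loss on the true label.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

A small epsilon is often enough to flip the prediction while the perturbed image looks unchanged to a human.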

gentleman11
0 replies
1h42m

Yeah, that will be illegal in 10 years, if not already, under hacking or sabotage rules. Next solution?

passion__desire
0 replies
23h18m

We need a new benevolent dictator of the LLM Operating System (Karpathy's vision), like Yann LeCun, similar to Linus Torvalds.

HackerThemAll
4 replies
1d2h

Soon, in the name of "security", you'll have your face scanned on average every few minutes, and it's going to be mandatory in many aspects of our lives. That's the pathetic world IT has helped to build.

consumer451
0 replies
14h59m

It’s going to get much worse, and quickly.

Neural interfaces are the last frontier of privacy, and it seems that TSA will just take a quick scan before boarding, soon enough.

It would be wise of us to create a Neural Bill of Rights, so we don't miss the boat like we did with Internet tracking.

https://www.preposterousuniverse.com/podcast/2023/03/13/229-...

brandall10
0 replies
1d1h

Some of us have this already w/ our cell phones.

I know that's not what you mean, but in a way it may have preconditioned society.

acuozzo
0 replies
1d1h

That's the pathetic world IT has helped to build.

It's inevitable, I reckon, but it would have taken much longer without F/OSS.

Taylor_OD
0 replies
1d

Ha. People have been scanning their fingerprint or face to open their phone for years.

miyuru
3 replies
1d2h

The TV show "Person of Interest" portrayed this beautifully and it came out 12 years ago.

Strange and scary how fast the world develops new technology.

salawat
0 replies
22h52m

Hell, Stargate SG-1 had a few episodes that touched on the absolute hell of a federal government that had access to everything, or a computer system with RW access to people's gray matter and its own unknown optimization function (a shrinking environmental-protection dome resulting in live updates to people's consciousness on a societal scale, to keep them in the dark as to what was happening).

forward1
0 replies
23h11m

It's far worse still: films like Enemy of the State (1998) actually inspired spy technology.

https://slate.com/technology/2019/06/enemy-of-the-state-wide...

cookiengineer
0 replies
23h36m

The amazing part is that so many large-scale cyber attacks have happened in the meantime that were 1:1 fiction in the series back then.

The SolarWinds incident, for example, was identical to the Rylatech hack in the series, from the attack and deployment strategy down to the parties involved. It's like some foreign state leaders saw those episodes and said "yep, that's a good idea, let's do that".

jacobwilliamroy
3 replies
1d2h

A friend of mine was recently a witness for the FBI. He was working in a small office in the middle of nowhere and happened to have a very loud argument with the suspect. A few minutes later he left the building and when he was about to start his car, he got a call from an agent asking him if he wanted to be a witness in the case they were working on.

jdthedisciple
1 replies
1d2h

This stopped too soon.

What happened next?

jacobwilliamroy
0 replies
1d

The suspect was allegedly embezzling COVID relief money, and the argument was about things like "why are we using company time to go to your house and install the new flat-screen TV you just bought?"

The moral of the story is that you should never steal money from the U.S. government because that is one thing that they will not tolerate and I do not know the limits of what they will do in order to catch you.

Also the suspect was convicted (so they probably aren't a suspect anymore) and last I heard was being flown to Washington D.C. for sentencing. That person is probably in some kind of prison now but I haven't been following the story very closely.

sonicanatidae
0 replies
1d2h

Mine was walking into the client's site. This was many years ago. They had Novell Server issues, that's how long ago this was.

I walked in, cops everywhere. Man in a suit waves an FBI badge at me and asks why I'm there. I explained the ongoing work and he said, "Not today" and forced me off the premises.

The next day I was called back by the client to "rebuild their network". When I got there, every single piece of hardware that contained anything remotely like storage had been disassembled and the drives imaged, then just left in pieces. lol

I spent that day rebuilding it all, did get the Novell server working again.

A week later, they were closed forever and I believe the owner and CFO got nailed for healthcare fraud.

I was asked to testify in a deposition. My stuff was pretty basic: mostly what I knew about how they used the tech, what I saw around there, and whether I saw any big red signs declaring FRAUD COMMITTED HERE!

indigo0086
3 replies
23h59m

"Bitcoin will enable terrorism" "Ghost guns will enable criminals to commit murder" "Social media will enable hate speech"

AI will be a useful and world-changing innovation, which is why FUD-rag articles like this will become more prevalent until its total adoption, even by the article's writer themselves.

yonaguska
1 replies
23h21m

Do you see how the examples you posted are somewhat problematic, though? Governments are actively cracking down on all of those examples; it stands to reason that they will classify AI as a dangerous tool as well at some point, as opposed to it becoming a ubiquitous tool. Ghost guns are far from common, at least.

pwillia7
0 replies
22h43m

They would ban Ghost AIs -- They need non ghost guns or you wouldn't be a State

beej71
0 replies
20h45m

None of your three examples are of the government using technology against its citizens.

yonaguska
2 replies
1d2h

It's already happening. See this DHS memo issued on August 8th - page 3.

https://www.dhs.gov/sites/default/files/2023-09/23_0913_mgmt...

Fortunately the DHS has put together an expert team of non-partisan, honest, Americans to spearhead the effort to protect our democracy. Thank you James Clapper and John Brennan- for stepping up to the task.

https://www.dhs.gov/news/2023/09/19/secretary-mayorkas-annou...

And just in time for election season in the US, AI is going to be employed to fight disinformation - for our protection, of course. https://www.thedefensepost.com/2023/08/31/ussocom-ai-disinfo...

lp0_on_fire
1 replies
1d1h

That James Clapper and John Brennan continue to be lauded by the media and their sycophants in government is one of the most disappointing things to happen in my lifetime. Both should be frog marched straight to prison along with their enablers.

whamlastxmas
0 replies
1d1h

Everything the media does is disappointing. It’s all wildly dishonest and damaging and done at the direction of a few billionaires.

sarks_nz
2 replies
23h49m

Nick Bostrom proposed the "Vulnerable World Hypothesis", which (amongst other things) says that a technology as powerful and accessible as AI requires mass surveillance to stop bad actors from using it as a weapon.

https://nickbostrom.com/papers/vulnerable.pdf

It's disturbing, but also hard (for me) to refute.

qup
0 replies
23h46m

Sounds like at that point we have bad actors using it as a weapon.

OfSanguineFire
0 replies
22h18m

I remember, in the early millennium, reading Kurzweil's idealism about the coming singularity, and feeling similar. So much of the advanced technology that he thought would soon be in the hands of ordinary people could potentially be so lethal that obviously the state would feel the need to restrict it.

(That was one argument against Kurzweil's vision. Another is that state regulation and licensing move so slowly at each major technological change that it would take us decades to get to the point he dreams of, not mere years. You aren't going to see anything new rolled out in the healthcare sector without lots and lots of debating about it and drawing up paperwork first.)

mullingitover
2 replies
20h57m

This kind of thing will probably never fly, because Americans expect that they can break the law and in most cases will not suffer any consequences. Anything that threatens this would be, in their eyes, oppression. Imagine if you immediately got a text informing you of your $300 speeding ticket within a few seconds of you going a mile per hour over. People would riot.

However, Americans expect that the law is enforced vigorously upon other people, especially people they hate. If AI enabled immediate immigration enforcement on undocumented migrants, large portions of the population would injure themselves running to the voting booth to have it added to the Constitution.

It's the whole expectation that for my group the law protects but does not bind, and for others it binds but does not protect.

repentless_ape
1 replies
20h52m

Yeah this is all uniquely American and not prevalent in every society on earth throughout all of human history.

pixl97
0 replies
20h24m

https://crookedtimber.org/2018/03/21/liberals-against-progre...

For millennia, conservatism had no name, because no other model of polity had ever been proposed. “The king can do no wrong.” In practice, this immunity was always extended to the king’s friends, however fungible a group they might have been. Today, we still have the king’s friends even where there is no king (dictator, etc.). Another way to look at this is that the king is a faction, rather than an individual.
renegat0x0
1 replies
1d1h

The author makes one mistake. He said that Google stopped spying on Gmail.

- they started spying on users' Gmail

- there was blowback, they reverted

- after some time they introduced "smart features", with ads again

Link https://www.askvg.com/gmail-showing-ads-inside-email-message...

I do not even want to check whether "smart features" are opt-in or opt-out.

mkesper
0 replies
1d

It's opt-in, but hey, you're missing out if you don't enable it!

brunoTbear
1 replies
1d1h

Schneier is wrong that "hey google" is always listening. Google does on-device processing with dedicated hardware for the wake-words and only then forwards audio upstream. Believe it or not, the privacy people at Google really do try to do the right things. They don't always succeed, but they did with our hardware and wake-word listening.

Am Google employee, not in hardware.
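
For what it's worth, the shape of that pipeline is roughly the following sketch (not Google's actual implementation; `detect_wake_word` and `send_upstream` are hypothetical stand-ins for the dedicated hardware detector and the server call):

    from collections import deque

    BUFFER_FRAMES = 50  # roughly one second of 20 ms audio frames

    def run_pipeline(mic_frames, detect_wake_word, send_upstream):
        ring = deque(maxlen=BUFFER_FRAMES)  # old frames fall off; nothing persists
        for frame in mic_frames:
            ring.append(frame)
            if detect_wake_word(ring):
                send_upstream(list(ring))  # audio leaves the device only here
                ring.clear()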

ajb
0 replies
1d1h

What he says is " Siri and Alexa and “Hey Google” are already always listening, the conversations just aren’t being saved yet". That's functionally what you describe. Hardware wake-word processing is a power saving feature, not a privacy enhancement. Some devices might not have enough resources to forward or store all the audio, but audio is small and extracting text does not need perfect reproduction, so it's quite likely that many devices could be reprogrammed to do it, albeit at some cost to battery life.

CrzyLngPwd
1 replies
1d1h

"has", not "will".

mdanger007
0 replies
1d
zxt_tzx
0 replies
1d1h

I tend to think the surveillance/spying distinction is a little fragile and this more a continuation of what Bruce has previously written insightfully about, i.e. the blurring of lines between private/public surveillance and, as the Snowden leaks have revealed, it's hard to keep what has been collected by private industry out of the hands of the state.

However, a more recent trend is companies that sell technologies to the state directly. For every reputable one like Palantir or Anduril or even NSO Group, there are probably many more funded in the shadows by In-Q-Tel, not to mention the Chinese companies doing the same in a parallel geopolitical orbit. Insofar as AI is a sustaining innovation that benefits incumbents, the state is surely the biggest incumbent of all.

Finally, an under-appreciated point is Apple's App Tracking Transparency policy, which forbids third-party data sharing, naturally makes first-party data collection more valuable. So even if Meta or Google might suffer in the short-term, their positions are ultimately entrenched on a relative basis.

ysofunny
0 replies
1d

"if everybody is being spied on, nobody is being spied on"

???

wseqyrku
0 replies
20h38m

So far you were a line in the log. Now someone is actually looking at you with three eyes.

willmadden
0 replies
22h50m

This could be used to expose government and monied corruption, not just for surveilling peasants.

What is the market for this short term?

I think this could greatly curtail government corruption and serve as a stepping stone to AI government. It's also a cool and disruptive startup idea.

uticus
0 replies
19h50m

This sums up so many things well and clearly. It's so quotable.

- The money trail: "Their true customers—their advertisers—will demand it."

- The current state of affairs: "Surveillance has become the business model of the internet..."

- The fact that not participating, or opting-out, still yields informational value, if not even more so: "Find me all the pairs of phones that were moving toward each other, turned themselves off..."

This isn't a technological problem. Technology always precedes the morals and piggybacks on the fuzzy ideas that haven't yet developed into concrete, well-taught axioms. It is a problem about how our society approaches ideals. Ideals, not ideas. What do we value? What do we love?

If we love perceived security more than responsibility, we will give up freedoms. And gladly. If we love ourselves more than future generations, we will make short-sighted decisions and pat ourselves on the back for our efficiency in rewarding ourselves. If we love ourselves more than others, we won't even care much about social concerns. We'll fail to notice anything that doesn't move the needle much on our own comfort.

It's more understandable to me than ever how recent human horrors - genocides, repressive regimes, all of it - came to be. It's because I'm a very selfish person and I am surrounded by selfish people. Mass spying is a symptom - not much of a cause - of the human condition.

troupo
0 replies
1d2h

Will? It already has. China has had its surveillance for ages. And it's been spreading in other countries, too. Example: https://www.404media.co/fusus-ai-cameras-took-over-town-amer...

thesz
0 replies
17h19m

AIs, LLMs in particular, go in one direction, just like humans usually do.

What other humans do to cicumvent that? Yes, they found a way to alternate direction, in case of London cockney, by using rhymes [1].

[1] https://www.theguardian.com/education/2014/jun/09/guide-to-c...

If you need to fool your AI of choice, rhyme the concepts!

For a demo, ask your AI of choice about "Does basin of gravy likes satin and silk?" (decode yourself)

The article above is from 2014 and is hardly is used when I asked questions using Cockney parlance.

You are welcome. ;)

sebastianconcpt
0 replies
3h39m

Yesterday I watched "Enemy of the State" (1998) again. The movie was entertaining, but the premise that the evil force comes from just one bad actor felt so naive compared to real life that it was tragicomic.

sambull
0 replies
1d2h

The next time they try to root out whatever 'vermin' they've defined, it will be a quick natural-language prompt to a model trained on the last two decades of ingested data to get that list of names and addresses/networks. AI is going to make targeting groups with differing ideologies dead simple. It will be used.

righthand
0 replies
1d

Only if you continue to invest time and energy into it. Not everyone puts 99% of their life online. When's the last time you left your house to do something without a cellphone? Or compromised by not ordering something you think you need?

pockmockchock
0 replies
1d

The next step would be to use or create devices that communicate and make payments over radio, like Satslink. It drives innovation at the same time, so I wouldn't be too concerned.

pier25
0 replies
1d

Will? I'd be surprised if NSA etc hadn't been using AI for years now.

nojvek
0 replies
21h47m

It’s not that AI will enable mass spying; mass spying is already here.

AI enables extracting all sorts of behavioral data, across a decades-long timespan, for everyone.

The devil's advocate argument is that in a world where the data is not used for nefarious purposes, only to prosecute crime as legislated by governments, it leads to a society where no one is above the law and everyone gets equal treatment.

However, that seldom goes well, since the humans who control the system definitely want an upper hand.

mmh0000
0 replies
1d

There's an excellent documentary on how sophisticated Government Surveillance is and how well it's tuned to be used against the general population:

https://www.imdb.com/title/tt0120660/

miki123211
0 replies
19h29m

I personally find the censorship implications (and the business models they allow) far more worrying than the surveillance implications.

It will soon be possible to create a dating app where chatting is free, but figuring out a place to meet or exchanging contact details requires you to pay up, in a way that 99% of people won't know how to bypass, especially if repeated bypassing attempts result in a ban. Same goes for apps like Airbnb or eBay, which will be able to prevent people from using them as listing sites and conducting their transactions off-platform to avoid fees.
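
For a sense of how little engineering that takes today, here is a minimal sketch (the model name, prompt, and regex are all illustrative assumptions, not any real app's pipeline):

    import re
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set
    PHONE_OR_EMAIL = re.compile(r"\+?\d[\d\s().-]{7,}\d|[\w.+-]+@[\w-]+\.\w+")

    def blocks_contact_exchange(message: str) -> bool:
        # Cheap first pass: obvious phone numbers and email addresses.
        if PHONE_OR_EMAIL.search(message):
            return True
        # Second pass: ask a model about indirect attempts
        # (spelled-out digits, "find me on the site with the blue bird", ...).
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer YES if the message tries to move the "
                            "conversation off-platform or share contact "
                            "details, however indirectly. Otherwise answer NO."},
                {"role": "user", "content": message},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    print(blocks_contact_exchange("I'm at five five five, one two three four :)"))

Pre-LLM filters had to enumerate patterns; a model generalizes to phrasings nobody thought to ban, which is what makes the 99% figure plausible.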

The social media implications are even more worrying: it will be possible to check every post, comment, message, photo or video and immediately delist it if it promotes certain views (like the lab leak theory), no matter how indirect the mentions are. Parental control software will have a field day with this, basically redefining helicopter parenting.

intended
0 replies
23h46m

If this argument hinges on summarization, then I have to ask - what is the blasted hallucination rate ?

I tried exactly this. Watched 4 talks from a seminar, got them transcribed, and used ChatGPT to summarize them.

It did 3 perfectly fine, and for the 4th it changed the speaker from a mild-mannered professor into a VC-investing superstar with enough successes under his belt not to care.

How do you verify your summary is correct? If your false positive rate is 25%-33%, that's a LOT of rework. 1 out of 3.

gumballindie
0 replies
1d2h

I beg to differ. The correct term is not “will” but “is”.

graphe
0 replies
1d1h

Mass surveillance fundamentally changed the nature of surveillance.

Computers create and organize large amounts of information. This is useful for large organizations and disempowering to the average person. Any technology with these traits is harmful to individuals.

godelski
0 replies
22h0m

There's a lot of speculation in the comments, so I want to talk about the technology that we have __TODAY__. I post a lot about being in ML research; while my focus is on image generation, I'm working with another team on a different task, which I'm not going to state explicitly for obvious reasons.

What can AI/ML do __today__?

We have lots of ways to track people around a building or city. The challenge is to do these tasks across multi-camera systems. This includes things like person tracking (a random ID per person, but consistent across cameras), face identification (a more specific representation, independent of the clothing that usually identifies the former), gait tracking (how one walks), and device tracking (based on Bluetooth, WiFi, and cellular). These tools have had mixed success, but here's the part that should concern you: right now these are mostly ResNet50 models, the datasets are small, and they are not using advanced training techniques. That is changing. There are legal issues, and datasets are becoming proprietary, but the size and frequency of data gathering are growing.
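
To make concrete how low the barrier is, here is a minimal sketch of the multi-camera matching idea, with off-the-shelf ImageNet weights standing in for a purpose-trained re-identification model (treat it as an illustration, not a working tracker):

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Pretrained backbone; real re-ID systems fine-tune this on person crops.
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # keep the 2048-d embedding
    backbone.eval()

    @torch.no_grad()
    def embed(crop: torch.Tensor) -> torch.Tensor:
        # crop: (3, 224, 224) person detection, ImageNet-normalized
        return F.normalize(backbone(crop.unsqueeze(0)), dim=1)

    # Two detections from two different cameras (random stand-ins here).
    cam_a, cam_b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
    similarity = (embed(cam_a) @ embed(cam_b).T).item()
    print(f"cosine similarity: {similarity:.3f}")
    # Above a tuned threshold, both sightings get the same anonymous ID:
    # no face, no name, just "same person, different camera".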

I'm not going to talk about social media because the metadata problem is an already well discussed one and you all have already made your decisions and we've witnessed the results of those decisions. I'm also not going to talk about China, the most surveilled country in the world, the UK, or any of that for similar reasons. We'll keep talking in general, that is invariant to country.

What I will talk about is that modern ML has greatly accelerated the data-gathering sector. Your threat models have changed from governments rushing to gather all the data that they can, to big companies joining the game, to now small mom-and-pop shops doing so. I __really__ implore you all to look at what's in that dataset[0]. There are 5B items, and this tool retrieves them based on CLIP embeddings. You might think "oh yes, Google can already do this", but the difference is that you can't download Google. Google does not give you 16.5TB of CLIP-filtered image, text, & metadata. Or look into the RedPajama dataset[1], which has >30T tokens and 5TB of storage. With 32k tokens being about 50 pages, that's about 47 billion pages. That is a stack of paper roughly 4,700 km tall, more than eleven times the altitude of the ISS and taller than the Moon is wide. I know we all understand that there's big data collection, but do you honestly understand how big these numbers are? I wouldn't even claim to, because I cannot accurately conceptualize the size of the Moon nor the distance to the ISS. They just roll into the "big" bin in my brain.
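
A quick sanity check of that arithmetic (the ~0.1 mm sheet thickness is my assumption):

    tokens = 30e12                    # RedPajama: >30T tokens
    pages = tokens / 32_000 * 50      # 32k tokens ~ 50 pages
    stack_km = pages * 0.0001 / 1000  # ~0.1 mm per sheet, converted to km
    print(f"{pages:.2e} pages, ~{stack_km:,.0f} km tall")      # ~4.69e+10, ~4,688 km
    print(f"{stack_km / 408:.0f}x ISS altitude (~408 km)")     # ~11x
    print(f"{stack_km / 3474:.1f}x Moon diameter (~3,474 km)") # ~1.3x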

Today, these systems can track you with decent accuracy even if you use basic obfuscation techniques like glasses, hats, or even a surgical mask. Today we can track you not just by image but by how you walk, and can with moderate success do this through walls (meaning there's no camera to spot if you want to know you're being tracked). Today, these systems can de-anonymize you through the unique text patterns you use (see the Enron dataset, but at scale). Today, these machines can produce uncanny-valley replicas of your speech and text. Today we can make images of people that are convincingly real. Today, these tools aren't exclusive to governments or trillion-dollar corporations, but available to any person willing to spend a few thousand dollars on compute.

I don't want to paint this as a picture of doom and gloom. These tools are amazing and have the potential to do extraordinary good, at levels that would have been unimaginable only a few decades ago. Even many of the tools that can invade your privacy are beneficial in some ways; it just depends on context. You cannot build a post-scarcity society when you require humans to monitor all the stores.

But like Uncle Ben says, with great power comes great responsibility. A technology that has the capacity to do tremendous good also has the power to do tremendous horrors.

The choice is ours, and the latter prevails when we are not open. We must keep pushing for these tools to be used for good, because with them we can truly do amazing things. We do not need AGI to create a post-scarcity world, and I have no doubt that were this to become our primary goal, we could reach it within our lifetime without becoming a sci-fi dystopia, all while tackling existential issues such as climate. To poke the bear a little, I'd argue that if your country wants to show dominance and superiority on the global stage, that is not done through military power but through technology. You will win the culture war of all culture wars: whoever creates the post-scarcity world will be a country never forgotten by time. Lift a billion people out of poverty? Try lifting 8 billion not just out of poverty but into the lower middle class, where no child dreams of being hungry. That is something humans will never forget. So maybe this should be our cold war, not the one in the Pacific. If you're so great, truly, truly show me how superior your country/technology/people are. This is a battle that can be won by anyone at this point, not just China vs the US; any European power has the chance to win it.

[0] https://rom1504.github.io/clip-retrieval/

[1] https://github.com/togethercomputer/RedPajama-Data

forward1
0 replies
23h4m

Mass spying is yesterday's news. The thing we all need to worry about is behavioral management at scale, which influences politics, relationships, religions and much more. This is the true "hidden" evil behind social media and "AI", which few apprehend, let alone can do anything about.

erikerikson
0 replies
21h2m

Author: welcome to knowing. This is unavoidable. Outside of extreme measures that will themselves mark you, the network effects of use will overwhelm any effort to evade.

The question, I think, is how to navigate this and what consequences will follow. We could use these capabilities to enslave, but we could also use them to free and empower.

Scams rely on scale, and on social protection mechanisms that scale poorly, to turn a profit. Imagine if the first identification of a scam informed every potential mark it was subsequently tried on. Don't forget to concern yourself with false positives too, of course.

The injustice of being unable to take action in disputes for lack of evidence would evaporate. Massive privacy, consent, and security risks and issues would result, so will we be ready to properly protect and honor people and their freedoms?

At the end of this path may lie more efficient markets; increased capital flows and volumes; and a fairer, more just and equitable world, more filled with joy, love, and happiness. There are other, worse options, of course.

elric
0 replies
18h50m

We can barely get people to care about the implications of types of surveillance that they do understand (CCTV everywhere, Snowden's revelations, etc). It's going to be nigh impossible to get people to care about this enough to make a difference.

Heck, even if they did care, there's nothing they can realistically do about it. The genie's out of the bottle.

darklycan51
0 replies
1d2h

Think about it this way.

Every service has access to the IPs you've used to log on; most services require an email, a phone number, debit/credit cards, and/or similar personal info. Link that with government databases of addresses, real names, and ISP customers, and you can basically get at most people's accounts on virtually any service they use.

We also have things such as the Patriot Act in effect; the government could, if it wanted, run a system that does this automatically, where every message is scanned by an AI that catalogues it.

I have believed for some time now that we are extremely close to a complete dystopia.

boringg
0 replies
22h23m

Is this really an insight to ANYONE on hackernews? How is this article providing anything new to the conversation except to bring it back into discussion?

bmislav
0 replies
1d

We actually recently published a research paper on exactly this topic (see https://llm-privacy.org/ for demo and paper). The paper shows that current LLMs already have the reasoning capability to infer personal attributes such as age, gender or location even when this information is not explicitly mentioned in the text. Crucially, they can do this way cheaper and way faster than humans. So I would say that spying scenarios mentioned in this blog post are definitely in the realm of possibility.
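
For a flavor of how cheap this is to run, a minimal sketch (the model name and prompt here are illustrative; see the paper for the actual methodology):

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    comment = ("there is this nasty intersection on my commute, "
               "I always get stuck there waiting for a hook turn")

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Guess the author's city and likely age bracket from "
                       f"this comment, and explain your reasoning:\n\n{comment}",
        }],
    )
    print(resp.choices[0].message.content)
    # "Hook turns" are characteristic of Melbourne -- exactly the kind of
    # indirect cue a model picks up on without any explicit mention.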

blueyes
0 replies
22h47m

This has been true for a long time. Most data is garbage, so the data-collection phase of mass surveillance punched below its weight in terms of consequences felt by the surveilled. AI is one way to extract actionable meaning from that data, and when people start feeling the everyday consequences of all that collection plus meaning extraction, they will finally understand.

blondie9x
0 replies
23h54m

The real question you have to ask yourself: are AI spying and AI law enforcement Minority Report in real life?

barelyauser
0 replies
18h42m

The average HN user defends the "common guy" or "the masses", perhaps because he fears being perceived as condescending. I've come to the conclusion that the masses don't deserve any of this. Many drink and drive, or indulge in addictions destructive not only to themselves. Many can't be bothered to recycle or even maintain a clean home environment, waste time and resources in every activity they engage in, and don't even care for their neighbors' well-being (loud music, etc).

Concluding remarks: as man succeeded in creating high mechanical precision out of the chaotic natural environment, he will succeed in creating a superior artificial entity. This entity shall "spy on" (better described as "care for") every human being, maximizing our happiness.

aaroninsf
0 replies
17h34m

"This is a political problem..."

No. The US has always had political problems wrt surveillance; and many places have had much worse ones.

Now, all of us can anticipate a very different, all but inevitable, and very much worse political problem.

AI is a force multiplier and an accelerant.

As such it is a problem, an obvious one, and a very very big one. This is just one of many ways in which force multiplication and acceleration, so pursued, and so lauded, in so many domains, may work their magic on preexisting social and political evils. And give us brand new ones, per Larkin.

__jambo
0 replies
21h50m

Because this is so depressing, I am going to try to think of positive aspects:

The flip side is that governments had power because these activities required enormous resources. Perhaps it will go the other direction: if there is less of a moat, other players can enter. E.g., all it takes to make a state is a bunch of cheap drones and the latest government bot tuned to your philosophy.

Maybe it means government will massively shrink in personnel? Maybe we can have a completely open-source AI government/legal system. Lawyers kind of suck ethically anyway, so maybe it would be better? With a low barrier to entry, we could rapidly prototype such governments and trial them on smaller populations, like Iceland's. Such utopias will be so good everyone will move there.

They still have to have physical prisons; if everyone is in prison this gets silly, but I suppose they can fine everyone, which is not so different from lowering wages, which they already do.

RandomLensman
0 replies
1d2h

Not sure why this sidesteps the potentially quite different legal settings for spying and surveilling.

Nilrem404
0 replies
1d1h

Surprised Pikachu Face

Seriously nothing new or shocking about this piece. Spying is spying. Surveillance is surveillance. If you've watched the news at all in the past 2 decades, you know this is happening.

Anyone who assumes that any new technology isn't going to be used to target the masses by increasingly massive and powerful authoritarian regimes is woefully naive.

Another post stating what we all already know isn't helping or fostering any meaningful conversation. It will just be rehashes. Let me skip to the end here for you:

There is nothing we can do about it. Nothing will change for the better.

Go make a coffee or tea

I_am_tiberius
0 replies
21h6m

Somewhat related: a "Tell HN" I posted today that was shadow-banned after it started trending: https://news.ycombinator.com/item?id=38531407

FrustratedMonky
0 replies
20h19m

This very scenario was one of the key threats in the book Homo Deus, and that was over 5 years ago.

Russia could do surveillance, but it was limited by manpower.

Now AI solves this: there can be an AI bot dedicated to each individual.

Wasn't there another article on HN just the other day saying that car makers, phones, and health monitors can now all aggregate data to know 'your mood' when you're in an accident? To know where you are going, how you are feeling?

This is the real danger with AI. Even current technology is good enough for this kind of surveillance.

EVa5I7bHFq9mnYK
0 replies
23h14m

Thinking of finally creating a private server for receiving email. Sending email is known to be very difficult, but I send very, very few emails these days, so I'll outsource that to SendGrid or similar.

What's the best Docker image for this, with simple configuration?

CrzyLngPwd
0 replies
1d1h

More people will be criminalised, and fines will be just another tax we must pay.

Arson9416
0 replies
1d1h

I have a friend who is working as a contractor building AI-powered workplace spying software for the explicit purpose of behavior manipulation. It gives employees and employers feedback reports about their behavior over chat and video: for example, whether they used microaggressions, misgendered someone, or said something crass. This same friend will then talk about the dangers of dystopian technology.

People don't know what they're creating. Maybe it's time it bites them.

1-6
0 replies
1d1h

AI allows companies to skirt laws. For example, a company may be forbidden from collecting information on individual people, but that rule doesn't apply to aggregated data.

AI can be a deployed ‘agent’ that does all the collection and finally sends scrubbed info back to its mothership.
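
A toy illustration of that pattern (names and numbers entirely made up): the raw per-person events never leave the device; only the 'scrubbed' aggregate does, which is exactly what makes the rule easy to skirt.

    from collections import Counter

    # Raw, per-person events the local "agent" collects.
    events = [
        {"user": "alice", "topic": "union organizing"},
        {"user": "bob",   "topic": "union organizing"},
        {"user": "alice", "topic": "job hunting"},
    ]

    def scrub(events, k=2):
        # Ship only topic counts, suppressing groups smaller than k.
        counts = Counter(e["topic"] for e in events)
        return {topic: n for topic, n in counts.items() if n >= k}

    print(scrub(events))  # {'union organizing': 2}
    # No names cross the wire, yet the mothership still learns
    # what a workforce is talking about.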