I think another aspect of this is mass criminal law enforcement enabled by AI.
Many of our criminal laws are written with the implicit assumption that it takes resources to investigate and prosecute a crime, and that this will limit the effective scope of the law. Prosecutorial discretion.
Putting aside for the moment the (very serious) injustice that comes with the inequitable use of prosecutorial discretion, let's imagine a world without this discretion. Perhaps it's contrived, but one could imagine AI making it at least possible. Even by the book as it's currently written, is it a better world?
Suddenly, an AI monitoring public activity can trigger an AI investigator to draft a warrant to be signed by an AI judge to approve the warrant and draft an opinion. One could argue that due process is had, and a record is available to the public showing that there was in fact probable cause for further investigation or even arrest.
Maybe a ticket just pops out of the wall like in Demolition Man, but listing in writing clearly articulated probable cause and well-presented evidence.
Investigating and prosecuting silly examples suddenly becomes possible. A CCTV camera catches someone finding a $20 bill on the street, and a records cross-check finds that they didn't report it on their tax return. The myriad ways one can violate the CFAA. A passing mention of music piracy on a subway train can become an investigation and prosecution. Dilated pupils and a staggering gait could support a drug investigation. Heck, jaywalking tickets given out as though by speed camera. Who cares if the juice wasn't worth the squeeze when it's a cheap AI doing the squeezing?
Is this a better world, or have we just all subjected ourselves to a life hyper-analyzed by a motivated prosecutor?
Turning back in the general direction of reality, I'm aware that arguing "if we enforced all of our laws, it would be chaos" is more an indictment of our criminal justice system than it is of AI. I think that AI gives us a lens to imagine a world where we actually do that, however. And maybe thinking about it will help us build a better system.
If it increases ticket issuance for passenger vehicle noise violations (eg - "sport" exhausts, booming stereo system, motorcycles), I'm down.
"If it hurts people I hate I accept"
- Every endorsement of authoritarian rule ever
Feels pretty legit though. My freedom-from is impacted by other people's freedom-to: by curtailing their freedom, mine is expanded. Sure they won't like it - but I don't like it the other way round either.
This doesn't add up. At best your overall freedom remains the same. You gain quiet, you lose the freedom to make noise yourself. Seems like a net-negative to me.
Consider how little freedom you would have if laws were enforced to the lowest common denominator of what people find acceptable.
I can go into the countryside and make noise all day. I don't see that there's a pre-existing freedom to inflict loud noises on my neighbors for no useful purpose.
You most definitely cannot disturb wildlife or rural communities with noise.
I'd argue that if we want to support individual growth and creativity, freedom-to should have higher priority than freedom-from, which, consciously or not, seems to be the traditional default in the US, perhaps due to its culture of supporting innovation and its break-away past. I believe some refer to these as positive and negative freedoms, respectively.
This is also why a number of people truly revolt against the idea of higher density living. If the only way to have your freedom-from is to be free from other people, then you move away from other people.
I've watched it play out on my mother-in-law's street. What was once a quiet dead end street is now a noisy, heavily trafficked road because a large apartment building was put up at the end.
The freedom-to people have significantly decreased her quality of life, blasting music as they walk or drive by at all hours, along with a litany of other complaints that range from anti-social to outright illegal behavior. Even setting aside the illegal stuff, she is significantly less happy living where she is now.
Effectively enforcing laws we agreed to is hardly authoritarian.
You'd disagree about 10 seconds after they did...
If you could suddenly be effectively found and prosecuted for violating every single law on the books, it is near a 100% probability that you'd burn the government to the ground in a week.
There are so many laws no one can even tell you how many you are subject to at any given time at any given location.
Automatically enforcing all the laws is vastly different from the effective enforcement of laws "we agreed to".
The full body of legislation is riddled with contradiction, inconsistency, and ambiguity, and the pretense that "legislated upon = fair" is at best a schoolroom fantasy.
False equivalence. GP complained about a specific behavior, not about specific people.
Yes it's always some reasonably specific behavior that justifies the harsh new rules.
Reminds me of this quote attributed to a past Peruvian president and general, Benavides:
“For my friends, everything; for my enemies, the law.”
Yea this is a good point. If justice is executed by systems, rather than people (the end result from this scenario), we have lost the ability to challenge the process or the people involved in so many ways. It will make challenging how the law is executed almost impossible because there will be no person there to hold responsible.
I think that’s a good reason to question whether this would be due process.
Why do we have due process? One key reason is that it gives people the opportunity to be heard. One could argue that being heard by an AI is no different from being heard by a human, just more efficient.
But why do people want the opportunity to be heard? It’s partly the obvious, to have a chance to defend oneself against unjust exercises of power, and of course against simple error. But it’s also so that one can feel heard and not powerless. If the exercise of justice requires either brutal force or broad consent, giving people the feeling of being heard and able to defend themselves encourages broad consent.
Being heard by an AI, then, has a brutal defect: it doesn't make people feel heard. A big part of this may come from the idea that an AI cannot be held accountable if it is wrong or if it is acting unfairly.
Justice, then, becomes a force of nature. I think we like to pretend justice is a force of nature anyway, but it’s really not. It’s man-made.
"It doesn't make people feel heard" isn't a real emotion; it includes a judgment about the AI. According to "Nonviolent Communication" (p. 235), "unheard" speaks to the feelings "sad, hostile, frustrated" and the needs for "understanding" and "consideration". Everyone agrees AI would be more efficient, but people are concerned that the AI will not be able to make contextual considerations based on a shared understanding of what it's like to live a human life.
That's true! I suspect it will be difficult to convince people that an AI can, as you suggest, make contextual considerations based on a shared understanding of what it's like to live a human life.
This is a hypothesis.
I would say that the consumers of now-unsexed "AI" sex-chat-bots (Replika) felt differently. So there are actually people who feel heard talking to an AI. Who knows, if it gets good enough maybe more of us would feel that way.
It's not that "justice is executed by systems", it's that possible crimes will be flagged by AI systems for humans to then review.
eg AI will analyze stock trades for the SEC and surface likely insider trading. Pretty sure they already use tools like Palantir to do exactly this, it's just that advanced AI will supercharge this even further.
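As a toy illustration of the kind of screening being described (the function, the data shape, and every threshold here are invented for the sketch; this is not any real SEC or Palantir tooling), a first-pass filter might simply flag unusually large trades placed shortly before a market-moving announcement, relative to the trader's own baseline:

```python
from datetime import date, timedelta

def flag_suspicious_trades(trades, announcement_date,
                           window_days=10, size_multiple=5.0):
    """Flag trades placed within `window_days` before an announcement
    whose size far exceeds the trader's historical average.
    `trades` is a list of dicts: {"trader", "date", "shares"}.
    Purely an illustrative heuristic, not a real surveillance rule."""
    window_start = announcement_date - timedelta(days=window_days)

    # Baseline: average shares per trade, computed only from
    # trades placed *before* the suspicion window opens.
    totals, counts = {}, {}
    for t in trades:
        if t["date"] < window_start:
            totals[t["trader"]] = totals.get(t["trader"], 0) + t["shares"]
            counts[t["trader"]] = counts.get(t["trader"], 0) + 1

    flagged = []
    for t in trades:
        n = counts.get(t["trader"], 0)
        if n == 0:
            continue  # no baseline; a real system would handle this case too
        avg = totals[t["trader"]] / n
        in_window = window_start <= t["date"] < announcement_date
        if in_window and t["shares"] >= size_multiple * avg:
            flagged.append(t)
    return flagged
```

A real system would of course use far richer features (counterparties, options activity, communication metadata); the point is only that the flagging step is mechanical, and "advanced AI" mostly scales it up.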
Eh, this is problematic for a number of reasons that need to be addressed when adding any component that can increase the workload for said humans. The increased workload will push people to take shortcuts, and the brunt of the resulting prosecutions will commonly fall on groups less able to represent and defend themselves legally.
At that point some people will physically revolt; I know I will. We're not that far away from said physical AI-related revolt anyway, and I do feel for the computer programmers here who will be the target of that physical violence; hopefully they knew what they were getting into.
Ha. You'd like to think so, but it's going to be awfully hard to coordinate resistance when the mass spying sweeps everyone up in a keyword-matching dragnet before the execution phase. This is the problem with every outgroup being labelled "terrorists."
Sabotage will be the name of the game at that point. Find ways to quietly confuse, poison, overwhelm and undermine the system without attracting the attention of the monitoring apparatus.
I get your point, I think along those lines quite often myself.
As for the sabotage part, bad input data that does not accurately get labeled as such until way too late in the "AI learning cycle" is, I think, the way to go. Lots and lots of such bad input data. How we would get to that point, I don't know yet, but it's a valid option going forward.
Chaos engineering. As a modern example, all this gender identity stuff wreaks absolute havoc on credit bureau databases.
Tomorrow, we'll have people running around in fursuits to avoid facial recognition. After that, who knows.
Don’t worry, stuff like this is why we have the 2A here in the USA. Sounds like it’s time for AI programmers to get their concealed carry licenses. Of course, they will be the first users of smart guns, so don’t bother trying to steal their pistol out of their holsters.
If the same monitoring is present on buses and private planes, homeless hostels and mega-mansions then it absolutely is better.
"The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread."
I mean, presumably the AI wouldn't just be monitoring people sleeping under bridges, but would also be able to effectively cut through tax evasion bullshit, insider trading, bribery, etc.
Private property? Nah, nothing better about that.
You're describing a hypothetical world that will never exist. Basically: if we first solve all corruption and all inequality of enforcement between economic/power classes, then all-pervasive surveillance will be a net benefit.
It's like pondering hypotheticals about what would happen if we lived in Middle Earth.
Yes. With properly developed AI, rather than penalizing speeding, which most of us do and which is only a proxy for harmful outcomes and inefficiencies, we could penalize genuinely reckless behaviors: following too close to other vehicles, aggressive weaving, and other factors that are tightly correlated with the negative outcomes we care to reduce (i.e. loss of life, property damage). Such systems could also warn people about their behavior and guide them in ways that positively increase everyone's benefit. Of course this circumstance will probably go away with self-directing cars (which fall into the "do the right thing by default" bucket), but the point stands: laws can be better formulated to focus on increasing the probabilities of desirable outcomes (i.e. harm reduction, efficiency, effectiveness), be embodied and delivered in the moment (research is required on means of doing so that don't exacerbate problems), and carry with them a beneficial component (i.e. understanding).
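To make the "coming too close" behavior concrete, one standard proxy traffic engineers use is time headway: the gap to the car ahead divided by your speed. A sketch, with thresholds I've invented purely for illustration (real enforcement policy would need far more care, context, and validation):

```python
def headway_seconds(gap_m, speed_mps):
    """Time headway: seconds until reaching the lead car's current
    position. Infinite when stopped (no tailgating at 0 speed)."""
    return float("inf") if speed_mps <= 0 else gap_m / speed_mps

def classify_following(gap_m, speed_mps, reckless_s=1.0, caution_s=2.0):
    """Classify following distance. The 1s/2s cutoffs are made-up
    example values, not any jurisdiction's actual standard."""
    h = headway_seconds(gap_m, speed_mps)
    if h < reckless_s:
        return "reckless"
    if h < caution_s:
        return "warn"
    return "ok"
```

For example, a 10 m gap at highway speed (30 m/s) is a third of a second of headway, well into "reckless", while the same gap in a 5 m/s parking-lot crawl is two full seconds. That speed-relative framing is exactly what a flat speeding rule misses.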
Unfortunately, different people have different definitions of "harm" and "effectiveness". What one person considers a "positive increase in behavior," another might consider a grievous violation of their freedom and personal autonomy. For example, there is an ongoing debate about compelled speech. Some people view it as positive and desirable to use the force of law to compel people to refer to others as they wish to be referred to, while others strongly support their freedom to speak freely, even if others are offended. Who gets to program the AI with their definition of positivity in this case?
A free society demands a somewhat narrowly tailored set of laws that govern behavior (especially interpersonal behavior). An ever-present AI that monitors us all the time and tries to steer (or worse, compel with the force of law) all of our daily behaviors is the opposite of freedom, it is the worst kind of totalitarianism.
We agree that defining such terms involves trade-offs yet the perfect should not be the enemy of the better.
Certainly the perfect should not be the enemy of the good. But the bad should be the enemy of the good. The very core of freedom is the ability to have your own thoughts, your own value system, and your own autonomy. In a free society, laws exist so that individuals are able to enjoy their own thoughts, values, and autonomy while being constrained from harming others. Obviously, there is a balance to strike (which is not always easy to determine) between law and freedom. We see this on display every day in our society. You need look no further than the crisis in San Francisco (and many other US cities) between the right of a mentally ill individual to sleep and defecate on the sidewalk and the right of society to pass laws to prevent this activity.
The conversation changes when you are talking about prescribing a set of behaviors that are deemed universally "good" and are pushed (and possibly demanded) by an ever-present AI that is constantly looking over your shoulder and judging your behavior (and possibly thoughts) by a preset behavioral standard that may or may not match your own preferences. This is totalitarianism beyond anything Orwell ever imagined. What you consider good and desirable, someone else considers bad and despicable. That is the essence of freedom. In a free society, the law exists (or should exist) only to stop you two from hitting each other over the head or engaging in other acts of overt violence and aggression, not to attempt to brainwash and coerce one of you into falling into line.
We agree that the bad and good are enemies, or so at least the bad would like you to think. The good might be convinced the bad has good points that need refining, growth, and improvement. I'm fine with those disagreeing.
I think what you're saying is that it's hard to mediate between everyone, which is true. Perhaps you are also saying that the implication of a standard of correctness is inherently totalitarian. It seems to me you weakened that by admitting there are things that should be universally barred in free societies. Violence was your reference, but murder might be even easier. Easier yet: that breast cancer is bad? We make allowances for boxing and war, but broad agreement can be found in society, and across societies by careful anthropologists.
However, it seems you project onto me (or perhaps the AI) a "Highlander hypothesis": that there can be only one correctness, or even any single notion of correct within the system. Such a system can operate simply on what appears to be, with strings of evidence for that description. As you note, beyond a small set of mostly-agreed-to matters we are more diverse, and there are spectrums spanning scoped policies (say, by public and private institutions) all the way to individual relationship agreements custom-fit to two people. It is, in fact, the nature of a free society to allow us such diversity and self-selection of the rules we apply to ourselves (or not). An ever-present AI could mediate compatibilities, translate paradigms to reduce misunderstanding or adverse outcomes (as expected by the system over the involved parties), and generally scale the social knowing and selection of one another. It could provide a guide to navigating life and education for our self-knowing, and for choosing our participation more broadly. The notion there isn't to define correctness so much as to see what is, and to facilitate self-selection of individual correctnesses based on our life choices and expressed preferences.
To be honest in closing, this has dipped into some idealisms and I don't mean to be confused in suggesting a probability of such outcomes.
I think this depends on the law. For jaywalking, sure. For murder and robbery probably less so. And law enforcement resources seem scarce on all of them.
Murder and robbery too. Those crimes are just worth investigating.
The problem here is that this is a bureaucratic view, not how law enforcement actually works in the field.
https://www.kxan.com/news/national-news/traffic-tickets-can-...
So what typically happens is that these AI systems are sold on catching murderers, but at the end of the day they are revenue-generation systems for tickets. And then those systems get concentrated in places where a smaller percentage of the population can afford the lawyers who would turn said ticketing systems into cost centers.
Oh, I definitely wasn't arguing for AI enforcement. Not even a little; I was just saying that laws are written with the assumption that enforcement takes resources.
The software that already exists along these lines exhibits bias against marginalized groups. I have no trouble foreseeing a filter put on the end of the spigot that exempts certain people from the inconvenience of such surveillance. Might need a new law (it'll get passed).
Sounds like the devil is in the details. Often the AI seems to struggle with darker skin… are you suggesting we sift who can be monitored/prosecuted based on skin darkness? That sounds like a mess to try to enshrine in law.
Strong (and unhealthy) biases already exist when using this tech, but I am not sure that is the lever to pull that will fix the problem.
You know that's not what I was suggesting. I'm saying that if precedent is anything to go by, companies will be perfectly happy extending the paradigm established with sentencing software to anyone who can't pay or leverage their connections. If we continue down this path, tomorrow's just today, but worse, and more. (Please try to have a more rational understanding of today, tomorrow.)
IIRC, in [1] it mentioned a few examples of AI that exhibited the same bias that is currently present in the judicial system, banks etc.
[1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction
This is honestly what scares me the most. Our biases are built into AI, but we pretend they're not. People will say, "Well, it was the algorithm/AI, so we can't change it." Which is just awful and should scare the shit out of everyone. There was a book [0] written almost fifty years ago that predicted this. I still haven't read it, but really need to. The author claims it made him a pariah among other AI researchers at the time.
[0] https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reaso...
https://en.wikipedia.org/wiki/Computers_Don%27t_Argue while not about AI directly and supposedly satirical really captures how the system works.
This has been a thing since 2017: https://futurism.com/facial-recognition-china-social-credit
- "Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city."
- "If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering the system with mobile carriers, so that offenders receive a text message with a fine as soon as they are caught."
There are lots of false positives too, like this case where a woman whose face appeared in a printed advertisement on the side of a bus was flagged for jaywalking. https://www.engadget.com/2018-11-22-chinese-facial-recogniti...
Just checking ChatGPT out of interest:
Top Left Panel: This panel shows the pedestrian crossing with no visible jaywalking. The crossing stripes are clear, and there are no pedestrians on them.
Top Center Panel: Similar to the top left, it shows the crossing, and there is no evidence of jaywalking.
Top Right Panel: This panel is mostly obscured by an overlaid image of a person's face, making it impossible to determine if there is any jaywalking.
Bottom Left Panel: It is difficult to discern specific details because of the low resolution and the angle of the shot. The red text overlays may be covering some parts of the scene, but from what is visible, there do not appear to be any individuals on the crossing.
Bottom Right Panel: This panel contains text and does not provide a clear view of the pedestrian crossing or any individuals that might be jaywalking.
I don't think it'd be chaos. I think the laws would be adjusted.
I think that's an optimistic view, but even if it's right, it will be years-to-decades of semi-chaos before the laws are updated appropriately.
This is fair. I just wonder if we're about getting to the point where we should be talking about how they would be adjusted.
An alternative possibility is that society might decay to the point where future people choose this kind of dystopia. Imagine a fully automated, post-employment world gone horribly wrong, where the majority of society is destitute, aimless, opiate-addicted. No UBI utopia of philosophers and artists; just a gradual Rust-Belt-like decline that gets worse and worse, no brakes at the bottom of the hill. Not knowing what else to do, the "survivors" might choose this kind of nuclear approach: automate away the panopticons, the prisons, the segregation of failed society. Eloi and Morlocks. Bay Area tech workers and Bay Area tent cities. We haven't done any better in the past, so why should we expect to do better in the future, when our "tools" of social control become more efficient, more potent? When we can de-empathize more easily than ever, through the emotional distance of AI intermediaries?
Oh boy, real life Manna
https://marshallbrain.com/manna1
You're missing the part where people who get access to stronger AI can similarly use it to improve their odds of not being caught, or of getting better outcomes, while the poor guy gets fined for AI hallucinations and doesn't have the money to reach a human, as if the court were now one big Google support line.
So the way out of this is that you have the constitutional right to confront your accuser in court. When accused by a piece of software that generally means they have to disclose the source code and explain how it came to its answers.
Not many people have exercised this right with respect to DUI breathalyzers but it exists and was affirmed by the Supreme Court. And it will also apply to AI.
In democracies at least, the law can be changed to reflect this new reality. Laws that don’t need to be enforced and are only around to enable pretextual stops can be dropped if direct enforcement is possible.
There are plenty of crimes where 100% enforcement is highly desirable: pickpocketing, carjacking, (arguably) graffiti, murder, reckless and impaired driving, to name a few.
Ultimately, in situations with near 100% enforcement, you shouldn’t actually need much punishment because people learn not to do those things. And when there is punishment, it doesn’t need to be severe.
Deterrence theory is an interesting field of study; one source among many: https://journals.sagepub.com/doi/full/10.1177/14773708211072...
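The certainty-versus-severity trade-off above can be put in toy expected-value terms (a deliberately simplistic rational-actor model; real deterrence research accounts for risk attitudes, salience, and much more): if deterrence tracked only the expected penalty p × F, then raising the detection probability would let the fine fall proportionally.

```python
def equivalent_fine(p_old, fine_old, p_new):
    """Fine that keeps the expected penalty p * F constant when the
    detection probability changes. Toy rational-actor model only."""
    return p_old * fine_old / p_new
```

Under that (hypothetical) model, a $1000 fine enforced 2% of the time carries the same expected cost as a $20 fine enforced every time, which is the intuition behind "near-100% enforcement shouldn't need severe punishment."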
This is a good point; it reminds me of how VAR has come into football. Before VAR, there were fewer penalties awarded. Now that referees have an official camera they can rely on, they can enforce the rules exactly as written, and it changes the game.
You don’t need AI for that. It was probably possible to do something like that when personal computers first came out.
Or maybe if such a thing is applied for real it will lead to the elimination of bullshit laws (jaywalking, ...), since suddenly 10% of the population would be fined/incarcerated/...
Automation, with its overzealous interpretations that allow less leeway and common sense, has, as we have seen, brought many an automated traffic/parking ticket into question.
Applying that to many walks of life, say farming, could well see chaos and a whole new interpretation of the song "Old McDonald had a farm, AI AI oh": the farm's gone, as McDonald is in jail for numerous permit, environmental, and agricultural violations, deemed all the more serious because his produce crossed state lines, while he got buried in automated red tape.
Or the AI just sends a text message to all the cops in the area saying "this person has committed a crime". Like this case where cameras read license plates, check whether the car is stolen, and then text nearby cops. At least when it works and doesn't flag innocent people, as in the case below:
https://www.youtube.com/watch?v=GUvZlEg8c8c
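The pipeline being described (plate read → hotlist check → alert), together with its classic failure mode, can be sketched as follows. Everything here is hypothetical: the function, the field names, and the confidence threshold are invented for illustration, not taken from any real ALPR product:

```python
def check_plate(plate_read, confidence, hotlist, min_confidence=0.95):
    """Return an alert if an OCR'd plate matches the stolen-vehicle
    hotlist. Low-confidence reads are the classic false-positive
    source (misread characters, wrong state), so this sketch routes
    them to a human instead of auto-alerting officers."""
    if plate_read not in hotlist:
        return None  # no match: the overwhelmingly common case
    if confidence < min_confidence:
        return {"plate": plate_read, "action": "human_review"}
    return {"plate": plate_read, "action": "notify_nearby_officers"}
```

The design point is that the human-review branch is exactly the step that gets skipped when the alert goes straight to a text message, which is how innocent drivers end up stopped at gunpoint over a misread plate.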