This seems like the Kim Dotcom situation again.
Why are these service providers being punished for what their users do? Specifically, these service providers? Because Google, Discord, Reddit, etc. all contain some amount of CSAM (and other illegal content), yet I don't see Pichai, Citron, or Huffman getting indicted for anything.
Hell, then there's the actual infrastructure providers too. This seems like a slippery slope with no defined boundaries, one the government can arbitrarily use to pin the blame on people they don't like. Because ultimately, almost every platform with user-provided content will have some quantity of illegal material.
But maybe I'm just being naive?
Because those providers cooperate with authorities and moderate their content to a fairly large degree?
How does Meta cooperate with the authorities? Isn't WhatsApp supposed to be end-to-end encrypted?
Telegram is for the most part not end-to-end encrypted: one-to-one chats can be, but aren't by default, and groups/channels are never E2EE. That means Telegram is privy to a large amount of the criminal activity happening on its platform but allegedly chooses to turn a blind eye to it, unlike Signal or WhatsApp, who can't see what their users are doing by design.
Not to say that deliberately making yourself blind to what's happening on your platform will always be a bulletproof way to avoid liability, but it's a much more defensible position than being able to see the illegal activity on your platform and doing nothing about it. Especially in the case of genuinely serious crimes like CSAM, terrorism, etc.
If law enforcement asked them nicely for access I bet they wouldn't refuse. Why take responsibility for something if you can just offload it to law enforcement?
The issue is law enforcement doesn't want that kind of access. Because they have no manpower to go after criminals. This would increase their caseload hundredfold within a month. So they prefer to punish the entity that created this honeypot. So it goes away and along with it the crime will go back underground where police can pretend it doesn't happen.
Telegram is basically punished for existing and not doing law enforcement's job for them.
Apparently, they have. Sorry for your bet.
Maybe they didn't ask nicely. Or they asked for something else. There's literally zero drawback for a service provider in giving law enforcement secret access to the raw data it holds. You'd be criminally dumb if you didn't do it. Literally criminally.
I bet that if they really asked, they pretty much asked Telegram to build them a one-click generator that would print them court-ready documents about criminals on their platform, so that law enforcement can just click a button and yell "we got one!" to the judge.
If it's not end-to-end encrypted, what does that mean? What's the method by which govts access these messages?
You can simply join those channels. Getting an invite is not hard, or even unnecessary, from what I hear.
End-to-end encrypted means that the server doesn't have access to the keys. When the server does have access, it could read messages to filter them or give law enforcement access.
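To make that concrete, here's a minimal sketch of the key-placement difference, in Python with the PyNaCl library. The "server" is hypothetical; this illustrates the concept, not any particular app's implementation:

    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own keypair; private keys never leave the device.
    alice = PrivateKey.generate()
    bob = PrivateKey.generate()

    # E2EE: Alice encrypts directly to Bob's public key, on her own device.
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # The server only ever relays the ciphertext. Holding neither private key,
    # it cannot decrypt -- that's what makes the encryption end-to-end.
    relayed = ciphertext

    # Bob decrypts locally with his own private key.
    assert Box(bob, alice.public_key).decrypt(relayed) == b"meet at noon"

In a non-E2EE design (like default Telegram chats), the server holds the keys for its leg of the trip, so it can read, filter, or hand over the plaintext.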
It is supposedly end-to-end encrypted - and only in a shallow way. Also, the app is closed source and you can't develop your own client.
It's basically end-to-end-trust-me-bro-level encrypted.
I'm more disturbed by the fact that on HN we have zero devs confirming or denying this thing about FB's internals wrt encryption. We know there are many devs who work there who are also HN users. But I've yet to see one of them chime in on this discussion.
That should scare a lot of us.
I find it pretty ridiculous to assume that any dev would comment on the inner workings of their employer's software in any way beyond what is publicly available anyway. I certainly wouldn't.
I find it funny that they claim to be “end-to-end” when they have censored at least one of my messages.
The receiving end shared your message with the administrators? E2e doesn't mean you aren't allowed to do what you want with the messages you receive, they are yours.
Nope, it didn't even arrive on their end; it prevented me from sending the message and said I wasn't allowed to send that. So they are pre-screening your messages before you send them.
Read the founder's exit letter. WhatsApp is definitely not E2E encrypted for all features.
You leak basic metadata (who talked to whom at what time).
You leak 100% of messages with a "business account", which is another way of saying "e2e you->meta, and then meta relays the message e2e to the N recipients handling that business account".
Then there are all the links and images, which are sent e2e you->meta; meta stores the image/link once, sends you back a hash, and you send that hash e2e to your contact (see the sketch below).
there's so many leaks it's not even fun to poke fun at them.
And I pity anyone who is fool enough to think meta products are e2e anything.
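For illustration only, here's a toy version of the dedupe-by-hash media scheme described above. This is a sketch based on the commenter's description, not Meta's actual code - MediaStore is a hypothetical stand-in. The point is that the server must see the media plaintext in order to hash and store it once:

    import hashlib

    class MediaStore:
        """Hypothetical server-side blob store keyed by content hash."""

        def __init__(self):
            self.blobs = {}

        def upload(self, data):
            # The server sees the plaintext media here -- this is the leak.
            digest = hashlib.sha256(data).hexdigest()
            self.blobs.setdefault(digest, data)  # stored once, deduplicated
            return digest  # sender forwards this handle over the E2EE channel

        def fetch(self, digest):
            # Anyone holding the hash (including the operator) can pull the blob.
            return self.blobs[digest]

    store = MediaStore()
    handle = store.upload(b"<image bytes>")         # you -> server
    assert store.fetch(handle) == b"<image bytes>"  # recipient <- server

Only the short handle travels end-to-end encrypted; the media itself sits on the server in a form the operator can inspect.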
Exactly. Another case of a business hijacking a term and abusing it to describe something else.
Actually, it's a nominated end point, and from there it's up to the business. It works out better for Meta, because they aren't liable for the content if something goes wrong (i.e. a secret is leaked, or PII gets out). Great for GDPR, because as they aren't acting as a processor of PII, they are less likely to be taken to court.
WhatsApp has about the same level of practical "privacy" (encryption is a loaded word here) as iMessage. The difference is, there are many more easy ways to report nasty content in WhatsApp, which reported ~1 million cases of CSAM a year vs Apple's 267. (Not 200k, just 267. That's the whole of Apple. https://www.missingkids.org/content/dam/missingkids/pdfs/202...)
Getting the content of normal messages is pretty hard, getting the content of a link, much easier.
It's not Signal, but then it was never meant to be.
Isn't Meta only end-to-end encrypted in the most original definition, in so much as it is encrypted at each hop? It's not end-to-end encrypted like Signal, i.e. Meta can snoop all day.
If a service provider can see plain text for a messaging app between the END users, that is NOT end-to-end encryption, by any valid definition. Service providers do not get to be one of the ends in E2EE, no matter what 2019 Zoom was claiming in their marketing. That's just lying.
Meta seems to shy away from saying they don't look at the content in some fashion. Eg they might scan it with some filters, they just don't send plaintext around.
Answering law enforcement letters, even if it's just to say that data cannot be provided, is some 80% of cooperation needed.
Meta can provide conversation and account metadata (Twitter does the same - or used to do at least), or suspend accounts
don’t you have an answer now?
You can report people and have their messages sent to Meta for review.
Supporting E2EE doesn’t imply a failure to cooperate. This is not the issue here.
In a number of ways, and probably all the ways that are required by law in your jurisdiction.
Learn more: https://about.meta.com/actions/safety/audiences/law/guidelin...
Yes, WA messages are supposed to be e2e encrypted. Unless end-to-end encryption is prohibited by law in your jurisdiction, I don't see how that question is relevant in this context.
Probably government portals that Meta provides
The chats are encrypted, but the backup saved in the cloud isn't. So if someone gets access to your Google Drive, they can read your WhatsApp chats. You can opt in to encrypting the backup, but it doesn't work well.
What has E2EE got to do with it? If you catch someone who sent CP you can open their phone and read their messages. Then you can tell Meta which ones to delete and they can do it from the metadata alone.
This distinction gets lost in these discussions all of the time. A company that makes an effort to comply with laws is in a completely different category than a company that makes the fact that they’ll look the other way one of their core selling points.
Years ago there was a case where someone built a business out of making hidden compartments in cars. He did an amazing job of making James Bond style hidden compartments that perfectly blended into the interior. He was later arrested because drug dealers used his hidden compartment business to help their drug trade.
There was an uproar about the fact that he wasn’t doing the drug crimes himself. He was only making hidden compartments which could be used for anything. How was he supposed to know that the hidden compartments were being used for illegal activities rather than keeping people’s valuables safe during a break-in?
Yet when the details of the case came out, IIRC, it was clear that he was leaning into the illegal trades and marketing his services to those people. He lost his plausible deniability after even a cursory look at how he was operating.
I don’t know what, if any, parts of that case apply to Pavel Durov. I do like to share it as an example of how intent matters and how one can become complicit in other crimes by operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law. It’s also why smart corporate criminals will shut down and walk away when it becomes too obvious that they’re losing plausible deniability in a criminal enterprise.
Is it illegal to offer legal services to undesirables and/or criminals?
Yes.
If you are directly aiding and abetting without any plausible attempt to minimize bad actors from using your services then absolutely.
For example, CP absolutely exists on platforms like FB or IG, but Meta will absolutely try to moderate it away to the best of their ability and cooperate with law enforcement when it is brought to their attention.
And like I have mentioned a couple times before, Telegram was only allowed to exist because the UAE allowed them to, and both the UAE and Russia gained ownership stakes in Telegram by 2021. Also, messaging apps can only legally operate in the UAE if they provide decryption keys to the UAE govt because all instant messaging apps are treated as VoIP under their Telco regulation laws.
There have been plenty of cases where anti-Russian govt content was moderated away during the 2021 protests - https://www.wired.com/story/the-kremlin-has-entered-the-chat...
isn't this the definition of "criminal lawyer"?
If you are a criminal lawyer providing a defense, that is acceptable, because everyone is entitled to a fair trial and a defense.
If you are a criminal lawyer who is directly abetting in criminal behavior (eg. a Saul Goodman type) you absolutely will lose your Bar License and open yourself up to criminal penalties.
If you are a criminal lawyer who is in a situation where your client wants you to abet their criminal behavior, then you are expected to drop the client and potentially notify law enforcement.
Not a lawyer myself but I believe this is not a correct representation of the issue.
A lawyer abetting criminal behaviour is committing a crime, but the crime is not offering his services to criminals, which is completely legal.
When offering their services to criminals, law firms or individual lawyers in most cases are not required to report crimes they have been made aware of under attorney-client privilege, and are not required to try to minimize bad actors' use of their services.
In short: unless they are committing crimes themselves, criminal lawyers are not required to stay clear from criminals, actually, usually the opposite is true.
Again, the presumption of innocence does exist.
Yep. Your explanation is basically what I was getting at.
In this case, Telegram showed bad faith moderation. They are not a lawyer, and don't operate with the same constraints.
There was a recent gang related case in Georgia where several defense lawyers were charged for being a little too involved.
TV drama tends to give people the wrong idea. Your lawyers aren't allowed to aid you with doing any crimes, they're just advocates.
In most jurisdictions yes AFAIK, if those services directly help an illegal activity, and you knew about the illegal activity.
Ok, like selling gasoline to the getaway car driver?
Working as a car driver isn’t illegal. Working as a getaway car driver is.
You’re making the opposite point of what you intended.
Possibly.
https://en.m.wikipedia.org/wiki/Accessory_(legal_term)
Your analogy is terrible and doesn’t make sense.
If you provide a service that is used for illegal behavior AND you know it’s being used that way AND you explicitly market your services to users behaving illegally AND the majority of your product is used for illegal deeds THEN you’re gonna have a bad time.
If one out of ten thousand people use your product for illegal deeds you’re fine. If it’s 9 out of 10 you probably aren’t.
If you know your services are going to be used to commit a crime, then yes, that makes you an accessory and basically all jurisdictions (I know basically nothing about French criminal law) can prosecute you for that. Crime is, y'know, illegal.
I'm appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.
Why aren't all gun manufacturers in jail then? They must know a percentage of their products are going to be used to commit crimes. A much larger percentage than those using Telegram to commit one.
Shouldn't Gmail be shut down, as they know a percentage of it will be used for crime?
Try to deposit 10k to your bank account and then, when they call you and ask the obvious question, answer that you sold some meth or robbed someone. They will totally be fine with this answer, as they are just a platform for providing money services and well, you can always just pay for everything in cash.
is it what happened here?
In my view, Durov is the owner renting out his apartment and not caring what people do inside it, which is not illegal. Someone could go as far as to say that it is morally reprehensible, but it's not illegal in any way.
It would be different if Durov knew but did not report it.
Which, again, doesn't seem what happened here and it must be proven in a court anyway, I believe everyone in our western legal systems still has the right to the presumption of innocence.
Telegram not spying on its users is the same thing as Mullvad not spying on its users and not saving the logs. I consider it a feature not a bug, for sure not complicity in any crime whatsoever.
Problem is if you know what these people do inside it and you don't do anything about it.
Which is something that should be proven in court.
Problem is if the police arrests the owner of the apartment but not those doing something illegal inside it.
Out of metaphor: Durov has been arrested because he's Russian and the west is retaliating as hard as they can.
Under the same assumptions Durov has been arrested for, Elon Musk and Jack Dorsey should be in jail too. [1]
[1] https://www.theguardian.com/world/2023/sep/04/twitter-saudi-...
What do you mean "look the other way?" Does the phone company "look the other way" when they don't listen in to your calls? Does the post office "look the other way" when they don't read your mail?
That guy who built the hidden compartments should absolutely not have gone to jail. The government needs to be put in check. This has gotten ridiculous.
If the police tell them illegal activity is happening and give them a warrant to wiretap and they are capable of doing so but refuse then yeah they’re looking the other way. That’s not even getting into things like PRISM.
Reddit moderates itself so well that even half the legitimate posts get immediately removed by mods or downvoted by users to oblivion
I've actually given up trying to post on Reddit for this reason. Whenever I've tried to join in on a discussion in some subreddit that's relevant(eg r/chess) my post has been autoremoved by a bot because my karma is too low or my account is "too new". Well how can I get any karma if all my posts are deleted?
Comment in subreddits without those restrictions for a bit. E.g. this list: https://www.reddit.com/r/NewToReddit/wiki/index/newusersubs/
I can see how it's frustrating, but the communities you're trying to post in are essentially offloading their moderation burden onto the big popular subreddits with low requirements -- if you can prove you're capable of posting there without getting downvoted into oblivion, you're probably going to be less hassle for the smaller moderator teams.
That's silly. I gotta go shitpost in subreddits I have no interest in as some sort of bizarre rite of passage? I'd rather just not use the site at that point.
Yet here you are posting on HN which does the exact same thing.
Actually, HN has a much better system. Comments from new accounts, like your throwaway, are dead by default, but any user can opt in to seeing dead posts, and any user with a small amount of karma can vouch those posts, reviving them. Like I just did to your post.
New account comments are not dead by default, they just render the author name in green.
You ask for the post to get approved. You probably can't imagine the amount of spam subreddits suffer under.
In the cases I remember there was no such recourse. It was just autodeleted by a bot.
You have to send the moderators a message manually. They can unhide comments held by AutoMod.
Mods simply ignore any such messages, especially from new or low karma accounts. Entreaties into the void.
Even those who farm accounts know the simple answer to your question. You have to spend a little time being civil in other subreddits before you reveal the real you. Just takes a few weeks.
The comments I made were quite serious and civil. Not sure what you mean. They were autodeleted by a bot. I wasn't trolling or anything.
I'm not particularly interested in spending a lot of time posting on reddit. But very occasionally I'll come across a thread I can contribute meaningfully to and want to comment. Even if allowed I'd probably just make a couple comments a year or something. But I guess the site isn't set up for that, so fuck it.
Sounds like you glossed over the phrase “in other subreddits”, which is the secret sauce. The point of my phrasing was not to suggest that you aim to be uncivil, but to highlight that the above works even for those who do aim to. So, surely, it should work for you, too.
Why don’t they arrest telecom CEOs for allowing terrorists to have uncensored phone calls with each other?
You do realize the police wiretap and get metadata from phone companies all the time, right? Not even counting Five Eyes stuff.
But why don’t they arrest them for allowing it to happen? Phone calls should be actively moderated to block customers who speak about terrorist activity.
You really should Google Snowden and PRISM at some point.
Phone calls are transcribed and stored forever.
Because the telcos _cooperate_ with law enforcement.
It's not whether the platform is being used for illegal activity (all platforms are to some extent, as your facile comment shows). It's whether the operator of a platform actively avoids cooperating with LE to stop that activity once found.
I know. That’s obviously true, but I hate that it happens and it makes no sense to me why more people aren’t upset by it. What I’m trying to get at is that complying with rules that are stupid, ineffective, and unfair is not a good thing and anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad. Cooperation with law enforcement is morally neutral and not important.
The real goal is hurting anyone that’s not aligned with people in power regardless of who is getting helped or harmed. Everyone knows this but so many people in this thread are lying about it.
There is a simpler explanation, those providers are controlled by Western governments (read US)
There's a simpler explanation. Those providers make an earnest attempt to obey western law.
It's simpler, the US wants to control the narrative everywhere and in everything, just like in the 90s and 00s. Things like Telegram and Tiktok and to some extent RT, stand in the way of that.
In other words, they give the government a cut of the power.
For Reddit, it is somewhat documented how some power-mods used to flood subreddits with child porn to get them taken down. It was seemingly done with the administration's best wishes. Not sure if it's still going on, but some of these people are certainly around, in the same positions.
As far as I can see, CP is probably the fastest way to get a channel and its related account wiped on Telegram in a very short time. As a Telegram group manager, I often see automated purges of CP-related ads/content, or automatic lockouts that force managers to clean up the channel/group. Saying Telegram isn't managing its CP problem is just absurd. I really feel like they just created a pretext for some other purpose.
The difference is telegram wasn't cooperating with authorities in the jurisdictions in which it was operating; be that moderation, interception, etc.
It's incorrect to say that they weren't cooperating with authorities at all.
In the EU, Telegram blocked access to certain channels that the EU deemed to be Russian disinformation, for example.
Really? That’s a really disappointing example of censorship. The state has no business judging the truth.
One of those was @rtnews which is definitely state-sponsored propaganda and remains inaccessible to this day.
They cooperated to some degree, but I'll go out on a limb to say that the authorities wanted Telegram to be fully subservient to western government interests.
i should be allowed to watch whatever state propaganda i want, i'm a big boy
15 years ago in the US this would have been uncontroversial
15 years ago watching too much Taliban propaganda would have put you in Guantanamo pretty fast
you earnestly think that is true of an american citizen in 2009?
I'm sure the US government would have been real keen on you reading a Kremlin news source 40 years ago...
there were multiple Kremlin propaganda outlets you could read in the US 40 years ago, although it is true that (IIRC) there were restrictions on broadcast television
Don't get me wrong, if you really want to watch it, I think you should be allowed to.
Personally I'm undecided about whether these channels should be publicly available on e.g. free TV channels, but that's getting off topic.
It's still uncontroversial in the US, where RT remains widely available, although their local TV operations folded after a boycott by cable providers.
Eliminating child pornography and organised crime is a societal rather than 'government' interest. And rightly.
Empirically speaking, governments have had absolutely zero success at this, but their attempts to do so have gotten them the kind of legal power over your life that organised crime could only dream about.
Huh? The traditional mafia is almost non-existent in the US today. RICO and its application has been highly successful at taking down the mafia.
You could certainly argue that RICO was too powerful and is often misapplied, but I've never before seen anyone argue that it has been ineffective.
Are you implying that after the Italian mafia there were no more organised crime gangs in the US? There's a huge number of organised crime gangs nowadays; who do you think is distributing the drugs responsible for America's massive drug problem? https://en.wikipedia.org/wiki/List_of_gangs_in_the_United_St... . A policy isn't a success if it kills one crime group only for it to be replaced with more, and the overall drug consumption/distribution rate doesn't decrease. More people are using illicit drugs than ever before: https://www.ibanet.org/unodc-report-drug-use-increase
I think there is a societal interest in unsnoopable messaging.
There is other low-hanging fruit EU governments could go after to address crime - NL has basically become a narco-state and they are just sitting by and watching. Telegram is not the problem.
So what if it is state sponsored propaganda? Most media is biased in some way. It shouldn’t be censored. I want to hear their side of the story too.
In this instance (RT being banned), it's Russia's quite candid strategy to undermine social cohesion in their enemies' societies, using disinformation. Margarita Simonyan and Vladislav Surkov have each bragged about its success. So yes, for social cohesion, when there's a malign external actor poisoning public discourse with the intention of splitting societies, a responsible government ought to tackle it.
The old "enemy of the people" argument.
I think your subtle arguments are wasted on EU's decision to stop the spread of misinformation and manipulation. It's that simple for them. Black and white. Us vs them. Don't think too much, you are taken care of by your "representatives" ...
How do you propose jurisdiction to work without judging the truth?
It is the government's role to protect speech, not to censor it.
It’s also the government’s role to take measures against harmful actions. Personal rights end where they start to harm others, or harm society in general. They are not an absolute, and always have to be balanced against other concerns.
However, my GP comment was against the claim that “The state has no business judging the truth”. That claim as stated is absurd, because judging what is true is necessary for a state being able to function in the interest of its people. The commenter likely didn’t mean what they wrote.
One can argue what is harmful and what isn’t, and I certainly don’t agree with many things that are being over-moderated. But please discuss things on that level, and don’t absolutize “free speech”, or argue that authorities shouldn’t care about what is true or not.
As far as I've heard, they did that only under threat of getting kicked out of the Apple and Google app stores. Supposedly, the non-app-store versions don't have these blocks.
In other words, Apple and Google are the only authorities they recognize (see also [1]). I'm not surprised this doesn't sit well with many governments.
[1] https://news.ycombinator.com/item?id=41348666
I don't think you can pick and choose what you comply with.
Because those services don't get shown reports of CSAM and then turn a blind eye to it and do nothing about it.
Witnessing a crime is not, by itself, a crime. However, witnessing a crime and choosing not to report it is a crime.
I don't think it's a crime not to report a crime, at least not where I live. But facilitating a crime, which is something you could accuse Telegram of, is.
You're technically right (I think). However, I believe that if you witness a murder, know the murderer, and the police ask you "Do you know anything about X murder?", then you're legally required to tell the truth.
Or you can just not respond, at least in the US. https://en.wikipedia.org/wiki/Self-incrimination
That only applies if you're the defendant.
If you're the witness to a murder and you're subpoena'd to court and refuse to testify then you are committing contempt of court. There was a guy in Illinois who got 20 years (reduced to 6 on appeal) for refusing to testify in a murder.
https://illinoiscaselaw.com/clecourses/contempt-of-court-max...
Contempt of court usually has no boundaries on the punishment, nor any jury trials. A judge can just order you to be executed on the spot if you, say, fall asleep in his courtroom. Sheriffs in Illinois have the same unbridled power over jail detainees.
I think in actual practice you will rarely get contempt for refusing to testify, or for taking the fifth on questions that could only tenuously implicate yourself.
That guy needed a better lawyer. He could just have said "I don't remember. Can't say for sure" repeatedly
I don't think it's necessarily self-incrimination to report a crime you witnessed, though I think it depends on the time between when it occurred and when it was reported.
Such laws exist in most countries. I'm not aware of any that provide such a right to business entities though.
I think in most modern democracies you aren’t legally required to tell the police anything. Courts are a different case though.
As a suspect. At least in court, as a completely non-involved bystander you have no right of refusal to testify in most jurisdictions.
Not sure whether that extends to police questioning though.
It doesn’t extend to police questioning, i also pointed out it’s a different thing when you are in a court. For the police an innocent bystander can turn into a suspect real fast.
If someone says I need a cab for after I rob a bank and you give them a ride after waiting then you’re almost certainly an accessory. If they flag a random cab off the street then not.
Generally, there is no legal obligation for bystanders or witnesses to report a crime, but there are exceptions.
To what jurisdiction are you referring?
As far as I know all western judicial systems, both civil and common law. But as I said, there are exceptions for certain professions, and situations.
CSAM is different - in the US, as well as France, the law designates the service provider as a mandatory reporter. If you find CSAM and don't report the user who posted it to the authorities (and Telegram has users' phone numbers), then you are breaking the law.
https://www.icmec.org/csam-model-legislation/
That heavily depends on the jurisdiction. It's explicitly a crime in Germany, for example: https://www.gesetze-im-internet.de/stgb/__138.html
On top of that, if you can be shown to benefit from the crime (e.g. by knowingly taking payment for providing services to those that commit it), that presumably makes you more than just a bystander in most jurisdictions anyway.
I have my dead creepy uncle's phone in my drawer right now, and can give you soft core child porn from his instagram. His algorithm was even tuned to keep giving endless supply of children dancing in lingerie, naked women breastfeeding children while said children play with her private part, prostitutes of unknown age sharing their number on the screen, and porn frames hidden in videos.
Nobody's arresting Zuckerberg for that.
YouTube ignored reports for CSAM links in comments of "family videos" of children bathing for years until a channel that made a large report on it went viral.
Who you are definitely determines how the law handles you. If you're Google execs, you don't have to worry about the courts of the peasantry.
That’s generally not true, at least in the Anglo legal system.
How do you know this is the case?
Back in the 1990s the tech community convinced itself (myself included) that Napster had zero ethical responsibility for the mass piracy it enabled. In reality, law in a society is supposed to serve that society. The tech community talked itself into believing that the only valid arguments were for Napster. In hindsight, it's less cut-and-dry.
I have never believed E2EE to be viable, in the real world, without a back-door. It makes several horrendous kinds of crime too difficult to combat. It also has some upsides, but including a back-door, in practice, won't erase the upsides for most users.
It is naive to think people (and government) will ignore E2EE; a feature that facilitates child porn, human trafficking, organized crime, murder-for-hire, foreign spying, etc etc. The decision about whether the good attributes justify the bad ones is too impactful on society to defer to chat app CEOs.
Your comment strikes me as naive along the same lines as "I have nothing to hide".
It's worse than this. The author argues that backdoors are necessary rather than simply being willing to share his/her data for inspection.
Yes, that is my position. E2EE back-doors might not affect my communications or yours, but have serious and undesirable repercussions for some journalists and whistleblowers. The thing is, regular people aren't going to tolerate a sustained parade of news stories in which E2EE helps the world's worst people to evade justice.
Like, say, whistle blowers, and journalists who speak out and reveal evidence of government crimes? Like Julian Assange and Edward Snowden?
That's how most law works. I have to give up my right to murder someone in order to enjoy a society where it's illegal for everyone.
If you believe privacy not inspectable by law enforcement is wrong, the prerequisite is saying that you're willing to have the law apply to you as well.
I believe that privacy not inspectable by law enforcement is a fundamental right. I'm willing to accept that aids some crimes but also willing to change my mind if the latter becomes too much of a problem. It doesn't seem to be the case at all ATM.
Think of my comment as a prediction, rather than a value judgement.
How could they have any moral responsibility for ethically neutral thing other people were doing?
Not much has changed, I see.
Nothing. In other news, murder still immoral as always.
Even if you think they had no moral responsibility, it's clear they had legal responsibility.
What's legal is very malleable with the use of money. Which copyright holders weren't shy about spending.
This should be obvious to everyone here, but it's pretty much inevitable that if a backdoor exists, criminals will eventually find their way through it. Not to mention the "legitimate" access by corrupt and oppressive governments that can put people in mortal danger for daring to disagree.
No doubt that is true, and presumably Cory Doctorow has written some article making that seem like the only concern. The alternative makes it difficult to enforce all kinds of laws, though.
This comment can itself be said to take for granted the naive view of law that it exposes.
Law is a way to enforce a policy at massive scale, sure. But there is no guarantee that it enforces things aimed at the best equilibrium of everyone flourishing in society. And even when it does, laws are made by humans, so unless they result from a highly dynamic process that gathers feedback from those to whom they apply and strives to improve over time, there is almost no chance laws can meet such an ambitious goal.
What if Napster was a symptom, but not of ill behavior? The supposition that unconditionally sharing cultural heritage is basically the sane way to go can be backed by solid anthropological evidence spanning several hundred millennia.
What if information monopolies are the massive ethical atrocity, enforced by corrupted governments hijacked by various sociopaths whose chief goal is to siphon as many resources as possible from society?
Horrendous crimes, yes, there are many out there, often commissioned by governments who will shamelessly throw outrageous lies at their citizens to turn them into cannon fodder, among other atrocities.
Regarding fair remuneration for most artists out there, we would certainly be better served by a universal unconditional net income for everyone. The current fame lottery is about as fair as a national bingo as a way to make a decent career.
You can go ahead and encrypt messages yourself, without explicit E2E support on the platform. In fact, choosing your own secure channel for communicating the key would probably be more secure than anything in-band.
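A minimal sketch of that idea, assuming Python's `cryptography` package and that the key is exchanged over some separate channel you already trust:

    from cryptography.fernet import Fernet

    # Generate a key once and hand it to your contact out-of-band
    # (in person, via a different app, etc.) -- never via the platform itself.
    key = Fernet.generate_key()

    # Sender: encrypt locally, then paste the token into any chat platform.
    token = Fernet(key).encrypt(b"the platform only sees this ciphertext")

    # Recipient: decrypt locally with the shared key.
    assert Fernet(key).decrypt(token) == b"the platform only sees this ciphertext"

The platform then carries only opaque tokens; the remaining weak points are key exchange and endpoint security, which is exactly why doing the key handover out-of-band matters.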
Dotcom got extradited (which was declared legal much later). Durov landed in a country that had an arrest warrant out for him.
I hope his situation isn't similar to Dotcom's, as Dotcom was shown to be complicit in the crimes he was being prosecuted for. Convicting the Megaupload people would've been a LOT harder if they hadn't been uploading and curating illegal content on their platform themselves.
As a service provider, you're not responsible for what your users post as long as you take appropriate action after being informed of illegal content. That's where they're trying to get Telegram, because Telegram is known to ignore law enforcement as much as possible (to the point of doing so illegally and getting fined for it).
From my understanding, the arrest warrant was only created while he was en route; sneaky sneaky...
That’s really dark and dystopian
really? we seal warrants in the US all the time - we don't want people who we are trying to apprehend to always know ahead of time we are trying to apprehend them
The US is on its way to becoming a dystopia, so not the best argument…
maybe, but i don't think sealed warrants are the reason
Their purpose is to hide charges as long as possible in order to deceive or trick, which runs against a fully transparent process.
I found this airplane trickery amusing.
https://www.nbcnews.com/news/us-news/son-el-chapo-another-si...
your understanding is based on what? i would assume this is just standard unsealed warrant like they have in the US
I found this article, which explains that the arrest warrant was only to be activated if Pavel was on French territory.
(French) https://www.sudouest.fr/economie/reseaux-sociaux/le-patron-d...
It could have been a warrant that was not communicated to Durov himself. This would have helped to catch him by surprise.
yeah so sounds like (what in the US we call) a sealed warrant, not that it was literally issued while he was in the air
The Sud-Ouest article must have been updated because the version currently online does not mention that at all. Quite the opposite, the article quotes an official that was surprised that Durov would come to Paris anyway even though he knew he was under an arrest warrant in France, and another source says that he might have decided to come in France anyway because he believed he'll never be held accountable.
Fuck around and find out. If he legitimately ignored legal French documents forcing him to share information, as the French have declared, he's got got.
You don't set foot in a country with an extradition treaty, even less so the country itself whose warrants for your company's data you're flouting.
So which countries don't Dubai and the UAE extradite to?
According to the more detailed news sources I can find about this, it seems he knew the French were looking for him. I don't know if he knew about the contents of the warrant, but it does seem he knew the authorities were planning to arrest him.
From what I can tell the warrant has been out for longer, but he was arrested when the airport police noticed his name was on a list. There's not a lot of information out there, with neither the French authorities nor Telegram providing any official statements to the media.
https://restoreprivacy.com/telegram-sharing-user-data/
> the operators of the messenger app Telegram have released user data to the Federal Criminal Police Office (BKA) in several cases. According to SPIEGEL information, this was data from suspects in the areas of child abuse and terrorism. In the case of violations of other criminal offenses, it is still difficult for German investigators to obtain information from Telegram, according to security circles.
https://threema.ch/en/blog/posts/chat-apps-government-ties-a...
> two popular chat services have accused each other of having undisclosed government ties. According to Signal president Meredith Whittaker, Telegram is not only “notoriously insecure” but also “routinely cooperates with governments behind the scenes.” Telegram founder Pavel Durov, on the other hand, claims that “the US government spent $3 million to build Signal’s encryption” and Signal’s current leaders are “activists used by the US state department for regime change abroad.”
And of which he’s a citizen, fwiw
Watch his interview with Tucker Carlson and you’ll see. He doesn’t acquiesce to government requests for moderation control, censorship, and sharing private user data so they target him. He refuses to implement backdoors as well. In stark contrast to western social media companies.
We have no way to know this, and (unlike Signal), Telegram doesn't give us best-effort assurances by doing things like open-sourcing its code.
What? Literally all Telegram clients are open source.
What about the server? Telegram is not strictly e2e.
An "open source server"... are you trolling?
XMPP and Matrix services run open source software such as ejabberd
Running open source software != "Open source server"
Huh, I was going to point out that the Signal server isn't Free Software either, since for a while it wasn't being published, but it seems they have gotten back into publishing it.
https://github.com/signalapp/Signal-Server
While it's great that they keep maintaining it, as someone mentioned down the thread, it's hard to know what they are actually running, right? And it's not a lot of work to patch this, or clone/branch as necessary before deploying. Oh well, I've already resigned myself to a part of my life being run by someone else by now.
Well, other than his arrest ;-)
The arrest tells us that he said no to one country, it doesn't say much about all the others.
What exactly do you think this tells you?
Open source is irrelevant as the protocol is plain text.
Wait... you're saying if the protocol is binary, that's different somehow?
Either way, you're saying MTProto is binary? What do you mean by that?
https://i.imgur.com/ixak5vq.png
https://youtu.be/VvfSkVDF8uE
Fun Thread:
https://old.reddit.com/r/conspiracy/comments/1f0i2yi/guess_w...
You really ask why? This isn't about serving justice - it's about making an example of anyone trying to run an unmoderated platform.
IANAL and not that familiar with the legal situation, but if we assume that running a platform of this type requires you, by law, to moderate such a platform and he fails to do that, idk what we are talking about. Yes, he would clearly be breaking the law. Why would that not get prosecuted in the completely normal, boring way that I would hope all law breaking will eventually be prosecuted?
If you are alleging that there are comparable, specific, actual legal infringements on the part of Meta/Google that somehow go uninvestigated and unpunished, feel free to point to them.
i don't think that platform providers of encrypted messaging should be required to ensure that none of the images being sent contain CP.
You're missing the key part: Telegram doesn't have E2EE enabled by default. Group chats and channels aren't end-to-end encrypted at all.
The only E2EE in Telegram is called "secret chats" and they're 1-on-1.
frankly, even with unencrypted chats, any law/precedent requiring that platform providers have to scale moderation linearly with the number of users (which is effectively what this is saying) sounds like really bad policy (and probably further prevents the EU from building actual competitors to American tech companies)
Seems like other platforms without E2EE are managing to do that without any issues whatsoever (e.g. Discord)
discord has hundreds of content moderators, telegram is made by a team of 30 people
i don’t think messaging startups should be required to employ hundreds of people to read messages
Telegram started 11 years ago. I know the term has been diluted for ages, but it still rubs me the wrong way to use the word startup for decade old businesses.
A startup wouldn't need hundreds of people, they don't have millions of daily messages yet. Only successful businesses like Telegram would.
Isn't this a consequence of Telegram's actions?
It was their decision to become something bigger than a simple messaging app by adding channels and group chats with tons of participants. It was also their decision to understaff content moderation team.
Sometimes the consequence is a legal action, like the one we're seeing right now. All this could have been easily avoided if they had E2EE or enough people to review reported content and remove it when necessary.
A straightforward legal responsibility should be shirked because scaling moderation is hard? How many other difficult things do you propose moving outside the law?
That's not the case here though. Most of the communication on Telegram is not E2E Encrypted.
Even E2EE messaging service providers have to cooperate in terms of providing communication metadata and responding to takedown requests. Ignoring law enforcement lands you in a lot of shit everywhere, in Russia you'll just be landing out of a window.
These laws have applied for decades in some shape or form in pretty much all countries, so it shouldn't come as a surprise.
Where are "moderated" and "platform" defined in the relevant legislation?
Have you used Telegram before making this comment? It is moderated. You really think this is about the company, the platform, not about politics? Well you should think again.
Isn't this what Section 230 was supposed to protect against?
Please explain how a US law affects an arrest in France.
Yes, but that’s an American law
Section 230 does not apply as law in France.
WORSE, you get banned for reporting CSAM to Discord, and I guarantee that if you report it to the proper authorities (the FBI) instead, Discord tells them to bug off and get a warrant. Can we please be consistent? If we're going to hold these companies liable for anything, let's be much more consistent. Worse yet, Discord doesn't even have end-to-end encryption, and the number of child abuse scandals on that platform is insane. People build up communities where the admins (users, not Discord employees) have perceived power, and users (children) want to partake in such things. It's essentially the Roblox issue all over again: devs taking advantage of easily impressionable minors.
Yep. At this point, it's clear to me that Discord is acting with malice. On top of banning people for reporting abuse on their platform, which is by itself insanity, they changed their report system [0] so it's no longer possible to report servers/channels/users at all, only specific messages, with no way to report messages in bulk.
Reddit isn't much better. [1]
[0]: https://www.reddit.com/r/discordapp/comments/14sx8fz/discord...
[1]: https://www.theverge.com/2021/4/25/22399306/reddit-lawsuit-c...
They had a scandal where they allowed the furry equivalent of child porn, and quietly banned that type of porn from the platform later on. I assume due to legal requirements.
Edit:
I think the lack of bulk reporting is a pain too. They used to ask for more context. One time I reported a literal nazi admin (swastika posting, racial slurs, and what have you), but the post was "months old" and they told me essentially to "go screw myself"; they basically asked why I was in the server.
I’m not a fan of this arrest and I don’t believe service providers have a duty to contravene their security promises so as to monitor their users.
But it seems pretty obvious that governments find the monitoring that Google / Reddit / etc do acceptable, and do not find operation of unmonitorable services acceptable.
All right, what about logless VPN providers like Mullvad?
Sounds like something straight out of a dystopian surveillance state novel, very bad outlook if true.
VPNs don't pose an obstacle to monitoring any specific activity, and as many VPN-using criminals have found, even their ability to stop law enforcement from identifying you is limited. So they've been less of an issue. Having said that, I would note that Mullvad was forced to remove port forwarding in response to law enforcement interest, and I don't think it would be too surprising (or too dystopian) if in the future "connection laundering" were a crime just like money laundering.
I strongly suspect there's more to it than just running a chat system used by criminals. If that were the issue then tons of services would be under indictment.
We'll have to wait and see, but I suspect some kind of more direct participation or explicit provable look-the-other-way at CSAM etc.
Or it's just intimidation, like the FBI raid on Scott Ritter.
Usually as part of a plea agreement the criminal is required to let law enforcement search them without a warrant.
Because these countries are hypocrites. Because of politics, because these guys are from Russia and China. You can so obviously see there's discrimination against companies from those countries. Can you imagine France doing this if it were a US company?
Rhetorical question: for what reason should a country be anything other than a hypocrite when it comes to situations such as this? Nations prioritize their own self-interests and that of their allies, even if that makes them appear hypocritical from an outside, or indeed, even an inside perspective. But that doesn't mean there's no legitimacy to what they do.
I believe both cases come down to how much effort the leaders put into identifying and purging the bad activities on their platforms.
One would hope that there is clear evidence to support a claim that they’re well aware what they’re profiting off and aren’t aggressively shutting it down.
To use Reddit as an example: in the early days it was the Wild West, and there were some absolutely legally gray subreddits. They eventually booted those, and more recently even seem to ban subreddits just because The Verge wrote an article about how people say bad things there.
It seems there has been a misunderstanding; laws for service providers never exempted them from having to cooperate and provide data available to them when ordered.
The US has Section 230; other countries don't.
Pour encourager les autres ("to encourage the others").
It's like the mafia. If you cooperate, you're safe. If not, the mafia destroys you.
I'm not sure where this myth originated—perhaps from Kim Dotcom's Twitter account? I clearly remember the Megaupload case. They knew they were hosting pirated content, didn't delete it after requests[1], and shared money with the people who uploaded it because that was their business model.
1. https://www.forbes.com/sites/timworstall/2013/12/21/the-fasc...
Because they crossed the line from common carrier to editor - an entirely different set of obligations.
Also, even such common carriers as telcos must abide by state injunctions against their users.
Intent matters.
There is a legal distinction here between what happens on your platform despite your best efforts (what you might call "incidental" use) vs what your platform is designed specifically to do or enable.
Megaupload is a perfect example. It was used to pirate movies. Everyone knew it. The founders knew it. You can't really argue it's incidental or unintended or simply that small amount that gets past moderation.
Telegram, the authorities will argue, fails to moderate CSAM and other illegal activity to the point that it enables it and profits from it, which is legally indistinguishable from specifically designing your platform for it.
Many tech people fall into a binary mode of thinking because that's how tech usually works: either your code works or it doesn't. You see it in arguments about piracy being traced to a customer. Tech people will argue "you can't prove it's me". While technically true, that's not the legal standard.
Legal standards rely on tests. In the ISP case, authorities will look at what was pirated, whether it was found on your hard drive, whether the activity happened when you were home, and so on, to establish a balance of probabilities. Is it more likely that all this evidence adds up to your guilt, or that an increasingly unlikely set of circumstances explains it while you're innocent?
In the early days of Bitcoin I stayed away (to my detriment) because I could see the obvious use case of it being used for illegal stuff, which it is. The authorities don't currently care. Bitcoin, however, is the means that enables ransomware. When someone decides this is a national security issue, Bitcoin is in for a bad time.
Telegram had (for the French at least) risen to the point where they considered it a serious enough issue to warrant their attention, and the full force of the government may be brought to bear on it.
That would seem to be the key bit. Makes one wonder what level of cooperation is required to not be charged with a slew of the worst crimes imaginable. Is there a French law requiring that messaging providers give up encryption keys that he is known to be in violation of?
Those platforms are more cooperative with authorities.
Given how it's all plaintext on their servers, Telegram is essentially also storage for that criminal data.
It had better not be the Kim Dotcom situation; that would mean Durov encouraged the illegal use of Telegram the way Megaupload rewarded file uploads that generated heavy download traffic.
If that were the case, he would be at least an accomplice, if not the initiator, of criminal activities.
Otherwise it would be just an abuse of his service by criminals.
The difference is that this is not an isolated case on Telegram (you said it yourself: "some amount", which implies "limited"). At the same time, you can literally open up the app and with zero effort find everything they are accusing them of - drugs, terrorist organizations, public decapitations, you name it. They also provide the ability to search for people and groups around you, and I am literally seeing a buying-and-selling channel "800 meters away" from me, and another one for prostitution, which is also illegal in my country. Meanwhile, see their TOS[1]. They have not complied with any of the reports or requests from users (and governments, by the looks of it) to crack down on them. While 1:1 chats are theoretically private and encrypted (full disclosure, I do not trust Telegram or any of the people behind it), Telegram's security for public channels and groups is absolutely appalling, and they are well aware of it - they just chose to look the other way and hope they'd get away with it. You could have given them the benefit of the doubt if those were isolated ("some") instances, sure. But just as in the case of Kim Dot-I-support-genocide-com, those are not isolated cases, and saying that they had no idea is an obvious lie.
Directive 2000/31/EC[2] states that providers are generally not liable for the content they host IF they do not have actual knowledge of illegal activity or content AND, upon obtaining such knowledge, they act to remove or disable access to that content (Telegram has been ignoring those requests). Service providers have no general obligation to monitor, but they need to provide notice-and-takedown mechanisms. Assuming that their statements are correct and they had no idea, they should be in the clear. Telegram provides a notice-and-takedown mechanism. But saying that a channel with 500k+ subscribers, filled with people celebrating a 4-year-old girl with a blown-off leg in Ukraine, went unreported for the two and a half years since it was created is indeed naive.
[1] https://telegram.org/tos/eu
[2] https://eur-lex.europa.eu/eli/dir/2000/31/oj
Everyone knows why and you’re not being naive
Because they let their users do it and benefited from it. Try doing the same thing as a bank :) Or a newspaper :)
The Internet cannot be anarchy forever. Every anarchy ends up as an oligarchy. It needs regulation, and fast.
I think this is simplified. Certainly yes, if "all" Telegram was doing was operating a neutral/unmoderated anonymized chat service, then it's hard to see criminal culpability for the reasons you list.
But as people are pointing out, that doesn't seem to be technically correct. Telegram isn't completely anonymous, does have access to important customer data, and is widely suspected of complying with third party requests for that data for law enforcement and regulatory reasons.
So... IF they are doing that, and they're doing it in a non-neutral/non-anonymized way, then they're very plausibly subject to prosecution. Say, if you get a report of terrorist activity and provide data on the terrorists, then a month later get notified that your service is being used to distribute CSAM and you refuse to cooperate, it's not that far a reach to label you an accessory to the crime.
It's called selective enforcement.
Let’s just say I encrypt illegal.content prior to uploading it to Platform A, and share the decryption key separately via Platform B. Maybe I even refer Platform A users to a private forum on Platform B to obtain the keys. Are both platforms now on the wrong side of the law?