
Judges rule Big Tech's free ride on Section 230 is over

nsagent
200 replies
1d1h

The current comments seem to say this rings the death knell of social media and that it just leads to government censorship. I'm not so sure.

I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.

In a very general sense, this ruling could be seen as a form of net neutrality. Currently social media platforms favor certain content while downweighting other content. Sure, it might be at a different level than peering agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact with social media through the feed.

Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.

nox101
106 replies
1d

I'm probably misunderstanding the implications but, IIUC, as it is, HN is moderated by dang (and others?) but still falls under 230, meaning HN is not responsible for what other users post here.

With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation. So they have two options.

(1) Stop the moderation so they can be safe under 230. Result, HN turns to 4chan.

(2) Enforce moderation to a much higher degree by, say, requiring non-anon accounts and a TOS that makes each poster responsible for their own content, and/or manually approving every comment.

I'm not even sure how you'd run a website with user content if you wanted to moderate that content and still avoid being liable for illegal content.

lcnPylGDnU4H9OF
62 replies
1d

With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation.

I think this is a mistaken understanding of the ruling. In this case, TikTok decided, with no other context, to make a personalized recommendation to a user who visited their recommendation page. On HN, your front page is not different from my front page. (Indeed, there is no personalized recommendation page on HN, as far as I'm aware.)

crummy
35 replies
1d

The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment.

I don't see how this is about personalization. HN has an algorithm that shows what it wants in the way it wants.

lcnPylGDnU4H9OF
14 replies
1d

From the article:

TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.”

unyttigfjelltol
13 replies
23h44m

That's the difference between the case and a monolithic electronic bulletin board like HN. HN follows an old-school BB model very close to the models that existed when Section 230 was written.

Winding up in the same place as the defendant would require making a unique, dynamic, individualized BB for each user tailored to them based on pervasive online surveillance and the platform's own editorial "secret sauce."

tsimionescu
4 replies
22h9m

The HN team explicitly and manually manages the front page of HN, so I think it's completely unarguable that they would be held liable under this ruling, at least if the front page contained links to articles that caused harm. They manually promote certain posts that they find particularly good, even if they didn't get a lot of votes, so this is even more direct than what TikTok did in this case.

philistine
2 replies
21h42m

The decision specifically mentions algorithmic recommendation as being speech, ergo the recommendation itself is the responsibility of the platform.

Where is the algorithmic recommendation that differs per user on HN?

amitport
1 replies
17h24m

Where does it say that it matters if it differs per user?

klik99
0 replies
2h35m

Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.

klik99
0 replies
2h28m

It is absolutely still arguable in court, since this ruling interpreted the Supreme Court ruling to pertain to “a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,”

In other words, the Supreme Court decision mentions editorial decisions, but no court case has yet established whether that covers editorial decisions in the HN front page sense (as in, mods make some choices but it's not personalized). Common sense may say mods making decisions are editorial decisions, but it's a gray area until a court case makes it clear. Precedent is the most important thing when interpreting law, and the only precedent we have is that it pertains to personalized feeds.

skeptrune
3 replies
19h55m

Key words are "editorial" and "secret sauce". Platforms should not be liable for dangerous content which slips through the cracks, but certainly should be when their user-personalized algorithms mess up. Can't have your cake and eat it too.

krapp
2 replies
19h28m

Dangerous content slipping through the cracks and the algorithms messing up is the same thing. There is no way for content to "slip through the cracks" other than via the algorithm.

Jensson
1 replies
18h38m

You can view the content via direct links or search; recommendation algorithms aren't the only way to view it.

If you host child porn that gets shared via direct links, that is bad even if nobody sees it, but it is much, much worse if you start recommending it to people as well.

krapp
0 replies
17h18m

Everything is related. Search results are usually generated based on recommendations, and direct links usually influence recommendations, or include recommendations as related content.

It's rarely if ever going to be the case that there is some distinct unit of code called "the algorithm" that can be separated and considered legally distinct from the rest of the codebase.

empressplay
3 replies
22h40m

HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.

Although HN's algorithm depends (mostly) on user input for how it presents the posts, it still favours some over others and still runs afoul here. You would need a literal 'most recent' chronological view and HN doesn't have that for comments. It probably should anyway!

@dang We need the option to view comments chronologically, please

philipkglass
0 replies
21h57m

Writing @dang is a no-op. He'll respond if he sees the mention, but there's no alert sent to him. Email hn@ycombinator.com if you want to get his attention.

That said, the feature you requested is already implemented but you have to know it is there. Dang mentioned it in a recent comment that I bookmarked: https://news.ycombinator.com/item?id=41230703

To see comments on this story sorted newest-first, change the link to

https://news.ycombinator.com/latest?id=41391868

instead of

https://news.ycombinator.com/item?id=41391868

kardos
0 replies
12h26m

We need the option to view comments chronologically

You might like this then: https://hckrnews.com/

Majromax
0 replies
17h21m

HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.

I don't think the feature was that unknown. Per Wikipedia, the CDA passed in 1996 and Slashdot was created in 1997, and I doubt the latter's moderation/voting system was that unique.

wk_end
6 replies
1d

It’d be interesting to know what constitutes an “algorithm”. Does a message board sorting by “most recent” count as one?

saratogacx
5 replies
23h3m

algorithm that reflects “editorial judgments”

I don't think timestamps can, in any way, be construed as editorial judgement. They are a content-agnostic attribute.

srj
3 replies
22h1m

What about filtering spam? Or showing the local weather / news headlines?

bitshiftfaced
1 replies
20h2m

Or ordering posts by up votes/down votes, or some combination of that with the age of the post.
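
For concreteness, a minimal sketch (a hypothetical votes-and-age formula, not HN's actual code) of that kind of content-agnostic ranking: only votes and age enter the score, so every reader who loads the page sees the same ordering.

    import time

    def score(points, submitted_at, gravity=1.8, now=None):
        # Content-agnostic: only votes and age matter; nothing about the
        # viewer enters the calculation.
        age_hours = ((now or time.time()) - submitted_at) / 3600.0
        return (points - 1) / (age_hours + 2) ** gravity

    def front_page(posts, now=None):
        # posts: list of dicts with "points" and "submitted_at" (unix time)
        return sorted(posts,
                      key=lambda p: score(p["points"], p["submitted_at"], now=now),
                      reverse=True)

Whether even that counts as "editorial judgment" is exactly the question being argued in this subthread.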

remich
0 replies
19h5m

The text of the Third Circuit decision explicitly distinguishes algorithms that respond to user input -- such as by surfacing content that was previously searched for, favorited, or followed -- from those that push content unprompted. Allowing users to filter content by time, upvotes, number of replies, etc. would be fine.

The FYP algorithm that's contested in the case surfaced the video to the minor without her searching for that topic, following any specific content creator, or positively interacting (liking/favoriting/upvoting) with previous instances of said content. It was fed to her based on a combination of what TikTok knew about her demographic information, what was trending on the platform, and TikTok's editorial secret sauce. TikTok's algorithm made an active decision to surface this content to her despite knowing that other children had died from similar challenge videos; they promoted it and should be liable for that promotion.

remich
0 replies
19h10m

Moderating content is explicitly protected by the text of Section 230(c)(2)(a):

"(2)Civil liability No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or"

Algorithmic ranking, curation, and promotion are not.

Izkata
0 replies
17h31m

On HN, timestamps are adjusted when posts are given a second-chance boost. While the boost is done automatically, candidates are chosen manually.
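
Mechanically, that kind of boost could look something like this (a hypothetical sketch, not HN's actual implementation): under an age-penalized ranking like the one sketched above, shifting the effective timestamp forward re-ranks the post without adding any votes.

    def second_chance(post, shift_hours=24.0):
        # Pretend the post is newer than it really is; an age-penalized
        # ranking will then move it back up the page with no new votes.
        post["submitted_at"] += shift_hours * 3600
        return post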

klik99
5 replies
23h14m

Specifically, NetChoice argued that personalized feeds based on user data were protected as the platforms' own first-party speech. This went to the Supreme Court, and the Supreme Court agreed. Now precedent is set by the highest court that those feeds are "expressive product". It doesn't make sense, but that's how the law works - by trying to define as best as possible the things in gray areas.

And they probably didn't think through how this particular argument could affect other areas of their business.

remich
4 replies
19h15m

It absolutely makes sense. What NetChoice held was that the curation aspect of algorithmic feeds makes the weighting approach equivalent to the platforms' own speech, and therefore, when courts evaluate challenges to government-imposed regulation, they have to perform standard First Amendment analysis to determine if the contested regulation passes muster.

Importantly, this does not mean that before the Third Circuit decision platforms could just curate any which way they want and government couldn't regulate at all -- the mandatory removal regime around CSAM content is a great example of government regulating speech and forcing platforms to comply.

The Third Circuit decision, in a nutshell, is telling the platforms that they can't have their cake and eat it too. If they want to claim that their algorithmic feeds are speech that is protected from most government regulation, they can't simultaneously claim that these same algorithmic feeds are mere passive vessels for the speech of third parties. If that were the case, then their algorithms would enjoy no 1A protection from government regulation. (The content itself would still have 1A protection based on the rights of the creators, but the curation/ranking/privileging aspect would not).

phire
2 replies
19h0m

Yeah, I agree.

This ruling is a natural consequence of the NetChoice ruling. Social media companies can't have it both ways.

> If that were the case, then their algorithms would enjoy no 1A protection from government regulation.

Well, the companies can still probably claim some 1st Amendment protections for their recommendation algorithms (for example, a law banning algorithmic political bias would be unconstitutional). All this ruling does is strip away the safe harbour protections, which weren't derived from the 1A in the first place.

codersfocus
1 replies
15h36m

law banning algorithmic political bias would be unconstitutional

Would it? The TV channels of old were heavily regulated well past 1st amendment limits.

kloop
0 replies
15h7m

Only because they were using public airwaves.

Cable was never regulated like that. The medium actually mattered in this case.

klik99
0 replies
2h23m

I misunderstood the Supreme Court ruling as hinging on per-user personalization of algorithms, and thought it made a distinction between editorial decisions shown to everyone vs. individual users. I thought that part didn't make sense. I see now it's really the Third Circuit ruling that interpreted the user-customization part as editorial decisions, without excluding non-per-user algorithms.

phire
2 replies
19h12m

It's worth noting that personalisation isn't moderation. An app like TikTok needs both.

Personalisation simply matches users with the content the algorithm thinks they want to see. Moderation (which is typically also an algorithm) tries to remove harmful content from the platform altogether.

The ruling isn't saying that Section 230 doesn't apply because TikTok moderated. It's saying Section 230 doesn't apply because TikTok personalised, allegedly knew about the harmful content and allegedly didn't take enough action to moderate this harmful content.

cdchn
1 replies
15h33m

Personalisation simply matches users with the content the algorithm thinks they want to see.

These algorithms aren't matching you with what you want to see, they're trying to maximize your engagement - or, it's what the operator wants you to see, so you'll use the site more and generate more data or revenue. It's a fine, but extremely important, distinction.
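
As a toy illustration of that distinction (made-up fields and weights, not any platform's real objective): one scorer optimizes for predicted time-on-site and virality, the other only for what the user explicitly followed.

    from dataclasses import dataclass

    @dataclass
    class Item:
        topics: frozenset
        predicted_watch_seconds: float  # output of some engagement model
        predicted_share_prob: float

    @dataclass
    class User:
        followed_topics: frozenset

    def engagement_score(item: Item) -> float:
        # What the operator wants: keep the user watching and sharing.
        return item.predicted_watch_seconds + 200.0 * item.predicted_share_prob

    def preference_score(item: Item, user: User) -> float:
        # What the user asked for: overlap with explicitly followed topics.
        return float(len(item.topics & user.followed_topics))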

bryanrasmussen
0 replies
12h52m

What the operator wants you to see also gets into the area of manipulation, hence 230 shouldn't apply - by building algorithms around manipulation or paid boosting, companies move from impartial, unknowing deliverers of harmful content to committed distributors of it.

zerocrates
1 replies
22h24m

So, yes, the TikTok FYP is different from a forum with moderation.

But the basis of this ruling is basically "well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it." That rationale extends to basically any form of moderation or selection, personalized or not, and would blow a big hole in 230's protections.

Given generalized anti-Big-Tech sentiment on both ends of the political spectrum, I could see something that claimed to carve out just algorithmic personalization/suggestion from protection meeting with success, either out of the courts or Congress, but it really doesn't match the current law.

lupusreal
0 replies
11h35m

"well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it."

I see a lot of people saying this is a bad decision because it will have consequences they don't like, but the logic of the decision seems pretty damn airtight as you describe it. If the recommendation systems and moderation policies are the company's speech, then the company can be liable when the company "says", by way of their algorithmic "speech", to children that they should engage in some reckless activity likely to cause their death.

pessimizer
1 replies
21h28m

Doesn't seem to have anything to do with personalization to me, either. It's about "editorial judgement," and an algorithm isn't necessarily a get out of jail free card unless the algorithm is completely transparent and user-adjustable.

I even think it would count if the only moderation you did on your Lionel model train site was to make sure that most of the conversation was about Lionel model trains, and that they be treated in a positive (or at least neutral) manner. That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up i.e. if you moderate, you're a moderator, and your first duty is legal.

If you're just a dumb pipe, however, you're a dumb pipe and get section 230.

I wonder how this works with recommendation algorithms, though, seeing as they're also trade secrets. Even when they're not dark and predatory (advertising related.) If one has a recommendation algo that makes better e.g. song recommendations, you don't want to have to share it. Would it be something you'd have to privately reveal to a government agency (like having to reveal the composition of your fracking fluid to the EPA, as an example), and they would judge whether or not it was "editorial" or not?

[edit: that being said, it would probably be very hard to break the law with a song recommendation algorithm. But I'm sure you could run afoul of some financial law still on the books about payola, etc.]

Majromax
0 replies
17h19m

That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up i.e. if you moderate, you're a moderator, and your first duty is legal.

I'm not sure that's quite it. As I read the article and think about its application to Tiktok, the problem was more that "the algorithm" was engaged in active and allegedly expressive promotion of the unsafe material. If a site like HN just doesn't remove bad content, then the residual promotion is not exactly Hacker News's expression, but rather its users'.

The situation might change if a liability-causing article were itself given 'second chance' promotion or another editorial thumb on the scale, but I certainly hope that such editorial management is done with enough care to practically avoid that case.

lesuorac
13 replies
1d

Per the court of appeals, TikTok is not in trouble for showing blackout challenge videos. TikTok is in trouble for not censoring them after knowing they were causing harm.

"What does all this mean for Anderson’s claims? Well, § 230(c)(1)’s preemption of traditional publisher liability precludes Anderson from holding TikTok liable for the Blackout Challenge videos’ mere presence on TikTok’s platform. A conclusion Anderson’s counsel all but concedes. But § 230(c)(1) does not preempt distributor liability, so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed."

As in, Dang would be liable if, say, somebody started a blackout challenge post on HN and he didn't start censoring all of them once news reports of programmers dying broke out.

https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/...

whartung
5 replies
23h25m

Does TikTok have to know that “as a category, blackout videos are bad” or that “this specific video is bad”?

Does TikTok have to preempt this category of videos in the future, or simply respond promptly when notified such a video is posted to their system?

jay_kyburz
4 replies
22h9m

Are you asking about the law, or are you asking our opinion?

Do you think it's reasonable for social media to send videos to people without considering how harmful they are?

Do you even think it's reasonable for a search engine to respond to a specific request for this information?

oceanplexian
1 replies
21h18m

Did some hands come out of the screen, pull a rope out then choke someone? Platforms shouldn’t be held responsible when 1 out of a million users wins a Darwin award.

autoexec
0 replies
20h18m

I think it's a very different conversation when you're talking about social media sites pushing content they know is harmful onto people who they know are literal children.

autoexec
1 replies
20h21m

Personally, I wouldn't want search engines censoring results for things explicitly searched for, but I'd still expect that social media should be responsible for harmful content they push onto users who never asked for it in the first place. Push vs Pull is an important distinction that should be considered.

MadnessASAP
0 replies
18h8m

That IS the distinction at play here.

wahnfrieden
2 replies
1d

What constitutes "censoring all of them"?

mattigames
0 replies
23h49m

Any good-will attempt at censoring would have been a reasonable defense even if technically they didn't censor 100% of them, such as blocking videos with the word "blackout" in their title or manually approving such videos, but they did nothing instead.

altairprime
0 replies
23h55m

Trying to define "all" is an impossibility; but, by virtue of TikTok having taken no action whatsoever, answering that question is irrelevant in the context of this particular judgment. See also for example: https://news.ycombinator.com/item?id=41393921

In general, judges will be ultimately responsible for evaluating whether "any", "sufficient", "appropriate", etc. actions were taken in each future case judgement they make. As with all things legalese, it's impossible to define with certainty a specific degree of action that is the uniform boundary of acceptable; but, as evident here, "none" is no longer permissible in that set.

(I am not your lawyer, this is not legal advice.)

sangnoir
2 replies
23h50m

TikTok is in trouble for not censoring them after knowing they were causing harm.

This has interesting higher-order effects on free speech. Let's apply the same ruling to vaccine misinformation, or the ability to organize protests on social media (which opponents will probably call riots if there are any injuries)

lesuorac
1 replies
22h40m

Uh yeah, the court of appeals has reached an interesting decision.

But I mean what do you expect from a group of judges that themselves have written they're moving away from precedent?

sangnoir
0 replies
22h19m

I don't doubt the same court relishes the thought of deciding what "harm" is on a case-by-case basis. The continued politicization of the courts will not end well for a society that nominally believes in the rule of law. Some quarters have been agitating for removing §230 safe harbor protections (or repealing it entirely), and the courts have delivered.

mattigames
0 replies
23h43m

The credulity of kids, who readily believe and are easily influenced by what they see online, had a big role in this ruling; disregarding that is a huge disservice to a productive discussion.

Manuel_D
8 replies
23h32m

But something like Reddit would be held liable for showing posts, then. Because you get shown different results depending on the subreddits you subscribe to, your browsing patterns, what you've upvoted in the past, and more. Pretty much any recommendation engine is a no-go if this ruling becomes precedent.

lesuorac
3 replies
22h30m

TBH, Reddit really shouldn't have 230 protection anyways.

You can't be licensing user content to AI as it's not yours. You also can't be undeleting posts people make (otherwise it's really reddit's posts and not theirs).

When you start treating user data as your own, it should become your own, and that erodes 230.

raydev
0 replies
20h37m

It belongs to reddit, the user handed over the content willingly.

autoexec
0 replies
20h46m

You also can't be undeleting posts people make

undeleting is bad enough, but they've edited the content of user's comments too.

Manuel_D
0 replies
20h28m

You can't be licensing user content to AI as it's not yours.

It is theirs. Users agreed to grant Reddit a license to use the content when they accepted the terms of service.

TheGlav
2 replies
22h57m

From my reading, if the site only shows you based on your selections, then it wouldn't be liable. For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.

If it does any customization based on what it knows about you, or what it tries to sell you because you are you, then it would be liable.

Yep, recommendation engines would have to be very carefully tuned, or you risk becoming liable. Recommending only curated content would be a way to protect yourself, but that costs money that companies don't have to pay today. It would be doable.
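
A rough sketch of where that line would sit, assuming (my reading, not the court's) that "selections" means only explicit choices like subscriptions: the first feed is a pure function of those choices, so identical selections always yield identical results; the second also weights by attributes the platform inferred about the user, which is the behaviour at issue in the ruling.

    def selection_only_feed(posts, subscribed_communities):
        # Depends only on explicit user choices: two users with identical
        # subscriptions see exactly the same list, in the same order.
        return [p for p in posts if p["community"] in subscribed_communities]

    def personalized_feed(posts, subscribed_communities, inferred_profile):
        # Additionally re-ranks by what the platform inferred about the user
        # (demographics, watch history, ...), so identical subscriptions can
        # still produce different feeds.
        chosen = selection_only_feed(posts, subscribed_communities)
        return sorted(chosen,
                      key=lambda p: inferred_profile.get(p["topic"], 0.0),
                      reverse=True)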

djhn
0 replies
22h19m

It could be difficult to draw the line. I assume TikTok’s suggestions are deterministic enough that an identical user would see the same things - it’s just incredibly unlikely to be identical at the level of granularity that TikTok is able to measure due to the type of content and types of interactions the platform has.

Manuel_D
0 replies
22h11m

For example, if someone else with the exact same selections gets the same results, then that's not their platform deciding what to show.

This could very well be true for TikTok. Of course, "selection" would include liked videos, how long you spend watching each video, and how many videos you have posted.

And on the flip side a button that brings you to a random video would supply different content to users regardless of "selections".

juliangmp
0 replies
20h5m

Pretty much any recommendation engine is a no-go if this ruling becomes precedent.

That kind of sounds... great? The only instance where I genuinely like to have a recommendation engine around is music streaming. Like yeah, sometimes it does recommend great stuff. But anywhere else? No thank you.

mr_toad
0 replies
12h7m

On HN, your front page is not different from my front page.

It’s still curated, and not entirely automatically. Does it make a difference whether it’s curated individually or not?

cbsmith
0 replies
18h42m

The personalized aspect wasn't emphasized at all in the ruling. It was the curation. I don't think TikTok would have avoided liability by simply sharing the video with everyone.

1vuio0pswjnm7
0 replies
12h23m

"I think this is a mistaken understanding of the ruling."

I think that is quite generous. I think it is a deliberate reinterpretation of what the order says. The order states that 230(c)(1) provides immunity for removing harmful content after being made aware of it, i.e., moderation.

spamizbad
10 replies
1d

I feel like the end result of path #1 is that your site just becomes overrun with spam and scams. See also: mail, telephones.

aftbit
5 replies
1d

Yeah, no moderation leads to spam, scams, rampant hate, and CSAM. I spent all of an hour on Voat when it was in its heyday and it was mostly literal Nazis calling for the extermination of undesirables. The normies just stayed on moderated Reddit.

redeeman
3 replies
22h31m

Voat wasn't exactly a single place, any more than Reddit is.

snapcaster
2 replies
22h21m

Were there non-KKK/nazi/qanon subvoats (or whatever they call them)? The one time I visited the site, every single post on the frontpage was alt-right nonsense.

tzs
0 replies
21h24m

Yes. There were a ton of them for various categories of sex drawings, mostly in the style common in Japanese comics and cartoons.

autoexec
0 replies
20h29m

It was the people who were chased out of other websites that drove much of their traffic so it's no surprise that their content got the front page. It's a shame that they scared so many other people away and downvoted other perspectives because it made diversity difficult.

commandlinefan
0 replies
1h54m

stayed on moderated Reddit

... being manipulated by the algorithm (per this judge's decision).

stale2002
3 replies
17h43m

No, that's not the end result.

It would be perfectly legal for a platform to choose to allow a user to decide on their own to filter out spam.

Maybe a user could sign up for such an algorithm, but if they choose to whitelist certain accounts, that would also be allowed.

Problem solved.

jojobas
2 replies
17h28m

Exactly. Moderation is not a problem as long as you can opt out of it, for both reading and writing.

mr_toad
1 replies
12h0m

If I were to start posting defamatory material about you on various internet forums, how would you opt out of that?

jojobas
0 replies
6h49m

Same as if you were to post it on notice boards, I would opt to not give a fuck.

akira2501
9 replies
23h54m

There's moderation to manage disruption to a service. There's editorial control to manage the actual content on a service.

HN engages in the former but not the latter. The big three engage in the latter.

closeparen
6 replies
23h45m

HN engages in the latter. For example, user votes are weighted based on their alignment with the moderation team's view of good content.

akira2501
5 replies
23h29m

I don't understand your explanation. Do you mean just voting itself? That's not controlled or managed by HN. That's just more "user generated content." That posts get hidden or flagged due to thresholding is non-discriminatory and not _individually_ controlled by the staff here.

Or.. are you suggesting there's more to how this works? Is dang watching votes and then making decisions based on those votes?

"Editorial control" is more of a term of art and has a narrower definition then you're allowing for.

tsimionescu
2 replies
22h13m

The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.

The same applies to comments on HN. Comments are not moderated based purely on legal or certain general "good manners" grounds, they are moderated to keep a certain kind of discourse level. For example, shallow jokes or meme comments are not generally allowed on HN. Comments that start discussing controversial topics, even if civil, are also discouraged when they are not on-topic.

Overall, HN is very much curated in the direction of a newspaper "letter to the editor" section, rather than being algorithmic and hands-off like the Facebook wall or TikTok feed. So there is no doubt whatsoever, I believe, that HN would be considered responsible for user content (and is, in fact, already pretty good at policing that in my experience, at least on the front page).

zahlman
1 replies
21h4m

The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.

This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.

mr_toad
0 replies
11h54m

This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.

Maintaining topicality is literally a bias. Excluding posts that reflect certain perspectives is censorship.

krapp
0 replies
22h33m

Dang has been open about voting being only one part of the way HN works, and that manual moderator intervention does occur. They will downweight the votes of "problem" accounts, manually adjust the order of the frontpage, and do whatever they feel necessary to maintain a high signal-to-noise ratio.

empressplay
0 replies
22h36m

There are things like 'second chance', where the editorial team can re-up posts they feel didn't get a fair shake the first time around; sometimes if a post gets too 'hot' they will cool it down -- all of this is understandable but unfortunately does mean they are actively moderating content and thus are responsible for all of it.

immibis
1 replies
18h24m

Every time you see a comment marked as [dead] that means a moderator deleted it. There is no auto-deletion resulting from downvotes.

Even mentioning certain topics, such as Israel's invasion of Palestine, even when the mention is on-topic and not disruptive, as in this comment you are reading, is practically a death sentence for a comment. Not because of votes, but because of the moderators. Downvotes may prioritize which comments go in front of moderators (we don't know) but moderators make the final decision; comments that are downvoted but not removed merely stick around in a light grey colour.

By enabling showdead in your user preferences and using the site for a while, especially reading controversial threads, you can get a feel for what kinds of comments are deleted by moderators exercising their discretion. It is clear that most moderation is about editorial control and not simply the removal of disruption.

This comment may be dead by the time you read it, due to the previous mention of Palestine - hi to users with showdead enabled. Its parent will probably merely be downvoted because it's wrong, but doesn't contain anything that would irk the mods.

philipkglass
0 replies
18h4m

Comments that are marked [dead] without the [flagged] indicator are like that because the user that posted the comment has been banned. For green (new) accounts this can be due to automatic filters that threw up false positives for new accounts. For old accounts this shows that the account (not the individual comment) has been banned by moderators. Users who have been banned can email hn@ycombinator.com pledging to follow the rules in the future and they'll be granted another chance. Even if a user remains banned, you can unhide a good [dead] comment by clicking on its timestamp and clicking "vouch."

Comments are marked [flagged] [dead] when ordinary users have clicked on the timestamp and selected "flag." So user downvotes cannot kill a comment, but flagging by ordinary non-moderator users can kill it.

jtriangle
7 replies
1d

(1) 4chin is too dumb to use HN, and there's no image posting, so I doubt they'd even be interested in raiding us. (2) I've never seen anything illegal here; I'm sure it happens, and it gets dealt with quickly enough that it's not really ever going to be a problem if things continue as they have been.

They may lose 230 protection, sure, but it's probably not really a problem here. For Facebook et al, it's going to be an issue, no doubt. I suppose they could drop their algos and bring back chronological feeds, but my guess is that wouldn't be profitable given that ad-tech and content feeds are one and the same at this point.

I'd also assume that "curation" is the sticking point here, if a platform can claim that they do not curate content, they probably keep 230 protection.

wredue
5 replies
1d

Certain boards most definitely raid various HN threads.

Specifically, every political or science thread that makes it is raided by 4chan. 4chan also regularly pushes anti-science and anti-education agenda threads to the top here, along with posts from various alt-right figures on occasion.

jtriangle
4 replies
23h30m

search: site:4chan.org news.ycombinator.com

Seems pretty sparse to me, and from a casual perusal, I haven't seen any actual calls to raid anything here; it's more references to articles/posts that have happened, and people talking about them.

Remember, not everyone who you disagree with comes from 4chan, some of them probably work with you, you might even be friends with them, and they're perfectly serviceable people with lives, hopes, dreams, same as yours, they simply think differently than you.

wredue
3 replies
22h41m

lol dude. Nobody said that 4chan links are posted to HN, just that 4chan definitely raids HN.

4chan is very well known for brigading. It is also well known that posting links for brigades on 4chan, as well as a number of other locations such as Discord, is an extremely common thing that the alt-right does to try to raise the “validity” of their statements.

I also did not claim that only these opinions come from 4chan. Nice strawman bro.

Also, my friends do not believe these things. I do not make a habit of being friends with people that believe in genociding others purely because of sexual orientation or identity.

jtriangle
2 replies
19h45m

Go ahead and type that search query into google and see what happens.

Also, the alt-right is a giant threat if you categorize everyone right of you as alt-right, which seems to be the standard definition.

That's not how I've chosen to live, and I find that it's peaceful to choose something more reasonable. The body politic is cancer on the individual, and on the list of things that are important in life, it's not truly important. With enough introspection you'll find that the tendency to latch onto politics, or anything politics-adjacent, comes from an overall lack of agency over the other aspects of life you truly care about. It's a vicious cycle. You have a finite amount of mental energy, and the more you spend on worthless things, the less you have to spend on things that matter, which leads to you latching further on to the worthless things, and having even less to spend on things that matter.

It's a race to the bottom that has only losers. If you're looking for genocide, that's the genocide of the modern mind, and you're one foot in the grave already. You can choose to step out now and probably be ok, but it's going to be uncomfortable to do so.

That's all not to say there aren't horrid, problem-causing individuals out in the world, there certainly are, it's just that the less you fixate on them, the more you realize that they're such an extreme minority that you feel silly fixating on them in the first place. That goes for anyone that anyone deems 'horrid and problem-causing' mind you, not just whatever idea you have of that class of person.

wredue
0 replies
13h27m

These people win elections and make news cycles. They are not an “ignorable, small minority”.

For the record, ensuring that those who wish to genocide LGBT+ people are not the majority voice on the internet is absolutely not “a worthless matter”, not by any stretch. I would definitely rather not have to do this, but then, the people who dedicate their lives to trolling and hate are extremely active.

halfcat
0 replies
16h36m

Go ahead and type that search query into google and see what happens.

What are you expecting it to show? That site removes all content after a matter of days.

Dr_Incelheimer
0 replies
22h2m

4chin is too dumb to use HN

I don't frequent 4cuck, I use soyjak.party, which I guess from your perspective is even worse, but there are plenty of smart people on the 'cuck thoughbeit, like the gemmy /lit/ schizo. I think you would feel right at home in /sci/.

supriyo-biswas
3 replies
1d

Not sure about the downvotes on this comment; but what parent says has precedent in Cubby Inc. vs Compuserve Inc.[1] and this is one of the reasons Section 230 came about to be in the first place.

HN is also heavily moderated with moderators actively trying to promote thoughtful comments over other, less thoughtful or incendiary contributions by downranking them (which is entirely separate from flagging or voting; and unlike what people like to believe, this place relies more on moderator actions as opposed to voting patterns to maintain its vibe.) I couldn't possibly see this working with the removal of Section 230.

[1] https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

singleshot_
2 replies
1d

If I upvote something illegal, my liability was the same before, during, and after 230 exists, right?

robbiewxyz
0 replies
16h17m

I'd probably like the upvote itself to be considered "speech". The practical effect of upvoting is to endorse, together with the site's moderators and algorithm-curators, the comment to be shown to a wider audience.

Along those lines, an upvote, i.e. an endorsement, would be protected up to the point where it violated one of the free speech exceptions, e.g. incitement.

hn_acker
0 replies
20h7m

Theoretically, your liability is the same because the First Amendment is what absolves you of liability for someone else's speech. Section 230 provides an avenue for early dismissal in such a case if you get sued; without Section 230, you'll risk having to fight the lawsuit on the merits, which will require spending more time (more fees).

jojobas
1 replies
17h24m

Result, HN turns to 4chan.

As if it was something bad. 4chan has /g/ and it's absolutely awesome.

71bw
0 replies
9h4m

Nuff said. Underneath the everlasting political cesspool from /pol/ and... _specific_ atmosphere, it's still one of the best places to visit for tech-based discussion.

doikor
1 replies
12h13m

4chan is moderated and the moderation is different on each board, with the only real global moderation rule being "no illegal stuff". In addition, the site does curate the content it shows you using an algorithm, even though it is a very basic one (the thread with the most recent reply goes to the top of the page, and threads older than X are removed automatically).

For example the qanon conspiracy nuts got moderated out of /pol/ for arguing in bad faith/just being too crazy to actually have any kind of conversation with and they fled to another board (8chan and later 8kun) that has even less moderation.

commandlinefan
0 replies
1h45m

4chan is moderated

Yep, 4chan isn't bad because "people I disagree with can talk there", it's bad because the interface is awful and they can't attract enough advertisers to meet their hosting demands.

wredue
0 replies
1d

Nah. HN is not the same as these others.

TikTok. Facebook. Twitter. YouTube.

All of these have their algorithms specifically curated to try to keep you angry. YouTube outright ignores your blocks every couple of months, and no matter how many people dropping n-bombs you report and block, it never-endingly pushes more and more.

These companies know that their algorithms are harmful and they push them anyway. They absolutely should have liability for what their algorithms push.

tboyd47
0 replies
23h50m

Under Judge Matey's interpretation of Section 230, I don't even think option 1 would remain on the table. He includes every act except mere "hosting" as part of publisher liability.

pointnatu
0 replies
22h34m

Freedom of speech, not freedom of reach for their personal curation preferences or for narrative shaping driven by confirmation bias and survivorship bias. Tech is in the business of putting posts on scales to increase some signals and decrease others, based on some hokey story of academic and free-market genius.

The pro-science crowd (which includes me fwiw) seems incapable of providing proof that any given scientist is that important. The same old social-politics norms inflate some and deflate others, and we take our survival as confirmation that we're special. One's education is vacuous prestige given that physics applies equally; oh, you did the math! Yeah, I just tell the computer to do it. Oh, you memorized the circumlocutions and dialectic of some long-dead physicist. Outstanding.

There’s a lot of ego driven banal classist nonsense in tech and science. At the end of the day just meat suits with the same general human condition.

pdpi
0 replies
12h35m

Section 230 hasn't changed or been revoked or anything, so, from what I understand, manual moderation is perfectly fine, as long as that is what it is: moderation. What the ruling says is that "recommended" content and personalised "for you" pages are themselves speech by the platform, rather than moderation, and are therefore not under the purview of Section 230.

For HN, Dang's efforts at keeping civility don't interfere with Section 230. The part relevant to this ruling is whatever system takes recency and upvotes, and ranks the front page posts and comments within each post.

itishappy
0 replies
22h51m

4chan is actually moderated too.

coryrc
0 replies
21h13m

2) Require confirmation you are a real person (check ID) and attach accounts per person. The commercial Internet has to follow the laws they're currently ignoring and the non-commercial Internet can do what they choose (because of being untraceable).

whatshisface
25 replies
1d1h

The diverse biases of newspapers or social media sites are preferable to the monolithic bias a legal solution will impose.

nick238
21 replies
1d1h

So the solution is "more speech?" I don't know how that will unhook minors from the feedback loop of recommendation algorithms and their plastic brains. It's like saying 'we don't need to put laws in place to combat heroin use, those people could go enjoy a good book instead!'.

nostrademons
15 replies
1d1h

Yes, the solution is more speech. Teach your kids critical thinking or they will be fodder for somebody else who has it. That happens regardless of who's in charge, government or private companies. If you can't think for yourself and synthesize lots of disparate information, somebody else will do the thinking for you.

chongli
7 replies
23h40m

You're mistaken as to what this ruling is about. Ultimately, when it comes right down to it, the Third Circuit is saying this (directed at social media companies):

"The speech is either wholly your speech or wholly someone else's. You can't have it both ways."

Either they get to act as a common carrier (telephone companies are not liable for what you say on a phone call because it is wholly your own speech and they are merely carrying it) or they act as a publisher (liable for everything said on their platforms because they are exercising editorial control via algorithm). If this ruling is upheld by the Supreme Court, then they will have to choose:

* Either claim the safe harbour protections afforded to common carriers and lose the ability to curate algorithmically

or

* Claim the free speech protections of the First Amendment but be liable for all content as it is their own speech.

whatshisface
6 replies
22h31m

Algorithmic libel detectors don't exist. The second option isn't possible. The result will be the separation of search and recommendation engines from social media platforms. Since there's effectively one search company in each national protectionist bloc, the result will be the creation of several new monopolies that hold the power to decide what news is front-page, and what is buried or practically unavailable. In the English-speaking world that right would go to Alphabet.

chongli
4 replies
21h50m

The second option isn’t really meant for social media anyway. It’s meant for traditional publishers such as newspapers.

If this goes through I don’t think it will be such a big boost for Google search as you suggest. For one thing, it has no effect on OpenAI and other LLM providers. That’s a real problem for Google, as I see a long term trend away from traditional search and towards LLMs for getting questions answered, especially among young people. Also note that YouTube is social media and features a curation algorithm to deliver personalized content feeds.

As for social media, I think we’re better off without it! There’s countless stories in the news about all the damage it’s causing to society. I don’t think we’ll be able to roll all that back but I hope we’ll be able to make things better.

whatshisface
3 replies
21h43m

If the ruling was upheld, Google wouldn't gain any new liability for putting a TikTok-like frontend on video search results; the only reason they're not doing it now is that all existing platforms (including YouTube) funnel all the recommendation clicks back into themselves. If YouTube had to stop offering recommendations, Google could take over their user experience and spin them off into a hosting company that derived its revenue from AdSense and its traffic from "Google Shorts."

This ruling is not a ban on algorithms, it's a ban on the vertical integration between search or recommendation and hosting that today makes it possible for search engines other than Google to see traffic.

chongli
2 replies
17h49m

I actually don't think Google search will be protected in its current form. Google doesn't show you unadulterated search results anymore, they personalize (read: editorialize) the results based on the data they've collected on you, the user. This is why two different people entering the same query can see dramatically different results.

If Google wants to preserve their safe harbour protections they'll need to roll back to a neutral algorithm that delivers the same results to everyone given an identical query. This won't be the end of the world for Google but it will produce lower quality results (at least in the eyes of normal users who aren't annoyed by the personalization). Lower quality results will further open the doors to LLMs as a competitor to search.

whatshisface
1 replies
17h28m

Newspapers editorialize and also give the same results to everybody.

chongli
0 replies
15h3m

And newspapers decide every single word they publish, because they’re liable for it. If a newspaper defames someone they can be sued.

This whole case comes down to having your cake and eating it too. Newspapers don’t have that. They have free speech protections but they aren’t absolved of liability for what they publish. They aren’t protected under section 230.

If the ruling is upheld by SCOTUS, Google will have to choose: section 230 (and no editorial control) or first amendment plus liability for everything they publish on SERPs.

kelnos
1 replies
1d

Solutions that require everyone to do a thing, and do it well, are doomed to fail.

Yes, it would be great if parents would, universally, parent better, but getting all of them (or a large enough portion of them for it to make a difference) to do so is essentially impossible.

nostrademons
0 replies
1d

Government controls aren't a solution either though. The people with critical thinking skills, who can effectively tell others what to think, simply capture the government. Meet the new boss, same as the old boss.

jrockway
1 replies
1d

I agree with this. Kids are already subject to an agenda; for example, never once in my K-12 education did I learn anything about sex. This was because it was politically controversial at the time (and maybe it still is now), so my school district just avoided the issue entirely.

I remember my mom being so mad about the curriculum in general that she ran for the school board and won. (I believe it was more of a math and science type thing. She was upset with how many coloring assignments I had. Frankly, I completely agreed with her then and I do now.)

nostrademons
0 replies
1d

I was lucky enough to go to a charter school where my teachers encouraged me to read books like "People's History of the U.S" and "Lies My Teacher Told Me". They have an agenda too, but understanding that there's a whole world of disagreement out there and that I should seek out multiple information sources and triangulate between them has been a huge superpower since. It's pretty shocking to understand the history of public education and realize that it wasn't created to benefit the student, but to benefit the future employers of those students.

wvenable
0 replies
1d

Yes, the solution is more speech.

I think we've reached the point now where there is more speech than any person can consume by a factor of a million. It now comes down to picking what speech you want to hear. This is exactly what content algorithms are doing -> out of the millions of hours of speech produced in a day, it's giving you your 24 hours of it.

Saying "teach your kids critical thinking" is a solution but it's not the solution. At some point, you have to discover content out of those millions of hours a day. It's impossible to do yourself -- it's always going to be curated.

EDIT: To whoever downvoted this comment, you made my point. You should have replied instead.

forgetfreeman
0 replies
1d

K so several of the most well-funded tech companies on the planet sink literally billions of dollars into psyops research to reinforce addictive behavior and average parents are expected to successfully compete against it with...a lecture.

bsder
0 replies
21h4m

We have seen that adults can't seem to unhook from these dopamine delivery systems and you're expecting that children can do so?

Sorry. That's simply disingenuous.

Yes, children and especially teenagers do lots of things even though their parents try to prevent them from doing so. Even if children and teenagers still get them, we don't throw up our hands and sell them tobacco and alcohol anyway.

aeternum
4 replies
1d1h

Open-source the algorithm and have users choose. A marketplace is the best solution to most problems.

It is pretty clear that China already forces a very different TikTok ranking algo for kids within the country vs outside the country. Forcing a single algo is pretty un-American though and can easily be abused; let's instead open it up.

kelnos
2 replies
1d

80% of users will leave things at the default setting, or "choose" whatever the first thing in the list is. They won't understand the options; they'll just want to see their news feed.

aeternum
1 replies
22h45m

I'm not so sure; the feed is quite important and users understand that. Look at how many people switched between X and Threads given their political views. People switched off Reddit or cancelled their FB accounts at times in the past also.

kfajdsl
0 replies
22h25m

I'm pretty sure going from X to Threads had very little to do with the feed algorithm for most people. It had everything to do with one platform being run by Musk and the other one not.

mindslight
0 replies
21h0m

"Open-source the algorithm" would be at best openwashing. The way to create the type of choice you're thinking is to force the unbundling of client software from hosting services.

mathgradthrow
0 replies
1d1h

Seems like the bias will be against manipulative algorithms. How does TikTok escape liability here? By giving users control of what is promoted to them.

itsdrewmiller
0 replies
14h4m

Newspaper biases are more diverse despite being subject to the liability social media companies are trying to escape.

danaris
0 replies
1d1h

Unfortunately, the biases of newspapers and social media sites are only diverse if they are not all under the strong influence of the wealthy.

Even if they may have different skews on some issues, under a system where all such entities are operated entirely for-profit, they will tend to converge on other issues, largely related to maintaining the rights of capital over labor and over government.

cbsmith
13 replies
18h43m

The rise of social media was largely predicated on the curation it provided. People, and particularly advertisers, wanted a curated environment. That was the key differentiator to the wild west of the world wide web.

The idea that curation is a problem with social media is always a head scratcher for me. The option to just directly publish to the world wide web without social media is always available, but time and again, that option is largely not chosen... this ruling could well narrow things down to that being the only option.

Now, in practice, I don't think that will happen. This will raise the costs of operating social media, and those costs will be reflected in the prices advertisers pay to advertise on social media. That may shrink the social media ecosystem, but what it will definitely do is raise the drawbridge over the moat around the major social media players. You're going to see less competition.

stale2002
8 replies
17h39m

People, and particularly advertisers, wanted a curated environment

Then give the choice to the user.

If a user wants to opt in, or change their moderation preferences then they should be allowed.

By all means offer a choice of moderation decisions. And let the user change them, opt out conditionally and ignore them if they so choose.

cbsmith
5 replies
17h11m

You say that like that choice doesn't exist.

stale2002
4 replies
15h35m

You say that like that choice doesn't exist.

You said this: "People, and particularly advertisers, wanted a curated environment."

If moderation choices are put in the hands of the user, then what you are describing is not a problem, as the user can have that.

Therefore, saying that this choice exists means there isn't a problem for anyone who chooses not to have the spam, and your original complaint is refuted.

cbsmith
3 replies
14h42m

There absolutely can be a problem despite choice existing. I'm not saying otherwise.

I'm saying the choice exists. The choices we make are the problem.

stale2002
2 replies
3h9m

I'm saying the choice exists. The choices we make are the problem.

Well then feel free to choose differently for yourself.

Your original statement was this: "People, and particularly advertisers, wanted a curated environment."

You referencing what people "want" is directly refuted by the idea that they should be able to choose whatever their preferences are.

And your opinion on other people's choices doesn't really matter here.

cbsmith
1 replies
2h8m

You referencing what people "want" is directly refuted by the idea that they should be able to choose whatever their preferences are.

And your opinion on other people's choices doesn't really matter here.

I think maybe we're talking past each other. What I'm saying what people "want" is a reflection of the overwhelming choices they make. They're choosing the curated environments.

The "problem" that is being referenced is the curation. The claim is that the curation is a problem; my observation is that it is the solution all the parties involved seem to want, because they could, at any time, choose otherwise.

stale2002
0 replies
1h3m

They're choosing the curated environments

Ok, and if more power is given to the user and the user is additionally able to control their current curation, then that's fine and you can continue to have your own curated environment, and other people will also have more or less control over their own curation.

Problem solved! You get to keep your curation, and other people can also change the curation on existing platforms for their own feeds.

The claim is that the curation is a problem

Nope. Few people have a problem with other people having a choice of curation.

Instead, the solution that people are advocating for is for more curating power to be given to individual users so that they can choose, on current platforms, how much is curated for themselves.

Easy solution.

habinero
1 replies
15h33m

You're free to make your own site with your own moderation controls. And nobody will use it, because it'll rapidly become 99.999% spam, CSAM and porn.

stale2002
0 replies
2h58m

You're free

Actually, it seems like with these recent rulings, we will be free to use major social media platforms where the choice of moderation is given to the user, lest those social media platforms otherwise be held liable for their "speech".

I am fully fine with accepting the idea that if a social media platform doesn't act as a dumb pipe, then their choice of moderation is their "speech" as long as they can be held fully legally liable for every single moderation/algorithm choice that they make.

Fortunately for me, we are commenting on a post where a legal ruling was made to this effect, and the judge agrees with me that this is how things ought to be.

Dalewyn
3 replies
17h59m

The option to just directly publish to the world wide web without social media is always available,

Not exactly. You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.

You might also have to procure the services of Cloudflare if you face significant traffic, and Cloudflare might choose to refuse your money and kick you off.

that option is largely not chosen...

That's because most people have neither the time nor the will to learn and speak computer.

Social media and immediate predecessors like WordPress were and are successful because they brought the lowest common denominator down to "Smack keys and tap Submit". HTML? CSS? Nobody has time for our pig latin.

cbsmith
2 replies
16h59m

You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.

Who says you need to procure a web hosting provider?

But yes, if you connect your computer up to other computers, the other computers may decide they don't want any part of what you have to offer.

Without that, I wouldn't want to be on the Internet. I don't want to be forced to ingest bytes from anyone who would send them my way. That's just not a good value proposition for me.

That's because most people have neither the time nor the will to learn and speak computer.

I'm sorry, but no. You can literally type into a word processor or any number of other tools and select "save as web content", and then use any number of products to take a web page and serve it up to the world wide web. It's been that way for the better part of 25 years. No HTML or CSS knowledge needed. If you can't handle that, you can just record a video, save it to a file, and serve it up over a web server. Yes, you need to be able to use a computer to participate on the world wide web, but no more than you do to use social media.

Now, what you won't get is a distribution platform that gets your content up in front of people who never asked for it. That is what social media provides. It lowers the effort for the people receiving the content, as in exactly the curation process that the judge was ruling about.

Dalewyn
1 replies
11h55m

You can literally type in to a word processor or any number of other tools

Most people these days don't have a word processor or, indeed, "any number of other tools". It's all "in the cloud", usually Google Docs or Office 365 Browser Edition(tm).

select "save as web content"

Most people these days don't understand files and folders (and arguably never did).

and then use any number of products to take a web page and serve it up to the world wide web.

Most people these days cannot be bothered. Especially when the counter proposal is "Make an X account, smash some keys, and press Submit to get internet points".

If you can't handle that you can just record a video, save it to a file, and serve it up over a web server.

I'm going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people. There is a reason Youtube and Twitch have killed off literally every other video sharing service; there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops).

Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.

what you won't get is a distribution platform that gets your content up in front of people who never asked for it.

The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.

cbsmith
0 replies
11h14m

Most people these days don't have a word processor or, indeed, "any number of other tools". It's all "in the cloud", usually Google Docs or Office 365 Browser Edition(tm).

Read that again. ;-)

Most people these days don't (arguably never) understand files and folders.

We can debate on the skills of "most people" back and forth, but I think it's fair to say that "save as web content" is easier to figure out than figuring out how to navigate a social media site (and that doesn't necessarily require files or folders). If that really is too hard for someone, there are products out there designed to make it even easier. Way back before social media took over, everyone and their dog managed to figure out how to put stuff on the web. People who couldn't make it through high school were successfully producing web pages, blogs, podcasts, video content, you name it.

I'm going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people.

I disagree. I think they don't have the will to do it, because they'd rather use social media. I do believe if they had the will to do it, they would. I agree there are some people who lack the computer-aptitude to get content on the web. Where I struggle is believing those same people manage to put content on social media... which I'll point out is on the web.

There is a reason Youtube and Twitch have killed off literally every other video sharing service

Yes, because video sharing at scale is fairly difficult and requires real skill. If you don't have that skill, you're going to have to pay someone to do it, or find someone who has their own agenda that makes them want to do it without charging you... like Youtube or Twitch.

On the other hand, putting a video up on the web that no one knows about, no one looks for, and no one consumes unless you personally convince them to do so is comparatively simple.

there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops)

Yes, that reason is that smartphones were subsidized by carriers. ;-)

But it's good that you mentioned smartphones, because smart phones will let you send content to anyone in your contacts without you having anything that most would describe as "computer-aptitude". No social media needed... and yet the prevailing preference is for people to go through a process of logging in, shaping content to suit the demands of social media services, attempting to tune the content to get "the algorithm" to show it to as many people as possible, and put their content there. That takes more will/aptitude/whatever, but they do it for the distribution/audience.

Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.

I'd agree with you if you said "distribute" instead of "sharing". It's really hard to get millions of people to consume your content. That is, until social media came along and basically eliminated the cost of distribution. So any idiot can push their content out to millions and fill the world with whatever they want.... and now there's a sense of entitlement about it, where if a platform doesn't push that content on other people, at no cost to them, that they're being censored.

Yup, that does really require social media.

The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.

No, the Internet & the web required you to go looking for the content you wanted. Search engines (at least at one time) were designed to accelerate the process of finding exactly the content you were looking for faster, and to get you off their platform ASAP. Social media is kind of the opposite of search engines. They want you to stay on their platform; they want you to keep scrolling through whatever "engaging" content they can find, regardless of what you're looking for; if you forget about whatever you were originally looking for, that's a bonus. It's that ability to have your content show up when no one is looking for it where social media provides an advantage over the web for content makers.

AnthonyMouse
11 replies
21h34m

I think the ultimate problem is that social media is not unbiased — it curates what people are shown.

This is literally the purpose of Section 230. It's Section 230 of the Communications Decency Act. The purpose was to change the law so platforms could moderate content without incurring liability, because the law was previously that doing any moderation made you liable for whatever users posted, and you don't want a world where removing/downranking spam or pornography or trolling causes you to get sued for unrelated things you didn't remove.

zahlman
4 replies
21h12m

The purpose was to change the law so platforms could moderate content

What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes "moderation"?

What part of deliberately showing political content to people algorithmically expected to disagree with it, constitutes "moderation"?

What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes "moderation"?

What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand), constitutes "moderation"?

What part of favouring content from already popular content creators because it brings in more ad revenue, constitutes "moderation"?

What part of algorithmically associating content with ads for specific products or services, constitutes "moderation"?

tomrod
2 replies
21h10m

Prosaically, all of your examples are moderation. And as a private space that a user must choose to access, I'd argue that's great.

Dalewyn
1 replies
18h5m

There is (or should be, in any case) a difference between moderation and recommendation.

habinero
0 replies
15h35m

There is no difference. Both are editorial choices and protected 1A activity.

crooked-v
0 replies
21h4m

What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes "moderation"?

Well, maybe it's just me, but only showing political content that doesn't include "kill all the (insert minority here)", and expecting users to not object to that standard, is a pretty typical aspect of moderation for discussion sites.

What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes "moderation"?

Again, deliberately suppressing support for literal and obvious fascism, based on the opinions of those in charge of the platform, is a kind of moderation so typical that it's noteworthy when it doesn't happen (e.g. Stormfront).

What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand), constitutes "moderation"?

Literally all of Wikipedia, where the whole point of the reliable sources policy is that the people running it don't have to be experts to have a decently objective standard for what can be published.

samrus
2 replies
21h23m

Yeah, but they're not just removing spam and porn. They're picking out things that make them money even if they harm people. That was never in the spirit of the law.

habinero
1 replies
15h36m

Yes, it is. Section 230 doesn't replace the 1A, and deciding what you want to show or not show is classic 1A activity.

singleshot_
0 replies
2h56m

It's also classic commercial activity. Because 230 exists, we are able to have many intentionally different social networks and web tools. If there was no moderation -- for example, if you couldn't delete porn from LinkedIn -- all social networks would be the same. Likely there would only be one large one. If all moderation was pushed to the client side, it might seem like we could retain what we have, but it seems very possible we could lose the diverse ecosystem of Online and end up with something like Walmart.

This would be the worst outcome of a rollback of 230.

itsdrewmiller
2 replies
14h17m

The CDA was about making it clearly criminal to send obscene content to minors via the internet. Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to pre-CDA status quo.

dragonwriter
0 replies
12h37m

The CDA was about making it clearly criminal to send obscene content to minors via the internet.

Basically true.

Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content.

No, it wasn't, and you can tell that because there is literally not a single word to that effect in Section 230. It was to enable information service providers to exercise editorial control over user-submitted content without acquiring publisher-style liability, because the alternative, given the liability decisions occurring at the time and the way providers were reacting to them, was that any site using user-sourced content at scale would, to mitigate legal risk, be completely unmoderated, which was the opposite of the vision the authors of Section 230 and the broader CDA had for the internet. There are no "common carrier" obligations or protections in Section 230. The terms of the protection are the opposite of common carrier, and while there are limitations on the protections, there are no common-carrier-like obligations attached to them.

AnthonyMouse
0 replies
12h44m

The CDA was about making it clearly criminal to send obscene content to minors via the internet.

That part of the law was unconstitutional and pretty quickly got struck down, but it still goes to the same point that the intent of Congress was for sites to remove stuff and not be "common carriers" that leave everything up.

Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to pre-CDA status quo.

If you can forgive Masnick's chronic irateness he does a decent job of explaining the situation:

https://www.techdirt.com/2024/08/29/third-circuits-section-2...

ein0p
9 replies
1d

These are some interesting mental gymnastics. Zuckerberg literally publicly admitted the other day that he was forced by the government to censor things without a legal basis. Musk disclosed a whole trove of emails about the same at Twitter. And you’re still “not so sure”? What would it take for you to gain more certainty in such an outcome?

vundercind
8 replies
23h41m

Haven’t looked into the Zuckerberg thing yet but everything I’ve seen of the “Twitter Files” has done more to convince me that nothing inappropriate or bad was happening, than that it was. And if those selective-releases were supposed to be the worst of it? Doubly so. Where’s the bad bit (that doesn’t immediately stop looking bad if you read the surrounding context whoever’s saying it’s bad left out)?

ein0p
7 replies
23h24m

Means you haven’t really looked into the Twitter files. They were literally holding meetings with the government officials and were told what to censor and who to ban. That’s plainly unconstitutional and heads should roll for this.

kstrauser
6 replies
23h10m

How did the government force Facebook to comply with their demands, as opposed to going along with them voluntarily?

oceanplexian
2 replies
21h7m

How did the government force Facebook to comply

By asking.

The government asking you to do something is like a dangerous schoolyard bully asking for your lunch money. Except the gov has the ability to kill, imprison, and destroy. Doesn’t matter if you’re an average Joe or a Zuckerberg.

habinero
0 replies
15h48m

Any proof that they were threatened? I've never seen any.

Terr_
0 replies
19h48m

So it's categorically impossible for the government to make any non-coercive request or report for anything because it's the government?

I don't think that's settled law.

For example, suppose the US Postal Service opens a new location, and Google Maps has the pushpin on the wrong place or the hours are incorrect. A USPS employee submits a report/correction through normal channels. How is that trampling on Google's first-amendment rights?

ein0p
1 replies
22h50m

This is obviously not a real question, so instead of answering I propose we conduct a thought experiment. The year is 2028, and Zuck had a change of heart and fully switched sides. Facebook, Threads, and Instagram now block the news of Barron Trump’s drug use, of his lavishly compensated board seat on the board of Russia’s Gazprom, and bans the dominant electoral candidate off social media. In addition it allows the spread of a made up dossier (funded by the RNC) about Kamala Harris’ embarrassing behavior with male escorts in China.

What you should ask yourself is this: irrespective of whether compliance is voluntary or not, is political censorship on social media OK? And what kind of a logical knot one must contort one’s mind into to suggest that this is the second coming of net neutrality? Personally I think the mere fact that the government is able to lean on a private company like that is damning AF.

kstrauser
0 replies
22h38m

You're grouping lots of unrelated things.

All large sites have terms of service. If you violate them, you might be removed, even if you're "the dominant electoral candidate". Remember, no one is above the law, or in this case, the rules that a site wishes to enforce.

I'm not a fan of political censorship (unless that means enforcing the same ToS that everyone else is held to, in which case, go for it). Neither am I for the radical notion of legislation telling a private organization that they must host content that they don't wish to.

This has zero to do with net neutrality. Nothing. Nada.

Is there evidence that the government leaned on a private company instead of meeting with them and asking them to do a thing? Did Facebook feel coerced into taking actions they wouldn't have willingly done otherwise?

commandlinefan
0 replies
1h43m

How did the government force Facebook to comply

Everybody who paid protection to the mafia did so "voluntarily", too.

tboyd47
8 replies
1d

It all comes down to the assertion made by the author:

There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.

philippejara
2 replies
1d

I find it hard to see a way to run a targeted ad social media company at all if you have to make sure children aren't harmed by your product.

stevenicr
1 replies
23h31m

Don't let children use it? In TN that will be illegal Jan 1 - unless social media creates a method for parents to provide ID and opt out of them being blocked, I think?

Wouldn't that put the responsibility back on the parents?

The state told you XYZ was bad for your kids and it's illegal for them to use, but then you bypassed that restriction and put the sugar back into their hands with an access-blocker-blocker..

Random wondering

ghaff
0 replies
23h24m

Age limitations for things are pretty widespread. Of course, they can be bypassed to various degrees but, depending upon how draconian you want to be, you can presumably be seen as doing the best you reasonably can in a virtual world.

hyeonwho4
2 replies
23h52m

I'm not sure about video, but we are no longer in an era when manual moderation is necessary. Certainly for text, moderation for child safety could be as easy as taking the written instructions currently given to human moderators and having an LLM interpreter (only needs to output a few bits of information) do the same job.
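A minimal sketch of that idea (the call_llm stub and the field names are made up for illustration, not any real moderation API; you would wire in whichever model or provider you actually use):

  import json

  # Hypothetical stand-in for a real LLM client -- swap in whichever API you use.
  def call_llm(prompt: str) -> str:
      # Canned response so the sketch runs end-to-end without a real model.
      return '{"allow": false, "age_restrict": true, "reason": "depicts a dangerous challenge"}'

  MODERATOR_GUIDELINES = """
  You are applying the written guidelines normally given to human moderators.
  Return JSON with exactly these fields:
    allow        - true if the post is acceptable for a general audience
    age_restrict - true if it should be hidden from minors
    reason       - one short sentence
  """

  def moderate(post_text: str) -> dict:
      return json.loads(call_llm(MODERATOR_GUIDELINES + "\nPost:\n" + post_text))

  print(moderate("Try holding your breath until you pass out, it's hilarious"))
  # -> {'allow': False, 'age_restrict': True, 'reason': 'depicts a dangerous challenge'}

Whether a few bits of output like this can catch everything that matters is of course the open question.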

tboyd47
1 replies
23h45m

That's great, but can your LLM remove everything harmful? If not, you're still liable for that one piece of content that it missed under this interpretation.

itsdrewmiller
0 replies
14h7m

There are two questions - one is "should social media companies be globally immune from liability for any algorithmic decisions" which this case says "no". Then there is "in any given case, is the social media company guilty of the harm of which it is accused". Outcomes for that would evolve over time (and I would hope for clarifying legislation as well).

aftbit
1 replies
1d

What about 0% margins? Is there actually enough money in social media to pay for moderation even with no profit?

Ajedi32
0 replies
21h30m

At the scale social media companies operate at, absolutely perfect moderation with zero false negatives is unavailable at any price. Even if they had a highly trained human expert manually review every single post (which is obviously way too expensive to be viable) some bad stuff would still get through due to mistakes or laziness. Without at least some form of Section 230, the internet as we know it cannot exist.

ryandrake
1 replies
1d

I look at forums and social media as analogous to writing a "Letter to the Editor" to a newspaper:

In the newspaper case, you write your post, send it to the newspaper, and some editor at the newspaper decides whether or not to publish it.

In social media, the same thing happens, but it's just super fast and algorithmic: you write your post, send it to the social media site (or forum), and an algorithm (or moderator) at the site decides whether or not to publish it.

I feel like it's reasonable to interpret this kind of editorial selection as "promotion" and "recommendation" of that comment, particularly if the social media company's algorithm deliberately places that content into someone's feed.

jay_kyburz
0 replies
21h53m

I agree.

I think if social media companies relayed communication between their users with no moderation at all, then they should be entitled to carrier protections.

As soon as they start making any moderation decisions, they are implicitly endorsing all other content, and should therefore be held responsible for it.

There are two things social media companies can do. Firstly, they should accurately identify their users before allowing them to post, so they can counter-sue that person if a post harms them, and secondly, they can moderate every post.

Everybody says this will kill social media as we know it, but I say the world will be a better place as a result.

kstrauser
1 replies
1d

"Social media" is a broad brush though. I operate a Mastodon instance with a few thousand users. Our content timeline algorithm is "newest on top". Our moderation is heavily tailored to the users on my instance, and if a user says something grossly out of line with our general vibe, we'll remove them. That user is free to create an account on any other server who'll have them. We're not limiting their access to Mastodon. We're saying that we don't want their stuff on our own server.

What are the legal ramifications for the many thousands of similar operators which are much closer in feel to a message board than to Facebook or Twitter? Does a server run by Republicans have to accept Communist Party USA members and their posts? Does a vegan instance have to allow beef farmers? A PlayStation fan server host pro-PC content?

dudus
0 replies
23h52m

You are directly responsible for everything they say and legally liable for any damages it may cause. Or not. IANAL.

immibis
1 replies
23h4m

Refusal to moderate, though, is also a bias. It produces a bias where the actors who post the most have their posts seen the most. Usually these posts are Nigerian princes, Viagra vendors, and the like. Nowadays they'll also include massive quantities of LLM-generated cryptofascist propaganda (but not cryptomarxist propaganda because cryptomarxists are incompetent at propaganda). If you moderate the spam, you're biasing the site away from these groups.

itsdrewmiller
0 replies
22h43m

You can't just pick anything and call it a "bias" - absolutely unmoderated content may not (will not) represent the median viewpoint, but it's not the hosting provider "bias" doing so. Moderating spam is also not "bias" as long as you're applying content-neutral rules for how you do that.

amarant
1 replies
13h31m

But what are the implications?

No more moderation? This seems bad.

No more recommendation/personalization? This could go either way, I'm also willing to see where this one goes.

No more public comment sections? Arstechnica claimed back in the day when section 230 was under fire last time that this would be the result if it was ever taken away. This seems bad.

I'm not sure what will happen, I see 2 possible outcomes that are bad and one that is maybe good. At first glance this seems like bad odds.

Actually there's a fourth possibility, and that's holding Google responsible for whatever links they find for you. This is the nuclear option. If this happens, the internet will have to shut all of its American offices to get around this law.

jacoblambda
0 replies
13h24m

Would bluesky not solve this issue?

The underlying hosted service is nearly completely unmoderated and unpersonalised. It's just streams of bits and data routing. As an infrastructure provider you can scan for or limit the propagation of CSAM or DMCA'd content to some degree, but that's really about it, and even then it doesn't stop other providers (or self-hosted participants) from propagating that content anyway.

Then you provide custom feed algorithms, labelling services, moderation services, etc on top of that but none of them change or control the underlying data streams. They just annotate on top or provide options to the client.

Then the user's client is the one that directly consumes all these different services on top of the base service to produce the end result.

It's a true, unbiased, Section 230-compatible protocol (under even the strictest interpretation) that the user can then optionally combine with any number of secondary services and addons to craft their personalised social media experience.
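To make the separation concrete, here's a toy sketch of that layering (nothing here is the actual AT Protocol / Bluesky API; the names and data model are invented for illustration):

  # Toy model: the host stores/streams everything; feeds and labelers only annotate;
  # the client applies the user's own choices.
  posts = {
      "p1": {"text": "cat photo"},
      "p2": {"text": "dangerous stunt video"},
  }

  def chronological_feed(post_ids):
      # A "feed generator": returns an ordering, removes nothing from the host.
      return list(post_ids)

  def example_labeler(post):
      # A "labeler": attaches opinions as labels, again removing nothing.
      return ["graphic-risk"] if "dangerous" in post["text"] else []

  def render(user_hidden_labels):
      # The client is where moderation/personalisation choices actually bite.
      for pid in chronological_feed(posts):
          labels = example_labeler(posts[pid])
          if set(labels) & user_hidden_labels:
              continue  # hidden by the user's own settings, not by the host
          print(pid, posts[pid]["text"], labels)

  render(user_hidden_labels={"graphic-risk"})  # prints only p1

Whether courts would treat the feed generators and labelers themselves as making recommendations is a separate question, but the hosting layer in this model really is just a dumb pipe.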

IG_Semmelweiss
1 replies
14h45m

I always wondered why Section 230 does not have a carve-out exemption to deal with the censorship issue.

I think we'd all agree that most websites are better off with curation and moderation of some kind. If you don't like it, you are free to leave the forum, website, etc. The problem is that Big Tech doesn't work the same way, because those properties are effectively becoming the "public highways" through which everyone must pass.

This is not dissimilar from say, public utilities.

So, why not define how a tech company becomes a Big Tech "utility", and therefore, cannot hide behind 230 exception for things that it willingly does, like censorship ?

itsdrewmiller
0 replies
14h14m

Wonder no longer! It's Section 230 of the communications "decency" act, not the communication freedoms and regulations act. It doesn't talk about censorship because that wasn't in the scope of the bill. (And actually it does talk about censorship of obscene material in order to explicitly encourage it.)

smrtinsert
0 replies
19h39m

This is a much needed regulation. If anything it will probably spur innovation to solve safety in algorithms.

I think of this more along the lines of preventing a factory from polluting a water supply or requiring a bank to have minimum reserves.

shadowgovt
0 replies
20h53m

HN also has an algorithm.

I'll have to read the third circuit's ruling in detail to figure out whether they are trying to draw a line in the sand on whether an algorithm satisfies the requirements for Section 230 protection or falls outside of it. If that's what they're doing, I wouldn't assume a priori that a site like Hacker News won't also fall afoul of the law.

raxxorraxor
0 replies
11h36m

In a very general sense, this ruling could be seen as a form of net neutrality

In reality this will not be the case and instead it will introduce the bias of regulators to replace the bias companies want there to be. And even with their motivation to sell users attention, I cannot see this as an improvement. No, the result will probably be worse.

hungie
0 replies
13h35m

social media is not unbiased ...

Media, generally, social or otherwise, is not unbiased. All media has bias. The human act of editing, selecting stories, framing those stories, authoring or retelling them... it's all biased.

I wish we would stop seeking unbiased media as some sort of ideal, and instead seek open biases -- tell me enough about yourself and where your biases lie, so I can make informed decisions.

This reasoning is not far off from the court's thinking: editing is speech. A for you page is edited, and is TikTok's own speech.

That said, I do agree with your meta point. Social media (hn not excluded) is a generally unpleasant place to be.

gorgoiler
0 replies
13h57m

For the case in question the major problem seems to be, specifically, what content do we allow children to access.

There’s an enormous difference in the debate between what should be prohibited and what should be prohibited for children.

gigatexal
0 replies
12h19m

If it is a reckoning for social media then so be it. Social media net-net was probably a mistake.

But I doubt this gets upheld on appeal. Given how fickle this Supreme Court is, they'll probably overrule themselves to fit their agenda, since they don't seem to think precedent is worth a damn.

commandlinefan
0 replies
2h34m

is that social media is not unbiased

That's how I read it, too. Section 230 doesn't say you can't get in trouble for failure to moderate, it says that you can't get in trouble for moderating one thing but not something else (in other words, the government can't say, "if you moderated this, you could have moderated that"). They seem to be going back on that now.

Real freedom from censorship - you cannot be held liable for content you hosted - has never been tried. The US government got away with a lot of COVID-era soft censorship by just strong-arming social media sites into suppressing content because there were no first-amendment style protections against that sort of soft censorship. I'd love to see that, but there's no reason to think that our government is going in that direction.

bsder
0 replies
20h54m

I think the ultimate problem is that social media is not unbiased — it curates what people are shown.

It is not only biased but also biased for maximum engagement.

People come to these services for various reasons but then have this specifically biased stuff jammed down their throats in a way to induce specific behavior.

I personally don't understand why we don't hammer these social media sites for conducting psychological experiments without consent.

amelius
0 replies
10h24m

We should just ditch advertisements as a monetization model, and see what happens.

__loam
0 replies
17h38m

Threads is actually pretty good if you ruthlessly block people that you dislike.

WCSTombs
0 replies
1d1h

Yeah, pretty much. What's not clear to me though is how non-targeted content curation, like simply "trending videos" or "related videos" on YouTube, is impacted. IMO that's not nearly as problematic and can be useful.

EasyMark
0 replies
20h32m

I think HN sees this as just more activist judges trying to overrule the will of the people (via Congress). This judge is attempting to interject his opinion on the way things should be over what a law passed by the highest legislative body in the nation says, as if that doesn't count. He is also doing it on very shaky ground, but I wouldn't expect anything less of the 3rd Circuit (much like the 5th).

chucke1992
46 replies
1d2h

So basically closer and closer to governmental control over social networks. Seems like a global trend everywhere. Governments will define the rules by which communication services (and social networks) should operate.

passwordoops
30 replies
1d1h

How is an elected government with checks and balances worse than a faceless corporation?

bentley
11 replies
1d1h

A faceless corporation can't throw me in jail for hosting an indie web forum.

nradov
6 replies
1d1h

True, but this particular case and Section 230 are only about civil liability. Regardless of the final outcome after the inevitable appeals, no one will go to jail. At most they'll have to pay damages.

falcolas
5 replies
1d1h

no one will go to jail

Did you know that there has been a homeowner jailed for breaking his HOA's rules about lawn maintenance?

The chances are good that someone will go to jail.

nradov
4 replies
1d

I don't know that because it's obviously false. If someone was jailed in relation to such a case then it was because they did something way beyond violating the HOA CC&Rs, such as assaulting an HOA employee or refusing to comply with a court order. HOAs have no police powers and private criminal prosecutions haven't been allowed in any US state for many years.

Citation needed.

falcolas
3 replies
1d

Google is your friend. Sorry to be so trite, but there are literally dozens upon dozens of sources.

One such example happened in 2008. The man's name is "Joseph Prudente", and he was jailed because he could not pay the HOA fine for a brown lawn. Yes, there was a judge hitting Joseph Prudente with a "contempt of court" to land him in jail (with an end date of "the lawn is fixed or the fine is paid"), but his only "crime" was ever being too poor to maintain his lawn to the HOA's standards.

“It’s a sad situation,” says [HOA] board president Bob Ryan. “But in the end, I have to say he brought it upon himself.”

nradov
2 replies
22h41m

It's not my job to do your legal research for you and you're misrepresenting the facts of the case.

As I expected, Mr. Prudente wasn't jailed for violating a HOA rule but rather for refusing to comply with a regular court order. It's a tragic situation and I sympathize with the defendant but when someone buys property in an HOA they agree to comply with the CC&R. If they subsequently lack the financial means to comply then they have the option of selling the property, or of filing bankruptcy which would at least delay most collections activities. HOAs are not charities, and poverty is not a legally valid reason for failing to meet contractual obligations.

falcolas
1 replies
19h36m

So, having a bad lawn is ultimately worse than being convicted of a crime, maybe even of killing someone, since there's no sentence. There's no appeal. There's no concept of "doing your time". Your lawn goes brown, and you can be put in jail forever because they got a court order which makes it all perfectly legal.

It's not my job to do your legal research for you and you're misrepresenting the facts of the case.

So, since it's not your job, you're happy to be ignorant of what can be found with a simple Google search? It's not looking up legal precedent or finding a section in the reams of law - it's a well reported and repeated story.

And let's be honest with each other - while by the letter of the law he was put into jail for failing to fulfill a court order, in practice he was put into jail for having a bad lawn. I'll go so far as to assert that the bits in between don't really matter, since the failure to maintain the lawn led directly to being in jail until the lawn was fixed.

So no, we don't have a de jure debtor's prison. But we do have a de facto debtor's prison.

nradov
0 replies
18h59m

Let's be honest with each other: you're attempting to distort and misrepresent what happened in one Florida case to try and support your narrative about what happened in a different and entirely unrelated federal case. The case of Anderson v. TikTok under discussion here doesn't involve a contempt of court order, no one has gone to jail, nor has the trial court even reached a decision on damages.

The reality is that this case is going to spend years working through the normal appeals process. Before anyone panics or celebrates let's be patient and wait for that to run its course. Until that happens it's all speculation. Calm down.

The US legal system gives authority to judges to use contempt orders to jail people when necessary as a last resort. This is essential to make the system work because otherwise some people would just ignore orders with no consequence. Whether the underlying case is about a debt owed to an HOA or any other issue is irrelevant. And the party subject to a contempt order can always take that up with a higher court.

mikewarot
3 replies
1d1h

A faceless corporation could be encouraged to use its algorithm for profit in a way that gets you killed... as was the main point of the article.

tedunangst
0 replies
22h2m

So the theory is the girl in question was going to start competing with TikTok, so they showed a suicide inducing video to silence her?

krapp
0 replies
1d1h

That's far more abstract than sending men with guns to your house.

falcolas
0 replies
1d1h

That can also be done today by way of the government (at least in the US): swatting.

To be a bit cliched, there's rather a lot of inattention and time that lets a child kill themselves after watching a video.

dehrmann
5 replies
1d1h

I can sue the corporation. I can start a competing corporation.

Elected governments also aren't as free as you'd think. Two parties control 99% of US politics. Suppose I'm not a fan of trade wars; both parties are in favor of them right now.

kmeisthax
2 replies
1d

Big Tech is a government, we just call it a corporation.

passwordoops
0 replies
20h43m

That's what most people miss

matwood
0 replies
23h32m

And it's unelected.

pixl97
0 replies
1d1h

I can sue the corporation. I can start a competing corporation.

Ah, the libertarian way.

I, earning $40,000 a year will take on the corporate giant that has a multimillion dollar legal budget and 30 full time lawyers and win... I know, I saw it in a movie once.

The law books are filled with story after story of corporations doing fully illegal shit, then using money to delay it in court for decades... then laughably getting a tiny fine that represents less than 1% of the profits.

TANSTAAFL.

BriggyDwiggs42
0 replies
11h41m

I can sue the corporation. I can start a competing corporation.

Yeah good luck with that buddy. I’m sorry, but you can’t do a thing to these behemoths. At least when a government bends you over it loses your vote, which sorta kinda matters to them. A corporation is incentivized to disregard your interests unless you are profitable to them, in which case they treat you like glorified livestock.

gspencley
4 replies
1d

Government is force. It is laws, police, courts and the ability to seriously screw up your life if it chooses.

A corporation might have "power" in an economic sense. It might have a significant presence in the marketplace. That presence might pressure or influence you in certain ways that you would prefer it didn't, such as the fact that all of your friends and family are customers/users of that faceless corporation.

But what the corporation cannot do is put you in jail, seize your assets, prevent you from starting a business, dictate what you can or can't do with your home etc.

Government is a necessary good. I'm no anarchist. But government is far more of a potential threat to liberty than the most "powerful" corporation could ever be.

em-bee
2 replies
22h5m

But what the corporation cannot do is put you in jail, seize your assets, prevent you from starting a business, dictate what you can or can't do with your home etc.

a corporation can "put me in jail" for copyright violations, accuse me of criminal conduct (happened in the UK, took them years to fix), seize my money (paypal, etc), destroy my business (amazon, google)...

But government is far more of a potential threat to liberty than the most "powerful" corporation could ever be.

you (in the US) should vote for a better government. i'll trust my government to protect my liberty over most corporations any day.

srackey
1 replies
20h38m

No, they can appeal to the state to get them to do it.

But you still think parliament actually controls the government as opposed to Whitehall, so I understand why this may be a little intellectually challenging for you.

em-bee
0 replies
18h43m

they can appeal to the state to get them to do it

the end result is the same.

Whitehall

1: i was talking about the US government and my own.

2: i am not from the UK

therefore your comment is entirely inappropriate.

genocidicbunny
3 replies
1d1h

The government tends to have a monopoly on violence, which is quite the difference. A faceless corporation will have a harder time fining you, garnishing your wages, charging you with criminal acts. (For now at least...)

mrguyorama
1 replies
1d

The government tends to have a monopoly on violence

They don't literally, as can be seen by that guy who got roughed up by the Pinkertons for the horror of accidentally being sent a Magic card he shouldn't have been.

Nobody went to jail for that. So corporations have at least as much power over your life as the government, and you don't get to vote out corporations.

Tell me, how do I "choose a different company" with, for example, Experian, who keeps losing my private info, refuses to assign me a valid credit score despite having a robust financial history, and can legally ruin my life?

aidenn0
0 replies
1d

They don't literally, as can be seen by that guy who got roughed up by the Pinkertons for the horror of accidentally being sent a Magic card he shouldn't have been.

Source for that?

I found [1], which sounds like intimidation; maybe a case for assault depending on how they "frightened his wife", but nothing about potential battery, which "roughed up" would seem to imply. The Pinkertons do enough shady stuff that there's no need to exaggerate what they do.

1: https://www.polygon.com/23695923/mtg-aftermath-pinkerton-rai...

lcnPylGDnU4H9OF
0 replies
1d1h

Conversely, the US government in particular will have a harder time with bans (first amendment), shadow bans (sixth amendment), hiding details about their recommendation algorithms (FOIA). The "checks and balances" part is important.

lostmsu
2 replies
1d

You can trivially choose not to associate with a corporation. You can't really do so with your government.

vkou
1 replies
23h46m

Trivially is doing a lot of lifting in that.

By that same logic, you can 'trivially' influence a democratic government, you have no such control over a corporation.

opo
0 replies
23h3m

...By that same logic, you can 'trivially' influence a democratic government, you have no such control over a corporation.

That is a misrepresentation of the message you are replying to:

>You can trivially choose not to associate with a corporation. You can't really do so with your government.

You won't get into legal trouble if you don't have a Facebook account, or a Twitter account, or use a search engine than Google, etc. Try to ignore the rules setup by your government and you will very quickly learn what having a monopoly of physical force within a given territory means. This is a huge difference between the two.

As far as influencing a government or a corporation, I suspect (for example) that a letter to the CEO of even a large corporation will generally have more impact than a letter to the POTUS. (For example, customer emails forwarded from Bezos: https://www.quora.com/Whats-it-like-to-receive-a-question-ma...). This obviously will vary from company to company and maybe the President does something similar but my guess is maybe not.

amscanne
3 replies
1d1h

Not at all. It’s merely a question of whether social networks are shielded from liability for their recommendations, recognizing that what they choose to show you is a form of free expression that may have consequences — not an attempt to control that expression.

srackey
2 replies
20h45m

Of course Comrade, there must be consequences for these firms pushing Counter-Revolutionary content. They can have free expression, but they must realize these algorithms are causing great harm to the Proletariat by platforming such content.

stale2002
0 replies
17h24m

Well it would be the same exact protections that are provided to everyone else's free speech.

Yes, if you as a person start making death threats or direct calls to violence, then you could be held liable for that.

Were you not aware of that?

An algorithm isn't any different from any other sort of speech.

BriggyDwiggs42
0 replies
11h49m

Brother, the child asphyxiation challenge isn’t political content getting unfairly banned. They would only be liable for harm that can be proven, far as I’m aware, so political speech wouldn’t be affected unless it was defamatory or something like a direct threat.

whatshisface
2 replies
1d1h

Given that the alternative was public control over governments, I guess it's inevitable that this would become a worldwide civil rights battle.

zerodensity
1 replies
23h14m

What does public control over governments mean?

whatshisface
0 replies
22h41m

It means that the process of assimilating new information, coming to conclusions, and deciding what a nation should do is carried out in the minds of the public, not in the offices of relatively small groups who decide what they want the government to do, figure out what conclusions would support it, and then make sure the public only assimilates information that would lead them to such conclusions.

titusjohnson
0 replies
1d1h

Is it really adding governmental control, or is it removing a governmental control? From my perspective Section 230 was controlling me, a private citizen, by saying "you cannot touch these entities"

pelorat
0 replies
22h19m

All large platforms already enact EU law over US law. Moderation is required of all online services which actively target EU users in order to shield themselves from liability for user generated content. The directive in question is 2000/31/EC and is 24 years old already. It's the precursor of the EU DSA and just like it, 2000/31/EC has extraterritorial reach.

krapp
0 replies
1d1h

The fix was in as soon as both parties came up with a rationale to support it and people openly started speaking about "algorithms" in the same spooky scary tones usually reserved for implied communist threats.

jlarocco
0 replies
1d

I feel like that's a poor interpretation of what happened. Corporations and businesses don't inherently have rights - they only have them because we've granted them certain rights, and we already put limits on them. We don't allow cigarette, alcohol, and marijuana advertising to children, for example. And now they'll have to face the consequences of sending stupid stuff like the "black out challenge" to children.

It's one thing to say, "Some idiot posted this on our platform." It's another thing altogether to promote and endorse the post and send it out to everybody.

Businesses should be held responsible for their actions.

dartharva
0 replies
14h54m

Well, as these social networks increasingly dominate internet use, to the point that they end up being the only thing the constituent plebeians use on the internet, it makes sense that they receive as much regulatory oversight as telecom providers do.

blackeyeblitzar
0 replies
1d

I think it is broader than that. It’s government control over the Internet. Sure we’re talking about forced moderation (that is, censorship) and liability issues right now. But it ultimately normalizes a type of intervention and method of control that can extend much further. Just like we’ve seen the Patriot Act normalize many violations of civil liberties, this will go much further. I hope not, but I can’t help but be cynical when I see the degree to which censorship by tech oligarchs has been accepted by society over the last 8 years.

aidenn0
0 replies
1d

IANAL, but it seems to me that Facebook from 20ish years ago would likely be fine under this ruling; it just showed you stuff that people you have marked as friends post. However, if Facebook wants to specifically pick things to surface, that's where potential liability is involved.

The allegation in this lawsuit was that TikTok either knew or should have known that it was targeting minors with content containing challenges that were likely to result in harm if repeated. That goes well beyond simple moderation, and is even something that various social media companies have argued in court is speech made by the companies.

JumpCrisscross
0 replies
1d1h

Governments will define the rules by which communication services (and social networks) should operate

As opposed to when they didn’t?

delichon
41 replies
1d

  TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. -- https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413
An algorithm accidentally enticed a child to hang herself. I've got code running on dozens of websites that recommends articles to read based on user demographics. There's nothing in that code that would or could prevent an article about self-asphyxiation being recommended to a child. It just depends on the clients that use the software not posting that kind of content, people with similar demographics to the child not reading it, and a child who gets the recommendation not reading it and acting it out. If those assumptions fail should I or my employer be liable?
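For the sake of argument, here's roughly the shape of that kind of recommender, stripped down (entirely hypothetical names and data, not my actual code): nothing in the scoring step ever looks at what the article says, only at who engaged with it.

  # Toy demographic recommender: scores articles purely by overlap between the
  # user's demographic tags and the tags of people who engaged with each article.
  articles = {
      "gardening-tips": {"engaged_by": {"adult", "retiree"}},
      "breath-holding-challenge": {"engaged_by": {"teen", "child"}},
  }

  def recommend(user_demographics, k=1):
      scored = sorted(
          articles.items(),
          key=lambda kv: len(kv[1]["engaged_by"] & user_demographics),
          reverse=True,
      )
      # No content inspection anywhere -- harm can only be kept out upstream,
      # by what clients post and how similar users behave.
      return [name for name, _ in scored[:k]]

  print(recommend({"child"}))  # -> ['breath-holding-challenge']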

mihaaly
18 replies
22h42m

Yes.

Or do you do things that give you rewards - and not care what they cause otherwise - but want to be saved (automatically!) from any responsibility for what they cause, just because it is an algorithm?

Enjoying the benefits while running away from the responsibility is a cowardly and childish act. Childish acts need supervision from adults.

EasyMark
8 replies
20h26m

What happened to that child is on the parents, not some programmer who coded an optimization algorithm. It's really as simple as that. No 10 year old should be on TikTok; I'm not sure anyone under 18 should be, given the garbage, dangerous misinformation, intentional disinformation, and lack of any ability to control what your child sees.

itishappy
7 replies
20h18m

Do you feel the same way about the sale of alcohol? I do see the argument for parental responsibility, but I'm not sure how parents will enforce that if the law allows people to sell kids alcohol free from liability.

EasyMark
4 replies
20h8m

Alcohol (the consumption form) serves only one purpose: to get you buzzed. Unlike algorithms and hammers, which are generic and serve many purposes, some of which are positive, especially when used correctly. You can't sue the people who make hammers if someone kills another person with one.

rbetts
1 replies
19h28m

You could sue a hammer manufacturer if they regularly advertised hammers as weapons to children and children started killing each other with them, though.

tines
0 replies
19h12m

You said sue the hammer manufacturer. Why didn’t you say to sue the newspaper that ran the ads? The fact that you couldn’t keep that straight in your analogy undermines your argument significantly imo.

rfrey
0 replies
17h6m

We're not talking about "all algorithms" any more than the alcohol example is talking about "all liquids". Social media algorithms have one purpose: to manipulate people into more engagement, to manoeuvre them into forgoing other activities in favour of more screen time, in the service of showing them more ads.

nmeagent
0 replies
18h43m

Alcohol (the consumption form) serves only one purpose to get you buzzed.

Since consumable alcohol has other legitimate uses besides getting a buzz on, I don't think this point stands. For example, it's used quite often in cooking and (most of the time?) no intoxicating effects remain in the final product.

tines
1 replies
20h10m

This is a good argument I didn't think of before. What's the response to it?

Flozzin
0 replies
19h43m

We regulate the sale of all sorts of things that can do damage but also have other uses. You can't buy large amounts of certain cold medicines, and you need to be an adult to do so. You can't buy fireworks if you are a minor in most places. In some countries they won't even sell you a set of steak knives if you are underage.

Someone else's response was that a 10 year old should not be on TikTok. Well then, how did they get past the age restrictions? (I'm guessing it's a checkbox at best.) So it's inadequately gated. But really, I don't think it's the sort of thing that needs an age gate.

They are responsible for a product that is actively targeting harmful behavior at children and adults. It's not ok in either situation. You cannot allow your platform to be hijacked for content like this. Full stop.

These 'services' need better ways to moderate content. If that is more controls that allow them to delete certain posts and videos or some other method to contain videos like this. You cannot just allow users to upload and share whatever they want. And further, have your own systems promote these videos.

Everyone who makes a product (especially for mass consumption) has a responsibility to make sure their product is safe. If your product is so complicated that you can't control it, then you need to step back and re-evaluate how it's functioning, not just plow ahead, making money, letting it harm people.

Nasrudith
7 replies
17h21m

You want to bake cookies yet refuse to take responsibility for the possibility of somebody choking on them, or to sell cars without making crashes impossible!

Impossible goals are an asinine standard and "responsibility" and "accountability" are the favorite weasel words of those who want absolute discretion to abuse power.

BriggyDwiggs42
6 replies
11h56m

Aren’t there lots of regulations on safety standards for food and for cars? I think you might have chosen the wrong examples.

pigeonhole123
4 replies
8h52m

Is Mercedes liable if I run over someone on purpose in a car they made?

leereeves
2 replies
5h17m

That's a poor comparison to this case.

Would Mercedes be liable if the car (i.e. the algorithm) decided to run over someone?

int3
1 replies
4h16m

Does the algorithm "decide" to show something, or is it operating mechanically, driven by the inputs of the user?

leereeves
0 replies
1h54m

If it is operating mechanically, then it is following a process chosen by the developers who wrote the code. They work for the company, so the consequences are still the company's responsibility.

ang_cire
0 replies
2h10m

If the Mercedes infotainment screen had shown you a curated recommendation that you run them over, prior to you doing so, they very possibly would (and should) be liable.

cryptonector
0 replies
1h34m

Lack of regulations isn't the industry's fault though. I think GP's example is indeed relevant.

anigbrowl
0 replies
19h21m

You seem to be overlooking the fact that the late plaintiff was 10 years old. The case turns on whether it's reasonable to expect that TikTok would knowingly share, with children, content encouraging users to attempt life-threatening activities.

depingus
9 replies
23h18m

Isn't it usually the case that when someone builds a shitty thing and people get hurt, the builder is liable?

ineedaj0b
4 replies
22h55m

Yeah, but buying a hammer and hitting yourself with it is different.

The dangers of social media are unknown to most still.

ThunderSizzle
1 replies
20h59m

It'd be more akin to buying a hammer and then the hammer starts morphing into a screwdriver without you noticing.

Then when you accidentally hit your hand with the hammer, you actually stabbed yourself. And that's when you realized your hammer is now a screwdriver.

ineedaj0b
0 replies
19h28m

Yes, I thought that's what I said - no one knows the shape of the danger social media currently poses.

It's like trying to draw a tiger when you've never seen an animal. We only have the faintest clue what social media is right now. It will change in the next 25+ years as well.

Sure we know some dangers but… I think we need more time to know them all.

spacemadness
0 replies
21h17m

Yes, because a mechanical tool made of solid metal is the same thing as software that can change its behavior at any time and is controlled live by some company with its own motives.

depingus
0 replies
22h40m

Yes. Buying a hammer and hitting yourself with it IS different.

x0x0
3 replies
21h21m

You would think so, wouldn't you?

Except right now YouTube has a self-advertisement in the middle of the page warning people not to trust the content on YouTube. A company warning people not to trust the product they built and the videos they choose to show you... we need to rethink 230. We've gone seriously awry.

tines
2 replies
20h11m

It's more nuanced than that. If I sent a hateful letter through the mail and someone gets hurt by it (even physically), who is responsible, me or the post office?

I know youtube is different in important ways than the post, but it's also different in important ways from e.g. somebody who builds a building that falls down.

cmrdporcupine
0 replies
13h49m

The Post Office just delivers your mail, it doesn't do any curation.

YouTube, TikTok, etc. differ by applying an algorithm to "decide" what to show you. Those algorithms have all sorts of weights and measures, but they're ultimately personalized to you. And if they're making personalized recommendations that include "how to kill yourself"... I think we have a problem?

It's simply not just a FIFO of content in, content out, and in many cases (Facebook & Instagram especially) the user barely gets a choice in what is shown in the feed...

Contrast with e.g. Mastodon where there is no algorithm and it only shows you what you explicitly followed, and in the exact linear order it was posted.

(Which is actually how Facebook used to be)

ang_cire
0 replies
2h7m

If the post office opened your letter, read it, and then decided to copy it and send it to a bunch of kids, you would be responsible for your part in creating it, and they would be responsible for their part in disseminating it.

troyvit
2 replies
21h35m

Right?

Like if I'm a cement company, and I build a sidewalk that's really good and stable, stable enough for a person to plant a milk crate on it, and stand on that milk crate, and hold up a big sign that gives clear instructions on self-asphyxiation, and a child reads that sign, tries it out and dies, am I going to get sued? All I did was build the foundation for a platform.

averageRoyalty
1 replies
21h23m

That's not a fair analogy though. To be fairer, you'd have to monitor said footpath 24/7 and have a robot and/or a number of people removing milk crate signs that you deemed inappropriate for your footpath. They'd also move various milk crate signs in front of people as they walked and hide others.

If you were indeed monitoring the footpath for milk crate signs and moving them, then yes, you may be liable for showing one to someone it wouldn't be appropriate for, or for failing to remove it.

troyvit
0 replies
20h54m

That's a good point, and actually the heart of the issue, and what I missed.

In my analogy the stable sidewalk that can hold the milk crate is both the platform and the optimization algorithm. But to your point there's actually a lot more going on with the optimization than just building a place where any rando can market self-asphyxiation. It's about how they willfully targeted people with that content.

itishappy
1 replies
22h20m

It sounds like your algorithm targets children with unmoderated content. That feels like a dangerous position with potential for strong arguments in either direction. I think the only reasonable advice here is to keep close tabs on this case.

Sohcahtoa82
0 replies
6m

Does it specifically target children or does it simply target people and children happen to be some of the people using it?

If a child searches Google for "boobs", it's not fair to accuse Google of showing naked women to children, and definitely not fair to even say Google was targeting children.

drpossum
1 replies
22h19m

You sure are if you knew about it (as TikTok did).

yadaeno
0 replies
2h36m

Even if they didn't know, it should be their responsibility to know regardless.

thinkingtoilet
0 replies
20h36m

Of course you should be. Just because an algorithm gave you an output doesn't absolve you of responsibility for using it. It's not some magical, mystical thing. It's something you created, and you are 100% responsible for what you do with its output.

thih9
0 replies
20h32m

Yes, if a product actively contributes to child fatalities then the manufacturer should be liable.

Then again, I guess your platform is about article recommendation and not about recording yourself doing popular trends. And perhaps children are not your target audience, or an audience at all. In many ways the situation was different for TikTok.

plandis
0 replies
18h59m

Part of the claim is that TikTok knew about this content being promoted and other cases where children had died as a result.

But by the time Nylah viewed these videos, TikTok knew that: 1) "the deadly Blackout Challenge was spreading through its app," 2) "its algorithm was specifically feeding the Blackout Challenge to children," and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31-32. Yet TikTok "took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages]." App. 32-33. Instead, TikTok continued to recommend these videos to children like Nylah.

Do you think this should be legal? Would you do nothing if you knew children were dying directly because of the content you were feeding them?

awongh
0 replies
21h4m

I think it depends on some technical specifics, like which meta data was associated with that content, and the degree to which that content was surfaced to users that fit the demographic profile of a ten year old child.

If your algorithm decides that things in the 90th percentile of shock value will boost engagement for a user profile that can also include users who are ten years old, then you may have built a negligent algorithm. Maybe that's not the case in this particular instance, but it could be possible.

ang_cire
0 replies
2h12m

"I have a catapult that launches loosely demo-targeted things, without me checking what is being loaded into it. I only intend for harmless things to be loaded. Should I be liable if someone loads a boulder and it hurts someone?"

Devasta
24 replies
1d2h

This could result in the total destruction of social media sites. Facebook, TikTok, Youtube, Twitter, hell even Linkedin cannot possibly survive if they have to take responsibility for what users post.

Excellent news, frankly.

thephyber
15 replies
1d2h

I don’t understand how people can be so confident that this will only lead to good things.

First, this seems like courts directly overruling the explicit wishes of Congress. As much as Congress critters complain about CDA Sec. 230, they can't agree on any improvements. Judges throwing a wrench at it won't improve it; they will only cause more uncertainty.

Not liking what social media has done to people doesn't seem like a good reason to potentially destroy the entire corpus of videos created on YouTube.

bryanlarsen
9 replies
1d1h

No, 230 is not overturned.

The original video is still the original poster's comment, and thus still 230 protected. If the kid searched specifically for the video and found it, TikTok would have been safe.

However, TikTok's decision to show the video to the child is TikTok's speech, and TikTok is liable for that decision.

https://news.ycombinator.com/item?id=41392710

falcolas
8 replies
1d1h

If the child hears the term "blackout" and searches for it on TikTok and reaches the same video, is that TikTok's speech - fault - as well? TikTok used an algorithm to sort search results, after all.

preciousoo
6 replies
1d1h

I think the third sentence of the comment you’re replying to answers that

falcolas
5 replies
1d

So you believe that presenting the results (especially if you filter on something like 'relevance') of a search now makes the website liable?

That's going to be hell for Google. Well, maybe not, they have many and decent lawyers.

preciousoo
4 replies
1d

I’m not sure you read the sentence in question correctly

falcolas
3 replies
1d

However, TikTok's decision to show the video to the child is TikTok's speech, and TikTok is liable for that decision.

How is my interpretation incorrect, please? TikTok (or any other website like Google) can show a video to a child in any number of ways - all of which could be considered to be their speech.

supernewton
2 replies
1d

The third sentence is "If the kid searched specifically for the video and found it, TikTok would have been safe."

falcolas
1 replies
23h27m

Aah, I counted paragraphs - repeatedly - for some reason. That's my bad.

That said, this is a statement completely unsubstantiated in the original post or in the post that it links to, or the decision in TFA. It's the poster's opinion stated as if it were a fact or a part of the Judge's ruling.

bryanlarsen
0 replies
23h20m

You're right, I did jump to that conclusion. It turns out it was the correct conclusion, although I definitely shouldn't have said it.

https://news.ycombinator.com/item?id=41394465

ndiddy
0 replies
23h2m

From page 11 of the decision:

"We reach this conclusion specifically because TikTok’s promotion of a Blackout Challenge video on Nylah’s FYP was not contingent upon any specific user input. Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."

gizmo686
1 replies
1d1h

Congress did not anticipate the type of algorithmic curation that the modern internet is built on. At the time, if you were to hire someone to create a daily list of suggested reading, that list would not be subject to 230 protections. However, with the rise of algorithmic media, that is precisely what modern social media companies have been doing.

rtkwe
0 replies
2h43m

Congress has had ample opportunity to pass changes since the rise of algorithmic feeds too though and they haven't done so.

Devasta
1 replies
1d1h

Well, if we consider the various social media sites:

Meta - Helped facilitate multiple ethnic cleansings.

Twitter - Now a site run by white supremacists for white supremacists.

Youtube - Provides platforms to Matt Walsh, Ben Shapiro and a whole constellation of conspiracy theorist nonsense.

Reddit - Initially grew its userbase through hosting of softcore CP, one of the biggest pro-ana sites on the web, and a myriad of smaller but no less vile subreddits. Even if they try to put on a respectable mask now, it's still a cesspit.

Linkedin - Somehow has the least well adjusted userbase of them all, its destruction would do its users a kindness.

My opinion of social media goes far and beyond what anyone could consider "not liking".

In any case, it would mean that those videos would have to be self-hosted and published; we'd see an en masse return of websites like College Humor and Cracked and the like, albeit without the comments switched on.

falcolas
0 replies
1d1h

YouTube and Facebook were also the original hosts of the Blackout trend videos and pictures, as I recall.

karaterobot
0 replies
22h8m

The person you're responding to didn't say they were confident about anything, they said (cynically, it seems to me) that it could lead to the end of many social media sites, and that'd be a good thing in their opinion.

This is a pedantic thing to point out, but I do it because the comment has been downvoted, and the top response to it seems to misunderstand it, so it's possible others did too.

Mistletoe
2 replies
1d2h

The return of the self hosted blog type internet where we go to more than 7 websites? One can dream. Where someone needs an IQ over 70 to post every thought in their head to the universe? Yes that’s a world I’d love to return to.

krapp
0 replies
1d1h

Where someone needs an IQ over 70 to post every thought in their head to the universe? Yes that’s a world I’d love to return to.

I remember the internet pre social media but I don't exactly remember it being filled with the sparkling wit of genius.

The internet is supposed to belong to everyone, it wasn't meant to be a playground only for a few nerds. It's really sad that hacker culture has gotten this angry and elitist. It means no one will ever create anything with as much egalitarian potential as the internet again.

falcolas
0 replies
1d1h

Nah, ISPs (and webhosts) are protected by Section 230 as well, and they're likely to drift into the lawyer's sights as well - intentionally or unintentionally.

krapp
1 replies
1d1h

Negative externalities aside, social media has been the most revolutionary and transformative paradigm shift in mass communication and culture since possibly the invention of the telegraph. Yes something that provides real value to many people would be lost if all of that were torn asunder.

skydhash
0 replies
22h24m

You're missing radio and TV. Social media mostly gives everyone a megaphone, with the platform in control of the volume.

mikewarot
0 replies
1d1h

What is likely to happen is that Government will lean on "friendly" platforms that cooperate in order to do political things that should be illegal, in exchange for looking the other way on things the government should stop. This is the conclusion I came to after watching Bryan Lunduke's reporting on the recent telegram arrest.[1]

[1] https://www.youtube.com/watch?v=7wm-Vv1kRk8

bentley
0 replies
1d1h

But it’s more likely to go the other way around: the big sites with their expensive legal teams will learn how to thread the needle to remain compliant with the law, probably by oppressively moderating and restricting user content even more than they already do, while hosting independent sites and forums with any sort of user‐submitted content will become completely untenable due to the hammer of liability.

WCSTombs
0 replies
1d1h

There's nothing in the article about making the social media sites liable for what their users post. However, they're made liable for how they recommend content to their users, at least in certain cases.

mjevans
17 replies
1d1h

"""The Court held that a platform's algorithm that reflects "editorial judgments" about "compiling the third-party speech it wants in the way it wants" is the platform's own "expressive product" and is therefore protected by the First Amendment.

Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too."""

I've agreed for years. It's a choice in selection rather than a 'natural consequence' such as a chronological, threaded, or even 'end-user upvoted/moderated' (outside the site's control) weighted sort.

bentley
16 replies
1d1h

If I as a forum administrator delete posts by obvious spambots, am I making an editorial judgment that makes me legally liable for every single post I don’t delete?

If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off‐topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?

What are the limits here, for those of us who unlike silicon valley corporations, don’t have massive legal teams?

Phrodo_00
4 replies
1d

I'm guessing you're not a lawyer, and I'm not either, so there might be some details that are not obvious about it, but the regulation draws the line at allowing you to do[1]:

any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

I think that allows your use case without liability.

[1] https://www.law.cornell.edu/uscode/text/47/230

kelnos
2 replies
1d

Wow, "or otherwise objectionable" would seemingly give providers a loophole wide enough to drive a truck through.

throwup238
1 replies
1d

It's not a loophole. That's the intended meaning, otherwise it would be a violation of freedom of association.

That doesn't mean anyone is free to promote content without liability, just that moderating by deleting content doesn't make it an "expressive product."

habinero
0 replies
15h29m

Both are protected, because both are 1A activity.

zerocrates
0 replies
22h53m

That subsection of 230 is about protecting you from being sued for moderating, like being sued by the people who posted the content you took down.

The "my moderation makes me liable for everything I don't moderate" problem, that's what's addressed by the preceding section, the core of the law and the part that's most often at issue, which says that you can't be treated as publisher/speaker of anyone else's content.

lesuorac
2 replies
1d

If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off‐topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?

No.

From the court of appeals [1], "We reach this conclusion specifically because TikTok’s promotion of a Blackout Challenge video on Nylah’s FYP was not contingent upon any specific user input. Had Nylah viewed a Blackout Challenge video through TikTok’s search function, rather than through her FYP, then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."

So, given (an assumption) that users on your forum choose some kind of "4x4 Topic" they're intending to navigate a repository of third-party content. If you curate that repository it's still a collection of third-party content and not your own speech.

Now, if you were to have a landing page that showed "featured content" then that seems like you could get into trouble. Although one wonders what the difference is between navigating to a "4x4 Topic" or "Featured Content" since it's both a user-action.

[1]: https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/...

shagie
0 replies
23h49m

Now, if you were to have a landing page that showed "featured content" then that seems like you could get into trouble. Although one wonders what the difference is between navigating to a "4x4 Topic" or "Featured Content" since it's both a user-action.

Consider HackerNews's functionality of flamewar suppression. https://news.ycombinator.com/item?id=39231821

And this is the difference between https://news.ycombinator.com/news and https://news.ycombinator.com/newest (with showdead enabled).

ApolloFortyNine
0 replies
23h45m

then TikTok may be viewed more like a repository of third-party content than an affirmative promoter of such content."

"may"

Basically until the next court case when someone learns that search is an algorithm too, and asks why the first result wasn't a warning.

The real truth is, if this is allowed to stand, it will be selectively enforced at best. If it's low enough volume it'll just become a cost of doing business: sometimes a judge has it out for you and you have to pay a fine, and you just work it into the budget. Fine for big companies, a game-ender for small ones.

jay_kyburz
1 replies
21h42m

Let me ask you a question in return.

If you discovered a thread on the forum where a bunch of users were excitedly talking about doing something incredibly dangerous in their 4x4s, like getting high and trying some dangerous maneuver, would you let it sit on your forum?

How would you feel if somebody read about it on your forum and died trying to do it?

Update: The point I'm trying to make is that _I_ wouldn't let this sit on my forum, so I don't think it's unethical to ask others to remove it from their forums as well.

romanows
0 replies
16h52m

Not the OP, but if I thought we were all joking around, and it was the type of forum that allowed people to be a bit silly, I would let it stand. Or if I thought people on the forum would point out the danger and hopefully dissuade the poster and/or others from engaging in that behavior, I would let it stand.

However, if my hypothetical forum received a persistent flood of posts designed to soften people up to dangerous behaviors, I'd be pretty liberal removing posts that smelled funny until the responsible clique moved elsewhere.

doe_eyes
1 replies
1d1h

I think you're looking for the kind of precision that just doesn't exist in the legal system. It will almost certainly hinge on intent and the extent to which your actions actually stifle legitimate speech.

I imagine that getting rid of spam wouldn't meet the bar, and neither would enforcing that conversations are on-topic. But if you're removing and demoting posts because they express views you disagree with, you're implicitly endorsing the opinions expressed in the posts you allow to stay up, and therefore are exercising editorial control.

I think the lesson here is: either keep your communities small so that you can comfortably reason about the content that's up there, or don't play the thought police. The only weird aspect of this is that you have courts saying one thing, but then the government breathing down your neck and demanding that you go after misinformation.

Sakos
0 replies
1d

A lot of people seem to be missing the part where, if it ends up in court, you have to argue that what you removed was objectionable on the same level as the other named types of content, and there will be a judge you'll need to convince that you didn't re-interpret the law to your benefit. This isn't like arguing on HN or social media; you being "clever" doesn't necessarily protect you from liability or consequences.

supriyo-biswas
0 replies
1d

What the other replies are not quite getting is that there can be other kinds of moderator actions that aren't acting on posts that are offtopic or offensive, but that do not meet the bar for the forum in question — are they considered out of scope with this ruling?

As an example, suppose on a HN thread about the Coq theorem prover, someone starts a discussion about the name, and it's highly upvoted but the moderators downrank that post manually to stimulate more productive discussions. Is this considered curation, and can this be no longer done given this ruling?

It seems to me that this is indeed the case, but in case I'm mistaken I'd love to know.

mathgradthrow
0 replies
1d1h

You are simply not shielded from liability, but I cannot imagine a scenario in which this moderation policy would result in significant liability. I'm sure somebody would be willing to sell you some insurance to that effect. I certainly would.

_DeadFred_
0 replies
1d1h

Wouldn't it be more that you are responsible for pinned posts at the top of thread lists? If you pin a thread promoting an unsafe on-road product, say telling people they should be replacing their steering with heim joints that aren't street legal, you could be liable. Whereas if you just left the thread among all the others, you aren't. (Especially if the heim joints are sold by a forum sponsor or the forum has a special 'discount' code for the vendor.)

WCSTombs
0 replies
1d1h

If my forum has a narrow scope (say, 4×4 offroading), and I delete a post that’s obviously by a human but is seriously off‐topic (say, U.S. politics), does that make me legally liable for every single post I don’t delete?

According to the article, probably not:

A platform is not liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

"Otherwise objectionable" looks like a catch-all phrase to allow content moderation generally, but I could be misreading it here.

seydor
16 replies
1d1h

The ruling itself says that this is not about 230; it's about TikTok's curation and collation of the specific videos. TikTok is not held liable for the user content, but for the part they themselves do: the 'For You' section. I guess it makes sense; manipulating people is not OK, whether it's for political purposes as Facebook and Twitter do, or whatever. So 230 is not over.

It would be nice to see those 'For You' pages and YouTube's recommendations gone. Chronological timelines are the best, and will bring back some sanity. Don't like it? Don't follow it.

Accordingly, TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own “expressive activity,” id., and thus its first-party speech.

Section 230 immunizes only information “provided by another[,]” 47 U.S.C. § 230(c)(1), and here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.
falcolas
7 replies
1d1h

Don't like it? don't follow it

How did you find it in the first place? A search? Without any kind of filtering (that's an algorithm that could be used to manipulate people), all you'll see is pages and pages of SEO.

Opening up liability like this is a quagmire that's not going to do good things for the internet.

seydor
2 replies
1d1h

Retweets/sharing. That's how it used to be.

falcolas
1 replies
1d

How did they find it? How did you see the tweet? How did the shared link show up in any of your pages?

Also, lists of content (or blind links to random pages - web rings) have been a thing since well before Twitter or Digg.

skydhash
0 replies
22h36m

How do rumors and news propagate? And we still consider the person sharing them with us to be partially responsible (especially if they're fake).

pixl97
2 replies
1d1h

not going to do good things for the internet.

Not sure if you've noticed, but the internet seemingly ran out of good things quite some time back.

rtkwe
0 replies
1d

The question though is how do you do a useful search without having some kind of algorithmic answer to what you think the user will like. Explicit user choices or exact-match strings are simple, but if I search "cats" looking for cat videos, how does that list get presented without being a curated list made by the company?

falcolas
0 replies
1d

Irrelevant and untrue.

For example, just today there was a highly entertaining and interesting article about how to replace a tablet-based thermostat. And it was posted on the internet, and surfaced via an algorithm on Hacker News.

jimbob45
0 replies
1d

Without any kind of filtering (that's an algorithm that could be used to manipulate people)

Do you genuinely believe a judge is going to rule that a Boyer-Moore implementation is fundamentally biased? It seems likely that sticking with standard string matching will remain safe.

aiauthoritydev
3 replies
1d1h

Chronological timelines are the best , and will bring back some sanity. Don't like it? don't follow it

You realize that there is immense arrogance in this statement, where you have decided that something is good for me? I am totally fine with YouTube's recommendations or even TikTok's algorithms that, according to you, "manipulate" me.

seydor
2 replies
1d

You can have them, but they have legal consequences for the owner.

cvalka
1 replies
23h11m

How can they have them if they are prohibited?

skydhash
0 replies
22h35m

They're not prohibited. They're just liable for it, just like manufacturers are liable for defective products that endanger people.

WalterBright
2 replies
1d1h

manipulating people is not OK whether it's for political purposes as facebook and twitter do

Not to mention CNN, MSNBC, the New York Times, NPR, etc.

seydor
1 replies
1d

Those are subject to legal liability for the content they produce.

Nasrudith
0 replies
17h8m

But not for manipulation. That isn’t a crime.

dvngnt_
0 replies
21h59m

How does that work for something like TikTok? Chronological doesn't have much value if you're trying to discover interesting content relevant to your interests.

WCSTombs
13 replies
1d1h

From the article:

Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.

If that reading is correct, then Section 230 isn't nullified, but there's something that isn't shielded from liability any more, which IIUC is basically the "Recommended For You"-type content feed curation algorithms. But I haven't read the ruling itself, so it could potentially be more expansive than that.

But assuming Matt Stoller's analysis there is accurate: frankly, I avoid those recommendation systems like the plague anyway, so if the platforms have to roll them back or at least be a little more thoughtful about how they're implemented, it's not necessarily a bad thing. There's no new liability for what users post (which is good overall IMO), but there can be liability for the platform implementation itself in some cases. But I think we'll have to see how this plays out.

falcolas
11 replies
1d1h

What is "recommended for you" if not a search result with no terms? From a practical point of view, unless you go the route of OnlyFans and disallow discovery on your own website, how do you allow any discovery if any form of algorithmic recommendation is outlawed?

lcnPylGDnU4H9OF
10 replies
1d1h

If it were the results of a search with no terms then it wouldn't be "for" a given subject. The "you" in "recommended for you" is the search term.

falcolas
9 replies
1d1h

That's just branding. It's called Home in Facebook and Instagram, and it's the exact same thing. It's a form of discovery that's tailored to the user, just like normal searches are (even on Google and Bing etc).

lcnPylGDnU4H9OF
8 replies
1d1h

Indeed, regardless of the branding for the feature, the service is making a decision about what to show a given user based on what the service knows about them. That is not a search result with no terms; the user is the term.

falcolas
7 replies
1d1h

Now for a followup question: How does any website surface any content when they're liable for the content?

When you can be held liable for surfacing the wrong (for unclear definitions of wrong) content to the wrong person, even Google could be held liable. Imagine if this child found a blackout video on the fifth page of their search results on "blackout". After all, YouTube hosted such videos as well.

kaibee
3 replies
1d

Now for a followup question: How does any website surface any content when they're liable for the content?

Chronological order, location based, posts-by-followed-accounts, etc. "Most liked", etc.

Essentially by only using 'simple' algorithms.
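A minimal sketch of what such 'simple', non-personalized orderings might look like (the `Post` fields and function names here are hypothetical, chosen only to show that nothing in the ranking depends on which user is asking):

  # Hypothetical sketch of "simple", non-personalized feed orderings.
  # Every visitor gets the same result; nothing depends on who is asking.
  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class Post:
      post_id: int
      created_at: datetime
      likes: int

  def chronological(posts: list[Post]) -> list[Post]:
      # Newest first, identical for every user.
      return sorted(posts, key=lambda p: p.created_at, reverse=True)

  def most_liked(posts: list[Post]) -> list[Post]:
      # Global popularity, still identical for every user.
      return sorted(posts, key=lambda p: p.likes, reverse=True)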

TylerE
2 replies
23h38m

Is not the set of such things offered still editorial judgement?

(And as an addendum, even if you think the answer to that is no, do you trust a judge who can probably barely work an iphone to come to the same conclusion, with your company in the crosshairs?)

skydhash
0 replies
22h51m

Not really, as the variables come from the content itself, not from the company's intention.

And for the addendum, that's why we have hearings and experts. No judge can be expected to be knowledgable about everything in life.

buildbot
0 replies
23h19m

I'd say no, because those are averages over the entire group. If you ranked based on, say, most liked in your friends circle, or most liked by people with a high cosine similarity to your profile, then it starts to slide back into editorial judgment.
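To make that line concrete, here is a rough sketch (hypothetical field names like `likes`, `embedding`, and `profile`; not how any real platform implements it) of a global ranking versus one keyed to an individual user's profile vector:

  import math

  def cosine_similarity(a: list[float], b: list[float]) -> float:
      dot = sum(x * y for x, y in zip(a, b))
      norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
      return dot / norm if norm else 0.0

  # Global ranking: the same ordering for every user.
  def rank_global(posts: list[dict]) -> list[dict]:
      return sorted(posts, key=lambda p: p["likes"], reverse=True)

  # Personalized ranking: the ordering now depends on the individual
  # user's profile vector, which is where it arguably becomes an
  # editorial choice about what to show that particular person.
  def rank_personalized(posts: list[dict], profile: list[float]) -> list[dict]:
      return sorted(posts,
                    key=lambda p: cosine_similarity(p["embedding"], profile),
                    reverse=True)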

lcnPylGDnU4H9OF
2 replies
1d

TikTok is not being held liable for hosting and serving the content. They're being held liable for recommending the content to a user with no other search context provided by said user. In this case, it is because the visitor of the site was a young girl that they chose to surface this video and there was no other context. The girl did not search "blackout".

falcolas
1 replies
1d

because the visitor of the site was a young girl that they chose to surface this video

That's one hell of a specific accusation - that they looked at her age alone and determined solely based on that to show her that specific video?

First off, at 10, she should have had an age-gated account that shows curated content specifically for children. There's nothing to indicate that her parents set up such an account for her.

Also, it's well understood that Tiktok takes a user's previously watched videos into account when recommending videos. It can identify traits about the people based off that (and by personal experience, I can assert that it will lock down your account if it thinks you're a child), but they have no hard data on someone's age. Something about her video history triggered displaying this video (alongside thousands of other videos).

Finally, no, the girl did not do a search (that we're aware of). But would the judge's opinion have changed? I don't believe so, based off of their logic. TikTok used an algorithm to recommend a video. TikTok uses that same algorithm with a filter to show search results.

In any case, a tragedy happened. But putting the blame on TikTok seems more like an attack on TikTok and not an attempt to rein in the industry at large.

Plus, at some point, we have to ask the question: where were the parents in all of this?

Anyways.

lcnPylGDnU4H9OF
0 replies
1d

That's one hell of a specific accusation - that they looked at her age alone and determined solely based on that to show her that specific video?

I suppose I did not phrase that very carefully. What I meant is that they chose to surface the video because a specific young girl visited the site -- one who had a specific history of watched videos.

In any case, a tragedy happened. But putting the blame on TikTok seems more like an attack on TikTok and not an attempt to reign in the industry at large.

It's always going to start with one case. This could be protectionism but it very well could instead be the start of reining in the industry.

itsdrewmiller
0 replies
22h49m

This is only a circuit court ruling - there is a good chance it will be overturned by the supreme court. The cited supreme court case (Moody v. NetChoice) does not require personalization:

presenting a curated and “edited compilation of [third party] speech” is itself protected speech.

This circuit court case mentions the personalization but doesn't limit its judgment based on its presence - almost any type of curation other than the kind of moderation explicitly exempted by the CDA could create liability, though in practice I don't think "sorting by upvotes with some decay" would end up qualifying.

tboyd47
8 replies
1d2h

Fantastic write-up. The author appears to be making more than a few assumptions about how this will play out, but I share his enthusiasm for the end of the "lawless no-man’s-land" (as he put it) era of the internet. It comes at a great time too, as we're all eagerly awaiting the AI-generated content apocalypse. Just switch one apocalypse for a kinder, more human-friendly one.

So what happens going forward? Well we’re going to have to start thinking about what a world without this expansive reading of Section 230 looks like.

There was an internet before the CDA. From what I remember, it was actually pretty rad. There can be an internet after, too. Who knows what it would look like. Maybe it will be a lot less crowded, less toxic, less triggering, and less addictive without these gigantic megacorps spending buku dollars to light up our amygdalas with nonsense all day.

tboyd47
6 replies
1d

I read the decision. -> https://cases.justia.com/federal/appellate-courts/ca3/22-306...

Judge Matey's basic point of contention is that Section 230 does not provide immunity for any of TikTok's actions except "hosting" the blackout challenge video on its server.

Defining it in this way may lead to a tricky technical problem for the courts to solve... While working in web, I understand "hosting" to mean the act of storing files on a computer somewhere. That's it. Is that how the courts will understand it? Or does their definition of hosting include acts that I would call serving, caching, indexing, linking, formatting, and rendering? If publishers are liable for even some of those acts, then this takes us to a very different place from where we were in 1995. Interesting times ahead for the industry.

itsdrewmiller
5 replies
22h36m

You're reading it too literally here - the CDA applies to:

(2) Interactive computer service The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
tboyd47
4 replies
22h16m

What definition of "hosting" do you think the courts would apply instead of the technical one?

jen20
1 replies
22h6m

I’d imagine one that reasonable people would understand to be the meaning. If a “web hosting” company told me they only stored things on a server with no way to serve it to users, I’d laugh them out the room.

tboyd47
0 replies
21h31m

Good point

itsdrewmiller
1 replies
22h0m

"hosting" isn't actually used in the text of the relevant law - it's only shorthand in the decision. If they want to know what the CDA exempts they would read the CDA along with caselaw specifically interpreting it.

tboyd47
0 replies
21h31m

True

HDThoreaun
0 replies
15h30m

but I share his enthusiasm for the end of the "lawless no-man’s-land"

That's crazy, I feel like being a lawless no-man's land is the best part of the internet.

Xcelerate
7 replies
1d1h

I'm not at all opposed to implementing new laws that society believes will reduce harm to online users (particularly children).

However, if Section 230 is on its way out, won't this just benefit the largest tech companies that already have massive legal resources and the ability to afford ML-based or manual content moderation? The barriers to entry into the market for startups will become insurmountable. Perhaps I'm missing something here, but it sounds like the existing companies essentially got a free pass with regard to liability of user-provided content and had plenty of time to grow, and now the government is pulling the ladder up after them.

tboyd47
2 replies
1d

The assertion made by the author is that the way these companies grew is only sustainable in the current legal environment. So the advantage they have right now by being bigger is nullified.

xboxnolifes
1 replies
23h18m

Yes, the way they grew is only sustainable in the current legal environment. What about not growing, but maintaining?

lelandbatey
0 replies
21h6m

The parent said "grew", but I think a closer reading of the article indicates a more robust idea that tboyd47 merely misrepresented. A better sentence is potentially:

are able to profit to the tune of a 40% margin on advertising revenue

With that, they're saying that they're only going to be able to profit this much in this current regulatory environment. If that goes away, so too does much of their margin, potentially all of it. That's a big blow no matter the size, though Facebook may weather it better than smaller competitors.

2OEH8eoCRo0
1 replies
23h20m

won't this just benefit the largest tech companies

I'd wager the bigger you are the harder it gets. How would they fend off tens of thousands of simultaneous lawsuits?

Nasrudith
0 replies
17h10m

By turning them all into expensive tarpits of time and money, through the power of strategic spite. Making it so expensive that plaintiffs cannot really win even if they prevail in a lawsuit. It is a far harder standard to get legal costs covered, and if it costs tens of millions to possibly get a few million in a decade, interest dries up fast.

root_axis
0 replies
14h6m

Section 230 isn't on its way out; this happened because the court found that TikTok knowingly headlined dangerous content that led to someone's death.

fedeb95
0 replies
10h58m

Not necessarily. It could also open a new market for startups: content moderation.

blueflow
6 replies
23h47m

Might be a cultural difference (I'm not from the US), but leaving a 10 year old unsupervised with content from (potentially malicious) strangers really throws me off.

Wouldn't this be the perfect precedent for why minors should not be allowed on social media?

Yeul
2 replies
21h18m

Look, your kids are going to discover all kinds of nasty things online or offline, so either you prepare them for it or it's going to be like that scene in Stephen King's Carrie.

blueflow
1 replies
8h52m

At age 10?

rtkwe
0 replies
2h48m

Yes, unless you watch them literally every moment they're using the internet they're going to encounter something eventually.

hyeonwho4
1 replies
21h31m

I am also a little confused by this. I thought websites were not allowed to collect data from minors under 13 years of age, and that TikTok doesn't allow minors under 13 to create accounts. Why is TikTok not liable for personalizing content to minors? Apparently (from the court filings) TikTok even knew these videos were going viral among children... which should increase their liability under the Children's Online Privacy Protection Act.

ratorx
0 replies
19h44m

Assuming TikTok collects age, the minimum possible age is 13 (ToS), and a parent lets their child access the app despite that, I don't see how TikTok is liable.

Also, I’m not sure how TikTok would know that the videos are viral among the protected demographic if the protected demographic cannot even put in the information to classify them as such?

I don’t think requiring moderation is the answer in all cases. As an adult, I should be allowed to consume unmoderated content. Should people younger than 18 be allowed to? Maybe.

I agree that below age X, all content should be moderated. If you choose not to do this for your platform, then age-restrict the content. However, historically age-restriction on the internet is an unsolved problem. I think what would be useful is tighter legislation on how this is enforced etc.

This case is not a moderation question. It is a liability question, because a minor has been granted access to age-restricted content. I think the key question is whether TikTok should be liable for the child/their parents having bypassed the age restriction (too easily)? Maybe. I’m leaning towards the opinion that a large amount of this responsibility is on the parents. If this is onerous, then the law should legislate stricter guidelines on content targeting the protected demographic as well as the gates blocking them.

EasyMark
0 replies
20h18m

You are correct. US parents often use social media as a baby sitter and don’t pay attention to what they are watching. No 10 year old should be on social media or even the internet in an unsupervised manner; they are simply too impressionable and trusting. It’s just negligence, my kids never got SM accounts before 15, after I’d had time to introduce them to some common sense and much needed skepticism of people and information on the internet.

Animats
5 replies
23h59m

This turns on what TikTok "knew":

"But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].” App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah."

We need to see another document, "App 31-32", to see what TikTok "knew". Could someone find that, please? A Pacer account may be required. Did they ignore an abuse report?

See also Gonzales vs. Google (2023), where a similar issue reached the U.S. Supreme Court.[1] That was about whether recommending videos which encouraged the viewer to support the Islamic State's jihad led someone to go fight in it, where they were killed. The Court rejected the terrorism claim and declined to address the Section 230 claim.

[1] https://en.wikipedia.org/wiki/Gonzalez_v._Google_LLC

Scaevolus
2 replies
23h7m

IIRC, TikTok has (had?) a relatively high-touch content moderation pipeline, where any video receiving more than a few thousand views is checked by a human reviewer.

Their review process was developed to hit the much more stringent speech standards of the Chinese market, but it opens them up to even more liability here.

I unfortunately can't find the source articles for this any more, they're buried under "how to make your video go viral" flowcharts that elide the "when things get banned" decisions.

Izkata
1 replies
17h25m

Their review process was developed to hit the much more stringent speech standards of the Chinese market

TikTok isn't available in China. They have a separate app called Douyin.

janalsncm
0 replies
11h8m

They are saying that the reason TikTok also has high-touch moderation is that it grew out of Douyin.

itsdrewmiller
0 replies
22h40m

I don't think any of that actually matters for the CDA liability question, but it is definitely material to whether they are found liable, assuming they can be held liable at all.

falcolas
4 replies
1d1h

the internet grew tremendously, encompassing the kinds of activities that did not exist in 1996

I guess that's one way to say that you never experienced the early internet. In three words: rotten dot com. Makes all the N-chans look like teenagers smoking on the corner, and Facebook et al. look like toddlers in padded cribs.

This will frankly hurt any and all attempts to host any content online, and if anyone can survive it, it will be the biggest corporations alone. Section 230 also protected ISPs and hosting companies (Linode, Hetzner, etc.) after all.

Their targeting may not be intentional, but will that matter? Are they willing to be jailed in a foreign country because of their perceived inaction?

amanaplanacanal
2 replies
1d1h

Jail? This was a civil suit, no criminal penalties apply, just monetary.

falcolas
1 replies
1d1h

Thanks to "Contempt of Court" anybody can go to jail, even if they're not found liable for the presented case.

But more on point, we're discussing modification of how laws are interpreted. If someone can be held civilly liable, why can't they be held criminally liable if the "recommended" content breaks criminal laws (CSAM, for example)? There's nothing that prevents this interpretation from being considered in a criminal case.

hn_acker
0 replies
19h35m

Section 230 already doesn't apply to content that violates federal criminal law, so CSAM is already exempted. Certain third-party liability cases will still be protected by the First Amendment (no third-party liability without knowledge of CSAM, for example) but won't be dismissed early by Section 230.

stackskipton
0 replies
23h26m

This was purely about "Does using algorithms make you a publisher?"; this judge ruled yes, and therefore no Section 230 protection.

The Judge made no ruling on Section 230 protection for anyone who truly just hosts the content so ISPs/Hosting Companies should be fine.

mikewarot
3 replies
1d1h

There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.

So, we actually have to watch out for kids, and maybe only have a 25% profit margin? Oh, so terrible! /s

I'm 100% against the political use of censorship, but 100% for the reasonable use of government to promote the general welfare, secure the blessings of liberty for ourselves, and our posterity.

FireBeyond
2 replies
1d1h

Right? I missed the part where a business is "entitled" to that. There was a really good quote I've never been able to find again, along the lines of "just because a business has always done things a certain way, doesn't mean they are exempt from changes".

turol
1 replies
21h50m

"There has grown up in the minds of certain groups in this country the notion that because a man or corporation has made a profit out of the public for a number of years, the government and the courts are charged with the duty of guaranteeing such profit in the future, even in the face of changing circumstances and contrary to the public interest. This strange doctrine is not supported by statute or common law. Neither individuals nor corporations have any right to come into court and ask that the clock of history be stopped, or turned back."

Robert Heinlein in "Life-Line"

FireBeyond
0 replies
20h47m

Wow. Thank you. I saw this years ago, and despite my best efforts, I could never find it again! Thank you.

hn_acker
3 replies
19h20m

For anyone making claims about what the authors of Section 230 intended or the extent to which Section 230 applies to targeted recommendations by algorithms, the authors of Section 230 (Ron Wyden and Chris Cox) wrote an amicus brief [1] for Google v. Gonzalez (2023). Here is an excerpt from the corresponding press release [2] by Wyden:

“Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” the members wrote. “That interpretation enables Section 230 to fulfill Congress’s purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230’s protection remains as essential today as it was when the provision was enacted.”

[1][PDF] https://www.wyden.senate.gov/download/wyden-cox-amicus-brief...

[2] https://www.wyden.senate.gov/news/press-releases/sen-wyden-a...

nsagent
1 replies
17h58m

This statement from Wyden's press release seems to be in contrast to Chris Cox's reasoning in his journal article [1] (linked in the amicus).

  It is now firmly established in the case law that Section 230 cannot act as a shield whenever a website is in any way complicit in the creation or development of illegal content.

  ...

  In FTC v. Accusearch,[69] the Tenth Circuit Court of Appeals held that a website’s mere posting of content that it had no role whatsoever in creating — telephone records of private individuals — constituted “development” of that information, and so deprived it of Section 230 immunity. Even though the content was wholly created by others, the website knowingly transformed what had previously been private information into a publicly available commodity. Such complicity in illegality is what defines “development” of content, as distinguished from its creation.
He goes on to list multiple similar cases and how they fit the original intent of the law. Then further clarifies that it's not just about illegal content, but all legal obligations:

  In writing Section 230, Rep. Wyden and I, and ultimately the entire Congress, decided that these legal rules should continue to apply on the internet just as in the offline world. Every business, whether operating through its online facility or through a brick-and-mortar facility, would continue to be responsible for all of its own legal obligations.
Though, ultimately the original reasoning matters little in this case, as the courts are the ones to interpret the law. In fact Section 230 is one part of the larger Communications Decency Act that was mostly struck down by the Supreme Court.

EDIT: Added quote about additional legal obligations.

[1]: https://jolt.richmond.edu/2020/08/27/the-origins-and-origina...

hn_acker
0 replies
16h3m

The Accusearch case was a situation in which the very act of reselling a specific kind of private information would've been illegal under the FTC Act if you temporarily ignore Section 230. If you add Section 230 into consideration, then you have to consider knowledge, but the knowledge analysis is trivial. Accusearch should've known that reselling any 1 phone number was illegal, so it doesn't matter whether Accusearch knew the actual phone numbers it sold. Similarly, a social media site that only allows blackout challenge posts would be illegal regardless of whether the site employees know whether post #123 is actually a blackout challenge post. In contrast, most of the posts on TikTok are legal, and TikTok is designed for an indeterminate range of legal posts. Knowledge of specific posts matters.

Whether an intermediary has knowledge of specific content that is illegal to redistribute is very different from whether the intermediary has "knowledge" that the algorithm it designed to rank legally distributable content can "sometimes" produce a high ranking to "some" content that's illegal to distribute. The latter case can be split further into specific illegal content that the intermediary has knowledge of and illegal content that the intermediary lacks knowledge of. Unless a law such as KOSA passes (which it shouldn't [1]), the intermediary has no legal obligation to search for the illegal content that it isn't yet aware of. The intermediary need only respond to reports, and depending on the volume of reports the intermediary isn't obligated to respond within a "short" time period (except in "intellectual property cases", which are explicitly exempt from Section 230). "TikTok knows that TikTok has blackout challenge posts" is not knowledge of post PQR. "TikTok knows that post PQR on TikTok is a blackout challenge post" is knowledge of post PQR.

Was TikTok aware that specific users were being recommended specific "blackout challenge" posts? If so, then TikTok should've deleted those posts. Afterward, TikTok employees should've known that its algorithm was recommending some blackout challenge posts to some users. Suppose that TikTok employees are already aware of post PQR. Then TikTok has an obligation to delete PQR. If in a week blackout challenge post HIJ shows up in the recommendations for user @abc and @xyz, then TikTok shouldn't be liable for recommendations of HIJ until TikTok employees read a report about it and then confirm that HIJ is a blackout challenge post. Outwardly, @abc and @xyz will think that TikTok has done nothing or "not enough" even though TikTok removed PQR and isn't yet aware of HIJ until a second week passes. The algorithm doesn't create knowledge of HIJ no matter how high the algorithm ranks HIJ for user @abc.

The algorithm may be TikTok's first-party speech, but the content that is being recommended is still third-party speech. Suppose that @abc sues TikTok for failing to prevent HIJ from being recommended to @abc during the first elapsed week. The First Amendment would prevent TikTok from being held liable for HIJ (third party speech that TikTok lacked knowledge of during the first week). As a statute that provides an immunity (as opposed to a defense) in situations involving redistribution of third-party speech, Section 230 would allow TikTok to dismiss the case early; early dismissals save time and court fees.

Does the featured ruling by the Third Circuit mean that Section 230 wouldn't apply to TikTok's recommendation of HIJ to @abc in the first elapsed week? Because if so, then I really don't think that the Third Circuit is reading Section 230 correctly. At the very least, the Third Circuit's ruling will create a chilling effect on complex algorithms in violation of social media websites' First Amendment freedom of expression. And I don't believe that Ron Wyden and Chris Cox intended for websites to only sort user posts by chronological order (like multiple commenters on this post are hoping will happen as a result of the ruling) when they wrote Section 230.

[1] https://reason.com/2024/08/20/censoring-the-internet-wont-pr...

remich
0 replies
19h1m

I'm skeptical that Ron Wyden anticipated algorithmic social media feeds in 1996. But I'm pretty sure he gets a decent amount of lobbying cash from interested parties.

renewiltord
2 replies
1d1h

If I spam filter comments am I subject to this? That is, are the remaining comments effectively treated as if I were saying them?

amanaplanacanal
1 replies
1d1h

No. Section 230 protects you if you remove objectionable content. This is about deciding which content to show to each individual user. If all your users get the same content, you should be fine.

renewiltord
0 replies
1d

I see. Thanks!

If they can customize the feed, does that make it their speech or my speech? Like if I give them a "subscribe to x communities" thing with "hide already visited". It'll be a different feed, and algorithmic (I suppose), but user controlled.

I imagine that if you explicitly ask the user "what topics" and then use a program to determine which topic a post falls under, then it's a problem.

I've got a WIP Mastodon client that uses llama3 to follow topics. I suppose that's not releasable.
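For illustration, here's a minimal sketch of the kind of purely user-driven feed I mean (hypothetical names, not taken from my actual client): the user picks communities, the client hides what they've already seen, and the only ordering is reverse-chronological.

    # Sketch of a feed driven only by explicit user choices; no engagement
    # signals, no per-user model. "Post", "subscriptions", and "visited"
    # are hypothetical names for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        id: str
        community: str
        created_at: float  # unix timestamp

    def user_controlled_feed(posts, subscriptions, visited):
        chosen = [p for p in posts
                  if p.community in subscriptions and p.id not in visited]
        # Reverse-chronological: the only "ranking" comes from the user's own choices.
        return sorted(chosen, key=lambda p: p.created_at, reverse=True)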

ratorx
2 replies
20h5m

I think a bigger issue in this case is the age. A 10-year-old should not have access to TikTok unsupervised, especially when the ToS states a 13-year age threshold, regardless of the law’s opinion on moderation.

I think especially content for children should be much more severely restricted, as it is with other media.

It’s pretty well-known that age is easy to fake on the internet. I think that’s something that needs tightening as well. I’m not sure what the best way to approach it is though. There’s a parental education aspect, but I don’t see how general content on the internet can be restricted without putting everything behind an ID-verified login screen or mandating parental filters, which seems quite unrealistic.

Terr_
1 replies
19h35m

  I’m not sure what the best way to approach it is though.

Pretty much every option is full of pain, but I think the least-terrible approach would be for sites to describe content with metadata (e.g. HTTP headers) and push all responsibility for blocking/filtering onto the client device (a rough sketch follows the list below).

This has several benefits:

1. Cost. The people paying the most expense for the development and maintenance of blocking infrastructure will be the same parents who want to actually use it, instead of creating an enormous implicit tax on the entire digital world.

2. Privacy. The websites of the world don't need to know anything at all about the user. No birthdays, no geographical information to figure out what legal jurisdiction they live in, and no giant national lookup database that can track every website any resident registers to. Just isolated local devices that could be as simple as a Boolean for whether the child lock is currently enabled. (In practice I'm sure there will be local user accounts.)

3. Leveraging physical security. Parents do not need to be programmers to understand and enforce "little Timmy shouldn't be using anything except the tablet we specially set up for him that's covered with stickers of his favorite cartoon." Sure, Timmy might gain access to an unlocked device, but that's a challenge parents and communities are equipped to understand and handle.

4. Rule complexity. The individual devices can be programmed with whatever the local legal rules are for ages of majority, or it can simply be parents' responsibility to change things on a notable birthday. Parents who think ankles on women should never be shown at any age would be responsible for installing plugins that add extra restrictions, instead of forcing that logic on the rest of the world.
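To make that concrete, here's a rough sketch of the client side, assuming a hypothetical Content-Rating response header and a purely local child-lock setting (the header name and rating values are made up, not an existing standard):

    # Minimal sketch: the site only *describes* its content via a (hypothetical)
    # header; the device decides whether to show it. The server learns nothing
    # about the user.
    import urllib.request

    LOCAL_USER_AGE = 10  # configured by the parent on this device only

    RATING_TO_MIN_AGE = {"all-ages": 0, "teen": 13, "adult": 18}

    def fetch_if_allowed(url: str):
        with urllib.request.urlopen(url) as resp:
            rating = resp.headers.get("Content-Rating", "adult")  # fail closed if unlabeled
            if RATING_TO_MIN_AGE.get(rating, 18) > LOCAL_USER_AGE:
                return None  # blocked locally by the client, not by the server
            return resp.read()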

ratorx
0 replies
18h49m

I think this is the most privacy-friendly and reasonable approach. However, as a devil’s advocate, this is still pretty fingerprintable.

“Most users load n pages with ankles, the likelihood of a user only loading a single page with ankles is someone under the age of X from country Y with Z% likelihood”
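A toy Bayes calculation with made-up numbers shows how even a coarse load-rate signal leaks information:

    # Made-up numbers: if child-locked clients rarely load "restricted" pages,
    # the load rate itself leaks information about the user via Bayes' rule.
    p_locked = 0.05                      # prior: share of visitors with a child lock
    p_low_rate_given_locked = 0.90
    p_low_rate_given_unlocked = 0.10

    p_low_rate = (p_low_rate_given_locked * p_locked
                  + p_low_rate_given_unlocked * (1 - p_locked))
    p_locked_given_low_rate = p_low_rate_given_locked * p_locked / p_low_rate
    print(f"P(child lock | low load rate) = {p_locked_given_low_rate:.2f}")  # ~0.32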

kevwil
2 replies
21h33m

Whatever this means, I hope it means less censorship. That's all my feeble brain can focus on here: free speech good, censorship bad. :)

EasyMark
1 replies
20h21m

This judge supports censorship and not free speech; it's a tendency of the current generation of judges populating the courts. They prefer government control over personal responsibility in most cases, especially the more conservative they get.

CatWChainsaw
0 replies
4h14m

The judge who made the ruling was a conservative Trump appointee.

2OEH8eoCRo0
2 replies
23h5m

I love this.

Court: Social Media algos are protected speech

Social Media: Yes! Protect us

Court: Since your algorithm is speech, you must be liable for harmful speech as anyone else would be

Social Media: No!!

srackey
1 replies
20h31m

Ah yes “social media bad”. Lemme guess, “Orange man bad” too?

You’re cheering on expansion of government power and the end of the free internet as we know it.

2OEH8eoCRo0
0 replies
7h47m

I'm not cheering the end of the free internet, I strongly believe that the web will adapt and be better off.

  You’re cheering on expansion of government power

More like a shrinking of megacorp tech giant power.

tomcam
1 replies
1d

Have to assume dang is moderating his exhausted butt off, because the discussion on this page is vibrant and courteous. Thanks all!

itsdrewmiller
0 replies
22h47m

I agree, and for that reason I will be suing Hacker News in Pennsylvania, New Jersey, Delaware, or the Virgin Islands.

ssalka
1 replies
19h16m

  There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.

More specific than being harmed by your product, Section 230 cares about content you publish and whether you are acting as a publisher (liable for content) or a platform (not liable for content). This quote is supposing what would happen if Section 230 were overturned. But in fact, there is a way that companies would protect themselves: simply don't moderate content at all. Then you act purely as a platform, and don't have to ever worry about being treated as a publisher. Of course, this would turn the whole internet into 4chan, which nobody wants. IMO, this is one of the main reasons Section 230 continues to be used in this way.

ssalka
0 replies
18h37m

Also want to note that the inverse solution companies could take is to be overly draconian in moderating content, taking down anything that could come back on them negatively (in this case, the role of publisher is assumed, and thus content moderation needs to be sufficiently robust to cover the company's ass).

octopoc
1 replies
1d1h

  In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that’s a misconception of what they do. They don’t speak, they are middlemen. And hopefully, we will follow the logic of Matey’s opinion, and start to see the policy problem as what to do about that.

This is a pretty good take, and it relies on pre-Internet legal concepts like distributor and producer. There's this idea that our legal / governmental structures are not designed to handle the Internet age and therefore need to be revamped, but this is a counterexample that is both relevant and significant.

postalrat
0 replies
20h0m

They are more than middlemen when they are very carefully choosing what content each person sees or doesn't see.

jmyeet
1 replies
22h30m

What I want to sink in for people is that whenever people talk about an "algorithm", they're regurgitating propaganda specifically designed to absolve the purveyor of responsibility for anything that algorithm does.

An algorithm in this context is nothing more than a reflection of what all the humans who created it designed it to do. In this case, it's to deny Medicaid to make money. For RealPage, it's to drive up rents for profit. Health insurance companies are using "AI" to deny claims and prior authorizations, forcing claimants to go through more hoops to get their coverage. Why? Because the extra hoops will discourage a certain percentage.

All of these systems come down to a waterfall of steps you need to go through. Good design will remove steps to increase the pass rate. Intentional bad design will add steps and/or lower the pass rate.

Example: in the early days of e-commerce, you had to create an account before you could shop. Someone (probably Amazon) realized they lost customers this way. The result? You could create a shopping cart all you want and you didn't have to create an account until you checked out. At this point you're already invested. The overall conversion rate is higher. Even later, registration itself became optional.

Additionally, these big consulting companies are nothing more than leeches designed to drain the public purse.

2OEH8eoCRo0
0 replies
22h18m

I like it. What would be a better word than algorithm then? Design? Product?

TikTok's design presented harmful information to a minor resulting in her death.

TikTok's product presented harmful information to a minor resulting in her death.

game_the0ry
1 replies
20h15m

Pavel gets arrested, Brazil threatens Elon, now this.

I am not happy with how governments think they can dictate what internet users can and cannot see.

With respect to TikTok, parents need to have some discipline and not give smart phones to their ten-year-olds. You might as well give them a crack pipe.

CuriouslyC
0 replies
19h43m

shrug maybe our communication protocols should be distributed and not owned by billionaires. That would solve this problem neatly.

drpossum
1 replies
23h56m

I hope this makes certain streaming platforms liable for the things certain podcast hosts say while they shovel money at and promote them above other content.

itsdrewmiller
0 replies
13h43m

I am guessing this is about Spotify and Joe Rogan - they would have a pretty tough time pleading Section 230 for content they fully sponsor and exclusively publish, with or without the decision in question.

carapace
1 replies
23h38m

Moderation doesn't scale; it's NP-complete or worse. Massive social networks sans moderation cannot work and cannot be made to work. Social networks require that the moderation system be a super-set of the communication system, and that's not cost effective (except where the two are co-extensive, e.g. Wikipedia, Hacker News, the Fediverse). We tried it because of ignorance (in the first place) and greed (subsequently). This ruling is just recognizing reality.

LargeWu
0 replies
21h56m

This isn't a question of moderation. It's about recommendation.

Smithalicious
1 replies
19h49m

Hurting kids, hurting kids, hurting kids -- but, of course, there is zero chance any of this makes it to the top 30 causes of child mortality. Much to complain about with big tech, but children hanging themselves is just an outlier.

itsdrewmiller
0 replies
13h53m

This would be considered "accidental injury" which is the #1 cause of teenager mortality. The #3 cause is suicide which is influenced by social media as well - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6278213/

BurningFrog
1 replies
23h20m

Surely this will bubble up to the Supreme Court?

Once they've weighed in, we'll know if the "free ride" really is over, and if so what ride replaces it.

barryrandall
0 replies
22h57m

I think there are a few very interesting ways this could play out.

Option 1: ByteDance appeals, loses, and the ruling stands

Option 2: ByteDance appeals, wins, and the ruling is overturned

Option 3: ByteDance doesn’t appeal, the ruling stands, and nobody has standing to appeal the ruling without bringing a new case.

zmmmmm
0 replies
12h40m

What about "small tech"?

... because it's small tech that needs Section 230. If anything, retraction of 230 will be the real free ride for big tech, because it will kill any chance of threatening competition at the next level down.

trinsic2
0 replies
19h34m

When I see CEOs and CFOs going to prison for the actions of their corporations, then I'll believe laws actually make things better. Otherwise, any court decision that says some action is now illegal is just posturing.

tempeler
0 replies
12h12m

Finally, this points to the end of global social media. Jurisdiction cannot be used as a weapon; if you use it as a weapon, others won't hesitate to use it as a weapon against you.

stainablesteel
0 replies
22h46m

TikTok in general is great at targeting young women.

The Chinese and Iranians are taking advantage of this, and that's not something I would want to entrust to them.

skeptrune
0 replies
19h57m

My interpretation of this is it will push social media companies to take a less active role in what they recommend to their users. It should not be possible to intentionally curate content while simultaneously avoiding the burden of removing content which would cause direct harm justifying a lawsuit. Could not be more excited to see this.

skeltoac
0 replies
13h46m

Disclosures: I read the ruling before reading Matt Stoller’s article. I am a subscriber of his. I have written content recommendation algorithms for large audiences. I recommend doing one of these three things.

Section 230 is not canceled. This is a significant but fairly narrow refinement of what constitutes original content and Stoller’s take (“The business model of big tech is over”) is vastly overstating it.

Some kinds of recommendation algorithms produce original content (speech) by selecting and arranging feeds of other user generated content and the creators of the algorithms can be sued for harms caused by those recommendations. This correctly attaches liability to risky business.

The businesses using this model need to exercise a duty of care toward the public. It’s about time they start.

rsingel
0 replies
10h51m

With no sense of irony, this blog is written on a platform that allows some Nazis, algorithmically promotes publishers, allows comments, and is thus only financially viable because of Section 230.

If you actually want to understand something about the decision, I highly recommend Eric Goldman's blog post:

https://blog.ericgoldman.org/archives/2024/08/bonkers-opinio...

phendrenad2
0 replies
20h30m

I have no problem with this. Section 230 was written in 1996, long before anyone could have imagined an ML algorithm curating user content.

Section 230 absolutely should come with an asterisk that if you train an algorithm to do your dirty work you don't get to claim it wasn't your fault.

oldgregg
0 replies
1d

Insane reframing. Big tech and politicians are pushing this, pulling the ladder up behind them-- X and new decentralized networks are a threat to their hegemony and this is who they are going after. Startups will not be able to afford whatever bullshit regulatory framework they force feed us. How about they mandate any social network over 10M MAU has to publish their content algorithms.. ha!

nness
0 replies
1d1h

  Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.

This is fascinating and raises some interesting questions about where the liability starts and stops, i.e. is "trending/top right now/posts from following" the same as a tailored algorithm per user? Does Amazon become culpable for products on their marketplace? etc.

For good or for bad, this century's Silicon Valley was built on Section 230 and I don't foresee it disappearing any time soon. If anything, I suspect it will be supported or refined by future legislation instead of removed. No one wants to be the person who legislates away all online services...

nitwit005
0 replies
1d

I am puzzled why there are no arrests in this sort of case. Surely, convincing kids to kill themselves is a form of homicide?

linotype
0 replies
16h12m

Twitter sold at the perfect time. Wow.

jrockway
0 replies
1d

I'm not sure that Big Tech is over. Media companies have had a viable business forever. What happens here is that instead of going to social media and hearing about how to fight insurance companies, you'll just get NFL Wednesday Night Football Presented By TikTok.

janalsncm
0 replies
11h6m

This seems like it contradicts the case where YouTube wasn’t liable for recommending terrorist videos to someone.

janalsncm
0 replies
10h57m

Part of the reason social media has grown so big and been so profitable is that these platforms have scaled past their own abilities to do what normal companies are required to do.

Facebook has a “marketplace” but no customer support line. Google has been serving people scam ads for months, leading to millions in losses. (Imagine if a newspaper did that.) And feeds are allowed to recommend content that would be beyond the pale if a human were curating it. But because “it’s just an algorithm bro” they get a pass, since they can claim plausible deniability.

If fixing this means certain companies can’t scale to a trillion dollars with no customer support, too bad. Google can’t vet every ad? They could, but choose not to. Figure it out.

And content for children should have an even higher bar than that. Kids should not be dying from watching videos.

intended
0 replies
11h13m

Hoo boy.

So: platforms aren’t publishers, they are distributors (like newsstands or pharmacies).

So they are responsible for the goods they sell.

They aren’t responsible for user content - but they are responsible for what they choose to show.

This is going to be dramatic.

hnburnsy
0 replies
22h30m

To me this decision doesn't feel like it demolishes 230, but rather reduces its scope, a scope that was expanded by other court decisions. Per the article, 230 said platforms are not liable for user content and not liable for restricting content. This case is about liability for reinforcing content.

Would love to have a timeline-only, non-reinforcing content feed.

hello_computer
0 replies
1d1h

This is a typical anglosphere move: Write another holy checklist (I mean, "Great Charter"), indoctrinate the plebes into thinking that they were made free because of it (they weren't), then as soon as one of the bulleted items leaves the regime's hiney exposed, have the "judges" conjure a new interpretation out of thin-air for as long as they think the threat persists.

Whether it was Eugene Debs being thrown in the pokey, or every Japanese civilian on the west coast, or some harmless muslim suburbanite getting waterboarded, nothing ever changes. Wake me up when they actually do something to Facebook.

endtime
0 replies
21h27m

Not that it matters, but I was curious and so I looked it up: the three-judge panel comprised one Obama-appointed judge and two Trump-appointed judges.

drbojingle
0 replies
20h57m

There's no reason, as far as I'm concerned, that we shouldn't have a choice of algorithms on social media platforms. I want to be able to pick an open source algorithm that I can understand the pros and cons of. Hell, let me pick 5. Why not?

deafpolygon
0 replies
1d1h

Section 230 is alive and well, and this ruling won't impact it. What will change is that US social media firms will move away from certain types of algorithmic recommendations. TikTok is owned by ByteDance, which is a Chinese firm, so in the long run there's no real impact.

ang_cire
0 replies
1h54m

This is wonderful news.

The key thing people are missing is that TikTok is not being held responsible for the video content itself, they are being held responsible for their own code's actions. The video creator didn't share (or even attempt to share) the video with the victim- TikTok did.

If adults want to subscribe themselves to that content, that is their choice. Hell, if kids actively seek out that content themselves, I don't think companies should be responsible if they find it.

But if the company itself is the one proactively choosing to show that content to kids, that is 100% on them.

This narrative of being blind to the vagaries of their own code is playing dumb at best: we all know what the code we write does, and so do they. They just don't want to admit that it's impossible to moderate that much content themselves with automatic recommendation algorithms.

They could avoid this particular issue entirely by just showing people content they choose to subscribe to, but that doesn't allow them to inject content-based ads to a much broader audience by showing that content to people who have not expressed interest in or subscribed to it. And that puts this on them as a business.

Nasrudith
0 replies
16h59m

It is amazing how people were programmed to completely forget the meaning of Section 230 over the years just by repetition of the stupidest propaganda.

DidYaWipe
0 replies
13h17m

While this guy's missives are not always on target (his one supporting the DOJ's laughable and absurd case against Apple being an example of failure), some are on target... and indeed this ruling correctly calls out sites for exerting editorial control.

If you're going to throw up your hands and say, "Well, users posted this, not us!" then you'd better not promote or bury any content with any algorithm, period. These assholes (TikTok et al) are now getting what they asked for with their abusive behavior.

6gvONxR4sf7o
0 replies
22h32m

So under this new reading of the law, is it saying that AWS is still not liable for what someone says on reddit, but now reddit might be responsible for it?

2OEH8eoCRo0
0 replies
23h27m

Fantastic! If I had three wishes, one of them might be to repeal Section 230.

1vuio0pswjnm7
0 replies
7h34m

"In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that's a misconception of what they do. They don't speak, they are middlemen."

Parasites.