The current comments seem to say this rings the death knell of social media and that it just leads to government censorship. I'm not so sure.
I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.
In a very general sense, this ruling could be seen as a form of net neutrality. Currently, social media platforms favor certain content while downweighting other content. Sure, it might be at a different level than peer agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact on social media through the feed.
Honestly, I think I'd love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I'm inclined to see what shakes out.
I'm probably misunderstanding the implications but, IIUC, as it is, HN is moderated by dang (and others?) but still falls under 230, meaning HN is not responsible for what other users post here.
With this ruling, HN is suddenly responsible for all posts here specifically because of the moderation. So they have two options.
(1) Stop the moderation so they can be safe under 230. Result: HN turns into 4chan.
(2) Enforce the moderation to a much higher degree by, say, requiring non-anon accounts and a TOS that makes each poster responsible for their own content, and/or manually approving every comment.
I'm not even sure how you'd run a website with user content if you wanted to moderate that content and still avoid being liable for illegal content.
I think this is a mistaken understanding of the ruling. In this case, TikTok decided, with no other context, to make a personalized recommendation to a user who visited their recommendation page. On HN, your front page is not different from my front page. (Indeed, there is no personalized recommendation page on HN, as far as I'm aware.)
I don't see how this is about personalization. HN has an algorithm that shows what it wants in the way it wants.
From the article:
That's the difference between the case and a monolithic electronic bulletin board like HN. HN follows an old-school BB model very close to the models that existed when Section 230 was written.
Winding up in the same place as the defendant would require making a unique, dynamic, individualized BB for each user tailored to them based on pervasive online surveillance and the platform's own editorial "secret sauce."
The HN team explicitly and manually manages the front page of HN, so I think it's completely unarguable that they would be held liable under this ruling, at least if the front page contained links to articles that caused harm. They manually promote certain posts that they find particularly good, even if they didn't get a lot of votes, so this is even more direct than what TikTok did in this case.
The decision specifically mentions algorithmic recommendation as being speech; ergo the recommendation itself is the responsibility of the platform.
Where is the algorithmic recommendation that differs per user on HN?
Where does it say that it matters whether it differs per user?
It is absolutely still arguable in court, since this ruling interpreted the Supreme Court ruling to pertain to “a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,”
In other words, the Supreme Court decision mentions editorial decisions, but no court case has yet established whether that means editorial decisions in the HN front-page sense (mods make some choices, but it's not personalized). Common sense may say mod decisions are editorial decisions, but it's a gray area until a court case makes it clear. Precedent is the most important thing when interpreting law, and the only precedent we have pertains to personalized feeds.
Key words are "editorial" and "secret sauce". Platforms should not be liable for dangerous content which slips through the cracks, but certainly should be when their user-personalized algorithms mess up. Can't have your cake and eat it too.
Dangerous content slipping through the cracks and the algorithms messing up is the same thing. There is no way for content to "slip through the cracks" other than via the algorithm.
You can view the content via direct links or search; recommendation algorithms aren't the only way to view it.
If you host child porn that gets shared via direct links, that is bad even if nobody can see it, but it is much, much worse if you start recommending it to people as well.
Everything is related. Search results are usually generated based on recommendations, and direct links usually influence recommendations, or include recommendations as related content.
It's rarely if ever going to be the case that there is some distinct unit of code called "the algorithm" that can be separated and considered legally distinct from the rest of the codebase.
HN is _not_ a monolithic bulletin board -- the messages on a BBS were never (AFAIK) sorted by 'popularity' and users didn't generally have the power to demote or flag posts.
Although HN's algorithm depends (mostly) on user input for how it presents the posts, it still favours some over others and still runs afoul here. You would need a literal 'most recent' chronological view and HN doesn't have that for comments. It probably should anyway!
@dang We need the option to view comments chronologically, please
Writing @dang is a no-op. He'll respond if he sees the mention, but there's no alert sent to him. Email hn@ycombinator.com if you want to get his attention.
That said, the feature you requested is already implemented but you have to know it is there. Dang mentioned it in a recent comment that I bookmarked: https://news.ycombinator.com/item?id=41230703
To see comments on this story sorted newest-first, change the link to
https://news.ycombinator.com/latest?id=41391868
instead of
https://news.ycombinator.com/item?id=41391868
You might like this then: https://hckrnews.com/
I don't think the feature was that unknown. Per Wikipedia, the CDA passed in 1996 and Slashdot was created in 1997, and I doubt the latter's moderation/voting system was that unique.
It’d be interesting to know what constitutes an “algorithm”. Does a message board sorting by “most recent” count as one?
I don't think timestamps are, in any way, construed as editorial judgment. They are a content-agnostic attribute.
What about filtering spam? Or showing the local weather / news headlines?
Or ordering posts by up votes/down votes, or some combination of that with the age of the post.
The text of the Third Circuit decision explicitly distinguishes algorithms that respond to user input -- such as by surfacing content that was previously searched for, favorited, or followed -- from those that do not. Allowing users to filter content by time, upvotes, number of replies, etc. would be fine.
The FYP algorithm that's contested in the case surfaced the video to the minor without her searching for that topic, following any specific content creator, or positively interacting (liking/favoriting/upvoting) with previous instances of such content. It was fed to her based on a combination of what TikTok knew about her demographic information, what was trending on the platform, and TikTok's editorial secret sauce. TikTok's algorithm made an active decision to surface this content to her despite knowing that other children had died from similar challenge videos; they promoted it and should be liable for that promotion.
Moderating content is explicitly protected by the text of Section 230(c)(2)(a):
"(2)Civil liability No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or"
Algorithmic ranking, curation, and promotion are not.
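To make that distinction concrete, here's a minimal Python sketch (invented names, not any platform's real code) contrasting ranking that only responds to explicit user input with an FYP-style scorer that folds in demographics and inferred interests -- the latter being the kind of per-user promotion at issue in the case.

  from dataclasses import dataclass
  from datetime import datetime

  @dataclass
  class Post:
      title: str
      upvotes: int
      created: datetime
      topic: str

  def user_directed_sort(posts, key="newest"):
      """Ranking that only reflects explicit user input: the user picks the sort."""
      if key == "newest":
          return sorted(posts, key=lambda p: p.created, reverse=True)
      if key == "top":
          return sorted(posts, key=lambda p: p.upvotes, reverse=True)
      raise ValueError(f"unknown sort key: {key}")

  def personalized_feed(posts, user_profile):
      """FYP-style ranking: the platform scores content per user, using data the
      user never asked to be ranked by (age bracket, inferred topic affinity)."""
      def score(p):
          s = p.upvotes
          s += 50 * user_profile.get("topic_affinity", {}).get(p.topic, 0)
          if user_profile.get("age_bracket") == "13-17" and p.topic == "challenge":
              s += 100  # hypothetical demographic boost -- the crux of the case
          return s
      return sorted(posts, key=score, reverse=True)

  posts = [
      Post("Knitting tips", 900, datetime(2024, 8, 1), "crafts"),
      Post("Blackout challenge", 40, datetime(2024, 8, 2), "challenge"),
  ]
  teen = {"age_bracket": "13-17", "topic_affinity": {"challenge": 20}}
  print([p.title for p in user_directed_sort(posts, "top")])  # user-driven ordering
  print([p.title for p in personalized_feed(posts, teen)])    # platform-driven ordering

The two calls produce different orderings for the same content: the first only reflects the user's chosen sort, while the second reflects what the platform decided to push at that particular user.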
On HN, timestamps are adjusted when posts are given a second-chance boost. While the boost is done automatically, candidates are chosen manually.
Specifically, NetChoice argued that personalized feeds based on user data were protected as the platforms' own First Amendment speech. This went to the Supreme Court and the Supreme Court agreed. Now precedent is set by the highest court that those feeds are an "expressive product". It doesn't make sense, but that's how the law works - by trying to define as best as possible the things in gray areas.
And they probably didn't think through how this particular argument could affect other areas of their business.
It absolutely makes sense. What NetChoice held was that the curation aspect of algorithmic feeds makes the weighting approach equivalent to the speech of the platforms and therefore when courts evaluated challenges to government imposed regulation, they had to perform standard First Amendment analysis to determine if the contested regulation passed muster.
Importantly, this does not mean that before the Third Circuit decision platforms could just curate any which way they want and government couldn't regulate at all -- the mandatory removal regime around CSAM content is a great example of government regulating speech and forcing platforms to comply.
The Third Circuit decision, in a nutshell, is telling the platforms that they can't have their cake and eat it too. If they want to claim that their algorithmic feeds are speech that is protected from most government regulation, they can't simultaneously claim that these same algorithmic feeds are mere passive vessels for the speech of third parties. If that were the case, then their algorithms would enjoy no 1A protection from government regulation. (The content itself would still have 1A protection based on the rights of the creators, but the curation/ranking/privileging aspect would not).
Yeah, I agree.
This ruling is a natural consequence of the NetChoice ruling. Social media companies can't have it both ways.
> If that were the case, then their algorithms would enjoy no 1A protection from government regulation.
Well, the companies can still probably claim some 1st Amendment protections for their recommendation algorithms (for example, a law banning algorithmic political bias would be unconstitutional). All this ruling does is strip away the safe harbour protections, which weren't derived from the 1A in the first place.
Would it? The TV channels of old were heavily regulated well past 1st amendment limits.
Only because they were using public airwaves.
Cable was never regulated like that. The medium actually mattered in this case
I misunderstood the Supreme Court ruling as hinging on per-user personalization of algorithms, and thought it drew a distinction between editorial decisions shown to everyone vs. to individual users. I thought that part didn't make sense. I see now it's really the Third Circuit ruling that interpreted the user-customization part as editorial decisions, without excluding the non-per-user algorithms.
It's worth noting that personalisation isn't moderation. An app like TikTok needs both.
Personalisation simply matches users with the content the algorithm thinks they want to see. Moderation (which is typically also an algorithm) tries to remove harmful content from the platform altogether.
The ruling isn't saying that Section 230 doesn't apply because TikTok moderated. It's saying Section 230 doesn't apply because TikTok personalised, allegedly knew about the harmful content and allegedly didn't take enough action to moderate this harmful content.
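A rough sketch of that separation, under the simplifying assumption that moderation and personalisation really are distinct stages (all names below are invented): moderation removes content from the platform for everyone, while personalisation only orders what remains for a particular user.

  BLOCKLIST = {"blackout challenge"}

  def moderate(items):
      """Moderation: drop content the platform deems harmful, for every user."""
      return [i for i in items if i["title"].lower() not in BLOCKLIST]

  def personalise(items, interests):
      """Personalisation: order what's left by a per-user interest score."""
      return sorted(items, key=lambda i: interests.get(i["topic"], 0), reverse=True)

  feed = personalise(moderate([
      {"title": "Blackout challenge", "topic": "challenge"},
      {"title": "Sourdough basics", "topic": "cooking"},
  ]), interests={"cooking": 0.9})
  print([i["title"] for i in feed])  # the harmful item was removed before ranking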
These algorithms aren't matching you with what you want to see, they're trying to maximize your engagement -- or rather, it's what the operator wants you to see, so you'll use the site more and generate more data or revenue. It's a fine, but extremely important, distinction.
What the operator wants you to see also gets into the area of manipulation, hence 230 shouldn't apply: by building algorithms based on manipulation or paid-for boosting, companies move from impartial, unknowing deliverers of harmful content into committed distributors of it.
So, yes, the TikTok FYP is different from a forum with moderation.
But the basis of this ruling is basically "well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it." That rationale extends to basically any form of moderation or selection, personalized or not, and would blow a big hole in 230's protections.
Given generalized anti-Big-Tech sentiment on both ends of the political spectrum, I could see something that claimed to carve out just algorithmic personalization/suggestion from protection meeting with success, either out of the courts or Congress, but it really doesn't match the current law.
"well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that's your speech and not somebody else's and so 230 doesn't apply and you can be liable for it."
I see a lot of people saying this is a bad decision because it will have consequences they don't like, but the logic of the decision seems pretty damn airtight as you describe it. If the recommendation systems and moderation policies are the company's speech, then the company can be liable when the company "says", by way of their algorithmic "speech", to children that they should engage in some reckless activity likely to cause their death.
Doesn't seem to have anything to do with personalization to me, either. It's about "editorial judgement," and an algorithm isn't necessarily a get out of jail free card unless the algorithm is completely transparent and user-adjustable.
I even think it would count if the only moderation you did on your Lionel model train site was to make sure that most of the conversation was about Lionel model trains, and that they be treated in a positive (or at least neutral) manner. That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up; i.e., if you moderate, you're a moderator, and your first duty is a legal one.
If you're just a dumb pipe, however, you're a dumb pipe and get section 230.
I wonder how this works with recommendation algorithms, though, seeing as they're also trade secrets. Even when they're not dark and predatory (advertising related.) If one has a recommendation algo that makes better e.g. song recommendations, you don't want to have to share it. Would it be something you'd have to privately reveal to a government agency (like having to reveal the composition of your fracking fluid to the EPA, as an example), and they would judge whether or not it was "editorial" or not?
[edit: that being said, it would probably be very hard to break the law with a song recommendation algorithm. But I'm sure you could run afoul of some financial law still on the books about payola, etc.]
I'm not sure that's quite it. As I read the article and think about its application to Tiktok, the problem was more that "the algorithm" was engaged in active and allegedly expressive promotion of the unsafe material. If a site like HN just doesn't remove bad content, then the residual promotion is not exactly Hacker News's expression, but rather its users'.
The situation might change if a liability-causing article were itself given 'second chance' promotion or another editorial thumb on the scale, but I certainly hope that such editorial management is done with enough care to practically avoid that case.
Per the court of appeals, TikTok is not in trouble for showing a blackout challenge video. TikTok is in trouble for not censoring such videos after knowing they were causing harm.
As in, Dang would be liable if, say, somebody started a blackout challenge post on HN and he didn't start censoring all of them once news reports of programmers dying broke out.
https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/...
Does TikTok have to know that "as a category, blackout videos are bad" or that "this specific video is bad"?
Does TikTok have to preempt this category of videos in the future, or simply respond promptly when notified such a video is posted to their system?
Are you asking about the law, or are you asking our opinion?
Do you think it's reasonable for social media to send videos to people without considering how harmful they are?
Do you even think it's reasonable for a search engine to respond to a specific request for this information?
Did some hands come out of the screen, pull a rope out then choke someone? Platforms shouldn’t be held responsible when 1 out of a million users wins a Darwin award.
I think it's a very different conversation when you're talking about social media sites pushing content they know is harmful onto people who they know are literal children.
Personally, I wouldn't want search engines censoring results for things explicitly searched for, but I'd still expect that social media should be responsible for harmful content they push onto users who never asked for it in the first place. Push vs Pull is an important distinction that should be considered.
That IS the distinction at play here.
What constitutes "censoring all of them"?
Any good-will attempt at censoring would have been a reasonable defense even if they technically didn't censor 100% of them -- such as blocking videos with the word "blackout" in their title, or manually approving videos with such things -- but they did nothing instead.
Trying to define "all" is an impossibility, but because TikTok took no action whatsoever, answering that question is irrelevant in the context of this particular judgment. See also, for example: https://news.ycombinator.com/item?id=41393921
In general, judges will be ultimately responsible for evaluating whether "any", "sufficient", "appropriate", etc. actions were taken in each future case they judge. As with all things legalese, it's impossible to define with certainty a specific degree of action that is the uniform boundary of acceptability; but, as is evident here, "none" is no longer in the permissible set.
(I am not your lawyer, this is not legal advice.)
This has interesting higher-order effects on free speech. Let's apply the same ruling to vaccine misinformation, or to the ability to organize protests on social media (which opponents will probably call riots if there are any injuries).
Uh yeah, the court of appeals has reached an interesting decision.
But I mean what do you expect from a group of judges that themselves have written they're moving away from precedent?
I don't doubt the same court relishes the thought of deciding what "harm" is on a case-by-case basis. The continued politicization of the courts will not end well for a society that nominally believes in the rule of law. Some quarters have been agitating for removing §230 safe harbor protections (or repealing it entirely), and the courts have delivered.
The ingenuousness of kids -- their readiness to believe and be easily influenced by what they see online -- had a big role in this ruling; disregarding that is a huge disservice to a productive discussion.
But something like Reddit would be held liable for showing posts, then. Because you get shown different results depending on the subreddits you subscribe to, your browsing patterns, what you've upvoted in the past, and more. Pretty much any recommendation engine is a no-go if this ruling becomes precedent.
TBH, Reddit really shouldn't have 230 protection anyways.
You can't be licensing user content to AI when it's not yours. You also can't be undeleting posts people make (otherwise they're really Reddit's posts and not theirs).
When you start treating user data as your own, it should become your own, and that erodes 230.
It belongs to reddit, the user handed over the content willingly.
undeleting is bad enough, but they've edited the content of user's comments too.
It is theirs. Users agreed to grant Reddit a license to use the content when they accepted the terms of service.
From my reading, if the site only shows you content based on your selections, then it wouldn't be liable. For example, if someone else with the exact same selections gets the same results, then that's not the platform deciding what to show.
If it does any customization based on what it knows about you, or what it tries to sell you because you are you, then it would be liable.
Yep, recommendation engines would have to be very carefully tuned, or you risk becoming liable. Recommending only curated content would be a way to protect yourself, but that costs money that companies don't have to pay today. It would be doable.
It could be difficult to draw the line. I assume TikTok’s suggestions are deterministic enough that an identical user would see the same things - it’s just incredibly unlikely to be identical at the level of granularity that TikTok is able to measure due to the type of content and types of interactions the platform has.
This could very well be true for TikTok. Of course "selection" would include liked videos, how long you spend watching each video, and how many videos you have posted
And on the flip side a button that brings you to a random video would supply different content to users regardless of "selections".
That kind of sounds... great? The only instance where I genuinely like to have a recommendation engine around is music streaming. Like, yeah, sometimes it does recommend great stuff. But anywhere else? No thank you.
It’s still curated, and not entirely automatically. Does it make a difference whether it’s curated individually or not?
The personalized aspect wasn't emphasized at all in the ruling. It was the curation. I don't think TikTok would have avoided liability by simply sharing the video with everyone.
"I think this is a mistaken understanding of the ruling."
I think that is quite generous. I think it is a deliberate reinterpretation of what the order says. The order states that 230(c)(1) provides immunity for removing harmful content after being made aware of it, i.e., moderation.
I feel like the end result of path #1 is that your site just becomes overrun with spam and scams. See also: mail, telephones.
Yeah, no moderation leads to spam, scams, rampant hate, and CSAM. I spent all of an hour on Voat when it was in its heyday and it was mostly literal Nazis calling for the extermination of undesirables. The normies just stayed on moderated Reddit.
Voat wasn't exactly a single place, any more than Reddit is.
Were there non-KKK/Nazi/QAnon subvoats (or whatever they call them)? The one time I visited the site, every single post on the front page was alt-right nonsense.
Yes. There were a ton of them for various categories of sex drawings, mostly in the style common in Japanese comics and cartoons.
It was the people who were chased out of other websites that drove much of their traffic so it's no surprise that their content got the front page. It's a shame that they scared so many other people away and downvoted other perspectives because it made diversity difficult.
... being manipulated by the algorithm (per this judge's decision).
No, that's not the end result.
It would be perfectly legal for a platform to choose to allow a user to decide on their own to filter out spam.
Maybe a user could sign up for such an algorithm, but if they choose to whitelist certain accounts, that would also be allowed.
Problem solved.
Exactly. Moderation is not a problem as long as you can opt out of it, for both reading and writing.
If I were to start posting defamatory material about you on various internet forums, how would you opt out of that?
Same as if you were to post it on notice boards, I would opt to not give a fuck.
There's moderation to manage disruption to a service. There's editorial control to manage the actual content on a service.
HN engages in the former but not the latter. The big three engage in the latter.
HN engages in the latter. For example, user votes are weighted based on their alignment with the moderation team's view of good content.
I don't understand your explanation. Do you mean just voting itself? That's not controlled or managed by HN. That's just more "user generated content." That posts get hidden or flagged due to thresholding is non-discriminatory and not _individually_ controlled by the staff here.
Or.. are you suggesting there's more to how this works? Is dang watching votes and then making decisions based on those votes?
"Editorial control" is more of a term of art and has a narrower definition then you're allowing for.
The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.
The same applies to comments on HN. Comments are not moderated based purely on legal or certain general "good manners" grounds, they are moderated to keep a certain kind of discourse level. For example, shallow jokes or meme comments are not generally allowed on HN. Comments that start discussing controversial topics, even if civil, are also discouraged when they are not on-topic.
Overall, HN is very much curated in the direction of a newspaper "letters to the editor" section rather than being algorithmic and hands-off like the Facebook wall or TikTok feed. So there is no doubt whatsoever, I believe, that HN would be considered responsible for user content (and is, in fact, already pretty good at policing that in my experience, at least on the front page).
This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.
Maintaining topicality is literally a bias. Excluding posts that reflect certain perspectives is censorship.
Dang has been open about voting being only one part of the way HN works, and that manual moderator intervention does occur. They will down-weight the votes of "problem" accounts, manually adjust the order of the front page, and do whatever they feel necessary to maintain a high signal-to-noise ratio.
There are things like 'second chance', where the editorial team can re-up posts they feel didn't get a fair shake the first time around, and sometimes if a post gets too 'hot' they will cool it down -- all of this is understandable, but it unfortunately does mean they are actively moderating content and thus are responsible for all of it.
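For a sense of scale, the often-cited public approximation of HN-style front-page ranking looks roughly like the sketch below (the real system is not public and uses many more signals, so treat this purely as an assumption-laden illustration). The point relevant to this thread is that a moderator-applied weight -- a penalty or a second-chance boost -- changes the ordering everyone sees.

  def rank_score(points, age_hours, moderator_weight=1.0, gravity=1.8):
      """Higher is better; moderator_weight < 1 buries a post, > 1 boosts it."""
      return moderator_weight * (points - 1) / (age_hours + 2) ** gravity

  print(rank_score(points=120, age_hours=5))                        # organic ranking
  print(rank_score(points=15, age_hours=1, moderator_weight=3.0))   # second-chance style boost
  print(rank_score(points=300, age_hours=2, moderator_weight=0.2))  # flame-war style penalty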
Every time you see a comment marked as [dead] that means a moderator deleted it. There is no auto-deletion resulting from downvotes.
Even mentioning certain topics, such as Israel's invasion of Palestine, even when the mention is on-topic and not disruptive, as in this comment you are reading, is practically a death sentence for a comment. Not because of votes, but because of the moderators. Downvotes may prioritize which comments go in front of moderators (we don't know) but moderators make the final decision; comments that are downvoted but not removed merely stick around in a light grey colour.
By enabling showdead in your user preferences and using the site for a while, especially reading controversial threads, you can get a feel for what kinds of comments are deleted by moderators exercising their judgment. It is clear that most moderation is about editorial control and not simply the removal of disruption.
This comment may be dead by the time you read it, due to the previous mention of Palestine - hi to users with showdead enabled. Its parent will probably merely be downvoted because it's wrong but doesn't contain anything that would irk the mods.
Comments that are marked [dead] without the [flagged] indicator are like that because the user that posted the comment has been banned. For green (new) accounts this can be due to automatic filters that threw up false positives for new accounts. For old accounts this shows that the account (not the individual comment) has been banned by moderators. Users who have been banned can email hn@ycombinator.com pledging to follow the rules in the future and they'll be granted another chance. Even if a user remains banned, you can unhide a good [dead] comment by clicking on its timestamp and clicking "vouch."
Comments are marked [flagged] [dead] when ordinary users have clicked on the timestamp and selected "flag." So user downvotes cannot kill a comment, but flagging by ordinary non-moderator users can kill it.
(1) 4chin is too dumb to use HN, and there's no image posting, so I doubt they'd even be interested in raiding us. (2) I've never seen anything illegal here; I'm sure it happens, but it gets dealt with quickly enough that it's not really ever going to be a problem if things continue as they have been.
They may lose 230 protection, sure, but it's probably not really a problem here. For Facebook et al, it's going to be an issue, no doubt. I suppose they could drop their algos and bring back chronological feeds, but my guess is that wouldn't be profitable given that ad-tech and content feeds are one and the same at this point.
I'd also assume that "curation" is the sticking point here, if a platform can claim that they do not curate content, they probably keep 230 protection.
Certain boards most definitely raid various HN threads.
Specifically, every political or science thread that makes it is raided by 4chan. 4chan also regularly pushes anti-science and anti-education agenda threads to the top here, along with posts from various alt-right figures on occasion.
search: site:4chan.org news.ycombinator.com
Seems pretty sparse to me, and from a casual perusal, I haven't seen any actual calls to raiding anything here, it's more of a reference where articles/posts have happened, and people talking about them.
Remember, not everyone who you disagree with comes from 4chan, some of them probably work with you, you might even be friends with them, and they're perfectly serviceable people with lives, hopes, dreams, same as yours, they simply think differently than you.
lol dude. Nobody said that 4chan links are posted to HN, just that 4chan definitely raids HN.
4chan is very well known for brigading. It is also well known that posting links for brigades on 4chan, as well as on a number of other locations such as Discord, is an extremely common thing the alt-right does to try to raise the "validity" of their statements.
I also did not claim that only these opinions come from 4chan. Nice strawman bro.
Also, my friends do not believe these things. I do not make a habit of being friends with people that believe in genociding others purely because of sexual orientation or identity.
Go ahead and type that search query into google and see what happens.
Also the alt-right is a giant threat, if you categorize everyone right of you as alt-right, which seems to be the standard definition.
That's not how I've chosen to live, and I find that it's peaceful to choose something more reasonable. The body politic is cancer on the individual, and on the list of things that are important in life, it's not truly important. With enough introspection you'll find that the tendency to latch onto politics, or anything politics-adjacent, comes from an overall lack of agency over the other aspects of life you truly care about. It's a vicious cycle. You have a finite amount of mental energy, and the more you spend on worthless things, the less you have to spend on things that matter, which leads to you latching further on to the worthless things, and having even less to spend on things that matter.
It's a race to the bottom that has only losers. If you're looking for genocide, that's the genocide of the modern mind, and you're one foot in the grave already. You can choose to step out now and probably be ok, but it's going to be uncomfortable to do so.
That's all not to say there aren't horrid, problem-causing individuals out in the world, there certainly are, it's just that the less you fixate on them, the more you realize that they're such an extreme minority that you feel silly fixating on them in the first place. That goes for anyone that anyone deems 'horrid and problem-causing' mind you, not just whatever idea you have of that class of person.
These people win elections and make news cycles. They are not an “ignorable, small minority”.
For the record, ensuring that those who wish to genocide LGBT+ people are not the majority voice on the internet is absolutely not “a worthless matter”, not by any stretch. I would definitely rather not have to do this, but then, the people who dedicate their lives to trolling and hate are extremely active.
What are you expecting it to show? That site removes all content after a matter of days.
I don't frequent 4cuck, I use soyjak.party, which I guess from your perspective is even worse, but there are plenty of smart people on the 'cuck thoughbeit, like the gemmy /lit/ schizo. I think you would feel right at home in /sci/.
Not sure about the downvotes on this comment; but what parent says has precedent in Cubby Inc. vs Compuserve Inc.[1] and this is one of the reasons Section 230 came about to be in the first place.
HN is also heavily moderated with moderators actively trying to promote thoughtful comments over other, less thoughtful or incendiary contributions by downranking them (which is entirely separate from flagging or voting; and unlike what people like to believe, this place relies more on moderator actions as opposed to voting patterns to maintain its vibe.) I couldn't possibly see this working with the removal of Section 230.
[1] https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
If I upvote something illegal, my liability was the same before, during, and after 230 exists, right?
I'd probably like the upvote itself to be considered "speech". The practical effect of upvoting is to endorse, together with the site's moderators and algorithm-curators, the comment to be shown to a wider audience.
Along those lines, then, an upvote, i.e. endorsement, would be protected up to any point where it violated one of the free speech exceptions, e.g. incitement.
Theoretically, your liability is the same because the First Amendment is what absolves you of liability for someone else's speech. Section 230 provides an avenue for early dismissal in such a case if you get sued; without Section 230, you'll risk having to fight the lawsuit on the merits, which will require spending more time (more fees).
As if it was something bad. 4chan has /g and it's absolutely awesome.
Nuff said. Underneath the ever-lasting political cesspool from /pol/ and... _specific_ atmosphere, it's still one of the best places to visit for tech-based discussion.
4chan is moderated, and the moderation is different on each board, with the only real global moderation rule being "no illegal stuff". In addition, the site does curate the content it shows you using an algorithm, even though it is a very basic one (the thread with the most recent reply goes to the top of the page, and threads older than X are removed automatically).
For example the qanon conspiracy nuts got moderated out of /pol/ for arguing in bad faith/just being too crazy to actually have any kind of conversation with and they fled to another board (8chan and later 8kun) that has even less moderation.
Yep, 4chan isn't bad because "people I disagree with can talk there", it's bad because the interface is awful and they can't attract enough advertisers to meet their hosting demands.
Nah. HN is not the same as these others.
TikTok. Facebook. Twitter. YouTube.
All of these have their algorithms specifically curated to try to keep you angry. YouTube outright ignores your blocks every couple of months, and no matter how many people dropping n-bombs you report and block, it never-endingly pushes more and more.
These companies know that their algorithms are harmful and they push them anyway. They absolutely should have liability for what their algorithms push.
Under Judge Matey's interpretation of Section 230, I don't even think option 1 would remain on the table. He includes every act except mere "hosting" as part of publisher liability.
Freedom of speech, not freedom of reach for their personal curation preferences or for narrative shaping driven by confirmation bias and survivorship bias. Tech is in the business of putting thumbs on the scales to increase some people's signal and decrease others', based upon some hokey story of academic and free-market genius.
The pro-science crowd (which includes me, fwiw) seems incapable of providing proof that any given scientist is that important. The same old social-politics norms inflate some and deflate others, and we take our survival as confirmation that we're special. One's education is vacuous prestige, given that physics applies equally: oh, you did the math! Yeah, I just tell the computer to do it. Oh, you memorized the circumlocutions and dialectic of some long-dead physicist. Outstanding.
There's a lot of ego-driven, banal, classist nonsense in tech and science. At the end of the day we're all just meat suits with the same general human condition.
Section 230 hasn't changed or been revoked or anything, so, from what I understand, manual moderation is perfectly fine, as long as that is what it is: moderation. What the ruling says is that "recommended" content and personalised "for you" pages are themselves speech by the platform, rather than moderation, and are therefore not under the purview of Section 230.
For HN, Dang's efforts at keeping civility don't interfere with Section 230. The part relevant to this ruling is whatever system takes recency and upvotes, and ranks the front page posts and comments within each post.
4chan is actually moderated too.
2) Require confirmation you are a real person (check ID) and attach accounts per person. The commercial Internet has to follow the laws they're currently ignoring and the non-commercial Internet can do what they choose (because of being untraceable).
The diverse biases of newspapers or social media sites are preferable to the monolithic bias a legal solution will impress.
So the solution is "more speech?" I don't know how that will unhook minors from the feedback loop of recommendation algorithms and their plastic brains. It's like saying 'we don't need to put laws in place to combat heroin use, those people could go enjoy a good book instead!'.
Yes, the solution is more speech. Teach your kids critical thinking or they will be fodder for somebody else who has it. That happens regardless of who's in charge, government or private companies. If you can't think for yourself and synthesize lots of disparate information, somebody else will do the thinking for you.
You're mistaken as to what this ruling is about. Ultimately, when it comes right down to it, the Third Circuit is saying this (directed at social media companies):
"The speech is either wholly your speech or wholly someone else's. You can't have it both ways."
Either they get to act as a common carrier (telephone companies are not liable for what you say on a phone call because it is wholly your own speech and they are merely carrying it) or they act as a publisher (liable for everything said on their platforms because they are exercising editorial control via algorithm). If this ruling is upheld by the Supreme Court, then they will have to choose:
* Either claim the safe harbour protections afforded to common carriers and lose the ability to curate algorithmically
or
* Claim the free speech protections of the First Amendment but be liable for all content as it is their own speech.
Algorithmic libel detectors don't exist. The second option isn't possible. The result will be the separation of search and recommendation engines from social media platforms. Since there's effectively one search company in each national protectionist bloc, the result will be the creation of several new monopolies that hold the power to decide what news is front-page, and what is buried or practically unavailable. In the English-speaking world that right would go to Alphabet.
The second option isn’t really meant for social media anyway. It’s meant for traditional publishers such as newspapers.
If this goes through I don’t think it will be such a big boost for Google search as you suggest. For one thing, it has no effect on OpenAI and other LLM providers. That’s a real problem for Google, as I see a long term trend away from traditional search and towards LLMs for getting questions answered, especially among young people. Also note that YouTube is social media and features a curation algorithm to deliver personalized content feeds.
As for social media, I think we’re better off without it! There’s countless stories in the news about all the damage it’s causing to society. I don’t think we’ll be able to roll all that back but I hope we’ll be able to make things better.
If the ruling was upheld, Google wouldn't gain any new liability for putting a TikTok-like frontend on video search results; the only reason they're not doing it now is that all existing platforms (including YouTube) funnel all the recommendation clicks back into themselves. If YouTube had to stop offering recommendations, Google could take over their user experience and spin them off into a hosting company that derived its revenue from AdSense and its traffic from "Google Shorts."
This ruling is not a ban on algorithms, it's a ban on the vertical integration between search or recommendation and hosting that today makes it possible for search engines other than Google to see traffic.
I actually don't think Google search will be protected in its current form. Google doesn't show you unadulterated search results anymore, they personalize (read: editorialize) the results based on the data they've collected on you, the user. This is why two different people entering the same query can see dramatically different results.
If Google wants to preserve their safe harbour protections they'll need to roll back to a neutral algorithm that delivers the same results to everyone given an identical query. This won't be the end of the world for Google but it will produce lower quality results (at least in the eyes of normal users who aren't annoyed by the personalization). Lower quality results will further open the doors to LLMs as a competitor to search.
Newspapers editorialize and also give the same results to everybody.
And newspapers decide every single word they publish, because they’re liable for it. If a newspaper defames someone they can be sued.
This whole case comes down to having your cake and eating it too. Newspapers don’t have that. They have free speech protections but they aren’t absolved of liability for what they publish. They aren’t protected under section 230.
If the ruling is upheld by SCOTUS, Google will have to choose: section 230 (and no editorial control) or first amendment plus liability for everything they publish on SERPs.
Automatic libel generators, on the other hand, are much closer at hand. :p
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4546063
Solutions that require everyone to do a thing, and do it well, are doomed to fail.
Yes, it would be great if parents would, universally, parent better, but getting all of them (or a large enough portion of them for it to make a difference) to do so is essentially impossible.
Government controls aren't a solution either though. The people with critical thinking skills, who can effectively tell others what to think, simply capture the government. Meet the new boss, same as the old boss.
I agree with this. Kids are already subject to an agenda; for example, never once in my K-12 education did I learn anything about sex. This was because it was politically controversial at the time (and maybe it still is now), so my school district just avoided the issue entirely.
I remember my mom being so mad about the curriculum in general that she ran for the school board and won. (I believe it was more of a math and science type thing. She was upset with how many coloring assignments I had. Frankly, I completely agreed with her then and I do now.)
I was lucky enough to go to a charter school where my teachers encouraged me to read books like "People's History of the U.S" and "Lies My Teacher Told Me". They have an agenda too, but understanding that there's a whole world of disagreement out there and that I should seek out multiple information sources and triangulate between them has been a huge superpower since. It's pretty shocking to understand the history of public education and realize that it wasn't created to benefit the student, but to benefit the future employers of those students.
I think we've reached the point now that there is more speech than any person can consume by a factor of a million. It now comes down to picking what speech you want to hear. This is exactly what content algorithms are doing -> out of the millions of hours of speech produced in a day, they're giving you your 24 hours of it.
Saying "teach your kids critical thinking" is a solution, but it's not the solution. At some point, you have to discover content out of those millions of hours a day. It's impossible to do yourself -- it's always going to be curated.
EDIT: To whoever downvoted this comment, you made my point. You should have replied instead.
K so several of the most well-funded tech companies on the planet sink literally billions of dollars into psyops research to reinforce addictive behavior and average parents are expected to successfully compete against it with...a lecture.
We have seen that adults can't seem to unhook from these dopamine delivery systems and you're expecting that children can do so?
Sorry. That's simply disingenuous.
Yes, children and especially teenagers do lots of things even though their parents try to prevent them from doing so. Even if children and teenagers still get them, we don't throw up our hands and sell them tobacco and alcohol anyway.
Open-source the algorithm and have users choose. A marketplace is the best solution to most problems.
It is pretty clear that China already forces a very different TikTok ranking algo for kids within the country vs outside the country. Forcing a single algo is pretty un-American though and can easily be abused; let's instead open it up.
80% of users will leave things at the default setting, or "choose" whatever the first thing in the list is. They won't understand the options; they'll just want to see their news feed.
I'm not so sure, the feed is quite important and users understand that. Look at how many people switched between X and Threads given their political view. People switched off Reddit or cancelled their FB account at times in the past also.
I'm pretty sure going from X to Threads had very little to do with the feed algorithm for most people. It had everything to do with one platform being run by Musk and the other one not.
"Open-source the algorithm" would be at best openwashing. The way to create the type of choice you're thinking is to force the unbundling of client software from hosting services.
Seems like the bias will be against manipulative algorithms. How does TikTok escape liability here? They give control of what is promoted to users to the users themselves.
Newspaper biases are more diverse despite being subject to the liability social media companies are trying to escape.
Unfortunately, the biases of newspapers and social media sites are only diverse if they are not all under the strong influence of the wealthy.
Even if they may have different skews on some issues, under a system where all such entities are operated entirely for-profit, they will tend to converge on other issues, largely related to maintaining the rights of capital over labor and over government.
The rise of social media was largely predicated on the curation it provided. People, and particularly advertisers, wanted a curated environment. That was the key differentiator to the wild west of the world wide web.
The idea that curation is a problem with social media is always a head scratcher for me. The option to just directly publish to the world wide web without social media is always available, but time and again, that option is largely not chosen... this ruling could well narrow it down that being the only option.
Now, in practice, I don't think that will happen. This will raise the costs of operating social media, and those costs will be reflected in the prices advertisers pay to advertise on social media. That may shrink the social media ecosystem, but what it will definitely do is raise the drawbridge over the moat around the major social media players. You're going to see less competition.
Then give the choice to the user.
If a user wants to opt in, or change their moderation preferences then they should be allowed.
By all means offer a choice of moderation decisions. And let the user change them, opt out conditionally and ignore them if they so choose.
You say that like that choice doesn't exist.
You said this: "People, and particularly advertisers, wanted a curated environment."
If moderation choices are put in the hands of the user, then what you are describing is not a problem, as the user can have that.
Therefore, you saying that this choice exists means that there isn't a problem for anyone who chooses not to have the spam, and your original complaint is refuted.
There absolutely can be a problem despite choice existing. I'm not saying otherwise.
I'm saying the choice exists. The choices we make are the problem.
Well then feel free to choose differently for yourself.
Your original statement was this: "People, and particularly advertisers, wanted a curated environment."
You referencing what people "want" is directly refuted by the idea that they should be able to choose whatever their preferences are.
And your opinion on other people's choices doesn't really matter here.
I think maybe we're talking past each other. What I'm saying what people "want" is a reflection of the overwhelming choices they make. They're choosing the curated environments.
The "problem" that is being referenced is the curation. The claim is that the curation is a problem; my observation is that it is the solution all the parties involved seem to want, because they could, at any time, choose otherwise.
Ok, and if more power is given to the user and the user is additionally able to control their current curation, then that's fine and you can continue to have your own curated environment, and other people will also have more or less control over their own curation.
Problem solved! You get to keep your curation, and other people can also change the curation on existing platforms for their own feeds.
Nope. Few people have a problem with other people having a choice of curation.
Instead, the solution that people are advocating for is for more curating powers to be given to individual users so that they can choose, on current platforms, how much is curated for themselves.
Easy solution.
You're free to make your own site with your own moderation controls. And nobody will use it, because it'll rapidly become 99.999% spam, CSAM and porn.
Actually it seems like with these recent rulings, we will be free to use major social media platforms where the choice of moderation is given to the user, lest those social media platforms are otherwise held liable for their "speech".
I am fully fine with accepting the idea that if a social media platform doesn't act as a dumb pipe, then their choice of moderation is their "speech" as long as they can be held fully legally liable for every single moderation/algorithm choice that they make.
Fortunately for me, we are commenting on a post where a legal ruling was made to this effect, and the judge agrees with me that this is how things ought to be.
Not exactly. You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.
You might also have to procure the services of Cloudflare if you face significant traffic, and Cloudflare might choose to refuse your money and kick you off.
That's because most people have neither the time nor the will to learn and speak computer.
Social media and immediate predecessors like Wordpress were and are successful because they brought the lowest common denominator down to "smack keys and tap Submit". HTML? CSS? Nobody has time for our pig latin.
Who says you need to procure a web hosting provider?
But yes, if you connect your computer up to other computers, the other computers may decide they don't want any part of what you have to offer.
Without that, I wouldn't want to be in the Internet. I don't want to be forced to ingest bytes from anyone who would send them my way. That's just not a good value proposition for me.
I'm sorry, but no. You can literally type into a word processor or any number of other tools and select "save as web content", and then use any number of products to take a web page and serve it up to the world wide web. It's been that way for the better part of 25 years. No HTML or CSS knowledge needed. If you can't handle that you can just record a video, save it to a file, and serve it up over a web server. Yes, you need to be able to use a computer to participate on the world wide web, but no more than you do to use social media.
Now, what you won't get is a distribution platform that gets your content up in front of people who never asked for it. That is what social media provides. It lowers the effort for the people receiving the content, as in exactly the curation process that the judge was ruling about.
Most people these days don't have a word processor or, indeed, "any number of other tools". It's all "in the cloud", usually Google Docs or Office 365 Browser Edition(tm).
Most people these days don't (arguably never) understand files and folders.
Most people these days cannot be bothered. Especially when the counter proposal is "Make an X account, smash some keys, and press Submit to get internet points".
I'm going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people. There is a reason Youtube and Twitch have killed off literally every other video sharing service; there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops).
Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.
The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.
Read that again. ;-)
We can debate on the skills of "most people" back and forth, but I think it's fair to say that "save as web content" is easier to figure out than figuring out how to navigate a social media site (and that doesn't necessarily require files or folders). If that really is too hard for someone, there are products out there designed to make it even easier. Way back before social media took over, everyone and their dog managed to figure out how to put stuff on the web. People who couldn't make it through high school were successfully producing web pages, blogs, podcasts, video content, you name it.
I disagree. I think they don't have the will to do it, because they'd rather use social media. I do believe if they had the will to do it, they would. I agree there are some people who lack the computer-aptitude to get content on the web. Where I struggle is believing those same people manage to put content on social media... which I'll point out is on the web.
Yes, because video sharing at scale is fairly difficult and requires real skill. If you don't have that skill, you're going to have to pay someone to do it, or find someone who has their own agenda that makes them want to do it without charging you... like Youtube or Twitch.
On the other hand, putting a video up on the web that no one knows about, no one looks for, and no one consumes unless you personally convince them to do so is comparatively simple.
Yes, that reason is that smartphones were subsidized by carriers. ;-)
But it's good that you mentioned smartphones, because smartphones let you send content to anyone in your contacts without anything most would describe as "computer-aptitude". No social media needed... and yet the prevailing preference is to log in, shape content to suit the demands of social media services, tune it so "the algorithm" shows it to as many people as possible, and post it there. That takes more will/aptitude/whatever, but they do it for the distribution/audience.
I'd agree with you if you said "distribute" instead of "sharing". It's really hard to get millions of people to consume your content. That is, until social media came along and basically eliminated the cost of distribution. So any idiot can push their content out to millions and fill the world with whatever they want... and now there's a sense of entitlement about it: if a platform doesn't push that content on other people, at no cost to them, they feel they're being censored.
Yup, that does really require social media.
No, the Internet & the web required you to go looking for the content you wanted. Search engines (at least at one time) were designed to accelerate that process of finding exactly the content you were looking for, and to get you off their platform ASAP. Social media is kind of the opposite of search engines. They want you to stay on their platform; they want you to keep scrolling at whatever "engaging" content they can find, regardless of what you're looking for; if you forget about whatever you were originally looking for, that's a bonus. It's that ability to have your content show up when no one is looking for it where social media provides an advantage over the web for content makers.
This is literally the purpose of Section 230. It's Section 230 of the Communications Decency Act. The purpose was to change the law so platforms could moderate content without incurring liability, because the law was previously that doing any moderation made you liable for whatever users posted, and you don't want a world where removing/downranking spam or pornography or trolling causes you to get sued for unrelated things you didn't remove.
What part of deliberately showing political content to people algorithmically expected to agree with it constitutes "moderation"?
What part of deliberately showing political content to people algorithmically expected to disagree with it constitutes "moderation"?
What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform constitutes "moderation"?
What part of suppressing "misinformation" on the basis of what's said in "reliable sources" (rather than any independent investigation - but really the point would still stand) constitutes "moderation"?
What part of favouring content from already popular content creators because it brings in more ad revenue constitutes "moderation"?
What part of algorithmically associating content with ads for specific products or services constitutes "moderation"?
Prosaically, all of your examples are moderation. And as a private space that a user must choose to access, I'd argue that's great.
There is (or should be, in any case) a difference between moderation and recommendation.
There is no difference. Both are editorial choices and protected 1A activity.
Well, maybe it's just me, but only showing political content that doesn't include "kill all the (insert minority here)", and expecting users to not object to that standard, is a pretty typical aspect of moderation for discussion sites.
Again, deliberately suppressing support for literal and obvious fascism, based on the opinions of those in charge of the platform, is a kind of moderation so typical that it's noteworthy when it doesn't happen (e.g. Stormfront).
Literally all of Wikipedia, where the whole point of the reliable sources policy is that the people running it don't have to be experts to have a decently objective standard for what can be published.
Yeah, but they're not just removing spam and porn. They're picking out things that make them money even if it harms people. That was never in the spirit of the law.
Yes, it is. Section 230 doesn't replace the 1A, and deciding what you want to show or not show is classic 1A activity.
It's also classic commercial activity. Because 230 exists, we are able to have many intentionally different social networks and web tools. If there was no moderation -- for example, if you couldn't delete porn from linkedin -- all social networks would be the same. Likely there would only be one large one. If all moderation was pushed to the client side, it might seem like we could retain what we have but it seems very possible we could lose the diverse ecosystem of Online and end up with something like Walmart.
This would be the worst outcome of a rollback of 230.
The CDA was about making it clearly criminal to send obscene content to minors via the internet. Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third-party content. It does have a subsection clarifying that attempting to remove objectionable content doesn't remove your common carrier protections, but I don't believe that was a response to the pre-CDA status quo.
Basically true.
No, it wasn't, and you can tell that because there is literally not a single word to that effect in Section 230. It was to enable information service providers to exercise editorial control over user-submitted content without acquiring publisher-style liability, because the alternative, given the liability decisions occurring at the time and the way providers were reacting to them, was that any site using user-sourced content at scale would, to mitigate legal risk, be completely unmoderated, which was the opposite of the vision the authors of Section 230 and the broader CDA had for the internet. There are no "common carrier" obligations or protections in Section 230. The terms of the protection are the opposite of common carrier, and while there are limitations on the protections, there are no common-carrier-like obligations attached to them.
That part of the law was unconstitutional and pretty quickly got struck down, but it still goes to the same point that the intent of Congress was for sites to remove stuff and not be "common carriers" that leave everything up.
If you can forgive Masnick's chronic irateness he does a decent job of explaining the situation:
https://www.techdirt.com/2024/08/29/third-circuits-section-2...
These are some interesting mental gymnastics. Zuckerberg literally publicly admitted the other day that he was forced by the government to censor things without a legal basis. Musk disclosed a whole trove of emails about the same at Twitter. And you’re still “not so sure”? What would it take for you to gain more certainty in such an outcome?
Haven’t looked into the Zuckerberg thing yet, but everything I’ve seen of the “Twitter Files” has done more to convince me that nothing inappropriate or bad was happening than that it was. And if those selective releases were supposed to be the worst of it? Doubly so. Where’s the bad bit (that doesn’t immediately stop looking bad once you read the surrounding context that whoever’s calling it bad left out)?
That means you haven’t really looked into the Twitter Files. They were literally holding meetings with government officials and were told what to censor and who to ban. That’s plainly unconstitutional and heads should roll for this.
How did the government force Facebook to comply with their demands, as opposed to going along with them voluntarily?
By asking.
The government asking you to do something is like a dangerous schoolyard bully asking for your lunch money. Except the gov has the ability to kill, imprison, and destroy. Doesn’t matter if you’re an average Joe or a Zuckerberg.
Any proof that they were threatened? I've never seen any.
So it's categorically impossible for the government to make any non-coercive request or report for anything because it's the government?
I don't think that's settled law.
For example, suppose the US Postal Service opens a new location, and Google Maps has the pushpin on the wrong place or the hours are incorrect. A USPS employee submits a report/correction through normal channels. How is that trampling on Google's first-amendment rights?
This is obviously not a real question, so instead of answering I propose we conduct a thought experiment. The year is 2028, and Zuck had a change of heart and fully switched sides. Facebook, Threads, and Instagram now block the news of Barron Trump’s drug use and of his lavishly compensated seat on the board of Russia’s Gazprom, and they ban the dominant electoral candidate from social media. In addition they allow the spread of a made-up dossier (funded by the RNC) about Kamala Harris’ embarrassing behavior with male escorts in China.
What you should ask yourself is this: irrespective of whether compliance is voluntary or not, is political censorship on social media OK? And what kind of logical knot must one contort one’s mind into to suggest that this is the second coming of net neutrality? Personally I think the mere fact that the government is able to lean on a private company like that is damning AF.
You're grouping lots of unrelated things.
All large sites have terms of service. If you violate them, you might be removed, even if you're "the dominant electoral candidate". Remember, no one is above the law, or in this case, the rules that a site wishes to enforce.
I'm not a fan of political censorship (unless that means enforcing the same ToS that everyone else is held to, in which case, go for it). Neither am I for the radical notion of legislation telling a private organization that they must host content that they don't wish to.
This has zero to do with net neutrality. Nothing. Nada.
Is there evidence that the government leaned on a private company instead of meeting with them and asking them to do a thing? Did Facebook feel coerced into taking actions they wouldn't have willingly done otherwise?
Everybody who paid protection to the mafia did so "voluntarily", too.
It all comes down to the assertion made by the author:
I find it hard to see a way to run a targeted ad social media company at all if you have to make sure children aren't harmed by your product.
Don't let children use it? In TN that will be illegal Jan 1 - unless social media creates a method for parents to provide ID and opt their kids out of being blocked, I think?
Wouldn't that put the responsibility back on the parents?
The state told you XYZ was bad for your kids and that it's illegal for them to use, but then you bypassed that restriction and put the sugar back into their hands with an access-blocker-blocker.
Random wondering
Age limitations for things are pretty widespread. Of course, they can be bypassed to various degrees but, depending upon how draconian you want to be, you can presumably be seen as doing the best you reasonably can in a virtual world.
I'm not sure about video, but we are no longer in an era when manual moderation is necessary. Certainly for text, moderation for child safety could be as easy as taking the written instructions currently given to human moderators and having an LLM interpreter (only needs to output a few bits of information) do the same job.
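As a rough illustration of what that could look like (not anyone's production system): the moderation guidelines are passed to a model along with the post, and the model only has to return a few bits. The call_llm helper below is a hypothetical stand-in for whatever completion API you use.

    import json

    MODERATION_GUIDELINES = """
    Remove content that targets or sexualizes minors, gives self-harm
    instructions, or organizes harassment. Flag borderline cases for review.
    """

    def call_llm(prompt: str) -> str:
        """Hypothetical wrapper around an LLM provider; not a real API."""
        raise NotImplementedError

    def moderate(post_text: str) -> dict:
        # Reuse the same written instructions a human moderator would get.
        prompt = (
            "You are a content moderator. Apply these guidelines:\n"
            f"{MODERATION_GUIDELINES}\n"
            "Respond with JSON only: "
            '{"allow": true|false, "needs_human_review": true|false, "reason": "..."}\n\n'
            f"Post:\n{post_text}"
        )
        # The model only needs to output a few bits of information, so the
        # response is easy to validate before acting on it.
        verdict = json.loads(call_llm(prompt))
        return {
            "allow": bool(verdict.get("allow", False)),
            "needs_human_review": bool(verdict.get("needs_human_review", True)),
            "reason": str(verdict.get("reason", "")),
        }

Whether that is good enough to escape liability is, of course, the question raised in the next reply.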
That's great, but can your LLM remove everything harmful? If not, you're still liable for that one piece of content that it missed under this interpretation.
There are two questions - one is "should social media companies be globally immune from liability for any algorithmic decisions" which this case says "no". Then there is "in any given case, is the social media company guilty of the harm of which it is accused". Outcomes for that would evolve over time (and I would hope for clarifying legislation as well).
What about 0% margins? Is there actually enough money in social media to pay for moderation even with no profit?
At the scale social media companies operate at, absolutely perfect moderation with zero false negatives is unavailable at any price. Even if they had a highly trained human expert manually review every single post (which is obviously way too expensive to be viable) some bad stuff would still get through due to mistakes or laziness. Without at least some form of Section 230, the internet as we know it cannot exist.
I look at forums and social media as analogous to writing a "Letter to the Editor" to a newspaper:
In the newspaper case, you write your post, send it to the newspaper, and some editor at the newspaper decides whether or not to publish it.
In Social Media, the same thing happens, but it's just super fast and algorithmic: You write your post, send it to the Social Media site (or forum), an algorithm (or moderator) at the Social Media site decides whether or not to publish it.
I feel like it's reasonable to interpret this kind of editorial selection as "promotion" and "recommendation" of that comment, particularly if the social media company's algorithm deliberately places that content into someone's feed.
I agree.
I think if social media companies relayed communication between their users with no moderation at all, then they should be entitled to carrier protections.
As soon as they start making any moderation decisions, they are implicitly endorsing all other content, and should therefore be held responsible for it.
There are two things social media can do. Firstly, they should accurately identify their users before allowing them to post, so they can countersue that person if a post harms them, and secondly, they can moderate every post.
Everybody says this will kill social media as we know it, but I say the world will be a better place as a result.
"Social media" is a broad brush though. I operate a Mastodon instance with a few thousand users. Our content timeline algorithm is "newest on top". Our moderation is heavily tailored to the users on my instance, and if a user says something grossly out of line with our general vibe, we'll remove them. That user is free to create an account on any other server who'll have them. We're not limiting their access to Mastodon. We're saying that we don't want their stuff on our own server.
What are the legal ramifications for the many thousands of similar operators which are much closer in feel to a message board than to Facebook or Twitter? Does a server run by Republicans have to accept Communist Party USA members and their posts? Does a vegan instance have to allow beef farmers? A PlayStation fan server host pro-PC content?
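For what it's worth, a "newest on top" timeline like the one described above really is just a reverse-chronological sort with no per-user weighting. A minimal sketch (the field names are illustrative, not Mastodon's actual schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created_at: datetime
        text: str

    def home_timeline(posts: list[Post], followed: set[str], limit: int = 40) -> list[Post]:
        # Only posts from accounts the user follows, newest first.
        # No engagement scoring, no personalization, no "secret sauce".
        visible = [p for p in posts if p.author in followed]
        return sorted(visible, key=lambda p: p.created_at, reverse=True)[:limit]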
You are directly responsible for everything they say and legally liable for any damages it may cause. Or not. IANAL.
Refusal to moderate, though, is also a bias. It produces a bias where the actors who post the most have their posts seen the most. Usually these posts are Nigerian princes, Viagra vendors, and the like. Nowadays they'll also include massive quantities of LLM-generated cryptofascist propaganda (but not cryptomarxist propaganda because cryptomarxists are incompetent at propaganda). If you moderate the spam, you're biasing the site away from these groups.
You can't just pick anything and call it a "bias" - absolutely unmoderated content may not (will not) represent the median viewpoint, but it's not the hosting provider "bias" doing so. Moderating spam is also not "bias" as long as you're applying content-neutral rules for how you do that.
But what are the implications?
No more moderation? This seems bad.
No more recommendation/personalization? This could go either way, I'm also willing to see where this one goes.
No more public comment sections? Ars Technica claimed, back when Section 230 was last under fire, that this would be the result if it were ever taken away. This seems bad.
I'm not sure what will happen, I see 2 possible outcomes that are bad and one that is maybe good. At first glance this seems like bad odds.
Actually there's a fourth possibility, and that's holding Google responsible for whatever links they find for you. This is the nuclear option. If this happens, the internet will have to shut all of its American offices to get around this law.
Would Bluesky not solve this issue?
The underlying hosted service is nearly completely unmoderated and unpersonalised. It's just streams of bits and data routing. As an infrastructure provider you can scan for or limit the propagation of CSAM or DMCA'd content to some degree, but that's really about it, and even then only to a fairly limited degree; it doesn't stop other providers (or self-hosted participants) from propagating that content anyway.
Then you provide custom feed algorithms, labelling services, moderation services, etc on top of that but none of them change or control the underlying data streams. They just annotate on top or provide options to the client.
Then the user's client is the one that directly consumes all these different services on top of the base service to produce the end result.
It's a true, unbiased, Section 230-compatible protocol (under even the strictest interpretation) that the user can then optionally combine with any number of secondary services and addons to craft their personalised social media experience.
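To make the layering concrete, here is a conceptual sketch of how a client might compose those pieces. The function names are placeholders for this illustration, not the actual AT Protocol or Bluesky API.

    from typing import Callable

    Post = dict    # e.g. {"uri": ..., "author": ..., "text": ...}
    Label = dict   # e.g. {"uri": ..., "label": "spam"}

    def render_timeline(
        fetch_posts: Callable[[], list[Post]],                    # unmoderated hosting layer
        feed_rankers: list[Callable[[list[Post]], list[Post]]],   # user-chosen feed algorithms
        labelers: list[Callable[[list[Post]], list[Label]]],      # user-chosen labeling services
        hide_labels: set[str],                                     # the user's own preferences
    ) -> list[Post]:
        posts = fetch_posts()
        # Labelers only annotate; they never alter the underlying data stream.
        hidden = {
            lbl["uri"]
            for labeler in labelers
            for lbl in labeler(posts)
            if lbl["label"] in hide_labels
        }
        posts = [p for p in posts if p["uri"] not in hidden]
        # Feed algorithms are applied client-side, in whatever order the user picks.
        for rank in feed_rankers:
            posts = rank(posts)
        return posts

The point of the design is that all the editorial choices live in services the user explicitly opts into, not in the hosting layer itself.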
I always wondered why Section 230 does not have a carve-out exemption to deal with the censorship issue.
I think we'd all agree that most websites are better off with curation and moderation of some kind. If you don't like it, you are free to leave the forum, website, etc. The problem is that Big Tech fails to work in the same way, because those properties are effectively becoming the "public highways" that everyone must pass through.
This is not dissimilar from say, public utilities.
So, why not define how a tech company becomes a Big Tech "utility" and therefore cannot hide behind the 230 exemption for things it willingly does, like censorship?
Wonder no longer! It's Section 230 of the communications "decency" act, not the communication freedoms and regulations act. It doesn't talk about censorship because that wasn't in the scope of the bill. (And actually it does talk about censorship of obscene material in order to explicitly encourage it.)
This is a much needed regulation. If anything it will probably spur innovation to solve safety in algorithms.
I think of this more along the lines of preventing a factory from polluting a water supply or requiring a bank to hold minimum reserves.
HN also has an algorithm.
I'll have to read the Third Circuit's ruling in detail to figure out whether they are trying to draw a line in the sand on whether an algorithm satisfies the requirements for Section 230 protection or falls outside of it. If that's what they're doing, I wouldn't assume a priori that a site like Hacker News won't also fall afoul of the law.
In reality this will not be the case; instead it will replace the bias companies want there to be with the bias of regulators. Even given their motivation to sell users' attention, I cannot see this as an improvement. No, the result will probably be worse.
Media, generally, social or otherwise, is not unbiased. All media has bias. The human act of editing, selecting stories, framing those stories, authoring or retelling them... it's all biased.
I wish we would stop seeking unbiased media as some sort of ideal, and instead seek open biases -- tell me enough about yourself and where your biases lie, so I can make informed decisions.
This reasoning is not far off from the court's thinking: editing is speech. A For You page is edited, and is TikTok's own speech.
That said, I do agree with your meta point. Social media (hn not excluded) is a generally unpleasant place to be.
For the case in question the major problem seems to be, specifically, what content do we allow children to access.
There’s an enormous difference in the debate between what should be prohibited and what should be prohibited for children.
If it is a reckoning for social media then so be it. Social media net-net was probably a mistake.
But I doubt this gets held on appeal. Given how fickle this Supreme Court is they’ll probably overrule themselves to fit their agenda since they don’t seem to think precedent is worth a damn.
That's how I read it, too. Section 230 doesn't say you can't get in trouble for failure to moderate, it says that you can't get in trouble for moderating one thing but not something else (in other words, the government can't say, "if you moderated this, you could have moderated that"). They seem to be going back on that now.
Real freedom from censorship - you cannot be held liable for content you hosted - has never been tried. The US government got away with a lot of COVID-era soft censorship by strong-arming social media sites into suppressing content, because there were no First Amendment-style protections against that sort of soft censorship. I'd love to see that, but there's no reason to think our government is going in that direction.
It is not only biased but also biased for maximum engagement.
People come to these services for various reasons but then have this specifically biased stuff jammed down their throats in a way designed to induce specific behavior.
I personally don't understand why we don't hammer these social media sites for conducting psychological experiments without consent.
We should just ditch advertisements as a monetization model, and see what happens.
Threads is actually pretty good if you ruthlessly block people that you dislike.
Yeah, pretty much. What's not clear to me though is how non-targeted content curation, like simply "trending videos" or "related videos" on YouTube, is impacted. IMO that's not nearly as problematic and can be useful.
I think HN sees this as just more activist judges trying to overrule the will of the people (via Congress). This judge is attempting to interject his opinion on the way things should be over a law passed by the highest legislative body in the nation, as if that doesn't count. He is also doing it on very shaky ground, but I wouldn't expect anything less of the 3rd Circuit (much like the 5th).