Bluesky's stackable approach to moderation

pfraze
99 replies
1d15h

I'm on the team that implemented this, so I'm happy to answer questions. I'll give a brief technical overview.

It's broadly a system for publishing metadata on posts called "Labels". Application clients specify which labeling services they want to use in request headers. Those labels get attached to the responses, where they can then be interpreted by the client.
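For the curious, here is a minimal sketch of what that flow can look like from a client's perspective, in TypeScript. It assumes the atproto-accept-labelers request header and the getAuthorFeed response shape as I understand them; the labeler DIDs and endpoint are placeholders, so treat the details as illustrative rather than authoritative.

    // Ask the AppView to apply specific labelers, then read the labels
    // off the response. Header name and response shape follow my reading
    // of the atproto docs; the DIDs below are placeholders.
    const LABELERS = [
      "did:plc:example-bluesky-moderation", // a client's hardcoded primary labeler
      "did:plc:example-community-labeler",  // a user-added community labeler
    ];

    async function fetchFeedWithLabels(actor: string) {
      const url = new URL("https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed");
      url.searchParams.set("actor", actor);

      const res = await fetch(url, {
        headers: { "atproto-accept-labelers": LABELERS.join(",") },
      });
      const data = await res.json();

      // Each post may carry labels emitted by the requested labelers.
      for (const item of data.feed ?? []) {
        for (const label of item.post?.labels ?? []) {
          console.log(`${item.post.uri} labeled "${label.val}" by ${label.src}`);
        }
      }
    }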

This is an open system. Clients can choose which labelers they use, and while the Bluesky client hardcodes the Bluesky moderation, another client can choose a different primary Labeler. Users can then add their community labelers, which I describe below. We aim to do the majority of our moderation at that layer. There are also "infrastructure takedowns" for illegal content and network abuse, which we execute at the services layer (ie the relay).

Within the app this looks like special accounts you can subscribe to in order to get additional filters. The labels can be neutral or negative, which means they can also essentially function as user badges. Over time we'll continue to extend the system to support richer metadata and more behaviors, which can be used to do things like community notes or label-driven reply gates.

apollo_mojave
17 replies
1d11h

Seems like a really, really good way to create a really, really boring website.

ETA: Rereading this, that is probably not a very helpful HNy comment, so let me elaborate.

Maybe I am old-fashioned, but one of the things that the internet is most useful for is exploring places and ideas you would otherwise never encounter or consider. And just like taking a wooden ship to reach the North Pole, browsing around the internet comes with significant risk. But given the opportunity for personal growth and development, for change, and so on, those risks might well be worth it.

That model of the internet, as I said, is somewhat old-fashioned. Now, the internet is mostly about entertainment. Bluesky exists to keep eyeballs on phones, just like Tiktok or Instagram or whatever. Sure, Bluesky is slightly more cerebral -- but only slightly.

People are generally not entertained by things that frustrate them (generally -- notable exceptions exist), so I can understand an entertainment company like Bluesky focusing on eliminating frustrations via obsessive focus on content moderation to ensure only entertaining content reaches the user. In that sense, this labeling thing seems really useful, just like movie ratings give consumers a general idea of whether the movie is something appropriate for them.

So in that sense, wonderful for Bluesky! But I think I'll politely decline joining and stick with other platforms with different aims.

SamoyedFurFluff
7 replies
1d8h

I think I can have lively, intellectually stimulating exposure without, say, someone advocating for the mass killing of gay people. Or engage in an interesting political discussion without bad-faith conspiracy theorists shitting up the place. For example, the "chiller", which as far as I know is just designed to cool down a hot-button discussion, actually sounds super amazing for this purpose.

One of the things that frustrates me about browsing Twitter now is the constant bad-faith discussions about everything, one-off potshots that waste pixels and lead nowhere. A moderation tool that sifts that out and just gets me to the people that actually know wtf they're talking about and are engaging honestly would benefit me greatly!

apollo_mojave
3 replies
1d6h

Definitely -- but the problem isn't really "content" moderation. What it seems like you actually want is personality / tone / user moderation -- which Bluesky isn't really doing.

To analogize to real life, I have friends with whom I agree 100% on politics, but I never talk to them about it, because they're annoying when they do it. But I also have friends who disagree with me on political and other issues, but we have wonderful conversations because of the manner in which we disagree.

I don't think what Bluesky is doing will actually help with this problem. For one thing, I think its design as a "feed" basically precludes any solid sort of discussion (compared to an Internet forum). The medium kind of encourages the "one-off potshots" you mentioned, and moderation won't do much to cure it.

I could be wrong though!

pjc50
1 replies
1d5h

isn't really "content" moderation. What it seems like you actually want is personality / tone / user moderation

As McLuhan didn't say, the content is the message. That is, all those things arrive at you via "content", which is where the moderation lies.

apollo_mojave
0 replies
1d4h

All respect to McLuhan, there are obvious and meaningful differences between a message and its delivery!

BryantD
0 replies
1d4h

Composable moderation means we’re not limited to what Bluesky does, however. If I want to set up a moderation server that does tone moderation, there’s nothing stopping me from doing that.

I tend to agree about the utility of Bluesky as a medium for discussion, but that’s not what I want to use it for, so that’s fine by me.

kortilla
2 replies
1d8h

In modern US political discourse, there is no nuance in “us vs them”. Your moderators that are meant to just tag “advocating for the mass killing of gay people” will also put a “here’s why I think you should vote for Trump” post in the same category.

_heimdall
0 replies
1d6h

Any moderated system depends on trust that moderators will act fairly. If moderators begin categorizing content under labels it doesn't belong in, presumably either the moderator will be removed or the service will slowly devolve and go away.

SamoyedFurFluff
0 replies
1d7h

I strongly disagree with this position, and I believe that a rhetoric-focused moderation tool like the "chiller" example in the article will assist in my desire for intellectual discussion without dealing with inflamed nonsense.

That being said, if this affects one political group more heavy-handedly than another because their political strategy is more inflamed, I’m willing to hear less from them or only hear from the members who can communicate in a sensible manner.

ChicagoDave
2 replies
1d10h

BlueSky doesn’t care about eyeballs. It’s a non-profit enabling a common good.

comice
1 replies
1d8h

Not to be nitpicky, but it's not quite that simple. BlueSky is a Public Benefit LLC, which is explicitly for-profit but does have some other limits - so it does count for something. I can't find exactly what BlueSky's public benefit is claimed to be, though.

https://theintercept.com/2023/06/01/bluesky-owner-twitter-el...

"Liu, who answered some of my questions, did not respond when I asked for the exact language the Bluesky PBLLC used to describe its public benefit mission when incorporating the company. She also didn’t say whether the company would publish its annual benefits reports — reports that PBLLCs are required to create each year, but PBLLCs incorporated in Delaware, where Bluesky was incorporated, are not required to make them public."

arcalinea
0 replies
10h28m

Our mission statement is in the first blog post we ever published about the company. https://bsky.social/about/blog/2-7-2022-overview

"Our mission is to develop and drive large-scale adoption of technologies for open and decentralized public conversation."

mplewis
1 replies
1d11h

Why do you think so?

apollo_mojave
0 replies
1d11h

I just edited my comment; see above.

altacc
1 replies
1d8h

The internet isn't one size fits all, all the time. Most people don't want to be challenged all the time and everywhere. Sometimes you want to watch a challenging documentary about socioeconomics in 17th century Poland and other times you want to watch Friends. I see a good use case here for BlueSky allowing users to vary moderation & use curated lists to separate interests & moods.

apollo_mojave
0 replies
1d6h

This is true! I'm not sure I 100% agree with your analogy, but your basic point is of course correct.

PaulHoule
1 replies
1d4h

What I want is a filter for angry posts. Social media exposes me to a wider cross section than I get in person and there is really a limit to the amount of distress I can absorb.

glenstein
0 replies
1d1h

Right, and I think you've zeroed in on what I feel is the most important point here. Somehow, for a lot of people, "diversity of opinions" and "angry posts subject to moderation" are more or less the same thing. For me, those are distinct things, and I don't think diversity of opinions, at least not on things of interest to me (philosophy, astronomy, etc.), is in the crosshairs. Of course I feel that way because I feel like I'm right about something, and that something is the idea that diversity of opinion has a lot more to it than whether something is or isn't moderated.

IncreasePosts
14 replies
1d14h

How will content that is illegal in some jurisdictions and legal in others be handled? Is there a presumed default jurisdiction, like California, or something?

tlamponi
7 replies
1d11h

Their stackable moderation system might actually allow one to implement this relatively easily.

Add a moderation channel per country and let clients apply it depending on location/settings. It's naturally not perfect, but as one can just travel to other countries and get their (potentially less restricted) view, or even more simply use a VPN, it's as good as basically any other such censorship measure.
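A sketch of how a client could layer this, with an entirely hypothetical country-to-labeler mapping:

    // Hypothetical mapping from jurisdiction to a country-specific labeler DID.
    // Nothing like this ships today; it's just the shape described above.
    const JURISDICTION_LABELERS: Record<string, string> = {
      DE: "did:plc:example-de-legal-labeler",
      FR: "did:plc:example-fr-legal-labeler",
    };

    function labelersFor(countryCode: string, userChosen: string[]): string[] {
      const mandatory = JURISDICTION_LABELERS[countryCode];
      // Apply the jurisdiction's labeler (if any) on top of the user's own picks.
      return mandatory ? [mandatory, ...userChosen] : userChosen;
    }

    // e.g. a client in Germany: labelersFor("DE", ["did:plc:my-favorite-labeler"])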

supriyo-biswas
6 replies
1d7h

This still wouldn't work, though. If someone uploads CSAM and it's distributed to multiple users in a jurisdiction where it's banned (which is virtually all of them) but only hidden by the moderation filters, then Bluesky would still be in a lot of pain from distributing said material.

Also, filters which are optional on the user’s part can’t really be counted as moderation.

catapart
3 replies
1d5h

From the root comment:

There are also "infrastructure takedowns" for illegal content and network abuse, which we execute at the services layer (ie the relay).

My understanding, here, is that any app has the ability to shut down entire accounts from being able to provide content for that app. And my expectation is that states will have laws that say "operators of an app must ensure that they don't provide illegal material" - at least to the extent of CSAM. So you have state motivation for app-runners to moderate illegal content on their app, and you have app-level mechanisms for shutting down content. And while it can still be hosted on whatever relay was hosting it to start with (if that isn't the one that shut down the content), I would be surprised if sharing that content to another relay didn't give away a ton of information that a person doing illegal activities wouldn't necessarily want published. Put more simply: it's unlikely that, if I have to shut down some CSAM coming from your relay, I can't also turn that relay data over to the authorities. Meaning you have a pretty strong incentive not to actually share your CSAM content to any law-abiding apps.

numpad0
2 replies
1d1h

So to cut all the sugarcoating off: the problem isn't criminals doing knowingly criminal things. It's Japanese users disproportionately flooding Twitter-style social media and absolutely hammering the system with a bunch of risqué selfies that don't look adult to Europeans, and anime-style art that doesn't involve children in the making - neither of which qualifies as CSAM by local laws, and therefore the potential legal consequences are not understandable to the offending Japanese users in a way that would change majority behaviors. It is simply as legal as drinking at the legal age.

This Japanese flood casually nears or exceeds 50% of content by volume and is a specifically Japanese phenomenon; it does not generalize to other Asian cultures or Sinosphere languages[1] - all the others are easily 1% or less, proportionally, relative to English. It also isn't happening with Facebook, but it is with Mastodon.

To be even more frank, everyone should just set up a Japanese containment server with an isolated IdP, get Yahoo! Japan or NTT Corp to fund it, have it monetized via phone contract or something, and that could solve a huge bulk of problems with microblogging moderation. Then everyone could go back to weeding out those few actual pedophiles, smugglers, and casino spams, occasionally reinstating not-too-radical political activists.

Whether "outside" users should be eligible for signup with such an isolated system is a separate problem, but that will be foreign crime anyway and should not bother the main-branch operators that cater to the rest of the world that CAN unite.

1: https://bsky.app/profile/jaz.bsky.social/post/3klwzzdbvi22t

2: worse version of [1]: https://images.ctfassets.net/vfkpgemp7ek3/5kYcWXcFUYSLBAEkrS...

wmf
1 replies
19h16m

It seems like Bluesky's architecture is ideal for this case. Label it, apps don't show it by default, and let people opt-in to seeing it.

numpad0
0 replies
2h44m

AIUI the Bluesky team has a lot of ex-Twitter people, who'd fought this problem for years, so it's very plausible that this architecture is as good as it gets without departing from their mission (of making a locked-open global microblogging social media).

_heimdall
1 replies
1d6h

Wouldn't Bluesky be able to have an admin rule that hides all content tagged with labels that are illegal in Bluesky's own jurisdiction?

Doxin
0 replies
8h6m

The problem is that hiding content isn't enough. It's illegal to even have the content.

pfraze
5 replies
1d14h

I’m unsure how it will play out in practice. I think it’s possible that different infra could wind up being deployed in jurisdictions that differ too significantly. Certainly that could happen outside of Bluesky.

Bluesky itself is US-based.

westhanover
2 replies
1d14h

So I guess when Bluesky gets a take down from the Indian government the plan is just for you guys not to go to India anymore?

perihelions
1 replies
1d13h

Let's be realistic. The only question is whether they'll censor dissident speech globally for the world audience—or merely georestrict it to individual nations falling to autocracy.

Coming from Silicon Valley companies, all the federation stuff isn't a sincerely-intended idea to promote free-society values. It's an escape hatch. It's a free-speech zone to divert unwanted activists onto—away from the company and its brand, and away from its users—where they can blow off steam quietly. And it's something their PR can point to to deflect accountability for the awful things they will definitely end up doing in their uncompromising pursuit of market share. ("Oh, we're not really censoring that; it's only censored on the main corporate instance, which is the only one people use.")

PaulHoule
0 replies
1d4h

With distributed moderation the repressive government can, in principle, publish its own labels, but by publishing labels they make public what it is they want to suppress.

Federated social media could be used for very bad purposes and it would be impossible to stop. Consider this situation

https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...

In the centralized case we can point to Facebook as a responsible entity. The thing is, Facebook later got kicked out of Myanmar not as accountability for those crimes but because they were insufficiently pliant to the government. In the future, a country like that might just run its own Mastodon network where it is all genocide, all the time.

samstave
0 replies
1d12h

Seems like following the EU's rules - and using the below to have tags that could be placed on a post as per the EU categories?

--

https://www.europarl.europa.eu/RegData/etudes/ATAG/2020/6581...

As you can use their guidelines:

In the European Union (EU), there are several types of social media content that are considered illegal.

---

* Incitement to Terrorism: Any content that encourages or promotes terrorist acts, violence, or extremism is prohibited.

* Illegal Hate Speech: Social media posts that spread hate based on race, ethnicity, religion, gender, sexual orientation, or other protected characteristics are not allowed.

* Child Sexual Abuse Material: Sharing, distributing, or creating content related to child sexual abuse is strictly illegal.

* Infringements of Intellectual Property Rights: Posting copyrighted material without proper authorization violates intellectual property rights.

* Consumer Protection Violations: Misleading advertisements, scams, or fraudulent content that harms consumers are prohibited.

--

These rules are further strengthened by stricter regulations for four specific types of content, which have been harmonized at the EU level:

* Counter-Terrorism Directive: Addresses terrorist content.

* Child Sexual Abuse and Exploitation Directive: Focuses on combating child sexual abuse material.

* Counter-Racism Framework: Aims to prevent and combat racism and xenophobia.

* Copyright in Digital Single Market Directive: Deals with copyright infringement online.

---

You could ideally put the EU illegal categories in a drop-down, old Slashdot style - but each mod who makes the same selection from the drop-down adds a point to the tag, and the post is removed once the points cross some threshold.

This could be different for each region - and a post could be flagged with points for each region... so have a region selection, and the illegal-tag list sets to that region. A post could maybe be tagged with multiple regions' infractions based on where the mod sits?

Also - you can keep metrics on what infraction types all flagged posts were flagged for - plus you can move them to the "if law enforcement needs this post" S3 bucket - based on whatever time period the laws require.
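A rough sketch of that point-threshold mechanic, with invented categories and an arbitrary threshold:

    // Each mod vote for an illegal-content category adds a point per
    // (post, region, category); past a threshold the post is hidden in
    // that region. Categories and threshold are made up.
    type Region = "EU" | "US";
    type Category = "terrorism" | "hate-speech" | "csam" | "ip" | "consumer";

    const THRESHOLD = 3;
    const points = new Map<string, number>(); // key: `${postId}:${region}:${category}`

    function flag(postId: string, region: Region, category: Category): boolean {
      const key = `${postId}:${region}:${category}`;
      const n = (points.get(key) ?? 0) + 1;
      points.set(key, n);
      return n >= THRESHOLD; // true => hide the post in this region
    }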

BryantD
0 replies
1d2h

Putting aside the social issues for a moment, I can imagine a government deciding to run its own moderation server and mandating use of that server in the country in question. I'd prefer that Bluesky not enable that by geolocking users of the official client to individual moderation servers, though.

jdprgm
9 replies
1d14h

I have been loosely following Bluesky for a while and have read some blog posts now, but haven't delved super deep. Can you expand on the "infrastructure takedowns"? Does this still affect third-party clients? I am trying to understand to what degree this is a point of centralization open to moderation abuse, versus Bluesky acting as a protocol where, even if we really want to, we can't take something down other than off our own client.

pfraze
8 replies
1d14h

The network can be reduced to three primary roles: data servers, the aggregation infrastructure, and the application clients. Anybody can operate any of these, but generally the aggregation infra is high scale (and therefore expensive).

So you can have anyone fulfilling these roles. At present there are somewhere around 60 data servers, with one large one we run; one aggregation infra; and probably around 10 actively developed clients. We hope to see all of these roles expand over time, but a likely stable future will see about as many aggregation infrastructures as the Web has search engines.

When we say an infrastructure takedown, we mean off the aggregator and the data server we run. This is high impact but not total. The user could migrate to another data server and then use another infra to persist. If we ever fail (on policy, as a business, etc) there is essentially a pathway for people to displace us.

Vinnl
5 replies
1d11h

Why would anyone run their own aggregator? (i.e. if you run a search engine, you can show contextual ads to recoup your investment and then some.)

Sorry about going off-topic, I realise it's only tangentially about labelling.

pfraze
4 replies
1d11h

We'll let you know when we figure out why we're doing it.

Vinnl
3 replies
1d6h

I guess I should have asked about anyone else :) I know why you would - you're planning to sell services around Bluesky [0], and thus need Bluesky itself to be working.

But if it's already working (because you're running an aggregator), there doesn't seem much reason for anyone else to run one? In other words, isn't there a significant risk that there will be fewer aggregators than there are search engines, i.e. just a single one?

[0] https://bsky.social/about/blog/7-05-2023-business-plan

BryantD
2 replies
1d2h

I think this is a really good question. Let me offer one possible answer:

It might not be necessary or useful to have multiple aggregators right now. However, I do feel better knowing that if Bluesky the company goes under or changes to a degree where I'm not happy with their decisions, it's possible for someone to stand up a second aggregator.

For that matter, if someone's a free speech absolutist and if they care enough about it to spend the money, they could stand up an aggregator right now with more permissive standards.

jakebsky
1 replies
1d1h

Even at scale, running a Relay should be well within the means of a motivated individual or org that is willing to spend hundreds of dollars per month. Right now it'd cost just tens of dollars per month to run a whole network Relay. Some people are already doing this I believe.

Running an "AppView" (an atproto application aggregator/indexer/API server) is generally an order of magnitude more expensive and complicated. But still not beyond the reach of a user coop, non-profit, or small startup.

So these services should all be well within the capabilities of at least multiple companies operating in the atproto ecosystem as it scales.

And in many cases it should make good sense for these companies to do this since it will improve their performance by colocating their services and enable them to do things like schedule their own maintenance windows, etc.

Vinnl
0 replies
22h40m

Thanks! "It costs thousands of dollars a month, which is feasible enough that people will find a way" sounds pretty reasonable.

bobajeff
1 replies
1d12h

Would it be possible to do a p2p aggregator (like YaCy, but for the AT Protocol)?

pfraze
0 replies
1d12h

It might be worth trying, but essentially what you're trying to do is cost/load sharing on the aggregation system. You could do that by computing indexes and sharing them around, to reduce some amount of required compute, and I suspect we'll be doing things like that. (For example, having the precomputed follow graph index as a separate dataset.) However if you're trying to replace the full operational system, I think the only kind of load sharing that could work would require federated queries, which I consider a pretty unproven concept.

baq
8 replies
1d6h

I need to moderate the moderators.

Not in an 'I can ban these moderators from moderating my instance' way. I need a metamoderation mechanism. I need to see how good moderators are to establish trust, and when a moderator is taken over by a hostile actor I need to see its score tank.

Do you have something like this on the roadmap?

jmull
7 replies
1d6h

It sounds like, perhaps, the moderators only label the content. Then it’s up to your own client (and how you configure it) to filter the content, based on those labels.

If I’ve got that right, then a client could be created that, e.g., displays labels from different moderators rather than filter the content. In fact, I’d guess most clients will have that mode.
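A sketch of that split, with made-up label values: the labeler only emits labels, and the user's per-label preferences decide whether each one hides, blurs, badges, or does nothing.

    type LabelAction = "hide" | "warn" | "show-badge" | "ignore";

    // Per-user, per-label settings; the label values here are invented.
    const myPrefs: Record<string, LabelAction> = {
      "spam": "hide",
      "graphic-media": "warn",
      "satire": "show-badge", // another client might hide this instead
    };

    function render(post: { text: string; labels: string[] }) {
      for (const label of post.labels) {
        switch (myPrefs[label] ?? "ignore") {
          case "hide": return null;                              // filtered out entirely
          case "warn": return { blurred: true, reason: label, post };
          case "show-badge": return { badge: label, post };
        }
      }
      return { post }; // unlabeled, or all labels ignored
    }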

baq
3 replies
1d5h

I need labels on labels and labels on labellers. I also need labellers for labellers. With that, I can create a network of labellers which can keep each other honest with enough distribution; think DNS root servers but which constantly check if every other root server is still reasonably trustworthy to be authoritative.

Then I need users who (hopefully) vote on/rate/report labels, which is its own problem.
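Nothing like this exists in the protocol today, but a toy version of "labels on labellers" could be as simple as labellers rating each other and a client deriving a trust score per labeller:

    type Rating = { from: string; about: string; trustworthy: boolean };

    // Fraction of peers that currently vouch for each labeller.
    function trustScores(ratings: Rating[]): Map<string, number> {
      const tally = new Map<string, { up: number; total: number }>();
      for (const r of ratings) {
        const t = tally.get(r.about) ?? { up: 0, total: 0 };
        t.total += 1;
        if (r.trustworthy) t.up += 1;
        tally.set(r.about, t);
      }
      const scores = new Map<string, number>();
      for (const [who, t] of tally) scores.set(who, t.up / t.total);
      return scores;
    }

    // A client might auto-unsubscribe from any labeller whose score tanks below 0.5.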

wmf
1 replies
19h21m

You sure do "need" a lot of things.

baq
0 replies
11h3m

Yes.

We had all this on Slashdot before quite a few folks here were born.

PaulHoule
0 replies
1d4h

Practically, labels are probabilistic. Different people who are trained on how to label will label most things the same way but will disagree about some things. I know my judgement in the morning might not be the same as in the afternoon. If you had a lot of people making judgements, you could say that "75% of reviewers think this is a scam".

But "lots of reviewers" could be tough. Look at the "Spider Shield" example: if Spider Shield is going to block 95% of spider images, they're going to have to look at 95% of the content that I see, before I see it. This is a big ask if the people doing the labeling hate spiders! (Someone who values a clean feed might want to have a time-delayed feed)

It seems also that the labels themselves would become a thing for people to argue about, particularly if they get attached at the 50% point of the visibility of a post as opposed the first or last 2%.

Something based on machine learning is a more realistic strategy in 2024. Today anti-spiders could make a pretty good anti-spider model with 5000 or so spider images. The tools would look a bit like what Bluesky is offering, but instead of attaching public tags to images, you would publish a model. You could use standardized embeddings for images and text and let people publish classical ML models out of a library. I am looking at one of my old recommender models right now; it is 1kB serialized, and a better model might be 5kB. Maybe every two years they update the embeddings and you retrain.
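To make the size claim concrete: over a standardized embedding, a classifier can be just a weight vector and a bias, small enough to distribute the way blocklists are distributed today. A sketch, assuming some shared embedding is available:

    // A logistic classifier over an image embedding; ~1-5kB serialized.
    type TinyModel = { weights: number[]; bias: number };

    function spiderScore(embedding: number[], model: TinyModel): number {
      let z = model.bias;
      for (let i = 0; i < embedding.length; i++) z += embedding[i] * model.weights[i];
      return 1 / (1 + Math.exp(-z)); // probability the image contains a spider
    }

    // Client-side policy: blur anything the shared model scores above 0.9.
    function shouldBlur(embedding: number[], model: TinyModel): boolean {
      return spiderScore(embedding, model) > 0.9;
    }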

catapart
1 replies
1d5h

That's my understanding, too. And since it's underpinned by ATProto, rather than being coupled with Bluesky, "moderator score" apps could be built that independently track how 'useful' the labels are (and, by extension, the labeling services), subjective to each individual app's preferences. Then users could rely on moderation rankings from their favorite moderation ranking app to determine which moderators to use and when to switch if the quality tanks.

steveklabnik
0 replies
1d4h

Yes! This is why I’m so bullish on atproto as a platform. This stuff operates at an infrastructure level, meaning you can build other applications to use it too.

StefanBatory
0 replies
1d4h

Wouldn't that be deeply problematic in case of illegal content being posted?

sedatk
6 replies
1d8h

Reddit's subreddit structure and the underlying moderation system is quite scalable: site admins only deal with the things that subreddit moderators have failed to. And, in case they keep failing, admins can shut down the subreddit or demote moderators responsible for it. The work is clearly split between admins and mods, and mods only work on the content they're interested in.

Now, with this model, I don't see such a scalable structure. You're not really offloading any work to moderation, and also, all mods will be working on all of the content. No subreddit-like boundaries to reduce the overlaps. I know mods can only work on certain feeds, but feeds overlap too.

It's also impossible to scale up mod power with this model when it's needded: For example, Reddit mods can temporarily lock posts for comments, lock subreddit, quarantine subreddit to deal with varying degrees of moderation demand. It's impossible to have that here because there can't be a single authority to control the content flow.

How do you plan to address these scalability and efficiency issues?

dghughes
3 replies
1d8h

Reddit's subreddit structure and the underlying moderation system is quite scalable: site admins only deal with the things that subreddit moderators have failed to.

All that happens is mods just lock any post with any hint of a problem. It's become, or rather started out as being, ridiculous. They just lock instead of moderate.

supriyo-biswas
1 replies
1d7h

The strict moderation you hinted at is quite okay from a legal perspective; it's just suboptimal for community building and healthy conversations.

friendzis
0 replies
1d5h

it’s just suboptimal for community building and healthy conversations

By design, Reddit's moderation model is a tool incentivizing unhealthy, one-sided conversations and echo-chamber-y communities. Reddit moderators have to go the extra mile to avoid those.

sedatk
0 replies
23h59m

True. Mod power is abused a lot. But that’s a different problem, not necessarily mutually exclusive with scaling.

mcherm
0 replies
1d6h

all mods will be working on all of the content. No subreddit-like boundaries to reduce the overlaps

Not necessarily, that's up to the moderator.

Today, I subscribe to the #LawSky and AppellateSky feeds because I am interested in legal issues. Sometimes these feeds have irrelevant material: either posts that happened to use the "" emoji for some non-legal reason, or just people chatting about their political opinions on some legal case.

Someone could offer to label JUST the posts in these feeds with a "NotLegalTopic" tag and I would find that filter quite useful.

h0l0cube
0 replies
1d6h

You're not really offloading any work to moderation

I think everyone at some stage has been burnt by top-down moderation (e.g., overzealous mods, brigading, account suspensions, subreddit shutdowns, etc.), and generally everyone finds it lacking because what's sensitive to one person might be interesting to another. Community-driven moderation liberalizes this model and allows people to live in whatever bubble they want to (or none at all). This kind of nit-picky moderation can be offloaded in this way, but it doesn't obviate top-down moderation completely (e.g., illegal content, incitement to violence, disinformation, etc.). Though a scoring system could be used for useful labellers, and global labels could be automated according to consensus (e.g., many high-rated labellers signalling disinformation on particular posts).

matesz
6 replies
1d12h

Just learned about Bluesky's labelling approach. The first thing that comes to mind is: who is responsible for the content on the platform - Bluesky? Labellers?

For example, some rogue user starts posting offensive content about other users, on the brink of breaking the law. Let's say these other users mention it to labellers, who this time refuse to take this content down.

Can you tell me what will happen in such a scenario?

pests
2 replies
1d11h

Can't you start your own labeler and just agree to take it down? Then others can subscribe to your labeler to avoid those posts?

matesz
1 replies
1d8h

What if others won't subscribe to my labeler? Why would they subscribe to it in the first place?

pests
0 replies
23h29m

Why would they subscribe to it in the first place?

They also don't want to see offensive content?

What if others won't subscribe to my labeler?

Then maybe your labeler is not providing any value? Or you need to market it more?

xoa
1 replies
1d7h

For example, some rogue user starts posting offensive content about other users, on the brink of breaking the law. Let's say these other users mention it to labellers, who this time refuse to take this content down.

Under US law, the user posting the content is the only one legally responsible for it. Someone hosting the content could be required to take it down by court order or other legal process (like under the DMCA safe harbor provisions) if subject to US jurisdiction. Bluesky is, so they'd have a process, same as anyone else in the US, and of course could make their own moderation decisions regardless on top. But the protocol allows third parties to take on any role in the system technically (though certain infra roles sound like they'd be quite expensive to run as a practical matter), so they could be subject to different law. Foreign judgments are not enforceable in the US if they don't meet the same First Amendment bar a domestic one would have to.

Labellers, from the description, would never have any legal responsibility in the US, and they do not "take content down"; they're only adding speech (meta-information: their opinion on what applies to a given post) on top, best-effort. Clients and servers can then use the labels to decide what to show, or not.

At any rate "on the brink of breaking the law" would mean nothing, legally. And "offensive" is not a legal category either. Bluesky or anyone else would be free to take it down anyway, there is zero restriction on them doing whatever they want and on the contrary that itself is protected speech. But they would be equally free to not do so, and if someone believed it actually broke one of the very limited categories of restrictions on free speech and was worth the trouble they'd have to go to court over it.

matesz
0 replies
1d6h

Okay, this is clear now. Thanks!

willsoon
0 replies
1d12h

labelling approach

I see what you did there.

jchw
6 replies
1d3h

Honestly, something here doesn't quite sit right with me.

From the article:

No single company can get online safety right for every country, culture, and community in the world.

From this post:

There are also "infrastructure takedowns" for illegal content and network abuse, which we execute at the services layer (ie the relay).

If there's really no point in running relays other than to keep the network online, and running relays is expensive and hard work that can't really be funded by individuals, then it seems likely that there will be one relay forever. If that turns out to be true, then it seems like we really are stuck with one set of views on morality and legality. This is hardly a theoretical concern when it comes to the Japanese users flooding Bluesky largely out of dissatisfaction with Twitter's moderation of 'obscene' artworks.

kps
5 replies
1d1h

Before the Elon event (and maybe again now), Pawoo was by far the most active Mastodon instance, and there's an almost complete partition between ‘Western’ and ‘Eastern’ Mastodon networks.

jchw
4 replies
23h59m

Yeah, this issue continues to cause a lot of strife across the Fediverse. Misskey.io and mstdn.jp are both extremely popular (presumably second only to Mastodon.social), and obviously these Japanese sites follow Japanese law and norms with regard to obscenity.

I certainly am not saying that server operators should feel obliged to host content they do not like, especially if they believe it is illegal or immoral. After all, a huge draw of the Fediverse is the fact that you get to choose, right? Sure, personally I think all obscenity law is weapons-grade bullshit regardless of how despicable the subject matter may be, but also, server operators shouldn't feel pressure to compromise their ideals, attract a crowd of people they simply don't like, or (of course) run the risk of breaking the law in their jurisdiction. So what happens on the Fediverse seems like the right way for things to go, even if it is harmful to the federation in the short term.

But that's kind of the double-edged sword. You either have centralization, where someone decrees "the ultimate line", or you don't. With Bluesky, there's a possibility that it will wind up being decentralized properly, but it could wind up being de facto centralized even if they uphold their promises, and I think that strongly devalues the benefits of decentralization where they count most. Today, there is in fact one company that holds the line, and it's unclear if that's going to meaningfully change.

There are some aspects of AT proto and Bluesky that I think are extremely cool: an example is identity; identity in AT proto is MUCH better than it is in ActivityPub right now. However, I'm not surprised they are not going to acknowledge this problem. Just know that they are 100% aware of it, and my opinion is that they really do want to find an answer that won't piss everyone off, but they also probably want to avoid the perception that Bluesky is a haven for degenerates, especially early on when there are fewer network effects and they desperately need to appear "better" than what is already out there. Unfortunately, I can only conclude that their strategy is most likely the best one for the future of their network, but it still rubs me the wrong way.

wmf
3 replies
19h7m

If Japanese users aren't willing to run their own relay I don't think we can blame Bluesky for that. Ultimately decentralization devolves some level of responsibility onto users.

jchw
2 replies
17h16m

Actually, I would like you to genuinely reconsider this thought: we can definitely blame Bluesky; in fact, we can't really blame anyone else, certainly not users. The whole thing was their idea, after all, so if it doesn't work in practice, I mean, who else's fault is it? It's not that people are lazy or unwilling; after all, there are plenty of large Japanese Fediverse instances - they're some of the largest in the entire federation! Right now, as the service is exploding in popularity... you can't run a relay.

IMO, the point is: all of this architecture and the headache of decentralizing it is a waste of time if the most critical part for actually ensuring user sovereignty on a network winds up being centralized anyway. And based on the replies so far, what I'm getting is that nobody at Bluesky has a clear idea why anyone else would ever run a relay other than for the good of society. That's nice, but none of the other entities that might run relays have the same stake in it as Bluesky does. Bluesky is a company that presumably needs to eventually make money to stay afloat, but they basically built out the entire protocol, will run the app, and all of that jazz. They are the face of this. The users that are on Bluesky right now are by and large going through their infrastructure; presumably millions of users. Why in the world would someone else run a relay in this condition? You'd essentially be doing it to support a network that is vastly controlled by one single entity, and there is no obvious reason why it will stop being vastly controlled by one single entity, either.

It is a problem that the vast majority of users are on a small number of Mastodon servers, but with Bluesky there's little reason to believe that a vast majority of users will ever not be using Bluesky's infrastructure. If we're going to compare it to e-mail, then this is like if the only e-mail provider you could use for years was GMail, and then slowly other providers were allowed to get whitelisted in. It is nice that the design of the network is so amenable to the potential of not being controlled by one entity, but there's the rub: I don't think they actually have any incentive to make serious efforts toward fixing that problem. I don't know if they would even with substantial pressure from users to do so. Right now their messaging is basically "there might be a few entities that would run relays", suggesting that it is basically out of reach for anyone not at the scale of Google.

One could easily misread this thread to be about fringe content or, more charitably, censorship, but I think the most accurate way to sum up my concerns is this: I don't think Bluesky's own incentives, nor the incentives that they are laying out with this design, are particularly fantastic for the health of the network they are building. It's not impossible that things could wind up working out anyway, but I worry that this is not actually going to be a sustainable system, and that if there isn't substantially more thought put into this entire operation, it may basically wind up being just a very elaborate and overly complicated way to re-implement the exact same problems that centralization has.

That's not to say I believe they are working in bad faith or that they don't care or anything like that. Like I said, I assume they are actively trying to figure out how to make everything work without pissing everybody off. And I'd love to be highly optimistic and say that it will all work out, but I have my doubts. The silence is deafening.

wmf
1 replies
16h48m

nobody at Bluesky has a clear idea why anyone else would ever run a relay other than for the good of society ... I don't think Bluesky's own incentives, nor the incentives that they are laying out with this design, are particularly fantastic for the health of the network they are building.

I fully agree with this but I'm giving Bluesky a lot of benefit of the doubt because I think Mastodon has even worse incentives and I can't come up with anything better. (I can come up with different tradeoffs but they always leave somebody pissed off.)

Getting back to the Japan problem, I think "content we care about is blocked on the main relay" should be enough incentive to run an alternate relay and maybe a patched client. Obviously there's a bootstrapping issue there but you could say the same about Bluesky vs. Mastodon and yet here we are millions of users later.

jchw
0 replies
16h30m

I do like Bluesky's take on identity in particular. Mastodon is pretty catastrophic in the way that identity works, and AT proto offers some really compelling ideas here. That said, I don't know. I desperately hope for Bluesky to succeed, because some of the ideas would be great in an ideal world where they've got it all figured out, but I hope that the resources necessary to run relays wind up being a lot less than they're made out to be. Otherwise, if it really does require massive amounts of resources, I just can't see anyone choosing to do this. For Japan, I think it is much more likely they would pour resources into domestic SNS services like Misskey.io, as Skeb, Inc. recently did.

fallingknife
5 replies
1d4h

and while the Bluesky client hardcodes the Bluesky moderation

And what if I don't like your moderation? Can it be overruled, or is this just a system for people who want stricter moderation, not lighter?

kitd
3 replies
1d4h

First, we've built our own moderation team dedicated to providing around-the-clock coverage to uphold our community guidelines.

A partial answer* is that the Bluesky moderation enforces their community standards, so if you don't like that, then the platform may not be for you.

* - because, yes, this does still have a single entity in fundamental control. But I presume their focus is on the basic threshold (i.e. legality under US law) of content.

fallingknife
1 replies
1d4h

I don't presume that when I hear terms like "moderation" or "community standards." I think he would have said "Bluesky automatically removes illegal content..." if that were true.

pfraze
0 replies
1d2h

You have to first distinguish between infra moderation and app moderation. Infra is the aggregation layer (the relay and appview). App is the application client.

The application client sets its labelers and filters entirely client side. Our client hardcodes our labeler then lets users choose more. Other clients could choose otherwise.

The infra layer handles illegal content takedowns. We do that, and we try to keep it minimally scoped. Other people can run their own infra, which is outside of our control and has its own infra moderation.

metabagel
0 replies
1d4h

A partial answer* is that the Bluesky moderation enforces their community standards, so if you don't like that, then the platform may not be for you.

This article is about how different hosting instances can customize the moderation which is offered, and how the user can choose among the offered moderation settings. The whole point is to allow different moderation implementations, because different communities may have different needs.

dpassens
0 replies
1d4h

Sounds like the answer is in the next part of that sentence:

another client can choose a different primary Labeler

So you can overrule the moderation, but not if you use the official client.

cornholio
5 replies
1d15h

Moderation does not sound like an additive function, i.e. multiple moderation *filters* that add up to the final experience. That seems an almost Usenet-like interaction, where each user has their own schizoid killfile and the default experience is bad.

Rather, moderation is a cohesive whole that defines the direction of the community, the same rules and actions apply to everybody.

pfraze
3 replies
1d15h

This was a very active topic of debate within the team. We ended up establishing the idea of "jurisdictions" to talk about it. If a moderation decision is universal to all viewers, we'd say that it's under a specific jurisdiction of a moderator. This is how a subreddit functions, with Reddit being the toplevel jurisdiction and the subreddits acting as child jurisdictions.

The model of labelers as we're releasing in this first iteration is, as you say, an additive filtration system. They are "jurisdictionless." We chose this model because Bluesky isn't (presently) segmented into communities like Reddit is, and so we felt this was the right way to introduce things.

That said, along the way we settled on a notion of the "user's personal jurisdiction," meaning essentially that you have certain rights to universally control your own interactions. Blocking is essentially under this umbrella, as are thread gates (who can reply). What's then interesting is that you can enlist others to help run your personal jurisdiction. Blocklists are an example of that which we have now: you can subscribe to blocks created by other people.

This is why I'm interested in integrating labels into threadgates, and also exploring account-wide gates that can get driven by labels. Because then it does enable these labelers to apply uniform rules and actions to those who request it. In a way, it's a kind of dynamic subreddit that fits the social model.
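To sketch that idea: today's threadgate rules cover things like "mentioned users" and "people I follow"; a label-driven rule might look like the hypothetical variant below. (The labelRule type is not a real lexicon, just an illustration of the shape.)

    type ThreadgateRule =
      | { $type: "app.bsky.feed.threadgate#mentionRule" }
      | { $type: "app.bsky.feed.threadgate#followingRule" }
      // Hypothetical: gate replies based on labels from a chosen labeler.
      | { $type: "example.hypothetical#labelRule"; labeler: string; denyLabels: string[] };

    const gate = {
      $type: "app.bsky.feed.threadgate",
      post: "at://did:plc:example-author/app.bsky.feed.post/examplekey", // placeholder URI
      allow: [
        { $type: "app.bsky.feed.threadgate#followingRule" },
        {
          $type: "example.hypothetical#labelRule",
          labeler: "did:plc:example-chiller", // hypothetical "chiller" labeler
          denyLabels: ["inflamed"],           // replies carrying this label get gated out
        },
      ] as ThreadgateRule[],
      createdAt: new Date().toISOString(),
    };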

_heimdall
1 replies
1d6h

This sounds very much like a federal system of government like the US, with each level of jurisdiction applying their own rules.

For Bluesky, by default, does power lie with the highest authority or the lowest authority (i.e. the user)?

The US model was originally designed bottom up, with power having to be granted to the higher authority. Admittedly we've effectively abandoned this today.

pfraze
0 replies
1d3h

I’m a huge nerd about this so I enjoy this question. The logic of the system is that the protocol offers users rights by architecture. They include rights to an identity, to speak, and to choose your providers. The right to control reach is controlled by the applications, and they apply business logic to construct the threads and feeds. So: users are independently publishing structured data (a reply to a post) and applications are interpreting that data into app experiences.

As a consequence, the personal jurisdiction model as I described is a construct of the application. It can ultimately choose the logic of what gets shown. Therefore it’s top down in this case. What an application can’t do is remove a user’s identity or posts from the internet. And consequently other applications can make other decisions.

All decentralized networks are an expression of a political philosophy.

ekidd
0 replies
1d7h

That said, along the way we settled on a notion of the "user's personal jurisdiction," meaning essentially that you have certain rights to universally control your own interactions. Blocking is essentially under this umbrella, as are thread gates (who can reply).

As a user, "personal jurisdiction" is a critical feature to me. If I start a thread, I want to maintain some minimal level of agreeable behavior in the responses associated with my original post.

It's sort of like online newspaper comments sections. Many unmoderated comment sections were once full of 20 disagreeable trolls who drove everyone else away. The bad drives out the good, and trolls accumulate over time. This doesn't even need to be ideological—I knew a semi-famous tech personality that had a handful of personal stalkers who infested every open comment section. Many newspapers fixed this by disabling comments or actually hiring moderators.

I won't post to a service if the average reader of my posts will see a pile of nasty, unpleasant comments immediately following each of my posts.

This is why I mostly prefer blogs, and moderated community forums. Smaller, well-moderated subreddits are great, as are private Discords.

saurik
0 replies
1d14h

Once you support delegating your killfile to other people it no longer functions the same as each user having their own. And FWIW, as an example, here on Hacker News, many of us have showdead turned on all the time and so while I am aware of the moderation put in place by the site, I actually see everything.

Also: frankly, if there were someone willing to put a lot of effort into moderating stuff Hacker News doesn't--stuff like people asking questions you can answer in the article or via Google--I would opt into that as I find it wastes my time to see that stuff.

And with delegation of moderation, I think it will start to feel like people voting for rulesets more than a bunch of eclectic chaos; if a lot of people agree about some rule that should exist, you will have to decide when you make a post how many people you are willing to lose in your potential audience.

didntcheck
3 replies
1d9h

This sounds like a good approach. Pretty much exactly this "opt-in/out, pluggable trust moderation" is something I'd thought about a number of times over the years, yet I'd never come across the relatively simple idea implemented in the real world until now.

Do you/anyone reading know of any prior work? The closest I know of is this site, in fact, which is opt-out but not pluggable. Or maybe email spam filters, from the POV of the server admin at least.

pfraze
2 replies
1d2h

There aren't a lot of exact matches that I'm aware of. Spam filters, ad blockers, Reddit, Mastodon, and Block Party all came up during the discussions.

kps
1 replies
1d1h

So, on Reddit, there's the problem that if you're interested in railroad trains, you have r/trains and r/trains2 and r/bettertrains and r/onlytrains due to differing moderation policies and clique drama, with plenty of duplication (and Reddit crossposts are always disconnected conversations).

My understanding of Bluesky is that the equivalent would be a ‘feed’ tracking #trains filtered by some union or intersection of moderation teams. Is that correct?

pfraze
0 replies
1d

That's how you'd build a subreddit-like experience on Bluesky, yep. Something we're looking at right now is bundling all these lego pieces to make that a bit more convenient to do.

username3
1 replies
1d2h

Do you plan to add custom labels?

Let’s say we ask a question to a politician and they ignore it.

Can we label the question as unanswered, so clients will remind users the question is unanswered?

pfraze
0 replies
1d1h

Custom labels are in the v1, yes.

legutierr
1 replies
1d5h

Can you explain how user accounts and signups work? Can anyone run a user account service hosting accounts and performing account authentication? Or is that centralized?

CaptainFever
1 replies
1d13h

It seems to me that the relay is still a single point of failure here for moderation. What happens if my PDS gets blocked by the relay, for reasons that I disagree with? (Let's assume the content I post is legal within my jurisdiction). Are there any separate relays that I can use?

I think what might be needed here is that anyone with enough resources can run their own relay, and PDSes can subscribe to multiple relays and deduplicate certain things.
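A sketch of the dedup idea. The real firehose (com.atproto.sync.subscribeRepos) emits binary CBOR frames over WebSocket; this pretends events arrive as JSON with a commit CID, purely to illustrate the shape, with made-up relay hostnames.

    const RELAYS = [
      "wss://relay-a.example/xrpc/com.atproto.sync.subscribeRepos",
      "wss://relay-b.example/xrpc/com.atproto.sync.subscribeRepos",
    ];

    const seen = new Set<string>(); // commit CIDs already processed

    function handleEvent(evt: { repo: string; commit: string }) {
      console.log(`new commit ${evt.commit} from ${evt.repo}`);
    }

    for (const url of RELAYS) {
      const ws = new WebSocket(url);
      ws.onmessage = (ev) => {
        const evt = JSON.parse(ev.data as string); // stand-in for CBOR decoding
        if (seen.has(evt.commit)) return;          // duplicate from the other relay
        seen.add(evt.commit);
        handleEvent(evt);
      };
    }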

pfraze
0 replies
1d13h

I think what might be needed here is that anyone with enough resources can run their own relay, and PDSes can subscribe to multiple relays and deduplicate certain things.

That is how it's designed, yes!

tegling
0 replies
1d4h

How are labels managed? Assume I'm a labeller labeling certain posts as "rude". Will my "rude" label be identified as the same label as other labellers' "rude" labels, or will it be tied to my labeller identity (actually being, e.g., "xyz123-rude" under the hood)?
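For context, my reading of com.atproto.label.defs is that every label carries the DID of the labeler that emitted it, roughly:

    // Field names from my reading of the spec; happy to be corrected.
    interface Label {
      src: string; // DID of the labeler that emitted this label
      uri: string; // what is being labeled (an account or record)
      val: string; // the label value, e.g. "rude"
      cts: string; // when the label was created (ISO timestamp)
    }

    const a: Label = { src: "did:plc:labeler-one", uri: "at://example", val: "rude", cts: "2024-03-13T00:00:00Z" };
    const b: Label = { src: "did:plc:labeler-two", uri: "at://example", val: "rude", cts: "2024-03-13T00:00:00Z" };
    // Same val, different src - so is the client expected to treat these as distinct?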

t0lo
0 replies
1d7h

On a side note, I really like the freedom-to-speak and freedom-to-ignore approach; it's the thing I cherish the most about internet communication - the ability to be the free and uninhibited self.

samstave
0 replies
1d12h

Maybe add some screenshots of actual moderation on the landing page? It looks like it just shows profile settings?

bbor
0 replies
1d11h

Obviously this is a highly moderation-averse crowd, so I figured I'd add one small voice of support: I was very impressed by this post and your comment, and think this is a huge jump ahead of Reddit's mediocre system, or god forbid whatever's going on at Twitter rn. This made me much more interested in Bluesky, and I might even click around your careers page.

In particular, applying Steam Curator functionality to content moderation is just perfect and I can’t believe I didn’t think of it before.

arsome
17 replies
1d14h

I'm sure our extremely divided society will benefit greatly from configurable echo chambers.

gruez
13 replies
1d14h

What should we have instead? A benevolent dictator controlling what everyone can read?

treyd
11 replies
1d13h

You don't have to pick either of two extremes. There's a large design space in the middle.

avarun
9 replies
1d13h

Elaborate. Because to my eyes Bluesky’s solution seems like it fits exactly in that design space in the middle you’re talking about.

mavhc
8 replies
1d6h

De-amplify rather than block: your responses end up at the bottom of the list, if you care to scroll down that far.

gruez
4 replies
1d4h

1. This seems orthogonal to configurable blocking lists. You can have configurable "de-amplify" lists, for instance.

2. Isn't this still effectively censorship?

barbazoo
3 replies
1d3h

In this case it sounds like it wouldn't be censorship, because it's individual users that choose that particular moderation layer to, for instance, de-prioritize certain voices. For others these voices are still loud, just not for those that turn on that particular moderation layer. That's how I understand it.

mavhc
2 replies
1d1h

Just being louder than someone else is a kind of censorship. Problem is you can't listen to all 8 billion people effectively, and that's before half of them bring bots

barbazoo
1 replies
22h50m

It's not censorship if it's controlled by personal preference. If I ignore a certain subreddit and subsequently don't see it on /r/popular anymore, that isn't censorship. Isn't this here the same?

mavhc
0 replies
7h24m

I'm thinking more of having 1000 extremists replying to your message, drowning out any useful replies

_heimdall
2 replies
1d5h

That's a problem for infinite-scrolling feed types of apps: there is no bottom.

mavhc
1 replies
1d4h

There's a bottom of the reply thread though

_heimdall
0 replies
22h29m

Very true, I was thinking mainly about the home feed. I don't use Twitter or Bluesky personally. I would have assumed reply chains are less of an issue since the person has already drilled into the thread; maybe that's not a good assumption!

gruez
0 replies
1d13h

And what would be an example of that?

arsome
0 replies
22h3m

Delete the social networks, work with real people, and all of a sudden things become a lot less extreme.

dwb
0 replies
1d7h

No-one’s come up with an obviously-correct, completed, broad-appeal social network design so far. Twitter’s approach worked to some extent, for a little while, but is clearly now utterly broken. Anyone who’s ever spent any time on an internet forum (broadly defined) knows that there has to be some kind of moderation. Personally, I’m simply not going to be using a social network where I’m subjected to routine transphobia – not because I don’t want to “hear both sides” but because I already have the firmest possible position against it and I don’t need to hear any more. Maybe a general solution doesn’t exist, but I applaud Bluesky for giving this a go.

LVB
0 replies
1d14h

These seem more like notch filters and I’m hoping they work. At least they are trying something, and a large scale experiment can only help figure this out (even through failure).

Barrin92
0 replies
1d10h

It unironically will. Instead of locking everyone in the same room, just let people have their own spaces, and everyone can get out of each other's hair. I'll never understand when the excellent idea of localism was turned into the derogatory term 'echo chamber'.

Unlike in the physical world, there's not even a limit to land; cyberspace is infinite. I don't understand why people who can't stand each other are so eager to get in each other's faces.

metalcrow
8 replies
1d16h

This seems very similar to the idea of Mastodon's server system, where you join the server whose policies match yours, except it's much easier to switch "servers". Which is a really good idea.

jacoblambda
3 replies
1d15h

It's actually pretty substantially different, in that Mastodon's instances/servers are everything, whereas here each part is separate and you can generally use multiple of each.

Bluesky has:

Identity via DIDs. With web DIDs this identity is tied to your domain name and there is no bluesky infrastructure associated with it. It's 100% in your hands (but you can't change names or domains easily). But alternatively you can use a plc DID which does use their centralised infrastructure (currently) while allowing you to easily change your name or domain. With these you get one DID per account.
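
As an aside, the did:web method itself is tiny: the DID maps to a JSON document served from a well-known path on your domain, so resolution is a single HTTPS fetch. A minimal sketch per the did:web spec (this is not Bluesky's actual resolver code):

  // did:web:example.com resolves to https://example.com/.well-known/did.json
  async function resolveWebDid(did: string): Promise<unknown> {
    const domain = did.replace(/^did:web:/, "");
    const res = await fetch(`https://${domain}/.well-known/did.json`);
    if (!res.ok) throw new Error(`DID resolution failed: ${res.status}`);
    return res.json(); // the DID document: keys, service endpoints, ...
  }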

PDS (personal data servers) or "data repositories" that hold your post content. You can self host this or use Bsky's central PDS. You can only use one of these per account.

Indexers that aggregate posts from all the PDS/data repos. These create what bluesky calls "the firehose".

Relays that route and cache traffic.

App views which give you your actual application like bluesky.

Feed services that give you your "algorithm" for what your page looks like. You can subscribe to as many of these as you like or host your own.

Then you finally get labellers, like what is discussed here. Unlike with Mastodon, you can follow multiple labellers at the same time. Those labellers can provide automated content warnings, etc., or manual moderation. But importantly, at the end of the day, no matter what the labellers do, you control how they act and whether they are just a warning/blur or actually hide the content.
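
To sketch that last point (names hypothetical, not the actual atproto API): a labeller only supplies labels; the client keeps a per-user table mapping each label to an action and applies the strongest one to a post.

  type LabelAction = "ignore" | "warn" | "blur" | "hide";

  // User-chosen behavior per label; subscribing to a labeller never forces
  // an action, it only attaches labels for the client to interpret.
  const prefs: Record<string, LabelAction> = {
    spider: "blur",   // e.g. a "Spider Shield" style labeller
    spoiler: "warn",
    spam: "hide",
  };

  function actionFor(postLabels: string[]): LabelAction {
    const order: LabelAction[] = ["ignore", "warn", "blur", "hide"];
    return postLabels
      .map((l) => prefs[l] ?? "ignore")
      .reduce<LabelAction>((a, b) => (order.indexOf(b) > order.indexOf(a) ? b : a), "ignore");
  }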

----

That all compares to Mastodon, where everything is tied up under one server/instance and your control over moderation boils down to "run your own instance and do all the mod work yourself" or "rely on some other instance with no oversight whatsoever".

This isn't to say that Mastodon's approach didn't make sense at the time, but like you said, the Bluesky approach makes quite a bit more sense and makes it way easier for the user to move around between their options.

account42
2 replies
7h51m

The advantage of Mastodon is that you can actually run the whole thing on your own so you're not forced under anyone's moderation, whereas with Bluesky there are still central parts run by a US corporation that will censor you.

Not saying that the fediverse is a great design - tying identities to instances is inexcusable. But Bluesky's central corporate backing makes it a nonstarter.

jacoblambda
1 replies
7h9m

With bluesky there are actually fairly few parts that are central.

Now that PDS are available for federation, you can host all your own data. Apparently relays are now also federated as part of the PDS, but I haven't gotten a chance to look into that aspect yet.

You can choose your own custom feed services and labellers.

The app itself (web and mobile) is open source so you can build it yourself without the default bluesky labeller or feed if you wanted to.

Even identities can be done without using any central servers by using web DIDs instead of plc DIDs.

The only thing that you have to go through a centralised system for is the indexers, which to my knowledge are part of the BGS. The BGS is open source; it's just still in the process of being federated.

So you can use bluesky with 90% of it not under any US authority and if you give it a year that should be 100%.

account42
0 replies
6h4m

So you can use bluesky with 90% of it not under any US authority and if you give it a year that should be 100%.

Yes, it is theoretically decentralized, but that isn't all that useful. If you still need to go through Bluesky-the-company to interact with 99.9% of the users, then you really haven't gained that much from running your own server and building your own app. This isn't very far from claiming that everything is decentralized because you can always build your own network.

didntcheck
1 replies
1d8h

In theory, but from what I understand Mastodon is rife with inter-server blocking, so your admin might just decide you're not allowed to even read posts from a condemned server (because aggregation is done server-side, unlike, say, RSS). And simply not blocking certain servers is enough to get your own blacklisted. That makes it less one network and more several isolated networks that you have to exclusively choose between.

kevingadd
0 replies
1d5h

The UX if you're on the receiving end of a block is really bad on Mastodon too. You can be following a bunch of people, and they're following you, and suddenly they can't see your posts because you're on the wrong side of a one-way server block.

steveklabnik
0 replies
1d15h

I think this is a decent analogy, but as the sibling says, the mechanics are quite different.

fedeb95
0 replies
1d1h

I think the best part is that users subscribe to services, not the server.

sneak
7 replies
1d13h

Bluesky still retains final cut with centralized censorship. This is a problem that strikes directly at the heart of the danger of social media.

Also, calling them “community guidelines” instead of “unilateral censorship rules” is deceptive.

pjc50
2 replies
1d5h

You're never going to be allowed to post CSAM on bsky.app, stop asking for this.

sneak
0 replies
22h6m

What makes you equate a fear of censorship with promotion of child sex abuse? This is an old trope and not based in fact.

Google “twitter files”. Centralized censorship is a risk to a free society.

account42
0 replies
7h33m

Nice strawman.

logicchains
1 replies
1d11h

It's only with their clients; if you use a different client you don't have to use their centralised filter. They need some form of censorship in their own client for plausible deniability so they don't get shut down (or face extreme deplatforming like Gab).

loceng
0 replies
1d4h

Which is a serious issue still to consider.

The previous playbook was for the small handful of major "social" media platforms to be infiltrated and captured by bad actors, including but not limited to how pre-Elon Twitter was illegally working with the US government to suppress and censor free speech - many people and experts were suppressed or outright banned from the platform solely for posting talking points that are now known to be correct but were against the COVID-19 narrative.

So in this new evolution, the question is how many different clients bad actors will attempt to nudge, via system design, toward a censorship-suppression-narrative-control apparatus - even something as simple as helping facilitate the creation and reinforcement of information bubbles.

E.g. the establishment's goal or directive will be to direct as many people as possible towards the captured or vulnerable-capturable clients that have gained critical mass, and as we can see there is a campaign to drive people off of Twitter-X - arguably in an effort to prevent people from seeing Community Notes that might start users developing their critical thinking and seeing different perspectives - whereas going to more of a wild-west platform or network like Bluesky et al. can allow bubbles within bubbles to form, bubbles that will be far less likely to ever be exposed to general-consensus Community Notes.

At minimum, people need to be educated about and made aware of these isolation tactics - part of divide and conquer: you need people to have wildly differing beliefs, perpetuating fear mongering to weaponize an ideological mob - to the point of having them turn a blind eye to the actions of the Gestapo, or become Gestapo themselves.

Reddit's forum design has similar dark patterns that allow for easy narrative control and crafting, creating bubbles of acceptable argument points - and keeping people with incongruent beliefs from being exposed to logic and evidence that would cause cognitive dissonance and start to break the illusion for them.

I think part of what can help manage the weaponizable ideological mob of isolated mobs is to create a public matrix outlining accounts and displaying what content-keywords, and perhaps who specifically, they are "outlawing" - which would then allow a concerted effort to reach those people and do the hard work, which takes more than keyboard warrioring, of actually reaching them.

This online censorship-suppression-narrative control of course spills over into the real world once the control-desperate establishment's apparatus isn't sufficient - where people like the Tate brothers continue to have a platform to reach people and educate them on certain structures of reality, most notably the strategies the establishment uses to try to maintain narrative control; so it attempts other tactics to suppress the truth, like the assassination of Jamal Khashoggi, the cancellation attempts against Alex Jones, and the imprisonment attempts against the Tate brothers now that the ongoing smear campaign is failing; whereas Julian Assange was successfully imprisoned, and Edward Snowden has been exiled - all for being a threat to, or shining light on and exposing, the inner workings of what many are calling the "deep state" or the establishment.

charcircuit
1 replies
1d12h

This post says you can opt out of their moderation filter, but yes, if you are using Bluesky's app then Bluesky can control what can and can't be shown. In that case you can use a different client or PDS.

account42
0 replies
7h28m

So effectively there will still be centralized censorship.

Spivak
7 replies
1d17h

This is honestly pretty great. It's the official, scalable version of those browser add-ons that let you recursively block people, block Twitter Blue users, or block anyone who's posted in x subreddit (masstagger-style). Big fan of making programmable blocklists a first-class feature.

I don't think it's going to save them from this kind of drama (https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you...) but it's nonetheless welcome. It will be interesting to see how people react to ending up on a popular 3rd party blocklist.

esafak
3 replies
1d15h

What's recursive blocking?

pfraze
0 replies
1d14h

I assume a FOAF (friend-of-a-friend) expansion of the block graph

account42
0 replies
7h36m

Guilt by association as a service.

Spivak
0 replies
1d14h

Yep. I stopped using Twitter so I don't need them anymore, but they were a godsend at avoiding internet drama and discourse I had no desire to engage in. You pick someone famous at the center of the drama and let the algo do its thing, blocking them, all their followers, all their followers' followers, and so on. It's a coarse heuristic to be sure, but operating on the principle that I can live without any given person's tweets, it made discovery so much nicer.
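
For the curious, the core of such a tool is just a bounded breadth-first walk of the follow graph. A sketch, where getFollowers is a stand-in for whatever (paginated, rate-limited) API the platform actually exposes:

  // Expand a seed account into a blocklist: the account, their followers,
  // their followers' followers, and so on, up to `depth` hops.
  async function expandBlocklist(
    seed: string,
    depth: number,
    getFollowers: (account: string) => Promise<string[]>,
  ): Promise<Set<string>> {
    const blocked = new Set<string>([seed]);
    let frontier = [seed];
    for (let hop = 0; hop < depth; hop++) {
      const next: string[] = [];
      for (const account of frontier) {
        for (const follower of await getFollowers(account)) {
          if (!blocked.has(follower)) {
            blocked.add(follower);
            next.push(follower);
          }
        }
      }
      frontier = next;
    }
    return blocked;
  }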

echelon
0 replies
1d8h

block anyone who's posted in x subreddit

Careful. Just because someone posts in /r/conservative or /r/liberal does not necessarily make them either.

avodonosov
0 replies
1d13h

Can you name some such extensions? I was thinking about similar functionality and considered creating a similar service / extension / etc., and even experimented a little long ago. Interesting to know what exists.

account42
0 replies
7h36m

Great argument for why this is a really bad idea.

ChicagoDave
6 replies
1d10h

This is the way it should be. People that post things I don’t want to see can be blocked. Groups of people that post things I don’t want to see can be blocked. Conversely, people and groups whose posts I do want to see can be identified and cleared for my timelines.

thoughtpalette
2 replies
1d2h

Hackernews moment, Hi Dave!

ChicagoDave
1 replies
18h6m

Let’s grab beers next week.

thoughtpalette
0 replies
3h34m

Might have to be later than that. I'll HYU on LI!

intended
1 replies
1d4h

So here’s the fun part of content moderation - you have to let communities talk about bad things as well.

You can't have your own community banned for talking about something bad that happened to them.

I would be curious how that scenario plays out.

Perhaps the content is only greyed out. What do you do about users though? Is their content greyed out?

bastawhiz
0 replies
1d4h

Nothing is stopping the community from talking about bad things. If you want to opt out of certain topics, that doesn't stop anyone from doing their own thing, it just stops you from seeing them. If you choose to allowlist your friends, you'll see whatever they post. If you blanket hide any posts that mention something, that's entirely your prerogative.

concordDance
0 replies
1d9h

Worth noting this is not truly "choose your own" moderation. People who do things like advocate for a lower age of consent will still be banned: https://bsky.social/about/support/community-guidelines

EDIT: though it seems like you could in theory make your own client... I'm unclear how that would work in practice. Presumably content they dislike would not be stored on BlueSky servers?

dev1ycan
4 replies
1d4h

I don't get why this social media exists; twitter was pretty bad before he sold it. I remember he went out of his way to hide the feed of people you actually follow, and people were just abusing keywords to get onto my timeline with garbage.

Trends were also insanely manipulated.

I for one will not and won't even remotely consider using another of his "social medias". It must be fun for him to sell his garbage-fied social media, make billions, then attempt to create the same exact thing and have sheep follow him to it, getting rug-pulled every few years so he inflates his pockets. No thank you.

steveklabnik
3 replies
1d4h

Jack Dorsey does not have a stake in BlueSky. He has an advisory board seat, but deleted his account. He’s invested in nostr, not BlueSky.

ianopolous
2 replies
1d1h

But he did set the mission and goals for Bluesky, thus determining its long-term direction. Personally, I think it's the wrong direction in terms of what's best for society. Source: I was one of the first members of the private Bluesky group he and Parag set up.

steveklabnik
1 replies
1d

I don't know what mission he set up, but I am no fan of Jack, yet do like what the current team is doing.

ianopolous
0 replies
23h38m

The team is great. They do have to work within the mission though.

walterbell
3 replies
1d11h

Does the Bluesky blog have an RSS feed?

account42
0 replies
7h39m

Missing the <link rel="alternate" type="application/rss+xml" ... /> tag though :|

Even with browsers trying to destroy RSS, that link is still useful because it lets you paste the normal blog URL directly into the reader to subscribe to the feed.
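
Roughly what that autodiscovery looks like from a reader's side (a deliberately naive sketch - real readers parse the HTML properly):

  // Fetch a page and pull the feed URL out of its <link rel="alternate"> tag.
  async function discoverFeed(pageUrl: string): Promise<string | null> {
    const html = await (await fetch(pageUrl)).text();
    // Naive regex: assumes double-quoted attributes with type before href.
    const m = html.match(
      /<link[^>]+type="application\/(?:rss|atom)\+xml"[^>]*href="([^"]+)"/i,
    );
    return m ? new URL(m[1], pageUrl).toString() : null; // resolves relative hrefs
  }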

Honestly, this disregard for existing standards does not make me confident in Bluesky.

bertman
0 replies
1d10h

Looks like it doesn't.

There is, however, this feed (https://bsky.app/profile/did:plc:z72i7hdynmk6r22z27h6tvur/fe...) which collects all posts from the Bluesky team. Unfortunately, it's not possible to consume bsky feeds as RSS feeds either, but you can at least subscribe to individual accounts via RSS by appending `/rss` to their profile URL, e.g. https://bsky.app/profile/safety.bsky.app/rss , which is the account that posted the OP's announcement.
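
That per-account workaround is easy to script against, e.g.:

  // Bluesky serves per-profile RSS at <profile URL>/rss (see parent comment).
  async function fetchProfileRss(handle: string): Promise<string> {
    const res = await fetch(`https://bsky.app/profile/${handle}/rss`);
    if (!res.ok) throw new Error(`No RSS for ${handle}: ${res.status}`);
    return res.text(); // raw RSS XML, ready for any feed parser
  }

  // fetchProfileRss("safety.bsky.app")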

ThinkBeat
3 replies
1d7h

Another term for moderation is censorship.

Something that has been well demonstrated in existing social networks. But as long as any given node in Bluesky has limited reach/users, it is not much of a problem.

I do see this creating a lot more safe spaces/echo chambers and even more polarization for all forms of opinion, both politically correct and politically incorrect, unless the company enforces one side or the other in a specific way.

FireInsight
1 replies
1d6h

Another term for no moderation is rampant abuse.

_heimdall
0 replies
1d5h

That's only true if much of the content posted is deemed abusive.

A better synonymous term would be free speech absolutism.

alkonaut
0 replies
1d4h

The discussion of moderation always falls into the pit of discussing echo chambers. And while that's a problem, it's perhaps better to keep the discussion first on the topic of objective abuse moderation (basically: how to prevent 99% of content from being spam that no human user wants). Since that has to be solved regardless of whether there is any further moderation beyond it, the moderation discussion can be about that first. There is no echo chamber created by removing botspam.

Next, you can have moderation that is strictly legal. Basically how do you moderate things that courts are finding illegal. How do you reconcile that courts in different jurisdictions can disagree and so on.

Finally - and optionally - comes the step of moderating for bad behavior, disinformation and so on. That is, human behavior that has a negative impact but falls short of being obviously illegal. And that's a difficult topic, as we have seen with facebook/twitter and elections for example. But it's important to remember that moderation isn't just this. By volume, the kind mentioned in the first paragraph is far larger.

efitz
1 replies
1d2h

I’ll be honest. I really liked the BlueSky idea and was an early adopter. But I deleted the app last week when you announced your Trust & Safety team, or at least hired a new leader for it.

In my mind (and many other people), “Trust and Safety” is a synonym for censorship.

We live in a time where censorship is becoming a huge risk to freedom. A free society needs to be able to debate, and anyone or anything that decides that it has the wisdom to set limits about what people can discuss, is a threat to freedom.

I had hope that BlueSky was choosing a different path but now I guess not.

breischl
0 replies
1d

If I understood the article correctly, you can get a feed that isn't moderated by their team. So I'm not sure what your complaint is? That other people can get moderation if they choose to?

w10-1
0 replies
1d1h

So, is this getting incentives right?

Not really, IMO. Even when users select moderation, they should be able to transiently unselect moderation features to get the unfiltered view - to take off the glasses at any time. Otherwise, moderation can devolve to censorship in ideological contexts, and BlueSky becomes a friend to those herding opinion.
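
Concretely, the ask is just a bypass flag in the client's filtering path - a toy sketch (names hypothetical):

  // Hide posts per the user's label settings, unless the user has
  // temporarily "taken off the glasses" to see the unfiltered view.
  function visiblePosts<T extends { labels: string[] }>(
    posts: T[],
    hidden: Set<string>,
    unfiltered: boolean,
  ): T[] {
    if (unfiltered) return posts; // peek through the moderation layer
    return posts.filter((p) => !p.labels.some((l) => hidden.has(l)));
  }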

Even if users can take off their glasses, moderation will likely still suppress production of content. (Why speak if no one's listening?) But by enabling users to experiment with turning off moderation, each user will always be able to assess whether the moderation is supportive or censoring. That in turn can temper the moderators to be reasonable, since their moderation is easily discoverable and measurable.

I think this is good for groups, too. If group-wide moderation rules are definitive and users cannot peek through them, then moderation itself is a powerful position, and you'll get the usual leadership contests as people game to manage opinion. If moderation instead is weakened by being ever-consensual, there will be less gaming to control opinion.

It's better for groups to manage themselves through membership. It may be hard to get in, but once in, your voice is hearable by all, moderation notwithstanding. So weak moderation also makes group membership more valuable, which in turn makes it easier to have the conditions of entry that can help people trust each other. (Making those conditions observable and consistently applied is another question.)

skenderbeu
0 replies
1d5h

Who decides what's rude or discourse?

scudsworth
0 replies
1d2h

just another few years of solving moderation before you can finally add support for basic media types

extraduder_ire
0 replies
1d16h

This is a really cool idea. I love the "Spider Shield" example they came up with. I look forward to this being used for all kinds of moderation-adjacent things like spoilers, and topics you don't care to read about for the time being.

echelon
0 replies
1d15h

Amazing. This is sorely needed for Reddit.

asgerhb
0 replies
1d7h

I can see many benefits to this approach, but there is one area where I'm sceptical:

The article mentions moderation in different cultures several times, and this made me curious – what would be the "default" culture assumed by the site-wide moderation? (United States?)

Will the moderation team paid for by Bluesky be responsible for all languages or only English posts, and what would it mean to be from a culture without "official support"?

LightHugger
0 replies
1d8h

Very nice, this is an idea I've posted about before, and I'm glad someone is implementing it. Very interested to see how well it works out!