
Diseconomies of scale in fraud, spam, support, and moderation

jonathanlydall
38 replies
1d17h

The problem with internet fraud is that no one is really trying to hold the fraudsters accountable for their actions; instead, what they're doing is essentially "shooing them away," and fraudsters, once shooed away, simply try again until they almost invariably succeed.

Consequently, the industry of fraud is generally lucrative and is thus constantly growing.

When I worked at Blizzard, there were many mechanisms in place to spot fraudulent credit card transactions and cancel them before they actually billed the cardholder, but there was no process for reporting the fraudsters to prevent them from doing the same thing again tomorrow.

This is probably the strategy every company follows, because there is no practical action any company can take to hold fraudsters accountable and thus discourage them from simply trying again.

So the problem of fraud keeps getting bigger, more unmanageable, and further out of control.

The problem is a non-trivial political one: governments need to create task forces to reduce the size of the fraud industry, perhaps by catching fraudsters and holding them accountable.

The problem is even harder than simple political will, though, because it's cross-jurisdictional, meaning governments of different countries all need to work together on this.

This may seem impossible at the moment, but I believe that until it is achieved, fraud is only going to get worse.

bongodongobob
20 replies
1d16h

Well, what can you do? Trace their IP back to a VPN or web of proxies? Or more likely grandma's infected Windows XP computer? Then what? Start kicking doors down?

Going to get the FBI involved for a $20 purchase?

Reminds me of Lebowski asking the cops about his Creedence tape.

godelski
8 replies
1d15h

> Going to get the FBI involved for a $20 purchase?

Sure, why not? And if you're at the scale of Blizzard, I'm sure it's a lot more than $20.

We're collecting all kinds of information these days. Surveillance capitalism is a common term for it. Data is gold they say. So why not give that data to people that can use it to stop crime?

bongodongobob
5 replies
1d5h

Blizzard doesn't get paid for reporting fraud. They lose money. So what's the incentive for them to give a damn? They'll just push it back to the CC companies to deal with, if anything.

godelski
4 replies
1d4h

> So what's the incentive for them to give a damn?

Do you only do things that you get paid to do?

bongodongobob
3 replies
23h15m

At work? 100% yes. If it's out of scope for my role or the project, it gets kicked up.

godelski
2 replies
21h10m

So if it is out of scope for your role or the project, do you mention the issue to someone else whose job it is to resolve those types of problems, or do you make no such report and just hope they figure it out themselves? It's unclear which you mean.

bongodongobob
1 replies
18h57m

If we are talking about credit card fraud for World of Warcraft accounts, the cost to fight it is far higher than the cost of ignoring it or kicking it back to the credit card company. There is 0 incentive for them to care other than doing the bare minimum because it's not their problem, it's the credit card company's problem.

They offer MFA and that's all they should have to do.

godelski
0 replies
16h45m

Did you dodge the question because you knew that you couldn't be consistent or because you felt that bringing up a new topic was better?

> There is 0 incentive for them to care other than doing the bare minimum because it's not their problem, it's the credit card company's problem.

I don't buy this, and it sounds incredibly myopic. If it weren't "their problem," they wouldn't be doing anything about it, which the OP said they do. So they aren't just taking the money and letting the credit card companies handle it; that doesn't even make sense. Credit card companies have to get the money back. Honestly, it sounds cheaper to report it if reporting leads to a significant reduction, because you then have to pay fewer people to deal with this issue and process far fewer chargebacks. It's not much work to do (literally write a program to automate filling out a form to the government). We're arguing over less than a week's worth of work.
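
To put the "less than a week" claim in perspective, here's a rough sketch of what automated reporting could look like. The intake endpoint and field names are entirely made up (real portals like the FBI's IC3 are web forms without a public API, so some agreed-upon intake endpoint is assumed):

    import json
    import urllib.request

    # Hypothetical government intake endpoint, not a real API.
    REPORT_URL = "https://fraud-intake.example.gov/api/reports"

    def report_fraud(txn):
        """Submit one flagged transaction as a fraud report (field names invented)."""
        payload = {
            "reporting_party": "Blizzard Entertainment",
            "card_last4": txn["card_last4"],
            "amount_usd": txn["amount_usd"],
            "timestamp": txn["timestamp"],
            "ip_address": txn.get("ip_address"),
            "reason": txn.get("reason", "failed internal fraud checks"),
        }
        req = urllib.request.Request(
            REPORT_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status  # 2xx means accepted

    # Anything the existing fraud checks cancel also gets reported,
    # instead of just being shooed away.
    cancelled_transactions = [
        {"card_last4": "4242", "amount_usd": 19.99,
         "timestamp": "2024-01-05T12:34:56Z", "ip_address": "203.0.113.7"},
    ]
    for txn in cancelled_transactions:
        report_fraud(txn)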

Plus, it's the right thing to do. Idk about you, but money doesn't run my entire life. Letting it do so seems incredibly myopic as well. Money is just a tool to help me better my life, but it sure isn't the only way to do that. And if we're not trying to do "the right thing," then what the fuck do we even have a society for? Seriously, please reconsider your actions, because you're enabling the shittiness in society. I'm not saying you gotta be a saint, but if there's a pretty cheap action you can take to make your society any bit better, just do it.

golergka
1 replies
1d15h

Because they're in Russia or North Korea, and their bosses' bosses are good friends of the local police and secret services.

What exactly can FBI do about them?

godelski
0 replies
1d13h

Well, I guess to be fair it would be a matter for Homeland Security. Either way, there are agencies whose directives are about dealing with international fraud. Global commerce comes with global agreements.

1) There are certainly cards coming from countries that aren't completely sanctioned. That means legal pathways exist.

2) If fraud is prevalent and these numbers can be tracked, it makes for better policy decisions. It's no secret that we don't have the best privacy and security. This is at least ammo to do something about it.

Nextgrid
4 replies
1d13h

> to a VPN or web of proxies? Or more likely grandma's infected Windows XP computer?

Legislation can be changed so people are considered responsible for the abuse that comes from their network unless they can pass the buck to someone else. This will quickly cause those Windows XP machines to fall off the internet unless their owners fancy constantly being bogged down in legal trouble.

> Going to get the FBI involved for a $20 purchase?

The alternative is effectively saying that theft under 20 bucks is decriminalized?

underdeserver
0 replies
1d11h

San Francisco is trying that right now, aren't they? How's that going for them?

probably_wrong
0 replies
1d12h

> The alternative is effectively saying that theft under 20 bucks is decriminalized?

Those who have tried to convince a police officer to investigate a bike theft can tell you that the threshold is actually higher.

derivagral
0 replies
1d

$20 seems cheap; not long ago scooter operators would lose hundreds to theft at least, with little interest.

bongodongobob
0 replies
1d7h

Probably anything under $1000, yes.

faronel
3 replies
1d15h

From the article as an example of what doesn't scale:

"...He was going through emails in his inbox, then responding to questions in the craigslist forums, and hopping onto his cellphone about once every ten minutes. Calls were quick and to the point "Hi, this is Craig Newmark from craigslist.org. We are having problems with a customer of your ISP and would like to discuss how we can remedy their bad behavior in our real estate forums". He was literally chasing down forum spammers one by one, sometimes taking five minutes per problem..."

Seems like it's possible in principle, but not really feasible, to build a pipeline to scale out that sort of behavior.
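
As a rough sketch of what such a pipeline might look like (assuming a local whois binary and a local mail relay; the abuse-contact parsing is deliberately naive, and the judgment calls Craig was making by hand aren't automated away here):

    import re
    import smtplib
    import subprocess
    from email.message import EmailMessage

    ABUSE_RE = re.compile(r"abuse.*?([\w.+-]+@[\w.-]+)", re.IGNORECASE)

    def abuse_contact(ip):
        """Best-effort lookup of an abuse contact for an IP via the whois CLI."""
        out = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
        match = ABUSE_RE.search(out)
        return match.group(1) if match else None

    def notify(ip, evidence):
        """Send a templated complaint to the network's abuse contact, if one is found."""
        to_addr = abuse_contact(ip)
        if not to_addr:
            return False   # no contact found: falls back to the manual queue
        msg = EmailMessage()
        msg["From"] = "abuse-reports@example-forum.org"   # hypothetical sender
        msg["To"] = to_addr
        msg["Subject"] = f"Spam originating from {ip} on our real estate forums"
        msg.set_content(
            "We are having problems with a customer of your ISP and would like to "
            f"discuss how we can remedy their bad behavior.\n\nEvidence:\n{evidence}"
        )
        with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
            smtp.send_message(msg)
        return True

The lookup and the templated email are the easy 20%; the follow-up and escalation Craig was doing by phone are what resist scaling.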

saagarjha
2 replies
1d12h

Why not? It doesn't have to be the founder on every call if you are looking to scale.

michaelt
1 replies
1d11h

It depends if your revenue scales as fast as abuse scales.

Amazon should be able to keep on top of fraud easily: twice the sales means twice the revenue, which means they can afford twice the fraud checks, if they want to. That fake $30 "2TB microSD card" puts $30 of real money in their bank account.

But for something like Twitter, revenue is only vaguely related to number of tweets. That "elon musk wants to give you free dogecoin" tweet doesn't make them a cent.

Jochim
0 replies
1d7h

Then their business isn't viable.

The alternative is that we allow businesses to externalise those costs onto society.

jph00
1 replies
1d14h

When I was at FastMail I did a lot of very manual work to not just block spammers and other abusers, but to make their life as difficult as possible. That included figuring out how to notify the people running the servers they used (including sometimes finding the IRC chat for the folks on that server and telling them they had an intruder). One of my favorite things was to redirect bounce messages that were targeted at innocent FastMail customers to the actual spammer's email address -- which I found stopped the spam from them very quickly, once their inbox filled up with thousands of bounce messages!
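
For anyone curious, a minimal sketch of how that bounce-redirect trick could be wired up (hypothetical plumbing, not FastMail's actual delivery code): if an inbound message is a bounce for mail our customer never sent, and we've tied that forged campaign to a known spammer address, deliver the bounce to the spammer instead.

    from email import message_from_bytes

    # Forged "sender" address -> the spammer address we identified for that campaign.
    KNOWN_FORGERIES = {
        "victim@fastmail.example": "actual-spammer@bulkmailer.example",   # hypothetical
    }

    def is_bounce(msg):
        """Crude bounce detection: null return path or a delivery-status report."""
        return msg.get("Return-Path") == "<>" or \
               msg.get_content_type() == "multipart/report"

    def route_bounce(raw_bytes, envelope_rcpt):
        """Decide which mailbox an inbound bounce should actually land in."""
        msg = message_from_bytes(raw_bytes)
        if is_bounce(msg) and envelope_rcpt in KNOWN_FORGERIES:
            # Our customer never sent the original; the spammer forged their address,
            # so let the spammer's own inbox absorb the bounce instead.
            return KNOWN_FORGERIES[envelope_rcpt]
        return envelope_rcpt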

Personally, I think it's reasonable to care about such things, and to try to do something about it. If no-one cares or tries, then sucky people will just suck even more.

holigot
0 replies
1d9h

Thanks for this detailed feedback.

Is there any reason why you moved your mail away from FastMail after such a long time and now use Google as your mail server?

godelski
9 replies
1d15h

This is why I have a problem with a lot of the legal structures and incentives. If you measure success by the number of arrests, you aren't solving the problem. I'm not sure there's any good metric for measuring that.

To take an analogy, I'll mention the drug war. The main target was always low-level drug dealers and users. They're easy to arrest, and arresting them makes it easy to say you're doing something. But they are fairly inconsequential to the business of selling drugs and thus to the availability on the streets. It's orders of magnitude harder to go after the root of the problem. I don't want to sidetrack this conversation with the other aspects or conspiracies (real or contrived), but just focus on the incentive structures. I think the same applies here. Blizzard gets no short-term reward for reporting fraudulent activity. It's hard to know if they get a long-term one. But at the end of the day it would be the right thing to do.

That's what matters. Doing the right thing. "Move fast and break things" is a great strategy. But it can't be used in isolation. If it is, you're left with a trail of destruction which never gets cleaned up. It's hard to quantify objectives, and so every objective function is misaligned. You need to rely on humans to see through that and correct the course as best they can. You need both the people pushing to move fast and the people pushing to slow down and repair. There's a harmony in that competition. Pick your camp, but recognize that there's value in the other one too.

Anotheroneagain
3 replies
1d15h

I don't think so. On the contrary, they ignore the dealers in the vain hope of catching some "big fish," but there are no big fish there.

godelski
2 replies
1d13h

I find that a strange comparison. Certainly there are bigger fish to fry. In both drugs and credit cards there is a whole ecosystem. Cards are probably more straightforward, as these are typically sold on websites, so you have a clear marketplace, which I'd differentiate from a dealer on a corner. Go after the people who are the generators. That's the people making the drugs or the people stealing the cards in the first place. What do you mean there's no "big fish"? If we're going to use a fish analogy, it's not the size I care about, but the type.

Anotheroneagain
1 replies
1d10h

I mean it's probably a guy who knows someone who can cook meth, rather than a servant of some kingpin who is running a massive drug conglomerate hiding who knows where.

godelski
0 replies
1d4h

Okay? But somewhere down the line is the person cooking meth. I'm not sure what you're getting at.

danaris
2 replies
1d6h

I mean, the problem with the War on Drugs is that the drugs were only ever a symptom—while it's true that some small percentage of people would try, and get addicted to, drugs otherwise, the vast majority of addicts were people trying to fill a major hole in their lives. Treating addiction as a disease instead of a moral failing or a crime, legalizing drugs, and making sure that people who want to quit have resources available to do so, are all vastly more effective at reducing the illegal drug trade, as other countries have found.

In a similar way, a huge amount of this fraud would disappear if we took more known-to-be-effective measures to combat poverty (because many of these fraud techniques rely on hiring a whole bunch of desperate people to help keep it going): things like providing housing, single-payer health care, and universal basic income.

godelski
1 replies
16h38m

> I mean, the problem with the War on Drugs is that the drugs were only ever a symptom

No argument here, and it should be unsurprising I agree, since I'm arguing to not go after users. But also remember we can say the same about internet fraud. I'm all for addressing the roots of the problem. Big fan of ensuring there's a floor to living standards. But admittedly that's not the whole problem and fixing these things is incredibly complex. But I would say treating the addiction is a different side since that's downstream of production. Though complex because we do need to produce some drugs but we shouldn't need to spill blood to do so.

danaris
0 replies
6h32m

> But also remember we can say the same about internet fraud.

Indeed—that's basically my point, too! :-D

Nextgrid
1 replies
1d12h

When it comes to the war on drugs there's a perverse incentive problem: said war provides an endless amount of easy-to-catch, easy-to-prosecute crime which can be used to meet and exceed their performance targets without much effort. So there might not be much incentive to kill this "cash cow" of low-level users/dealers by actually curtailing upstream supply.

This might not be too different from the payment anti-fraud industry (which is just varying degrees of snake oil). There are solutions that could be implemented, such as actually strong two-factor authentication, that would cut down on payment fraud dramatically, but this would significantly reduce demand for the industry, so a lot of the field has a monetary incentive to keep the underlying primitives just insecure enough to sustain demand for their snake oil.

godelski
0 replies
1d11h

Exactly my point.

palmfacehn
1 replies
1d16h

From the article:

> And if I ask who I trust more, my local reputable electronics shop (Memory Express, B&H Photo, etc.) or Best Buy, I trust my local reputable electronics shop more. Not only are they less likely to sell me a counterfeit than Best Buy, in the event that they do sell me a counterfeit, the service is likely to be better.

An international, multi-government taskforce is one solution, potentially. Maybe not. For the problem presented at the top of the article, shopping locally seems like an easy win in comparison.

nradov
0 replies
1d8h

An international, multi-government task force is no solution for the routine wire fraud that impacts most consumers. Due to corruption, incompetence, and political disputes, the countries where most of that fraud originates won't participate in a meaningful way.

hnben
1 replies
1d10h

> but there was no process for reporting the fraudsters

What would be good incentives to make companies like Blizzard start reporting fraudsters?

I am thinking along these lines: companies can pay $X to make a report, and will receive $5X if that report leads to a fine. And they have to pay an additional $10X if that report turns out to be frivolous. (If the report appears to be valid but doesn't result in a fine, then they get no payoff.)

My worry is that bad incentives would lead to over-reporting or some other bad thing.
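
To put rough numbers on that worry (probabilities invented, payoffs from the scheme above):

    # Scheme above: pay X to file a report, get 5X back on a fine,
    # pay an extra 10X if the report is judged frivolous.
    def expected_payoff(x, p_fine, p_frivolous):
        return -x + p_fine * 5 * x - p_frivolous * 10 * x

    # Careful reporter: fines follow 30% of the time, 2% judged frivolous.
    print(expected_payoff(100, 0.30, 0.02))   # +30 per report: worth filing
    # Spray-and-pray reporter: 5% fines, 20% frivolous.
    print(expected_payoff(100, 0.05, 0.20))   # -275 per report: strongly discouraged

So whether this over- or under-encourages reporting hinges almost entirely on how reliably "frivolous" gets adjudicated.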

rjmunro
0 replies
1d7h

> What would be good incentives to make companies like Blizzard start reporting fraudsters?

Having someone to report them to?

withinboredom
0 replies
1d16h

I think we don't hold them accountable for the same reason we generally accept panhandlers (which is usually technically illegal in most jurisdictions). If we hold them accountable, where do we send them? Jail? Does that actually fix the problem, or does it just hide it under a societal rug?

itake
0 replies
1d16h

and the companies have no incentive to investigate fraud further once it has stopped, especially if the fraudsters are international.

Nextgrid
0 replies
1d12h

> The problem is a non-trivial political one

It's even less trivial when corporation-on-consumer fraud has been normalized (see dark patterns, or companies outright taking money for no/sub-par service). Suddenly starting to prosecute fraud would inconvenience some deep-pocketed entities.

andy99
26 replies
1d21h

The real "economy of scale" in these platforms is not actually providing any support, moderation, or fraud prevention. It's not that they scale it better; it's just that they don't do it and get away with it through market power.

If I provide Andy's small town taxi service and have a customer service issue, I deal with it personally and properly or I go out of business. If I'm Uber I just ignore it. It has nothing to do with efficiency, it's just getting the monopoly power to treat your customers badly and get away with it.

Which, of course combined with the larger attack surface, is why bigger platforms do so much worse on spam/fraud/support.

Aurornis
16 replies
1d20h

> If I'm Uber I just ignore it. It has nothing to do with efficiency, it's just getting the monopoly power to treat your customers badly and get away with it.

People don’t use Uber because they’re a monopoly. They use it because it’s convenient, it’s a known quantity, it’s mostly predictable, and they can use it almost anywhere they go.

People like to complain about Uber, but my pre-Uber experience with taxi services was far worse. The hypothetical case of Andy’s small town taxi service that provides exemplary service either didn’t exist or was such an outlier that you couldn’t reliably find it while traveling.

alexey-salmin
9 replies
1d20h

> People like to complain about Uber, but my pre-Uber experience with taxi services was far worse

Did you live in a small city back then?

Aurornis
5 replies
1d20h

I’ve used plenty of small-town taxi services, yes.

If you’re only remembering the gold standard small-town service that did everything perfectly, you’re looking back with rose colored glasses. The typical experience back then was not good.

alexey-salmin
4 replies
1d20h

Well, I'm pretty confident in my memories because it wasn't too long ago. The thing only fell apart around 2016 with the rise of apps, and the inflection point was a misery hard to forget: drivers taking your order and not moving, drivers calling to ask about the destination and then refusing to go there, drivers telling you that you should've put the destination in the comments, drivers asking you to cancel the order and pay them in cash, etc.

However, the apps were able to price-dump for around three years until all the small players died, then hiked prices back up again.

I'm not saying anything about the "typical experience back then," however I do know that in the right circumstances (high demand, cheap supply, no lock-in) competition between independent small companies can have remarkable results.

throwaway2037
2 replies
1d19h

Did you report these drivers to the app owner? Did they do anything?

hansvm
0 replies
1d17h

Not OP, but I've reported many drivers to the app owner with zero [0] results.

[0] Not quite zero. I always got an immediate computer-generated email telling me I'm important and that my complaint is of the utmost importance. Drivers never had repercussions from the companies though; every once in a while, if I had concrete evidence of running over a pedestrian's foot or something, I could get law enforcement to care, but the companies DGAF.

alexey-salmin
0 replies
1d15h

They apologized and gave promocodes, subsidizing the service even further. They were well aware of the problem.

Things normalized a few years later. I think the biggest lesson was that clients were willing to endure a lot of hardship for a 50% discount.

tsimionescu
0 replies
1d13h

I think this varied greatly between different places. In my own country and city, Uber came in as a premium-ish option initially (regular taxis were already dirt cheap), with nicer cars and excellent service. Of course, over time things normalized back down to the same sorts of cars and the same drivers, but for the first few years it was beautiful, not a race to the bottom.

Still, even today (and with Uber mostly having lost the market to Bolt), the average ride is much, much better than the average taxi ride ever was (and I was a regular user of both). The cars are still cleaner, and it's virtually unheard of to have drivers try to overcharge or otherwise scam you - which were both regular occurrences with taxis.

staticautomatic
0 replies
1d17h

I live in SF and my pre-Uber experience was about as nonexistent as the cabs and the people at the cab company who were supposed to answer the phone.

hughesjj
0 replies
1d18h

Fwiw I used to live in small towns until I graduated college, and the taxis in Pennsylvania sucked no matter where I was.

Now, Uber/Lyft aren't great, but it's still way better to see the driver on the map when they're running late than to have literally no idea and have to call dispatch, who themselves will begrudgingly call the driver, who will just give a BS status anyway.

bawolff
0 replies
1d19h

What about living in a small town now? Uber does not operate in most small towns. Small town (and big town) traditional taxi services still suck massively. I would pay a premium to use uber/lyft over traditional taxis.

Nursie
2 replies
1d17h

Uber was needed to give cab companies in a lot of places a proper kick in the pants, but it is now usually worse in my experience.

chii
1 replies
1d17h

> but it is now usually worse

so why not go back to taxis?

Nursie
0 replies
1d17h

I largely have. The local cab company is far from perfect but they now actually show up most of the time, unlike Uber drivers.

dandy23
0 replies
1d14h

You miss the parent's point. Uber might care about their customers, but they don't give a toss about other aspects such as fraud, local labor rules, or local laws. They try to dodge it as much as possible. And that is a problem if they are a near-monopoly and market leader, because then they are the market.

cvalka
0 replies
1d19h

Taxis prior to Uber SUCKED.

Spooky23
0 replies
1d17h

Taxis sucked outside of parts of major cities and airports. Even in NYC, the taxi system only worked well in Manhattan.

MichaelZuo
6 replies
1d21h

> If I provide Andy's small town taxi service and have a customer service issue, I deal with it personally and properly or I go out of business.

In a very small town, you won't go out of business because you'll be the only taxi service in town. Everyone can establish their 'monopoly power' in a small enough area.

alexey-salmin
5 replies
1d20h

Not really, because the barrier to entry for creating a small town taxi service is incomparably lower than for creating a new Uber.

20 years ago my small town had around five fiercely-competing taxi services with cars arriving within 4-5 minutes of a phone call. In many respects that was better than the new times. Especially when it came to customer service.

cortesoft
3 replies
1d20h

How small a town are we talking about? I grew up in a 100,000 person city and we only had two taxi services that would take ages to arrive, depending on where you were.

alexey-salmin
1 replies
1d20h

Around 100k. It was in Russia though, and owning a car wasn't as common as, say, in the US. Public transport wasn't very pleasant to use either.

It was an interesting case, a sort of breakthrough in transportation habits achieved by loads of 15-year-old right-hand-drive Toyotas converted into taxis. The cheapest ride was around $1, and people literally started riding taxis to the next building on the same street.

margalabargala
0 replies
1d19h

A population of 100k is large enough to appear on the wikipedia list of "largest cities in Russia". A small town it is not.

bee_rider
0 replies
1d19h

It must depend on the demographics also, right? Which would apply some skew to our personal impressions, making it hard to say much.

I’ve lived in a couple college towns and they tended to have smaller populations, but pretty good taxi services—which makes sense when you have a bunch of young people who want to go out to drink, might not have cars, and probably are willing to spend money on that sort of thing, and tend to all go to and from the same destinations.

I bet the hackernews demographic skews toward college educated, so I wonder if people are commenting with truthful experiences, but from that particular (IMO, at least questionably representative) niche.

MichaelZuo
0 replies
1d20h

When I say a very small town I mean way fewer than 100k people. There is a very high barrier to entry for outsiders trying to be the second taxi service, because when it's that small everyone knows everyone else or their family by name, so they don't trust outsiders easily.

Regardless of how permissive the regulations and bylaws are on the books.

And even when it's a second group of locals that want to establish a second taxi service, you can still establish your 'monopoly power' on one side of town, or specific areas. There's no practical bottom limit.

cortesoft
0 replies
1d20h

> If I provide Andy's small town taxi service and have a customer service issue, I deal with it personally and properly or I go out of business.

We know this isn't true, though; taxi services sucked before Uber. In small towns, there wasn't competition at all, so no reason to fix issues. In larger cities, most taxis were hailed on the street or sent from a central dispatcher, so no individual taxi driver cared about their reputation… people weren't choosing a specific taxi, and there was little repeat business, so customer service wasn't important.

8note
0 replies
1d20h

If I'm Andy's small town taxi company, I actually bet that nobody will spend thousands of dollars fighting me over a 20-50 dollar ride. I don't need to care, and no lawyer will go for a class action suit because the class is too small.

infogulch
25 replies
1d21h

Way back in 2014 when Jeff Atwood (aka codinghorror) switched from Stack Overflow to creating Discourse, he gave a talk about it (see notes [1], sadly I can't find a live link to the recording anymore). He gave a pithy little explanation of why they built the Discourse trust levels system the way they did that stood out to me:

> The only thing that scales with the community is the community.

The point being, you have to grow users into moderators. Any other way of acquiring moderators is unsustainable.

[1]: http://discourse.bridgefoundry.org/t/link-to-jeff-atwood-tal...
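
The trust-level idea is roughly mechanical: activity earns trust, and trust gradually unlocks moderation-ish powers (flag weight, the ability to hide posts, and so on). A toy sketch of that promotion logic, with invented thresholds (not Discourse's actual defaults):

    from dataclasses import dataclass

    @dataclass
    class UserStats:
        days_visited: int
        posts_read: int
        flags_agreed: int      # flags this user raised that moderators upheld
        flags_rejected: int    # flags that were rejected

    def trust_level(s: UserStats) -> int:
        """0=new, 1=basic, 2=member, 3=regular (gets real moderation tools)."""
        if (s.days_visited >= 50 and s.posts_read >= 500
                and s.flags_agreed > 3 * max(s.flags_rejected, 1)):
            return 3
        if s.days_visited >= 15 and s.posts_read >= 100:
            return 2
        if s.days_visited >= 2 and s.posts_read >= 10:
            return 1
        return 0

The point of tying it to behavior rather than appointment is exactly the quote above: the pool of potential moderators grows with the community automatically.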

crummy
11 replies
1d21h

I wonder how that would work for Facebook, for example. Facebook Groups have admins, but for general posts to the public (or to friends) - could they be user-moderated?

infogulch
4 replies
1d20h

This strategy seems to be especially effective for focused communities as they grow from 50 to 5000 users. As you approach Facebook scale and a more generalized audience ("the public"), I'd imagine the optimal strategy may shift some. A federation (small "f") of smaller, more focused communities might be the best you can hope for.

bawolff
1 replies
1d19h

I think Wikipedia serves as an example of community governance at a large scale. Not quite Facebook scale, but English Wikipedia has 126,000 users who have taken an action in the last 30 days, which is a pretty big scale.

Then again, the Wikipedia community is full of drama.

klyrs
0 replies
1d18h

Can you point me to a human community of 100k individuals that isn't full of drama? Asking for a friend

a_bonobo
1 replies
1d20h

Has Facebook perhaps shifted towards a federation of communities?

My Facebook feed as it used to be, general life updates by various friends, is pretty much dead. Nobody posts these things anymore. However, neighbourhood groups, Buy Nothing groups, and other local special interest groups thrive; I see several daily posts in each. Plus Facebook constantly recommends me more groups. This is an n=1 experience, but I wonder whether Facebook is 'falling apart' into these groups.

aldonius
0 replies
1d20h

Makes sense to me.

The admins of the groups are responsible for managing the discussion in groups including frontline anti-spam against the porn bots and the shoe bots and whatever. So it's semi-distributed moderation. (Assuming group admins care, but if they don't then users just abandon the group.)

Compared to e.g. Reddit, FB groups aren't as world readable or writable as subreddits are so you don't really get that classic inter-subreddit drama.

Where global moderation is still needed from FB's perspective is when group activities start spilling outside of that group, e.g. because they're coordinating extremist activity.

This probably won't be the result of user reports though. For privacy and scalability that implies an automated system which (hopefully reliably but only rarely) flags posts to a human working for FB.

Denzel
4 replies
1d20h

We're essentially opining about the difference between governance systems.

Facebook (like most corporations) operates as an autocracy. However, unlike most corporations, Facebook generates $100B+ in revenue per year (more than most countries), is worth $1T+ (again, more than most countries), and is home to 3B+ users' daily lives (trifecta: more than all countries). It's time to acknowledge that the company that walks like a duck and quacks like a duck is a duck.

If Facebook were so inclined to change their governance model, a plethora of prior art exists, in public-sector government implementations, for them to build from. An endeavor of this magnitude is neither easy nor simple nor palatable to more engineering-inclined minds, but it's perhaps a generational opportunity for a person/company to truly lead and innovate on such a ubiquitous digital platform.

Aurornis
2 replies
1d20h

> It's time to acknowledge that the company that walks like a duck and quacks like a duck is a duck.

Are you trying to imply that Facebook is a country? Because this doesn’t make any sense. Countries have an entirely different set of obligations, duties, and jobs to be done than a website dedicated to light communication and entertainment.

datadrivenangel
0 replies
1d18h

One of the best analogies for social media is the idea of each post being a person yelling in a public area.

Moderation then becomes: what do the local cops tell people to stop saying. Government moderation policy thus approaches social media moderation policy.

Denzel
0 replies
1d17h

I'm directly stating that maybe it's time for Facebook to consider a different governance model than autocracy because their resources and integration within humanity-at-large has scaled beyond that which a single person can reasonably govern. The examples of such failures to govern are so numerous I don't have time to cite them all.

And no, I'm not advocating for Facebook to be nationalized or broken up. I'm specifically pointing out the thought that Facebook could innovate here; they have an opportunity to establish the first truly global self-governing internet-based company in history.

> a website dedicated to light communication and entertainment

This statement is deliberately obtuse. Facebook is a communal space used by 3B people every day to mediate conversations, transactions, news consumption, business, events, etc. I'd hardly say people building their livelihood off of Facebook is "light communication" or "entertainment". Again, look no farther than HN to find examples of Facebook's governance policies harming their users with no explanation or due process: https://news.ycombinator.com/item?id=29614629.

RandomLensman
0 replies
1d12h

$100B in revenues for a country of 3 billion would be absolutely tiny (even for a developed country of 30 million people) and in no way enough to support governance systems even vaguely resembling modern governments.

miki123211
0 replies
1d20h

It would work for Facebook users, but not for Facebook the company.

This is essentially what Reddit does. It mostly works for communities about headphones, bicycles or water bottles, but not Donald Trump, piracy or child porn. Reddit had to close some subreddits down, not because their users complained, their users were perfectly happy and considered the moderation to be just fine, but the harm to society / the company's PR image / advertisers was too great.

mncharity
8 replies
1d19h

Imagine having latents on posters, poster-persona-moods, posts, raters, rater-persona-moods, ratings, latent space curations, and user preferences and moods.

User: "AI, what's my current reading status?" AI:"We're on your EveningSurfMood2, emphasizing StuffFromFriends, and your EclecticTopicsLarge (with Dang's HighQ Discussion v3, Alice12's Clueful Author Making Coherent Argument Filter, and hugging's UnusuallyInsightful51)." ... Later while reading, AI: "We've received a rating request from Mastatron: Is this post underappreciating the opportunity for rich human-ai computation hybrids at scale?" User: "Oh yeah".

malfist
5 replies
1d19h

I can't for the life of me figure out what you're trying to say

mncharity
2 replies
1d17h

One builds systems out of humans, ML, and other computation. Ideally integrating the strengths of each, and doing fine-grained mutual compensation for the weaknesses of each. For instance, automated task management which delegates easy cases to ML, and orchestrates humans for harder cases.

When a post is rated, a system ideally learns about the post, the poster, the rater, and the rating question. Think of the candy machine which bribed undergrads to redundantly grade CS exams: the responses gave insight into the grader, the gradee, the exam, and the instruction.

People are multifaceted. I'm interested in someone's apps, but not their cat. In <wizzy biology researcher> when they are writing about their research focus, not in their gone-full-nutter <other biology topic>. It would be nice to be able to stream one's own diverse interests without burdening people who wish to follow only one of them.

Sites, group moderation, and thumb ratings are units of very, very coarse-grained selection. Imagine TikTok or Netflix where everyone sees the same thing, one size fits all, and if you want something else, go somewhere else. I don't even want the same thing from HN comments at different times of day. Our capabilities are oddly impoverished - even last-century Usenet had "I don't want to see any more comments from this person". Skimming HN comments for ones which match my preferences of the moment is something the system design could facilitate, when it's feasible to delegate that to personalized automation.

User customization could be open, transparent, and collaborative. Not "you looked at this once weeks ago, so here's more, and you can't stop me". Composably collaborative - not a swamp of manually-viewed isolated playlists, but accessibly searchable/queryable - intersect playlists which include this video. Beyond "this person does good videos", to "this person does good video ratings", to "these people are good raters", to "these people do good ratings of raters"... for your personalized interests of the moment.
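
To make the "ratings of raters" part concrete, one possible scoring rule (a sketch of the concept, not a worked-out design): a post's score for you is your raters' ratings, weighted by how much you trust each rater, where that trust can itself be derived from how their past ratings were rated.

    def post_score(post_id, trust, ratings):
        """Personalized score for one post: each rater's rating, weighted by your trust in that rater.

        trust:   {rater: weight in [0, 1]}, e.g. derived from how you rated their past ratings
        ratings: {rater: {post_id: rating in [-1, 1]}}
        """
        weighted = [
            (trust[rater], their_ratings[post_id])
            for rater, their_ratings in ratings.items()
            if rater in trust and post_id in their_ratings
        ]
        if not weighted:
            return 0.0
        return sum(w * v for w, v in weighted) / sum(w for w, _ in weighted)

    # Tiny example with names from above: weight Alice12's curation more than hugging's.
    trust = {"Alice12": 0.9, "hugging": 0.4}
    ratings = {"Alice12": {"post-7": 1.0}, "hugging": {"post-7": -0.5}}
    print(post_score("post-7", trust, ratings))   # ~0.54, leaning toward the rater you trust more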

It's late, but hopefully those are clarified highlights.

nradov
1 replies
1d7h

Sounds expensive. I don't see how a platform for user generated content such as X could do that profitably given that the expected value of each user is so low.

mncharity
0 replies
3h18m

Certainly a high system complexity cost.

But consider all the social costs avoided. The burden of moderation and policy fights. The non-satisficing silos - "not here, but sort of there, but not really, want a little of both, so there's nowhere really, too bad, it would be nice". A Wikipedia without deletionist-vs-inclusionist conflict, because both are supported, and fine grained - "my wp? inclusionist on programming languages, and deletionist on popular culture".

What if X could offer advertisers: everyone is on X all the time, so of course there's lots of ugly crazy, that's people; but you have excellent control over to whom, and in what contexts, your ads are shown. If you want your ads only on X/Mormon, X/PlumTV, or X/QAnon, but never near pets talking like a pirate, we've got you. Boycotting X infrastructure would be like boycotting the internet or print - not the right level of granularity. Whoever or whatever you wish to boycott, X is there for you - X/BoycottX is quite popular.

Kye
1 replies
1d19h

It's a bunch of AI/ML/LLM jargon. I know what all the individual words and concepts mean and still have no idea.

rossdavidh
0 replies
1d18h

Pretty much like AI, then. "Are you being ironic?" "I don't even know."

FridgeSeal
1 replies
1d19h

That sounds genuinely hideous.

Is this like, meant to be a good feature???

mncharity
0 replies
1d16h

Not wonderful UX, but intended to illustrate underlying capabilities. A collaborative composable community ecosystem of recommendation, vs status-quo black-box recommendation engines and vote-with-your-feet "personalization".

tomcam
2 replies
1d19h

> The point being, you have to grow users into moderators

Aka free labor?

rossdavidh
0 replies
1d18h

You have to "pay" them somehow. Perhaps, by the community they're in being a resource for them. But, that requires giving them some say in how it's run, which is where reddit keeps getting into trouble.

AlienRobot
0 replies
1d17h

We prefer to call it voluntary labor.

antoniojtorres
0 replies
1d14h

Didn't this present its own set of issues for Stack Overflow? I remember there being some large events surrounding very powerful community moderators.

BrenBarn
24 replies
1d15h

I think the post underestimates the scope of these problems. It touches on it briefly, but I've mostly come to believe that it is in general bad for companies to become extremely large. It's not just an issue of content moderation.

One reason that the article does touch on is the detachment from support. Large companies no longer need to care about the individual consumer, so they stop doing so. A sort of corollary for this is the tendency for large companies to shift from "doing what they do" to protecting their market position. That means spending more time on non-core activities, and in some cases even trying to expand into totally unrelated markets (as seen with large conglomerates).

It is only when we talk about "economics" in the sense of money-making for an individual company that "economies of scale" are a good thing. In the sense of benefit to the overall economy, to the society in which that economy sits, or to humanity in general, virtually all economies of scale are actually harmful.

I think we should view economies of scale like the rampant growth of grass or weeds: they are natural phenomena, but that doesn't mean they're good, and in fact for most practical purposes they need to be actively opposed. It's natural for grass to grow, and if it doesn't, it may indicate a problem, but you should still cut the grass every so often. Likewise it is natural for companies to grow, but big companies should regularly be broken up simply because they have become big.

jakewins
21 replies
1d15h

> In the sense of benefit to the overall economy, to the society in which that economy sits, or to humanity in general, virtually all economies of scale are actually harmful

You had me until here.

Economies of scale in commodities are the thing that makes it so we don't have to do subsistence farming and everyone gets to have sewn clothes on :)

There's truth on both sides there - we want nail production to not be done by hand, but we also want real humans with the power to change things answering the phone when we call support.

Anotheroneagain
9 replies
1d15h

I'm not sure about that though. Ready-to-wear clothing is obviously inferior to tailored clothes, and it's not clear to what extent it's actually cheaper, or how much it abuses cheap labor. There may be an optimal scale for everything, and you start losing once you exceed it.

jakewins
6 replies
1d15h

Tailored vs ready-made is a rounding error in the economies of scale involved in making clothes - both will use factory made fabric, dyed with factory dyes, woven from factory thread, made from factory farm cotton, grown from factory fertilizer, made from massive scale fracked natural gas, produced with millions of tonnes of factory steel, mined from factory mines..

The economies of scale that put food on your table and mine and clothe our kids - for all the horrors - will not be replaced by artisans doing small batch t-shirts from yard-grown cotton using hand made tools.

kombookcha
3 replies
1d13h

You're framing the above poster as an anarcho-primitivist for saying that we would be better off with 5 global conglomerates controlling t-shirt production than 2.

I don't think conflating "there is a dropoff point in size where economies of scale stop being better for humanity and start getting worse" with "we must switch to artisanal backyard t-shirt making" is really steelmanning here.

ric2b
1 replies
1d12h

The poster is against oligopolies but is using the term "economy of scale" instead, which is incorrect and causes confusion.

Anotheroneagain
0 replies
1d10h

I wasn't talking about "oligopolies". I was saying that economies of scale don't imply that bigger is better; rather, there is an optimal size that may not really be that big for most kinds of business.

XorNot
0 replies
1d12h

No, he's pointing out that the poster hasn't defined any of their terms accurately enough to really be saying anything.

The sheer size and scale of the economy we live in, and its benefits, is so big that people tend to make sweeping pronouncements without really understanding their impact.

The point about industrially produced cotton is exactly right: people pick some niche thing they view as "wasteful" and vastly overestimate its importance or impact.

Anotheroneagain
1 replies
1d14h

"will not be replaced by artisans doing small batch t-shirts from yard-grown cotton using hand made tools."

That isn't what I'm talking about. There is a lot between that and transporting everything multiple times across the globe just for the sake of making it as large scale as possible. What is the gain after you can reliably feed a few looms?

There are no better tools by the way. It's people sitting behind sewing machines, just on the other side of the planet.

jakewins
0 replies
1d12h

Sorry I’m having a crap day; you’re totally right there’s a ton of gradients between “one mega factory makes all t-shirts” and “we all make clothes ourselves with hand made cotton thread.”

I think my argument was mostly like - don’t throw out the baby with the bath water; economies of scale are a real thing that brings us real benefits, and that should go in the scale too.

exe34
1 replies
1d14h

Why stop at tailoring? How about the loom? Or spinning? Or collecting the cotton? Or growing it?

Anotheroneagain
0 replies
1d14h

Yes. How big do you need to go for that? When will the cost of transport and distribution overtake any gains? What is there to gain after all that gets big enough to produce a reasonable assortment of fabrics?

BrenBarn
6 replies
1d14h

I guess I forgot to repeat the part about "extremely large" there. But I mean like "all economies of scale beyond a certain point".

In any case, I'm not sure I agree that economies of scale are the thing keeping food on the table. Like, yes, we want nails to be made in some efficient way, but that's not what I mean by "economies of scale". What I'm talking about is the size of the companies involved. If there are 10 factories making nails, I'd rather have them be run by 10 different companies than one. And you might be able to convince me that 9 or 8 or even 5 companies is reasonable, but not one or two.

The process that the companies use can involve something that you might call an "economy of scale" in the sense that you need to be making X number of nails before it becomes feasible to shift from artisans to factories. But you have to make sure that the economies of scale are facilitating the making of nails rather than the making of money for the nail companies.

jakewins
2 replies
1d12h

I’m being unnecessarily confrontational because I’m having a bad day, sorry, that’s not helpful.

I would say this: I agree with you that massive faceless monopolies with no accountability are terrible, and often a direct consequence of economies of scale.

It is also, simultaneously, true that the labour savings - human hours spent per calorie of food or pair of socks - amount to multiple orders of magnitude, such that modern society (i.e. the not-starving, not-freezing part of it) could not exist without them.

These two things are simultaneously true, which is what makes the problem hard; we want one effect of economies of scale and not the other.

politician
0 replies
1d5h

A way out: Does your example apply beyond the necessities of basic survival? We can take it as an axiom that it would not serve humanity for oxygen to be distributed on credit on the Moon.

BrenBarn
0 replies
22h8m

I mean, again, I think that's true, but only up to a point. Basically what I'm saying is that up to a certain point economies of scale are good for production, and beyond that they mostly just become good for profit.

I feel like the main problem people are having with my comment is I said "virtually all" economies of scale and forgot to say "beyond a certain point". By "all" I meant "across all domains" (as opposed to just content moderation), not "of every possible size all the way down to individual artisans". I agree that there is some level of economy of scale that is probably required for things like production of clothes, but I think we should try to keep companies close to that level.

mschuster91
1 replies
1d10h

> If there are 10 factories making nails, I'd rather have them be run by 10 different companies than one. And you might be able to convince me that 9 or 8 or even 5 companies is reasonable, but not one or two.

Well, a nail is fungible, as are many other products. So what you'll end up with instead is a bad situation on its own.

The 10 nail factories win sales contracts by competing on price (obviously), so there is a massive incentive for the companies to undercut each other by going offshore, exploiting their staff (which usually leads to an enshittification across the industry), exploiting the environment (by "saving" on emission mitigation or outright handing waste over to the mafia instead of recycling) or by cutting corners in the process (water down the material composition with cheaper raw materials or be less strict in outgoing QA and hope the customer doesn't notice). In the end you have 10 companies shipping crap nails but since they were so much cheaper than what was there before, there are no high quality nails on the market, and no bank will give anyone a loan to try and start up a company selling high quality nails.

We've seen that happening quite often over the last decades... UK and German public transport where the "private" competitors have reduced staff and maintenance to the utter bare minimum, "chinesium" being so widespread it's a legitimate meme, or by Europe being completely dependent on India and China for basic pharmaceutical compounds.

In contrast, a government controlled (or outright owned!) monopolist has entirely different priorities, and the customers/citizens will get high quality products again.

pixl97
0 replies
1d3h

Um, only if the government in question is beholden to its people; otherwise it's just a means of enriching oligarchs.

ric2b
0 replies
1d12h

You're not using the common definition of "economy of scale" and that is leading to confusion. Maybe use "mega-corporations" or something like that?

Your mention of 9 companies vs 2 also tells me you're trying to describe oligopolies.

AnthonyMouse
2 replies
1d11h

> Economies of scale in commodities are the thing that makes it so we don't have to do subsistence farming and everyone gets to have sewn clothes on :)

There are really two different kinds of economies of scale.

One is the one you ordinarily think of, where you're big enough to amortize certain fixed costs over a lot of units so the fixed costs become a small part of the price. But the scale you need for this isn't all that large in most cases, and more than that leads to rapidly diminishing returns.
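
To make "rapidly diminishing returns" concrete with invented numbers, say fixed costs are $10M and the variable cost is $1 per unit:

    # Invented numbers: $10M fixed cost, $1 variable cost per unit.
    fixed, variable = 10_000_000, 1.00
    for units in (1_000_000, 10_000_000, 100_000_000, 1_000_000_000):
        print(f"{units:>13,} units -> ${fixed / units + variable:.2f} per unit")
    # 1,000,000 -> $11.00; 10,000,000 -> $2.00; 100,000,000 -> $1.10; 1,000,000,000 -> $1.01

Past some fairly modest volume the fixed-cost share is already a rounding error, so further scale buys almost nothing on this axis.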

The other is market power. You're so big that suppliers and customers and governments have no choice than to deal with you and then you can dictate terms. This is where we're supposed to have antitrust laws, and not have regulatory capture by industry incumbents, and on that front we've failed spectacularly.

Const-me
1 replies
1d10h

I agree in general, but note some industries are exceptions from that rule.

For example, for Boeing and Airbus these two scales are pretty close because modern airliners are very expensive per unit, and fixed R&D and tooling costs are sky high. Another example is TSMC.

However, none of that applies to internet search, social networks, web services, and most other goods and services.

AnthonyMouse
0 replies
1d1h

> I agree in general, but note some industries are exceptions from that rule.

Not as many as you might think.

> For example, for Boeing and Airbus these two scales are pretty close because modern airliners are very expensive per unit, and fixed R&D and tooling costs are sky high.

Things that are very expensive per unit are where you don't need economies of scale, because the unit cost can absorb a lot of bespoke custom work.

What's going on in the airline industry is regulatory capture. Instead of training pilots how to fly "planes" and requiring planes to have standardized controls, the law requires them to be trained on a specific plane from a specific manufacturer. So the law requires the pilots to have "economies of scale" in order to make a new design viable, even though a given pilot is generally going to be flying the same type of plane all the time, or only one of a very small number, and individual pilot training is extremely expensive (also as a result of regulatory rules).

The incumbents like this because it makes it hard for anyone new to enter the market because customers don't want to recertify their pilots on a new type of plane. The rationale for this is nominally safety, but it's not accidentally structured in a way that creates a regulatory barrier to entry, and it's the thing that led to the 737 MAX debacle, which is not an instance of safety occurring.

You also have design vs. manufacturing. Boeing doesn't have to be a vertically integrated conglomerate. One company could design a plane and then license the design to arbitrarily many others to manufacture them. The manufacturing companies themselves wouldn't have to be vertically integrated, you could have separate companies each making screws and fuselages and doing final assembly.

The designs could be protected by patents which then expire, so that anyone could manufacture the already-certified design of a plane once the patents expire.

This is all happening the way it is because the laws are purposely structured to make it happen, not because of any intrinsic characteristic of the underlying economics.

> Another example is TSMC.

This is plausibly the best real example, but it's also kind of not. TSMC currently has the best process, but it's only marginally better than Intel's or Samsung's. There are instances where that matters, but even more instances where it doesn't, and it wouldn't take two years of TSMC resting on their laurels before they were overtaken. And there are a lot more companies in semiconductor manufacturing than even those three: GlobalFoundries, Nanya, SK Hynix, Micron, SMIC, Kioxia, etc.

What's really happening there is that it's a highly competitive industry where spending more on R&D yields an advantage. That isn't exactly economies of scale, it's more like the other way around. You get lots of scale if you have a competitive process that customers want.

Conversely, you get a competitive process by spending a lot of money, which is fungible and doesn't inherently need to come from having any scale at all in the existing industry. People speculate about Apple designing their own semiconductor process even though they don't currently even have their own fabs, and they could viably do it, not because they have extensive existing internal fabrication volume but only because they have money.

And Apple gets their money from having a lot of scale in another industry, but you don't even need that. All it would take is a charismatic CEO capable of getting the investment spigot open and then the money comes from telling a story to investors rather than any existing business of the company.

Compare this to, say, auto manufacturing. There are typically only one or two fabs on the world's best process node at any given time and if you built one you'd be the new TSMC as fast as you could tell anyone about it. Whereas Tesla has car designs that people are willing to buy, but it will probably be a decade from the release of the Model 3 before they can manufacture as many cars as Toyota does, and as a result they currently have to amortize their R&D over fewer units.

HeatrayEnjoyer
0 replies
1d4h

Economy of scale also applies to the ability to exploit the common person. From society today we can see that when unilaterally driven entities become too large the economy of scale stops benefiting the regular individual and becomes a weapon against them.

simpaticoder
0 replies
1d8h

> it is in general bad for companies to become extremely large.

The core idea of your post is "moral hazard"[0], which describes a state where you are divorced from the consequence of your decision. Your idea also uses the intuition that the risk of moral hazard grows quickly with the scale of human endeavor, and reaches ~1 at a certain size. Moral hazard encompasses both the harm you do to others directly, as with missing or malicious support, and the indirect harm you do, as with externalities like pollution or depressing a local economy.

I personally think your intuition is obviously correct.

Moral hazard disconnects the social corrective circuitry present in small human groups like families or villages, but it also enables the creation and accumulation of vast wealth. This problem is known in philosophical circles as "The Problem of Dirty Hands"[1].

0 - https://en.wikipedia.org/wiki/Moral_hazard

1 - https://plato.stanford.edu/entries/dirty-hands/

EchoChamberMan
0 replies
1d8h

Competition and other things are supposed to keep company size down, but these days companies just buy their competitors and squash them.

CityOfThrowaway
17 replies
1d20h

The author seems to be mistaking the benefit of selection effects for competence.

Those who work anti-fraud for platforms know that there is something like a market for fraud. There is a market clearing quantity of fraud that will be present on any platform, dictated by the gains from learning to execute the exploit, the effort necessary to do so, and the number of times one might be able to do it.
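
A rough way to formalize that (my framing of it, not a precise model): a fraudster invests in an exploit when the expected payoff clears the cost of learning it.

    # n = times the exploit can be repeated, p = success rate per attempt,
    # gain = value per success, cost = upfront effort to develop the exploit.
    def worth_attacking(n, p, gain, cost):
        return n * p * gain > cost

    # Big platform: millions of potential repeats make even a low hit rate pay off.
    print(worth_attacking(n=1_000_000, p=0.001, gain=20, cost=5_000))   # True
    # Small platform: same exploit, same defenses, far fewer repeats.
    print(worth_attacking(n=2_000, p=0.001, gain=20, cost=5_000))       # False

n, and often the gain per success, grow with the size of the platform.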

Small platforms have way less fraud because they are way less valuable to exploit, in general. And that's caused by _the very fact that they are small_.

The big platforms actually are radically more competent at anti-fraud / anti-spam / etc, but the scope of their operations make them such juicy targets that there is a lot of it anyways.

It's the entire world of fraudsters against Amazon, or Google, or Facebook. That's a highly unequal battle, and it's surprising how well the platforms fare, all things considered.

tareqak
6 replies
1d20h

OK, fraud exists.

Irrespective of whether a platform is small or large, can it make its customers whole 100% of the time? Why or why not?

pdonis
2 replies
1d20h

> can they make their customer whole 100% of the time?

They do make their customers (i.e., the companies who pay them for access to their users) whole. But their users (at least the vast majority of them) are not their customers; they don't pay for the service. (It's true, as the article points out, that even paid support at the big platforms is very poor, but that's because the money involved is still not even rounding error compared to the astronomical profits being made from selling access to their users.)

They can't make their users whole, or even a significant fraction of them, without breaking their business model. That's why they don't do it.

CityOfThrowaway
1 replies
1d19h

Those who have run meaningful amounts of ads in The Platforms will know quite well that they are very frequently defrauded and almost never made whole.

There is a fair (but tolerable) amount of click-fraud, etc. that you basically factor in as a cost of doing business.

I'm quite skeptical of the ad model myself, but I'm relatively sure that "you are not the customer" is not altogether the most salient argument to make in this thread.

pdonis
0 replies
1d18h

> Those who have run meaningful amounts of ads in The Platforms will know quite well that they are very frequently defrauded and almost never made whole.

"Meaningful amounts" to most ad customers is not the same as "meaningful amounts" to the platforms.

CityOfThrowaway
2 replies
1d20h

To take your question seriously – one of the primary surface areas for fraud is exploiting the customer service function.

Just to work one (real) example:

1. Buy expensive face cream from Amazon

2. Take out the expensive stuff and replace it with junk

3. Complain to Amazon that you got junk and need a refund.

The challenge here becomes obvious. Even in claims where the customer was allegedly a victim, some portion of those customers are fraudsters. So any policy that is workable for a large, profitable company will likely result in something less than 100% of legitimate fraud cases being satisfactorily resolved.

In the case of the small business, they are very unlikely to be targeted by industrial scale fraud and will probably just be able to solve this with personal relationships and human judgment.

That said, the price you pay the small operator is very likely to be meaningfully higher than the price of the scaled operator.

If you care more about having near 100% probability of having fraud/abuse cases addressed than you care about benefiting from the economies of scale, then yeah, go with the small operator.

In things like communities, my general sense is that there is almost no beneficial economies of scale. So I like to hang out in smaller places. But in things like information retrieval (ie. Google), the benefits are usually worth it for me to use that.

tareqak
1 replies
1d20h

The kind of experiences that the post and I are talking about are ones where the platform user or customer ends up dissatisfied with the platform's conduct.

The question of the company itself being defrauded by some number of customers and having to defend against that as it becomes increasingly larger is definitely valid and difficult to tackle at scale. However, I think it is a totally separate one. I do not think it is a necessity for the company to compromise the ability to make any one dissatisfied customer whole just because they are being targeted by a whole lot of fraudsters all at once. The company might have to increase the prices of their products to pay for things like additional training, tamper-evident packaging, improving supply chain management, and so on.

CityOfThrowaway
0 replies
1d19h

They aren't separate questions, and there is a trade-off between protecting themselves and attempting to do right by good-faith customers.

The correct amount of fraud the platforms tolerate against themselves is definitely >0%, and it's for the reason you're identifying.

It seems that you're assuming it's obvious to identify good-faith consumers and do right by them. In some cases that's true. If you're a loyal, repeat customer of Amazon's they will basically just do whatever you ask them (within reason). But there are many cases where the act of discriminating between fraudster and Good Samaritan is in fact the whole of the challenge.

Indeed, one of the most common exploits is to hack the account of good customers and abuse merchants that way.

There's a dark, dark underbelly of crime out there. And the cost of that is borne by all of us in occasionally-less-than-good customer experiences.

pdonis
3 replies
1d20h

It's not just that the big platforms provide a much stronger incentive for fraudsters. It's that providing that stronger incentive is not a bug for the big platforms, it's a feature. It goes hand in hand with their business model, which is to sell access to as large a pool of users as possible.

> The big platforms actually are radically more competent at anti-fraud / anti-spam / etc

No, they're not. They are purposely incompetent at those things, because it's part of their business model. They cannot implement actual competence (which would mean providing sufficient actual human support to deal with the volume of mistakes that their automated algorithms make), not because it would cost too much (the article quite correctly points out that their huge profits provide plenty of cash to pay for human moderation), but because it would reduce their value to their actual customers, who are not their users.

CityOfThrowaway
2 replies
1d19h

I don't see how spam or fraud brings value to advertisers (which is what I assume you're talking about?) or any one else for that matter (other than the spammer/fraudster).

Perhaps I can squint and see how an email platform like Gmail could be read as a protection racket, but I don't see it in other platforms.

pdonis
0 replies
1d18h

> I don't see how spam or fraud brings value to advertisers

It doesn't; that's not the point. The point is, as I said, that the big platforms make money by selling access to as large a pool of users as possible, and that requires them to not crack down on spammers and fraudsters--because at their scale, it is impossible for them to reliably distinguish the spammers and fraudsters from the rest of their users. So they can't crack down on spammers and fraudsters without also reducing the size of their user pool and thereby killing their business model.

The article explicitly talks about this:

Either you have to back off and only ban users in cases where you're extremely confident, or you ban all your users after not too long and, as companies like to handle support, appealing means that you'll get a response saying that "your case was carefully reviewed and we have determined that you've violated our policies. This is final", even for cases where any sort of cursory review would cause a reversal of the ban, like when you ban a user for impersonating themselves. And then at FB's scale, it's even worse and you'll ban all of your users even more quickly, so then you back off and we end up with things like 100k minors a day being exposed to "photos of adult genitalia or other sexually abusive content".
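
To put rough, made-up numbers on that trade-off (mine, not the article's): when almost every account is legitimate, even a tiny false-positive rate turns into millions of wrongly banned users, which is why "only ban when extremely confident" and "catch most of the abuse" pull in opposite directions. A toy sketch, with all figures invented for illustration:

    # Toy base-rate arithmetic: even a very accurate classifier bans a lot
    # of innocent people at platform scale. All numbers are invented.
    users = 3_000_000_000          # accounts the automated system evaluates
    abusive_fraction = 0.01        # assume 1% of accounts are genuinely abusive
    false_positive_rate = 0.001    # 0.1% of innocent accounts get flagged anyway
    true_positive_rate = 0.95      # 95% of abusive accounts get caught

    abusive = users * abusive_fraction
    innocent = users - abusive

    wrongly_banned = innocent * false_positive_rate   # ~3.0 million people
    rightly_banned = abusive * true_positive_rate     # ~28.5 million accounts

    print(f"innocent users banned: {wrongly_banned:,.0f}")
    print(f"abusive accounts banned: {rightly_banned:,.0f}")
    # Tightening the threshold shrinks the first number but lets more abuse
    # through; loosening it catches more abuse but bans even more real users.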

Nextgrid
0 replies
1d12h

I don't see how spam or fraud brings value to advertisers

Spam and fraud still generate "engagement" (whether from the bad content itself or the people getting outraged by it), which contributes to the numbers they use to attract & bill advertisers.

The modus operandi of these platforms when it comes to moderation is to quickly take down whatever will get them in trouble fast with the law (DMCA takedowns, CSAM), or what will provide ammunition for attack pieces in the media (pornography, sometimes drug- or weapon-related content). But anything else is fair game to stay and earn "engagement" even in the face of user-submitted reports.

In the unlikely event that this "anything else" does get reported on in some big media outlet, then it'll be taken down and they'll issue an apology saying how they will do better in the future, and everyone seems to buy it no matter how many times they've done this.

alexey-salmin
2 replies
1d20h

I don't see any mistake.

Small platforms have way less fraud because they are way less valuable to exploit, in general.

Yes.

The big platforms actually are radically more competent at anti-fraud / anti-spam / etc, but the scope of their operations make them such juicy targets that there is a lot of it anyways.

If their business model (scale above everything) results in shitty customer experience I don't really care how competent they are. They are fighting the problem they created themselves: a platform too valuable for scammers.

8note
1 replies
1d20h

If their business model (scale above everything) results in shitty customer experience I don't really care how competent they are.

It's a tradeoff. If your customer service is good such that more people shop there, there will be more fraud attempts overall.

It's not that a platform is too valuable for scammers, it's that a platform is valuable at all. All platform value is value for scammers, and the most useful platform will also have the most scammers on it.

alexey-salmin
0 replies
1d20h

This is like saying that everything (including scam attempts) is proportional to the scale so the scale doesn't really matter.

It's not true. Scale brings synergy and reduces costs. But it's a two-way road which works for scams too: scamming one big platform and fighting one anti-fraud system is ten times more cost-efficient than adjusting to ten smaller ones.

bjterry
1 replies
1d20h

That is not at all in conflict with the post. The selection effect is one of the sources of the empirical diseconomies of scale the author mentions.

CityOfThrowaway
0 replies
1d20h

My first read was that the author was claiming that smaller companies are at least as competent at moderation as big platforms.

In my first read, I took that to mean from a process and inputs viewpoint.

But reading it again, I suppose the author could be taking a more consequentialist point of view. They are at least as good (or much better), largely because they don't have to do much.

I suppose that's fair, but also a lot of words in the OP to state a truism :-)

Majromax
0 replies
1d7h

The author seems to be mistaking the benefit of selection effects for competence. [...] The big platforms actually are radically more competent at anti-fraud / anti-spam / etc, but the scope of their operations makes them such juicy targets that there is a lot of it anyway.

The author addresses this point in one of the appendices:

> A standard extension of this idea that I frequently hear is that big companies actually do have the best anti-spam and anti-fraud, but they're also subject to the most sophisticated attacks. I've seen this used as a justification for why big companies seem to have worse anti-spam and anti-fraud than a forum like HN. While it's likely true that big companies are subject to the most sophisticated attacks, if this whole idea held and it were the case that their systems were really good, it would be harder, in absolute terms, to spam or scam people on reddit and Facebook than on HN, but that's not the case at all.

This response makes intuitive sense. If the big companies are indeed more effectively competent at anti-abuse/anti-fraud measures, then the fraud that does slip through should be more subtle and less obvious, particularly over a prolonged period.

Animats
13 replies
1d18h

This is easy to fix. Make Amazon responsible for warranting all sales on their platform. Amazon claims they are not the seller, that some anonymous party running an ad on their platform is the seller. That was litigated in Pennsylvania, and when it got to the level that Amazon was looking at a major loss, Amazon settled to avoid a precedent.

On the ad front, anyone who runs an ad for a third party should have to be able to produce the actual name and address of the advertiser on request, or accept responsibility for the contents of the ad. Businesses have no right to be anonymous. That is existing law in the EU and California.

Fraud is a problem because the major platforms get a free pass on supporting it. Not because it's hard.

kurthr
6 replies
1d18h

KYC or you're liable. Interesting solution. It's kinda amazing it's not already that way (since advertisers need to get paid). I guess it would make it harder to advertise if you're not already big, so that could hurt little guys, but so does scamming customers.

lotsofpulp
4 replies
1d18h

KYC just incentivizes discrimination. It eventually ends up with innocent people not being able to get bank accounts or rent hotel rooms because they get associated with prior probabilities that do not necessarily apply to them, aka discrimination.

Punishing scammers to dissuade and reduce the quantity of scammers is the government’s job and should not be laid on businesses, for a well functioning high trust society.

If international jurisdictions are a problem, then the national government needs to provide an identity verification API for businesses so they know the customer they are dealing with is subject to the legal system of the country.

Nursie
3 replies
1d17h

This is nonsense. Businesses are not above the law and claiming that you are only a proxy or that what you do is too complex shouldn't get you out of having to follow those laws.

We're talking KYC (or direct liability) for businesses here. There is nothing discriminatory about it.

lotsofpulp
2 replies
1d15h

I specifically wrote why know-your-customer laws are bad.

A law that makes a bank liable for their customer’s illegal actions harms people who kind of sort of look like they could be doing something illegal, even if they are innocent. Plainly discriminatory.

A law that makes hotels liable for human trafficking harms people who “look” like human traffickers. Guess what, now people who want to pay with cash or don’t have an ID or don’t fit whatever other profile are no longer welcome at a hotel. Again, obviously discriminatory since people who didn’t do anything illegal are being punished.

These aren’t made up examples, these are real life examples. The government is literally telling a business they have to judge someone without the person being able to provide a defense, like they would in a court of law.

Nursie
1 replies
1d15h

If you sell something to the public, you must abide by certain regulations.

A law that says "Amazon must know their sellers and be able to pass the consumer on to those sellers, or it is liable itself as a backstop" might indeed shut some people out of Amazon's marketplace who cannot comply.

But those people would not be allowed to legally retail to the public anyway, as they couldn't fulfil those requirements.

The government is literally telling a business they have to judge someone without the person being able to provide a defense, like they would in a court of law.

A business, not a someone, and there are all sorts of compliance rules you could complain about doing exactly the same thing.

lotsofpulp
0 replies
1d8h

A business is someone. And KYC should just be the government providing an identity verification API so that in the event there is a problem, the government knows who to go after.

Government leaders just want to be able to point fingers at as many people as possible with KYC laws (which do more than just require the business to verify who they are doing business with).

A business should know who they are transacting with, but a business should not be vetting them for illegal activities. That is a job for courts. Businesses can even be required to report who buys what and how much (for example, bulk supplies to make methamphetamine or explosives), but the business should not be using nebulous heuristics such as credit score or income or other non-illegal things to figure out who may or may not be a criminal.

Last example, the US government should require online marketplaces to know who they are doing business with, and it should be someone within US jurisdiction. The US government should also provide the online marketplaces with an identity verification API to do this. So when a product sold on Amazon causes a problem, it should be easily possible for the government to charge the seller with a crime, and dole out an appropriate punishment.

Animats
0 replies
1d14h

That's for ads. For online retail, the site that takes the money is the retailer. Period. The platform and the sellers can fight out who pays for returns and warranty problems. After the customer gets paid.

palmfacehn
1 replies
1d16h

This is basically regulatory capture for the fraudsters. Fraudsters will have forums full of ID fraud services to fix their KYC problem. Big platforms won't blink at the costs involved with verification. Fraudsters will simply rinse and repeat when their scheme collapses.

Normal business people will have yet another headache. Corner cases will exist where users are condemned to unending bureaucratic limbo. A human support person, if they exist at all, will repeat a meaningless mantra, "I'm sorry sir, we've been unable to verify your identity with the documents provided. This is a compliance requirement as per the online platform safety act of 2026"

All the while, scammers will be churning out new KYC'd accounts. The process will have nothing to do with safety, as the platform will only care about complying with the new regulation. Don't forget that Amazon still earns fees from each sale. Large platforms will enjoy the compliance requirements as an additional barrier to entry for competitors.

Not to mention the inevitable day when all of that personal data is released, breaches exposed for all to see. Another headache the fraudsters won't have. The regulars of the ID fraud forum will probably enjoy it.

I could start my own VC fund if I had $1 for every post on HN claiming, "It is so simple and easy! We can fix this with more government!" It is never so simple when you are dealing with humans and incentives. Human societies are not codebases you can just patch to eliminate bugs.

Nursie
0 replies
1d15h

If big platforms like amazon find themselves liable if they cannot connect seller activity to real human beings and real businesses, they will have an incentive to fix their systems to stop those spammers churning out KYC'd accounts.

The platform will care about costs to itself. One of the advantages these large platforms enjoy over smaller ones is their ability to pass the buck on consumer law, to their benefit and to the scammers. By making it the platform's liability, always, this could start to change.

If the companies that think of themselves as large 'platforms' end up erecting barriers to legitimate business in the process, then that becomes their problem and is likely to result in more competition in the market from companies who act as actual stores themselves.

It's not 'so simple' but a liability change, and an end to exempting retailers like amazon from their retailer responsibilities could start to rebalance things.

matheusmoreira
1 replies
1d16h

Amazon settled to avoid a precedent

Maybe that's the true problem. At some point courts should stop settlements from happening due to the value of precedents to society as a whole.

otterley
0 replies
1d15h

While I get your point about precedential value, most disputes won’t result in new law even if litigated. Settlements are more economically efficient than having every dispute adjudicated. There is pretty much universal agreement among economists about this and many papers have analyzed the question. And the court system is definitely not an economy of scale.

Nextgrid
1 replies
1d12h

That was litigated in Pennsylvania, and when it got to the level that Amazon was looking at a major lose, Amazon settled to avoid a precedent.

That seems like a problem in our legal system though. It basically means you can force your opponent through a long & expensive legal process (hoping they'd give up or run out of resources in the meantime) and still not risk anything yourself as you can abort the whole thing the second the tide starts turning against you.

Majromax
0 replies
1d7h

That seems like a problem in our legal system though.

The civil legal system is designed to settle essentially private disputes between parties. It is not really a matter of public interest whether my neighbour or I bear the liability for the tree branch that hits my parked car during a storm.

Your discomfort arises because the actions of large companies are matters of public interest, such that a notionally 'private' dispute can still affect hundreds of thousands of people.

Class-action lawsuits are one way to shoehorn this sort of dispute into the court system, but the other way is regulation. Depending on the framework, regulators can issue orders themselves or bring court cases for the sake of public interest. Unfortunately, good, timely, and competent regulation is its own hard problem.

bawolff
11 replies
1d19h

I largely agree, although i think the author misses on some of the examples.

Reddit is not an example of being bad due to this effect, but an example of being good because of it. Reddit is not one huge platform but a combination of thousands of subreddits independently moderated. That is how reddit remains good, because small subreddits have small scale.

Running your own email server isn't a counterexample, but is legit terrible because in essence you aren't moderating a small community, you are moderating the whole world, but just your view of it, since anyone can email anyone. That is the worst because it is the worst of both worlds - no budget from being big scale but still all the problems of being large scale.

mjevans
5 replies
1d19h

Worse, spammers are better at their 'professional job' (getting mail past other people's spam filters and inane preferences) than your tiny mail server or small group of employees who don't understand how to not look spammy. Like not having annoying email 'signatures' and other anti-patterns commonly found in corporations.

Loughla
4 replies
1d18h

Like not having annoying email 'signatures'

What?

rashkov
2 replies
1d18h

I used to work IT helpdesk and once spent weeks trying to find out why my company's emails were getting bounced all of a sudden. We had just switched to Microsoft's hosted exchange (Office 365), so we figured that must be it. I escalated the issue until I got some very technical folks on the phone, but all they could say was that everything looked 100% fine and that something we were sending was tripping the spam filters. I thought about it and realized we had recently changed our email signatures to include a bit.ly shortened link. I guess spammers also use those to obfuscate their links, so we were indeed sending out sketchy email.

klabb3
1 replies
1d7h

I mean that one was pretty obvious. Why on earth send a bit.ly link in every email? You have a-hrefs in email, so the link doesn’t even need to be shortened.

I got some very technical folks on the phone, but all they could say was that everything looked 100% fine and that something we were sending was tripping the spam filters

This “we won’t tell you what triggered it because it helps the spammers” is… questionable, and it feels like security by obscurity. Yes, it may help the spammers a little but not possibly enough to warrant the black hole of debugging legit issues.

The right way to fight spammers isn’t about arms racing with hyper-advanced content filtering. Instead, we need to make it uneconomical.

rashkov
0 replies
18h33m

The bit.ly link was to track how many people clicked on it, for marketing purposes I guess.

As for the support team on our email provider’s side, they didn’t know what was tripping it. We simply got to a point where we ruled everything else out. The only thing left was the content of the email being legitimately spammy, somehow.

Our email provider wasn’t the one that was filtering our emails, at any rate, rather it was the recipient email systems.

mjevans
0 replies
1d18h

Things like additional inline images, hyperlinks to a bunch of places, and other non-text data. It's probably only a couple fractions of a point more spam score per message, but spammers and typical employee end-user communication can look annoyingly similar.
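
For illustration only, here is a toy version of that kind of additive scoring. The rule names, weights, and threshold are all made up (they are not any real filter's rules); the point is just how ordinary signature clutter, a shortened link plus a logo image plus a pile of hyperlinks, can tip a message over a threshold:

    # Toy additive spam scorer: each heuristic adds a small score, and
    # "normal" corporate mail habits can push a message over the threshold.
    # Rules, weights, and threshold are invented for illustration only.
    import re

    RULES = [
        (r"https?://(bit\.ly|tinyurl\.com|goo\.gl)/", 1.5, "URL shortener in body"),
        (r"<img\b",                                   0.7, "inline image"),
        (r"(https?://[^\s\"'>]+)",                    0.2, "each hyperlink"),
    ]
    THRESHOLD = 3.0

    def spam_score(body: str):
        score, hits = 0.0, []
        for pattern, weight, label in RULES:
            matches = re.findall(pattern, body, flags=re.IGNORECASE)
            if matches:
                score += weight * len(matches)
                hits.append(f"{label} x{len(matches)} (+{weight * len(matches):.1f})")
        return score, hits

    signature_heavy = """Hi team, see the attached report.
    --
    Jane Doe | Example Corp
    <img src="https://example.com/logo.png">
    Book time with me: https://bit.ly/3abcdef
    https://example.com/linkedin  https://example.com/twitter  https://example.com/blog
    """

    score, hits = spam_score(signature_heavy)
    print(f"score={score:.1f} spam={score >= THRESHOLD}")
    for h in hits:
        print(" -", h)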

singron
3 replies
1d18h

My experience running my email server is exactly what danluu says. Receiving emails and classifying spam is not a problem at all, but my outgoing mail is often blocked. It's not that I struggle to moderate the whole world, but the whole world is moderating me.

dewey
1 replies
1d18h

It's not that I struggle to moderate the whole world, but the whole world is moderating me.

Wouldn't that be the "best case" for email because of the centralized idea? In reality it's a few huge companies that matter and if you end up on one of their lists because of some bad IP, missing header etc. you are a bit out of luck.

torton
0 replies
1d15h

It is in fact very instructive, given that email is an open federated protocol but it doesn’t matter because the industry has centralized.

We now have several new social networks based on the ideas of open protocols and federation. They are not yet large enough to attract too much attention but we’ll see if the story rhymes.

torton
0 replies
1d15h

It’s not “the whole world” moderating you, it’s more likely than not specifically Google. In my age and peer group, personal usage of GMail is 98%+, professional usage of GMail is 50%+. Not delivering to GMail is not an option, especially so if the email is not just for a social purpose but, for example, for finding a job.

I ran my own email for close to 15 years, most of that time on the same IP address and the same old domain. I could never get GMail to reliably accept my mail, and of course there is no support to find out why. I set up and tested their documented best practices, and had no issues corresponding with a lot of people.

Spam is not an issue in practice. Two basic filters drop 95% of spam without scanning the content, and an open source reputation+content engine drops the rest to the tune of maybe 1 spam a day. The two annoying parts are dealing with exceptions, often large companies misconfiguring their outgoing email; and the open-source nature of the filtering engine’s ruleset, meaning it occasionally completely breaks for a short while.
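
The two filters aren't named, so this is an assumption on my part: a common pair for this kind of pre-content filtering is a forward-confirmed reverse DNS check plus a DNS blocklist lookup. A minimal sketch of that idea (zen.spamhaus.org used as the example blocklist; a real setup would run this in the MTA with its own resolver):

    # Sketch of two pre-content checks of the kind described above;
    # the specific pair is my guess, not the commenter's actual setup.
    import socket

    def has_fcrdns(ip: str) -> bool:
        """Forward-confirmed reverse DNS: a PTR exists and resolves back to the IP."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
            return ip in socket.gethostbyname_ex(hostname)[2]
        except OSError:
            return False

    def listed_on_dnsbl(ip: str, zone: str = "zen.spamhaus.org") -> bool:
        """DNSBLs answer lookups for <reversed-ip>.<zone>; any answer means 'listed'."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True      # listed on the blocklist
        except OSError:
            return False     # NXDOMAIN (or lookup failure): treat as not listed

    def accept_connection(ip: str) -> bool:
        # Both checks run before any message content is seen.
        return has_fcrdns(ip) and not listed_on_dnsbl(ip)

    print(accept_connection("8.8.8.8"))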

selcuka
0 replies
1d18h

you aren't moderating a small community, you are moderating the whole world

Sure, but you are still moderating for a small community. One person's spam is another's ham, so it's much more difficult for Gmail to be as accurate as my spam filter (which only needs to learn from a couple hundred people's preferences) because it will inevitably overfit.

ec109685
9 replies
1d21h

Amazon selling obviously fake SD cards / flash disks is baffling. It’s super easy to spot and fake disks cost users data, so you’d think they’d take that seriously.

https://a.co/d/auXcxrZ ($29 for 1TB as their “overall pick”)

nonrandomstring
2 replies
1d20h

Some work I supervised sampling Amazon and Ebay products for vendor malware showed an absolutely shocking prevalence of dodgy goods.

I don't think they can stem the tide.

smallmancontrov
0 replies
1d17h

The same ebay scammer has been popping the same fraudulent listings into my feeds every few weeks for the last 8 years. Their "pay through (external url)" URL always uses Comic Sans, so I call them the "Comic Sans Scammer." If ebay was trying, I suspect that these listings would have changed a bit to evade detection, but the only thing that ever changes is the compromised account name.

To ebay's credit, their support is very good once you are on the "press button, get called seconds later by a human" tier, but I also make ebay a lot of money, so I'm not sure how common it is to receive that level of service.

kmeisthax
0 replies
1d20h

Amazon is perfectly willing to ban people from reselling their used Apple products. The only reason why they don't apply that treatment to memory devices is because Amazon runs on payola. Apple paid for market exclusivity and Samsung/Sandisk/etc didn't.

ProllyInfamous
2 replies
1d19h

I only purchase solid state storage, in person, from big-box stores.

They have providence, maintaining custody from manufacturer to store shelf.

Unlike (e.g.) Amazon, where all vendors have "identical" items thrown into one big "for sale" bin.

Tommah
1 replies
1d18h

They have providence, maintaining custody from manufacturer to store shelf.

That's called provenance, not providence.

ProllyInfamous
0 replies
16h27m

Thank you, I had a hunch I mis-remembered.

MattGaiser
2 replies
1d20h

I imagine they make a ton of it back from users who never need anywhere near that much space.

The non-technical people in my life insist on a terabyte for everything. One of them bought an iPad Pro just to have a terabyte of space. After 8 years of their prior iPad, they had used up all the space (64GB, I think) and wanted to be sure they wouldn't run out.

margalabargala
0 replies
1d18h

Depending on usage patterns this assumption might not hold.

Improvements in cameras etc. are causing the photos and videos that people take to hugely increase in size. The prior iPad you spoke of might have only recorded 720p video, while the new Pro likely does at least 4K. If they use their iPad to record video of family events, that 1TB might represent the same usage as the old one.

Nextgrid
0 replies
1d12h

Keep in mind that these scam devices rarely present themselves as a functional but lower-capacity version.

It's more often the case that the device does present the advertised capacity, but with a massive "black hole" in the middle where any writes are silently discarded.

Depending on your filesystem, you don't need to write much data at all before you're likely to hit this "black hole" and start losing data.
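
A quick way to catch exactly that failure mode, in the spirit of tools like f3 or H2testw (this is a rough sketch of the idea, not those tools): write blocks whose contents depend on their position across the whole advertised capacity, then read everything back and see what survived. The script name and numbers below are hypothetical, and it's destructive, so point it at a scratch file on the suspect drive:

    # Rough capacity check: fill with position-dependent blocks, then verify.
    # Real tools also remount the drive / drop caches before verifying so the
    # read-back comes from the device rather than RAM; this sketch skips that.
    import hashlib
    import os
    import sys

    BLOCK = 1024 * 1024  # 1 MiB

    def block_for(index: int) -> bytes:
        """Deterministic 1 MiB block whose content depends on its position."""
        digest = hashlib.sha256(index.to_bytes(8, "little")).digest()  # 32 bytes
        return digest * (BLOCK // len(digest))

    def check(path: str, total_bytes: int) -> None:
        blocks = total_bytes // BLOCK
        with open(path, "wb") as f:                 # fill the drive
            for i in range(blocks):
                f.write(block_for(i))
            f.flush()
            os.fsync(f.fileno())
        bad = 0
        with open(path, "rb") as f:                 # verify what actually stuck
            for i in range(blocks):
                if f.read(BLOCK) != block_for(i):
                    bad += 1
        print(f"{blocks - bad}/{blocks} blocks intact, {bad} silently dropped or corrupted")

    if __name__ == "__main__":
        # e.g. python capcheck.py /mnt/suspect-card/testfile 32000000000
        check(sys.argv[1], int(sys.argv[2]))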

no_wizard
4 replies
1d21h

What all this says to me, in total, is that greater competition, where you have a bunch of medium-sized or smaller companies but no clear massive (50%+ market share) leader in which power can congregate, means companies have more incentive to provide a better experience, and having multiple targets makes things harder for negative actors.

My takeaway here is that, as a matter of regulation and the spirit of market competition, we should lean toward incentivizing medium-sized companies worth a collective $100 billion+ rather than a few giant trillion-dollar corporations, at least as it relates to mostly-software companies.

I think with hardware it’s A bit different when it comes to economies of scale so I don’t know how that could break down and I don’t want to include that for the sake of argument but there could be something there too

pixl97
0 replies
1d21h

Otherwise summed up as "Monocultures suck"

hyperpape
0 replies
1d10h

Worth noting that by market cap, Twitter is 20-30x less valuable than Facebook.

graemep
0 replies
1d20h

That is true in any market, not just software. There is a large element of regulatory capture that shapes regulation to favour bigger businesses instead of reining them in. Software and online services have been particularly bad because there are so many ways to work around regulation. Apple's current malicious compliance with the EU is an egregious example, but most are more subtle (FB persuading regulators to let them take over WhatsApp by arguing chat and social media are different markets).

Ideally, IMO, we should aim to prevent monopoly, not just abuse of monopoly (which is what regulators currently do): https://pietersz.co.uk/2009/11/fix-capitalism

Geisterde
0 replies
1d20h

It is the general consensus that competition in the marketplace is good for consumers and propels business to find creative solutions that create value in the economy. I would say that isn't always the case. Rockefeller had something like 90% market share before antitrust litigation began; by the time the case was adjudicated their market share had already slipped. What this indicates is that even when one party holds 90% of market share (and probably total wealth for that matter), the market will eventually cut them down to size. At the same time, Rockefeller was responsible for bringing a groundbreaking innovation to society. So, it was good that initially all that capital flowed to him, because he was using it to move society forward (at least in this regard), and it was also good to see that capital flow away on its own, because it was needed elsewhere.

I would also point out that the centralization of companies into today's megacorps is influenced by many factors; economies of scale may be one, but so are intellectual property laws (we really don't need them!). In fact, I would venture a guess that most regulation is written by the companies that are being regulated, which keeps competition at bay, and they won't regulate themselves downwards in the food chain. Finally, I would say the centralization really took place during WWI/WW2, when it was convenient for the government to shout orders at a whole 3 people to build tanks instead of automobiles; this centralization is a strategic goal of the United States government, and enacting it through secret deals (rather than law) is a preferable method to preserve state secrecy.

roenxi
3 replies
1d21h

I started seeing a lot of comments claiming that you need scale to do moderation, anti-spam, anti-fraud, etc., around the time Zuckerberg, in response to Elizabeth Warren calling for the breakup of big tech companies, claimed that breaking up tech companies would make content moderation issues substantially worse

Well, Zuckerberg is correct in context. Politicians don't mean moderation as in everyone being civil with each other or filtering out garbage results. That isn't a political issue. They mean things like supporting the local censorship regime, which is usually country-specific. So from a political perspective, more centralisation is the only way to successfully implement moderation. Because moderation doesn't mean "behave to community standards", it means "comply with politicians' standards", i.e., moderating what people can say as part of their legal and political engagement with society.

I recall when Australia had a suppression order about Cardinal Pell, a high-ranking official in the Catholic Church who was convicted of various sex offences. And in fairness the case was a political football, so the order was reasonable. It was eventually overturned, but reputational damage was done because, unfortunately for the courts, there are too many foreign news organisations, so the block failed miserably and it was major news in Australia without any official Australian news sources covering it.

It would be relatively easy to enforce that sort of block if the major foreign distributors were just Facebook or YouTube. They'd probably cooperate with the Australian authorities on request.

richrichie
1 replies
1d21h

This “cooperation” with the state is a huge problem though.

roenxi
0 replies
1d21h

Not when talking to Elizabeth Warren. It wasn't that much later that the whole Hunter Biden laptop thing happened and showcased what Zuck was getting at. In context it was a straightforward, informative and correct answer for this audience. They won't be able to moderate the debate if the internet is made up of smaller companies; the coordination is much easier with just a few majors.

graemep
0 replies
1d21h

I think there is an ambiguity in what you said. To be clear, the conviction was overturned, but the news had long been globally disseminated by then.

nonrandomstring
2 replies
1d20h

There's a lot to unpack in this article, but the key idea is the "myth of big" and that small is beautiful. It should be no surprise that Big-Tech succeeds in part by creating its own powerful mythology. All giant organisations, nations states too, build myths of unique competence, infallibility, and exceptionalism. At the same time it hardly needs saying that scale brings inefficiency, corruption, sluggishness, and insensitivity. If it weren't for protectionist measures in place to sustain big monopolies they'd be torn asunder in real competitive conditions.

jseliger
1 replies
1d17h

"the key idea is the "myth of big" and that small is beautiful"

Based on a cmd-f, I don't see the words "myth" or "beautiful" in the article, and Luu writes:

It's easy to name ways in which economies of scale benefit the user, but this doesn't mean that we should assume that economies of scale dominate diseconomies of scale in all areas. Although it's beyond the scope of this post, if we're going to talk about whether or not users are better off if companies are larger or smaller, we should look at what gets better when companies get bigger and what gets worse, not just assume that everything will get better just because some things get better (or vice versa).

nonrandomstring
0 replies
1d12h

5 + 4

3 * 3

10 - 1

sqrt(81)

I can't see the number 9 anywhere above. Can you find it?

jseliger
2 replies
1d20h

Well, I received a keychron keyboard a few days ago. I ordered a used K1 v5 (Keychron does small, infrequent production runs so it was out of stock everywhere). After some examination, I've received a v4. It's the previous gen mechanical switch instead of the current optical switch. Someone apparently peeled off the sticker with the model and serial number and one key stabilizer is broken from wear, which strongly implies someone bought a v5 and returned a v4 they already owned. Apparently this is a common scam on Amazon now

This is a tangent, but I just looked at Keychron's website, and I don't know how someone is supposed to choose from all the models and options. And there seems not to be a "comparison" page on their site.

pushcx
0 replies
1d5h

They do have a comparison page, but it's hard to find, incomplete, and leaves out significant information: https://www.keychron.com/blogs/news/difference-among-keychro...

The Keychron Discord has seen enough questions that it has many bot autoresponders to explain model differences. I thought the experience of selecting among the many very similar, poorly-named models was a frustrating slog, but it seems unlikely to change given that they sell out their production runs.

As long as you happen to quote it, I forgot a key sentence in my story: I placed my order with the official Keychron store and it was fulfilled by Amazon.

Nursie
0 replies
1d14h

I bought one recently, a K8. I was also confused, I think I had to find a reddit thread explaining what the ranges were ...

IIRC Q is the full-fat, fully programmable, hot-swappable top of the range. V is like Q but a bit more budget-friendly, may have plastic cases etc. K is their 'standard' range, some have hot-swappable switches, some don't, generally not programmable. C is budget K, wired only.

But then there's various pro, max, god knows what variants. It's a ludicrously wide product range.

All I can say for sure is I really like my K8.

bsder
2 replies
1d5h

The issue is law.

If a "ban" was considered "defamation" and you could take a FAANG to court and be awarded $1 Million in damages, suddenly FAANGs would be superb at avoiding false positive bans.

samatman
1 replies
1d4h

There's a kernel of a good idea here, but banning someone from a platform isn't defamation by any reasonable interpretation.

bsder
0 replies
1d

In what way are unwarranted bans not defamation?

Anyone who used to see your account and watched it suddenly go away is under the impression that you "did something bad which got you banned". That is a public statement about your character.

If you did NOT do something bad, that should be actionable.

rossdavidh
1 replies
1d18h

There are a lot of cases where smaller is better. This article, for example, is way too long (although basically on target).

graton
0 replies
1d14h

Agreed. I wonder if anyone read the entire article. Firefox reader mode says 293–372 minutes to read the article, which is 4 hours 53 minutes to 6 hours 12 minutes.

Though I do think reader mode can overestimate by a lot. Still a 53,000+ word article is going to take a long time to read.

roca
1 replies
1d19h

I wonder if AI is going to change the equation here --- by making it much cheaper to run spam campaigns against thousands of different small forums.

add-sub-mul-div
0 replies
1d19h

The data pollution is going to be vast.

waynesonfire
0 replies
1d17h

It's almost like if you have a diverse ecosystem it's harder to spam everyone. When there are just the big three, you only have to crack their algorithms to spam everyone. Spam takes advantage of the same economies of scale and does it better.

Seems like it works exactly opposite of how Zuckerberg describes it. Not surprised.

washmyelbows
0 replies
1d19h

I spent a good chunk of time at a small company trying to mitigate spam. It always felt like an uphill battle, one that Google just fought significantly better than I could. I'm not usually one to advocate outsourcing services that my department could provide, but it felt like the right call to outsource email so that it would be better for the users.

tareqak
0 replies
1d20h

In my social circles, many people have read James Scott's Seeing Like a State, which is subtitled How Certain Schemes to Improve the Human World Have Failed. A key concept from the book is "legibility", what a state can see, and how this distorts what states do. One could easily write a highly analogous book, Seeing like a Tech Company about what's illegible to companies that scale up, at least as companies are run today. A simple example of this is that, in many video games, including ones made by game studios that are part of a $3T company, it's easy to get someone suspended or banned by having a bunch of people report the account for bad behavior. What's legible to the game company is the rate of reports and what's not legible is the player's actual behavior (it could be legible, but the company chooses not to have enough people or skilled enough people examine actual behavior); and many people have reported similar bannings with social media companies. When it comes to things like anti-fraud systems, what's legible to the company tends to be fairly illegible to humans, even humans working on the anti-fraud systems themselves.

I think legibility is one aspect of it, but I think another could very well be empathy. If you want to support a user who has a problem, then you have to want to help them and follow through. If you want to make a customer whole after they lost money on your platform, then you want to pay them back and follow through. If you want to stop your platform to be a tool used to disseminate hate, then you want that outcome and move heaven and earth to make it happen. You have to want to see the problems and you have to want to follow through and fix them.

slv77
0 replies
1d5h

This author is essentially arguing that the security of his garden shed is equivalent to the security of most banks because statistically banks get robbed more often.

Spammers and scammers have a fixed cost to initiate an attack and the potential return on investment is proportional to the user base. A platform below a certain size simply isn’t profitable to attack. It isn’t that small shops are going to be better at mitigating spammers and scammers as the article states, small shops just aren’t being attacked.
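
To put invented numbers on that fixed-cost argument (purely illustrative; the cost and per-user take are made up):

    # Toy break-even model for the "fixed cost to attack" point above.
    # All numbers are invented for illustration.
    fixed_cost = 5_000.0        # building the phishing kit / fake listings / bot accounts
    revenue_per_user = 0.002    # expected take per user reached, in dollars

    def expected_profit(users: int) -> float:
        return users * revenue_per_user - fixed_cost

    break_even = fixed_cost / revenue_per_user
    print(f"break-even audience: {break_even:,.0f} users")   # 2,500,000

    for users in (50_000, 500_000, 5_000_000, 500_000_000):
        print(f"{users:>11,} users -> expected profit ${expected_profit(users):,.0f}")
    # A niche forum is a money-loser for the attacker; a mega-platform is a windfall.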

If the arguments were true that there are diseconomies of scale there wouldn’t have ever been consolidation in email hosting. Email providers all use the same protocols so the incremental cost of attacking a small provider is zero. A small company hosting their own email will quickly see their customers flee for the big providers as they drown in spam. Scaling spam management was the primary driver that drove consolidation in email providers in the early 2000’s.

This also drove consolidation in areas with similar economics such as e-commerce. The low cost of attacking a new player in the e-commerce space and the cost of scaling risk management drove a lot of the growth of PayPal and spawned the growth and consolidation of outsourced e-commerce fraud risk management.

The economics are going to be true of any n-squared business model where the value is in the size of the network. I go to LinkedIn not because it has the lowest scam rates but because most of the people I work with are there.

While a wildly diverse ecosystem would be more resilient against fraud this is the internet, the biggest n-squared system of them all. It’s unlikely that the internet is going to devolve into small, diverse fiefdoms so the problem is here to stay.

rgmerk
0 replies
1d20h

The claim that small platforms are better at moderation needs some nuance.

In terms of content that is annoying or worse to the forum’s users, it’s probably broadly true. But it’s not necessarily true for content that the users of the forum are fine with, but society more broadly has issues with.

Take for instance forums which are devoted to sharing “revenge porn”.

nojvek
0 replies
1d4h

I agree with some parts not all of them.

His point about fraudulent Amazon SD cards makes sense, but you could get fraudulent cards in other places too.

Amazon has one of the best return policies and support.

I once bought a portable AC from a small shop and they wouldn’t take it back even though it was quite noisy compared to what they had advertised.

Large platforms, e.g. Google and Meta, pride themselves on having billions of users with very few engineers.

The problem is Google thinks engineers should be isolated from users and support can be algorithmified. More often than not, it fails.

I am all for smaller companies when it comes to human touch.

However there is a case for economies of scale when a problem is simple and can be captured fully by an algorithm.

E.g. YouTube only really works because they have algos to detect what background music is attached to what video. Imagine if humans were doing it; they wouldn't be consistent.

The truth is that computers are slowly getting better at understanding the real messy world.

And the reverse problem is that computers are also getting better at creating human-looking mess in the real world.

nitwit005
0 replies
1d18h

This is partly just economy of scale for the scammers. Every additional platform you target costs money, so naturally they tend to target Facebook, Google, and friends.

kmeisthax
0 replies
1d20h

By comparison, on lobste.rs, I've never seen a scam like this and Peter Bhat Harkins, the head mod says that they've never had one that he knows of. On Mastodon, I think I might've seen one once in my feed, replies, or mentions.

I self-host Mastodon and my mentions are chock full of this one Japanese Discord cybercrime server[0]'s invites. This only started four days ago, but it's already making Mastodon almost as bad as Musk-era Xitter spam.

The front-line tool for moderating federated spam is to block the instance that sent it. A year ago, most spam were rando servers following everyone they could think of. However, the Japanese Discord guys are almost slightly kinda smart: they started registering accounts on smaller (non-single-user) instances. Because Mastodon doesn't support any kind of account verification[1], any instance with open registration (i.e. "just sign up") is effectively an open relay for spammers.

I don't consider this to be negating the post, but it is relevant context.

[0] They advertise DDoS, hacking, and stolen credit card services, or at least that's what I was able to glean from the Japanese language advert they sent

[1] This is why Google requires you to verify with an SMS code to do anything e-mail or YouTube related, for example - it lets them throttle how many registrations per year you can do with that number.

hislaziness
0 replies
1d19h

From the fraudsters'/spammers' perspective, large platforms provide the economies of scale.

eightnoteight
0 replies
1d16h

"these systems have to be fully automated because no one could afford to operate manual systems at our scale", what's really being said is more along the lines of "we would not be able to generate as many billions a year in profit if we hired enough competent people to manually review cases our system should flag as ambiguous, so we settle for what we can get without compromising profits".

Well, anything that comes out of any business has these intentions for sure, but statements can be like multi-dimensional vectors: they can be doing two things at once.

The ROI for fraud is low for a smaller platform, so it doesn't happen to smaller-scale companies as much as it does at bigger scale. But given an interested party who is explicitly looking for the target cohort, even the smaller platforms see fraud.

So even if social media as a whole were to split into multiple entities, the ROI would still make sense for fraudsters to continue the fraud.

Even if you have 1000s of entities, there would be a startup that builds a wrapper over those 1000s of entities to make the fraud ROI work, just like there are businesses that offer scraping of competitors.

diebeforei485
0 replies
1d17h

Larger platforms care more about protecting the platform, even at the cost of mistakenly banning users. They can afford to do this because they already have lots of users.

Smaller platforms will not want to mistakenly ban users.

btbuildem
0 replies
1d3h

It's an interesting read; I wonder how much of the failure to support users is a factor of the size of the organization itself, and how much is the scale of the user base. I think the other (orthogonal) dimension to consider would be whether the users are also customers, or just part of the product.

For meta, google, twitter et al, it's almost as if the users that fall through the cracks of the system are off-spec product pieces, where it's cheaper to get more instead of fixing existing issues.

What I find fascinating (and this is anecdotal) is that large companies that have a few big customers ALSO suck at support, despite having a relatively large portion of their headcount assigned to Customer Support. I would tend to agree with the article's author that as an org grows, it becomes less able to perform its core functions.

andrewljohnson
0 replies
1d20h

It’s not apples to apples… are the small/medium platforms really going to be lacking in fraud etc if there are no large platforms to target?

The small sites only lack fraud because the large platforms draw the fire.

andrewljohnson
0 replies
1d20h

As a counterexample, there are and were plenty of bad actors on phpbb forums.

Anotheroneagain
0 replies
1d16h

The problem here is that it's the scammer who experiences the economies of scale, but not the moderation. A big platform is worth a hundred thousand times more for the scammer than Mr 0.0001% but the cost of moderation is the same for both.