
Florida House of Representatives approves bill to ban social media for kids < 16

causal
330 replies
1d9h

I might be the only one here in favor of this, and wanting to see a federal rollout.

It is not reasonable to expect parents to spontaneously agree on a strategy for keeping kids off social media, and that kind of coordination is what it would take, because the kids and the social media companies have more than enough time to coordinate workarounds. Have the law put the social media companies on the parents' side, or these kids may never get the chance to develop into healthy adults themselves.

lukev
161 replies
1d4h

But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet (and probably unconstitutional under the First Amendment, to boot.)

It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.

The government doesn't get to know or regulate the websites I'm visiting, nor should it. And "protecting the children" isn't a valid reason to remove constitutional rights from adults.

(And if it is, let's start talking about gun ownership first...)

jlokier
57 replies
1d4h

But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet

That seems intuitive, but it's not actually true. I suggest looking up zero-knowledge proofs.

Using modern cryptography, it is easy to send a machine-generated proof to your social media provider that your government-provided ID says your age is ≥ 16, without revealing anything else about you to the service provider (not even your age), and without having to communicate with the government either.

The government doesn't learn which web sites you visit, and the web sites don't learn anything about you other than you are certified to be age ≥ 16. The proofs are unique to each site, so web sites can't use them to collude with each other.
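To make the per-site uniqueness concrete, here is a toy Python model of the credential flow, with HMAC standing in for real zero-knowledge machinery (a real scheme would let the site *verify* the claim without trusting the user; that verification step is omitted here, and all names are illustrative):

```python
import hmac, hashlib

# Toy model (NOT real zero-knowledge cryptography): the ID issuer gives the
# user a secret credential asserting "age >= 16". The user derives a
# different-looking token for each site, so sites cannot collude by
# comparing tokens.

ISSUER_KEY = b"issuer-master-key"  # held only by the ID issuer

def issue_credential(user_id: str, over_16: bool) -> bytes:
    """Issuer hands the user a secret credential binding them to the claim."""
    claim = f"{user_id}:over16={over_16}".encode()
    return hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()

def derive_site_token(credential: bytes, site: str) -> str:
    """User derives a per-site token; tokens for different sites look unrelated."""
    return hmac.new(credential, site.encode(), hashlib.sha256).hexdigest()

cred = issue_credential("alice", over_16=True)
token_a = derive_site_token(cred, "social-site-a.example")
token_b = derive_site_token(cred, "social-site-b.example")
assert token_a != token_b  # the two sites cannot link these to one person
```

The point of the sketch is only the unlinkability property: each site sees a token that is deterministic for that site but computationally unrelated to any other site's token.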

That kind of "smart ID" doesn't have to be with the government, although that's often a natural starting point for ID information. There are methods which do the same based on a consensus of people and entities that know you, for example. That might be better from a human rights perspective, given how many people do not have citizenship rights.

(and probably unconstitutional under the First Amendment, to boot.)

If it would be unconstitutional to require identity-revealing or age-revealing ID checks for social media, that's all the more reason to investigate modern technical solutions we have to those problems.

satellite2
15 replies
1d1h

I'm not a cryptographer so I might be missing something, but I have the impression that

- either a stolen card can be reused thousands of times, meaning it's so easy to get a fake that it's not worth the implementation cost,

- or there is a way to uniquely identify a card, and then it becomes another identifier like tracking IDs.

ndriscoll
5 replies
1d1h

Assuming you can make active queries to the verifier, you could do something like

- Have your backend generate a temporary AES key, and create a request to the verifier saying "please encrypt a response using AES key A indicating that the user coming from ip X.Y.Z.W is over 16". Encrypt it with a known public key for the verifier. Save the temporary AES key to the user's session store.

- Hand that request to the user, who hands it to the verifier. The verifier authenticates the user and gives them the encrypted okay response.

- User gives the response back to your backend.

Potentially the user could still get someone to auth for them, but it'd at least have to be coming from the same IP address that the user tried to use to log into the service. The verifier could become suspicious if it sees lots of requests for the same user coming from different IP addresses, and the service would become suspicious if it saw lots of users verifying from the same IP address, so reselling wouldn't work. You could still find an over-16 friend and have them authenticate you without raising suspicions though, much like you can find an over-21 friend to buy you beer and cigarettes.

Since you use a different key with each user request, the verifier can't identify the requesting service. Both the service and the verifier know the user's IP, so that's not sensitive. If you used this scheme for over-16 vs. over-18 vs. over-21 services, the verifier does learn what level of service you are trying to access (i.e. are you buying alcohol, looking at porn, or signing up for social media). Harmonizing all age-restricted vices to a single age of majority can mitigate that. Or, you could choose to reveal the age bucket to the service instead of the verifier by having the verifier always send back the maximum bucket you qualify for instead of the service asking whether the user is in a specific bucket.
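A minimal Python sketch of the three steps above, with HMAC standing in for the AES encryption and the verifier's public-key channel simply simulated in-process (names like `verifier_process` are illustrative, not a real API):

```python
import os, hmac, hashlib, json

def backend_create_request(session_store, user_ip):
    """Step 1: service generates a temporary key and a request for the verifier."""
    key = os.urandom(32)                  # temporary per-session key
    session_store[user_ip] = key
    # In the real scheme this request would be encrypted to the verifier's
    # public key so only the verifier can read the temporary key.
    return {"key": key.hex(), "claim": "over16", "ip": user_ip}

def verifier_process(request, user_is_over_16):
    """Step 2: verifier authenticates the user and returns a keyed okay."""
    if not user_is_over_16:
        return None
    key = bytes.fromhex(request["key"])
    msg = json.dumps({"claim": request["claim"], "ip": request["ip"]}).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()  # "encrypted okay"

def backend_check(session_store, user_ip, response):
    """Step 3: service checks the response against its saved session key."""
    key = session_store.pop(user_ip)
    msg = json.dumps({"claim": "over16", "ip": user_ip}).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

sessions = {}
req = backend_create_request(sessions, "198.51.100.7")   # step 1
resp = verifier_process(req, user_is_over_16=True)       # step 2 (via the user)
assert backend_check(sessions, "198.51.100.7", resp)     # step 3
```

Because the key is fresh per request, the verifier cannot tell which service generated it, which is the unlinkability property the comment relies on.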

woodruffw
1 replies
1d

If you can make active queries to the verifier, so can any adversarial party. These kinds of ZK-with-oracle schemes need to be very carefully gamed out to ensure they're truly ZK, and not just "you learn nothing if you only query once."

and the service would become suspicious if it saw lots of users verifying from the same IP address

This implodes under CGNAT, cafe internet, hotel internet, etc.

ndriscoll
0 replies
1d

You can make active queries, with the user's involvement. The verifier can potentially have a prompt with e.g. "The site you were just on would like to know that you are over 21. Would you like to share that with them?"

We do need to get people onto IPv6 so CGNAT can die. Restricted services could potentially disallow signups or require more knowledge (e.g. full ID) if coming from shared IPs as a risk-mitigation strategy, depending on how liable we want to hold them for properly validating age. If you've already signed up for Facebook at home, obviously you don't need to validate your age again at the cafe.

Fake IDs exist in the real world. The system doesn't have to be perfect, and we can say that there's some standard of reasonable verification that they should do for these sorts of cases.

Personally I'm more in favor of an approach where sites label their content in a way where parents can configure filters (ideally using labels that are descriptive enough that we don't get into fights over what's "adult", and instead leave that decision to individual families), but if we're going to go an ID-based route, there are at least more private ways we could do it, and I think technologists should be discussing that, and perhaps someone at one of these big companies can propose it.

hanspragt
1 replies
1d

There is no way my 94-year-old neighbor can successfully do any of that.

ndriscoll
0 replies
1d

That's the protocol for the computer, similar to oauth. From the user perspective, your 94-year-old neighbor would have an account with id.gov that they've somehow previously established (potentially the DMV or the post office does this for them), and the user flow works much like "Sign in with Google" buttons do today.

ndriscoll
0 replies
7h34m

Addendum: you can actually preserve the privacy of which bucket the user is in to all parties if this is sufficiently standardized that it goes through a browser API.

- Have the service generate the request as above, but now the request is "Please encrypt a response with key A for the user coming from ip X.Y.Z.W".

- Service calls a standard browser API with the request, telling the browser it would like to know the user is in the over 16 bucket. Browser prompts the user to verify that they want to let the service know they are over 16. Browser sends the request to the verifier.

- Verifier responds with a token for each bucket the user is in. So a 22 year old gets an over-16 token, an over-18 token, and an over-21 token.

- Browser selects the appropriate response token and gives it back to the service.

So the service only ever learns you are over the age limit they care about, and the verifier only ever learns that you asked for some token, but not which one.
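The addendum's bucket scheme can be sketched the same way: the verifier returns a token per bucket the user qualifies for, and the browser forwards only the one the service asked about, so the verifier never learns which bucket mattered. HMAC again stands in for the encrypted response, and all names are illustrative:

```python
import os, hmac, hashlib

THRESHOLDS = {"over16": 16, "over18": 18, "over21": 21}

def verifier_issue_tokens(session_key: bytes, age: int) -> dict:
    """Verifier: one token per bucket the user is in (never just one bucket)."""
    return {b: hmac.new(session_key, b.encode(), hashlib.sha256).hexdigest()
            for b, t in THRESHOLDS.items() if age >= t}

def browser_select(tokens: dict, requested_bucket: str):
    """Browser: reveal only the bucket the service asked for."""
    return tokens.get(requested_bucket)

def service_verify(session_key: bytes, bucket: str, token) -> bool:
    expected = hmac.new(session_key, bucket.encode(), hashlib.sha256).hexdigest()
    return token is not None and hmac.compare_digest(expected, token)

key = os.urandom(32)                        # service's temporary key, as above
tokens = verifier_issue_tokens(key, age=22)  # a 22-year-old gets all three
tok = browser_select(tokens, "over18")       # browser hands over only one
assert service_verify(key, "over18", tok)
assert browser_select(verifier_issue_tokens(key, age=17), "over21") is None
```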

neom
3 replies
1d1h

It would be neat if some authority like the passport office or social security office also provided a virtual ID that includes the features OP described and allowed specific individual attributes to be shared or not shared, revocable at any time, much like when you authenticate a third-party app to Gmail, etc.

smeej
0 replies
22h6m

Putting on my conspiracy hat for a minute: They don't want to make it easy for you to authenticate anonymously. They obtain their surveillance data from the companies that tie you, individually, to your data. They'd be shooting themselves in the feet.

shuntress
0 replies
1d

These are the things that the post office should be handling.

plorg
0 replies
21h22m

Yeah, hell no.

WalterBright
2 replies
21h36m

You can also use fake ID to buy booze.

Making it illegal is, by itself, enough to discourage a lot.

mirekrusin
0 replies
19h2m

You can't run a for loop on buying booze with a fake ID.

j16sdiz
0 replies
21h6m

Those need some kind of face to face interaction. The perceived risk of being caught is much higher.

vlovich123
0 replies
23h9m

An mDL wide-scale rollout would work by using the trusted computing element that is part of your phone, and enrollment would be the same as obtaining a driver's license in the first place.

There is no physical card - there is an attestation that only an enrolled device can hand out with revocation support in case of security flaws.

Is it going to be absolutely secure? No. The cost just needs to be high enough that it becomes inaccessible to the vast majority of adolescents.

Theft of your parent's phone becomes a much easier attack vector, but phone biometrics/password requirements will thwart that for most parents.

This doesn’t need to be 100% foolproof.

jlokier
0 replies
19h52m

There's a unique identifier, but it's your secret and can't be used for tracking. Sites needing verification don't learn anything except that you "have" a token matching the condition they are checking. This includes not learning your unique identifier, so they can't use it for tracking. The issuer also doesn't learn anything about your verification queries.

You have an incentive to keep the secret token to yourself, and would probably use existing mechanisms for that: you might manage it like your phone number, private email and other personal accounts today. Not perfect, but effective most of the time for most people.

You might decide to share it with someone you trust, like your sibling. That's up to you, but you wouldn't share it widely or with people you don't trust, even under pressure, because:

To prevent mass reuse of stolen tokens, it's possible to use more cryptography to detect when the same token is reused in too many places, either on the same site or across many sites, without revealing tokens that don't meet the mass-reuse condition, so they still can't be used for tracking. If mass-reused tokens auto-revoke, they can't be reused thousands of times by anyone, and that also provides an incentive to avoid sharing with people you don't trust.

I won't pretend this is trivial. It's fairly advanced stuff. But the components exist these days. The last paragraph above requires combining zero-knowledge proofs (ZKP) with other branches of modern cryptography, called multi-party computation (MPC) and perhaps fully homomorphic encryption (FHE).
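As a control-flow illustration only: sites report a tag derived from each presented token, and tokens seen too many times get auto-revoked. The real proposal above would use ZKP plus MPC (perhaps FHE) so that tags below the threshold reveal nothing; this plain counter is not privacy-preserving and only shows the revocation mechanics:

```python
from collections import Counter
import hashlib

REUSE_LIMIT = 5          # illustrative threshold for "mass reuse"
reports = Counter()      # tag -> times seen across reporting sites
revoked = set()

def report_use(token: str) -> None:
    """A site reports that it saw this token; heavy reuse triggers revocation."""
    tag = hashlib.sha256(token.encode()).hexdigest()
    reports[tag] += 1
    if reports[tag] > REUSE_LIMIT:
        revoked.add(tag)

def is_revoked(token: str) -> bool:
    return hashlib.sha256(token.encode()).hexdigest() in revoked

for _ in range(6):               # the same leaked token used on six sites
    report_use("leaked-token")
assert is_revoked("leaked-token")
assert not is_revoked("my-private-token")
```

Auto-revocation on mass reuse is what removes the incentive to publish a stolen credential widely: it stops working for everyone, including the original holder.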

Aurornis
8 replies
1d1h

The government doesn't learn which web sites you visit, and the web sites don't learn anything about you other than you are certified to be age ≥ 16.

If the zero-knowledge proof doesn't communicate anything other than the result of an age check, then the trivial exploit is for 1 person to upload an ID to the internet and every kid everywhere to use it.

It's not sufficient to check if someone has access to an ID where the age is over a threshold. Implementing a 1:1 linkage of real world ID to social media account closes the loophole where people borrow, steal, or duplicate IDs to bypass the check.

vlovich123
6 replies
23h6m

As I mentioned elsewhere, you’re falling for letting perfect be the enemy of good. The ZKP + phone biometrics only needs to raise the cost of bypass above what adolescents have access to. And no, you can’t just share the same ID because there’s revocation support in the mDL and it’s difficult to extract the raw data once it’s stored on the trusted element. This is very similar to how credit cards on phones work which are generally very difficult to steal.

Aurornis
2 replies
21h35m

Sorry, you’re not thinking like a group of 15 year olds trying to get online.

The revocation list means nothing when they can get ahold of someone’s older sibling’s ID and sign up for social media.

Did everyone just forget what it's like to be an ambitious kid who wants to get online?

Do people really think a platform that needs people to jump through these hoops and use this imaginary international ID architecture is feasible?

Does anyone really think that kids won’t just set their location to Estonia and/or use a VPN to circumvent all of this?

vlovich123
0 replies
17h10m

You’re thinking like a group of technically proficient 15 year olds and their friends. That’s a small minority. The vast majority of teens are likely to be stymied.

Revocations are not for the individual ID but for if an exploit is found compromising the IDs stored on a trusted element. Your older sibling's ID can't be used to sign up for millions of accounts - just by those whom the older sibling lets borrow the phone that has their ID (and that's assuming there isn't some kind of uniqueness cookie that can be used to prevent multiple accounts under a single ID). That's a much different and more manageable problem (fake IDs via older siblings have been a thing forever).

protocolture
0 replies
21h20m

This. The parties this will actually harm are all older people. Kids will simply bypass it.

Cheezemansam
1 replies
21h14m

As I mentioned elsewhere, you’re falling for letting perfect be the enemy of good

No, this line of reasoning deserves nothing but absolute contempt when it comes to laws. We are not talking about getting a finicky API to work at your job. Too often laws have had unintended consequences as a result of loopholes or small peculiarities. If the damn law doesn't even work on a fundamental level then it should be opposed on principle.

vlovich123
0 replies
17h9m

You’ve just described literally every single law. Congrats. You’re now appreciating what it’s like to live in a law-based society.

j16sdiz
0 replies
21h3m

It doesn't have to be perfect, but it needs to have some way to do spot checks.

If there is no risk involved, everyone will jump on doing it.

jlokier
0 replies
19h22m

There are technical methods to detect and revoke large-scale reuse of an uploaded id. I wrote more detail in another comment.

That only covers large-scale reuse. It doesn't cover lending your id to your younger sibling if you want to, or if they find a way. Maybe that should be acceptable anyway. Same as you can lend your phone or computer to someone to use "as you", or you can buy them cigarettes and alcohol. Your responsibility.

protocolture
6 replies
21h22m

I don't want government crapware on my device to access the internet.

I also don't want third-party crapware on my device to access the internet.

"wowee it can be done without revealing my identity to anyone but the government or the corp running the chain"

No thanks

mirekrusin
5 replies
19h6m

Identity is not revealed.

sangnoir
4 replies
17h39m

Do you think the NSA would balk at that challenge?

mirekrusin
3 replies
16h27m

It doesn't have to be based on crypto designed by NSA, does it?

sangnoir
2 replies
15h41m

Design is far from the only threat vector. Any implementation that is less than perfect is prone to all kinds of attacks. A few years ago, there was a report that the NSA could decrypt a double-digit percentage of encrypted web traffic thanks to precomputed attacks on a handful of widely reused Diffie-Hellman primes they keep handy.

mirekrusin
1 replies
5h20m

Great story. Are you claiming that the NSA can infer the inputs to zero-knowledge proofs, maybe map cryptographic hashes back to plain input text, or something of that nature?

sangnoir
0 replies
0m

No, but I bet a dollar that the NSA isn't just going to collectively fold their hands and say "These schemes and implementations are too good and too secure for us to break. We'll ignore the metadata, network analysis, huge data centers and side channels; we'll give up and focus on defensive security only."

emporas
6 replies
1d3h

Definitely, we can use a government-issued ID, or we can create our own. Social graphs, I call them. Zero-knowledge proofs have so many groundbreaking applications. I made a comment in the past about how a social graph could be built without the need for any government [1]. We can effectively create one million new governments to compete with existing ones.

[1] https://news.ycombinator.com/item?id=36421679

paulryanrogers
3 replies
1d1h

Governments monopolize violence. At least at the foundational level. When too many of them compete at once it can get very messy very quickly.

emporas
2 replies
1d

Let's suppose that 1 million new governments are founded, and violence remains the monopoly of the existing ones. The new governments would be in charge of IDs, signatures, property ownership and reputation. Governments of Rust programmers, or Python programmers, or football players, or pool players, or truck drivers would be created.

When a citizen of the Rust programmers' social graph uploads code, he can prove his citizenship via his ID. We may not even know his name, but he can prove his citizenship. He can sign his code via his signature, even pull requests in other projects. He can prove his ownership of an IT company, as its CEO, the stock shares and what not. And he will be tied to a reputation system, so when a supply-chain attack happens, his reputation will be tainted. Other citizens of Rust's social graph will be able to single out the ID of the developer, and future code from him will be rejected, as well as code from non-citizens.

Speaking of supply chains, how about the king of supply chains: products and physical goods? By transferring products around in a more trustworthy way, by random people tied to reputation, Amazon might get a little bit of competition, ain't it?

see also an older comment of mine https://news.ycombinator.com/item?id=38800744

dotancohen
1 replies
1d

Government is not the correct word to use for this idea.

emporas
0 replies
1d

Alright, social graphs then. I use social graphs and e-gov interchangeably, but social graphs might be better.

thomastjeffery
1 replies
1d1h

I've been thinking a lot lately about decentralized moderation.

All we need to do is replace the word "moderate" with "curate". Everything else is an attestation.

We don't really need a blockchain, either. Attestations can be asserted by a web of trust. Simply choose a curator (or collection of curators) to trust, and you're done.

emporas
0 replies
1d

Yeah, blockchain is not needed at all. A computer-savvy sheriff might do it, an official person of some kind. Or even private companies - see also the FIDO Alliance.

Additionally, the map of governments which accept the Estonian passport might be of some relevance here[1].

[1] https://passports.io/programs/EE1

dotancohen
5 replies
1d

The government need not know what sites you visit. It is damaging enough that the government knows that you are visiting sites that require age verification. You can then be flagged for parallel construction if you should, I don't know, start a rival political party.

dullcrisp
4 replies
23h59m

Not if this were widespread. I wouldn’t be too bothered if the government knew that I either watched an R-rated movie or rented a car or purchased alcohol or created a Facebook account.

brewdad
3 replies
23h7m

Now do needing an abortion or seeking out gender-affirming care. Today's "no big deal" can become tomorrow's privacy nightmare.

ndriscoll
2 replies
20h16m

You missed the either in GP's comment. i.e. they know you did one of those things because you requested an over-18 token, but not which one. The more covered activities there are, the more uncertainty they have about why you might have asked for a token.

harimau777
1 replies
19h19m

This isn't really my area of expertise - is there a way to know for sure that those are all the same token? Or could the government just lie and say they are all the same when in reality they can differentiate them?

ndriscoll
0 replies
7h43m

The government would have to document the API for requesting tokens for anyone to use it. I suggested a scheme here[0] where it's clear that the government doesn't get any information about the service (unless the service re-uses AES keys) and the service doesn't get any information about the user other than whether they're in the appropriate age group.

Potentially there could be coordination between .gov and the service to track users by having each side store the temporary AES key and reconcile out-of-band. But .gov has other ways they could get that information anyway if they have cooperation from businesses (e.g. asking your ISP for your IP address, and asking the service provider for a list of user IPs).

[0] https://news.ycombinator.com/item?id=39183486

px43
3 replies
20h15m

A super interesting example of this is the proof-of-passport project.

https://github.com/zk-passport/proof-of-passport

Today you can scan your passport with your phone, and get enough digitally signed material chained up to nation level passport authorities to prove anything derived from the information on your passport.

You could prove to an arbitrary verifier that you have a US passport, that your first name starts with the letter F, and that you were born in July before 1970, and literally share zero other information.

xyzzy123
2 replies
15h19m

The selective disclosure is super cool. I wonder how it works, since something like a hash of DG1 is what is actually signed - how can you selectively disclose verified data from "inside" the hashed area? It does not sound very feasible to me, but I am not an expert in zk-SNARKs etc.

There are some wrinkles that prevent passport data being used more broadly - technically it is a TOS violation to verify passports / use the ICAO pkd without explicit permission from ICAO or by direct agreement with the passport holder's CSCA (country signing certificate authority). Some CSCAs allow open use but many do not.

Also, without being too pedantic about it, what you are able to prove is more like possession of a document. An rfid passport (or rfid dump & mrz) - or in fact any kind of identity document - does not prove that you are the subject - you need some kind of biometric bind for that.

px43
1 replies
4h12m

ZK circuits have gotten really fancy lately, to the point where full-blown ZK virtual machines are a thing, which means you can write a program in Rust or whatever, compile it to RISC-V, and then run it on the RISC Zero zkVM. (https://github.com/risc0)

This means you can literally just write a rust program that reads in the private data, verifies the signature, reads the first byte in the name string and confirms that it matches what you expect, and then after everything looks good, it returns "true", otherwise it returns "false". This all would happen on your phone when you scan a QR code or something that makes the request, then you send the validity proof you generated to the verifier, they can see that the output was true, and nothing else.
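The guest program's logic, transliterated into Python for illustration (in RISC Zero it would be Rust compiled for the zkVM, and the proof would attest that exactly this computation produced `True`); the signature check here is a stub for the real CSCA chain verification:

```python
def verify_signature(passport_data: dict) -> bool:
    # Stub: stands in for checking the passport authority's signature
    # over the data group. A real guest program would do this in full.
    return passport_data.get("signature_valid", False)

def guest_program(passport_data: dict) -> bool:
    """Runs over private inputs; the verifier learns only the boolean result."""
    if not verify_signature(passport_data):
        return False
    return (passport_data["name"][0] == "F"          # first name starts with F
            and passport_data["birth_month"] == 7    # born in July
            and passport_data["birth_year"] < 1970)  # before 1970

private_input = {"signature_valid": True, "name": "Frank",
                 "birth_month": 7, "birth_year": 1969}
assert guest_program(private_input)  # proof would reveal only: True
```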

In theory, the private data would be stored on a trusted device you own, like your phone, so someone who steals your phone would have a hard time using your identity. Using fancy blockchain stuff you could even do a one-time registration of your passport such that even if someone steals your passport, they wouldn't be able to import it as a usable ZK credential. Presumably there would be some logic around it so you can re-register after a delay period or something, giving the current credential holder a chance to revoke new enrollment requests. So, yes, proving your exact identity to a website isn't perfect, but it's easy enough to make it really noisy if someone is trying to tamper with your identity, and maybe that's good enough.

If you want to go the trusted hardware route, you could make someone take a picture of their face with some sort of trusted hardware camera on their phone or laptop, and then use some zkml magic to make sure it kinda looks like the face on the passport data. Given the right resources, trusted hardware is never that hard to tamper with, so I don't like that solution very much.

What's often more important in an online context is that your credential is unique. It doesn't matter who you are, it matters that you've never used this credential to sign up for a twitter account, or get past a cloudflare captcha, or any other captcha use case. If you steal 10 passports, maybe you can set up a bot that will automatically vote for something 10 times, but at least you can't vote millions of times. This is sybil resistance, and it's massively important for a ton of things.

xyzzy123
0 replies
2h48m

Thanks! I have a big rabbit hole to go down now :)

I don't get what causes the proof to fail if I provide the wrong bytes to the zkvm when it tries to read from inside the hashed area after the hash & signature are verified (this might not be directly sequential I guess, I think it has to be part of the same proof).

Put another way, I get we have to zk prove that a) I know a message M that hashes to H ... (can see this is do-able from googling), but also that a particular byte range M[A-B] is part of M, in a way that the verifier can trust I'm not lying and I don't see how the second bit is accomplished. It feels like there are also details in proving that the data comes from the right "field" in the DG1.

This stuff is such black magic! EDIT: will try this out in ZoKrates...

johnhenry
1 replies
1d1h

I suggest looking up zero-knowledge proofs.

Sure, but is the Florida legislature actually looking into stuff like this?

dotancohen
0 replies
1d

Why would they, when it is not in the government's interest?

thomastjeffery
0 replies
1d1h

You can send a proof that someone's government-provided ID says that their age is ≥ 16.

That's not enough proof to levy a requirement.

lukev
0 replies
1d4h

It'd be cool if any of the proposed bills actually suggested something like this. They do not. They specify an ID check.

jcranmer
0 replies
21h52m

Using modern cryptography, it is easy to send a machine-generated proof to your social media provider that your government-provided ID says your age is ≥ 16, without revealing anything else about you to the service provider (not even your age), and without having to communicate with the government either.

There's just one problem. How does the machine proving your age know that you are who you say you are? Modern cryptography doesn't have any tools whatsoever that can prove anything about the real body currently operating the machine--it can never have such a tool. And the closest thing that people can think of to a solution is "biometrics," which immediately raises lots of privacy concerns.

janalsncm
0 replies
19h59m

So I hash some combination of ID, name and birthday and send it to Facebook to create an account. Facebook relays that hashed info to a government server which responds with a binary yes/no.

Of course you need to trust that the hash is not reversible.

That doesn’t stop kids from using Facebook, but it stops kids’ ID from being used to create an account.
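A minimal sketch of that flow, with an in-memory `registry` standing in for the hypothetical government endpoint. One caveat worth noting: a plain hash over low-entropy fields like name and birthday is brute-forceable, so a real deployment would need a keyed or salted construction; this only shows the shape of the exchange:

```python
import hashlib

def identity_hash(id_number: str, name: str, birthday: str) -> str:
    """What the service would send: a digest of the claimed identity fields."""
    return hashlib.sha256(f"{id_number}|{name}|{birthday}".encode()).hexdigest()

# Government side: precomputed hashes of IDs belonging to people over 16.
registry = {identity_hash("A123456", "Alice Doe", "1990-05-01")}

def government_check(h: str) -> bool:
    """Binary yes/no; the service never holds the raw ID record."""
    return h in registry

assert government_check(identity_hash("A123456", "Alice Doe", "1990-05-01"))
assert not government_check(identity_hash("B999999", "Kid Doe", "2015-01-01"))
```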

fennecbutt
0 replies
23h20m

We did a hackathon at work and one of the guys from one of my project teams covered this stuff as his project.

I trust that it _would_ work 100%, but what I don't trust is that a government would implement it properly and securely, because no government works like that lmao (even NZ's great one).

I mean living in the UK now I got like a dozen different fucking gov numbers for all manner of things, dvla, NHS, nin, other tax numbers, visa, etc...why isn't there just one number or identity. Gov.uk sites are mostly pretty stellar besides.

endgame
0 replies
20h6m

And as with electronic voting, the contract will go to the lowest bidder with the worst security, not the company that's got the CS chops to do it right.

JeremyNT
15 replies
1d4h

Yes, this isn't the right solution. The power needs to be given to the users.

A better solution is more robust device management, with control given to the device owner (read: the parent). The missing legislative piece is mandating that social media companies need to respond differently when the user agent tells them what to send.

I should be able to take my daughter's phone (which I own), set an option somewhere that indicates "this user is a minor," and with every HTTP request it makes it sets e.g. an OMIT_ADULT_CONTENT header. Site owners simply respond differently when they see this.

basil-rash
4 replies
1d1h

Already exists, simply include the header

Rating: RTA-5042-1996-1400-1577-RTA

in HTTP responses that include adult content, and every parental controls software in existence will block it by default, including the ones built into iPhones/etc and embedded webviews. As far as I know all mainstream adult sites include this (or the equivalent meta tag) already.
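As a rough illustration, a site could emit the label both as a response header and as the meta-tag form using only the Python standard library (the label string is the real RTA label; the server itself is just a minimal demo):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Meta-tag form, for filters that parse the page itself.
        body = (f'<html><head><meta name="rating" content="{RTA_LABEL}"></head>'
                f'<body>adult content</body></html>').encode()
        self.send_response(200)
        self.send_header("Rating", RTA_LABEL)  # header form
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Parental-control software on the client can then block the response without the site ever learning the visitor is a minor.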

In general, I don’t think communicating to every site you visit that you are a minor and asking them to do what they will with that information is a good idea. Better to filter on the user’s end.

joshyeetbox
1 replies
23h2m

This is not a response header. It's a meta tag that's added to a website's head element to indicate it's not kid-friendly. The individual payloads returned from an adult site don't include this as a header.

basil-rash
0 replies
11h3m

That'd be the "equivalent meta tag" I mentioned. And this site claims the header works too, though I haven't tested it myself. https://davidwalsh.name/rta-label

pdpi
0 replies
23h17m

That's a much better approach in general.

It's much easier to regulate and enforce that websites must expose these headers so that UAs can do their own filtering. Adult Content = headers in response, no ifs, ands, or buts

Response headers are encrypted in the context of HTTPS, so there's no real sacrifice in privacy. Implementation effort about as close to trivial as can be. No real free speech implications (unless you really want to argue that those headers constitute compelled speech). All in all, it's a pretty decent solution.

JeremyNT
0 replies
1d

I honestly wasn't aware of this, and it sounds like a great solution for "adult content." Certainly, the site specifying this is better than the user agent having to reveal any additional details about its configuration.

cmiles74
3 replies
1d3h

Emancipation of children is also a thing, where a minor may petition the court to be treated as an adult. This also falls afoul of a blanket age restriction.

https://jeannecolemanlaw.com/the-legal-emancipation-of-minor...

FireBeyond
2 replies
23h24m

I think there's an ambiguity here. Even on this page, talking about "for all purposes". This seems to mostly refer to parental decision making and rights.

i.e. even as an emancipated minor, being treated as a "legal adult" does not mean you can buy alcohol or tobacco.

selectodude
0 replies
22h19m

Being an actual legal adult does not mean that you can buy alcohol or tobacco.

cmiles74
0 replies
8h50m

Understood, but being emancipated could mean that you need to be able to use LinkedIn, perhaps as part of a job search. An argument could be made that an emancipated minor should have access to some social media.

notsound
1 replies
1d4h

Another option that allows for better privacy and versatility is the website setting a MIN_USER_AGE header based on IP geolocation.
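A minimal sketch of this proposal, assuming the `MIN_USER_AGE` header name from the comment (it is not a standard) and an illustrative jurisdiction table; real geolocation would map the client IP to a jurisdiction first:

```python
# Sketch of the hypothetical MIN_USER_AGE response header proposed above.
# The header name and jurisdiction table are illustrative only; a real
# site would derive the jurisdiction from IP geolocation.
MIN_AGE_BY_JURISDICTION = {"US-FL": 16, "US": 13, "DE": 16}
DEFAULT_MIN_AGE = 13

def min_age_header(jurisdiction: str) -> dict:
    """Return the extra response header a site would attach for the
    given geolocated jurisdiction. Enforcement is left to the UA."""
    age = MIN_AGE_BY_JURISDICTION.get(jurisdiction, DEFAULT_MIN_AGE)
    return {"MIN_USER_AGE": str(age)}

print(min_age_header("US-FL"))  # {'MIN_USER_AGE': '16'}
```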

Terr_
0 replies
1d3h

Geolocation at that level of detail isn't that reliable, and not necessarily 1:1 with jurisdiction or parental intent.

If we're already trusting a parental-locked device to report minor-status, then it's trivial to also have it identify what jurisdiction/ruleset exists, or some finer-grained model of what shouldn't work.

In either case, we have the problem of how to model things like "in the Flub province of the nation of Elbonia children below 150.5 months may not see media containing exposed ankles". OK, maybe not quite that bad, but the line needs to be drawn somewhere.

PH95VuimJjqBqy
1 replies
1d2h

then propose the RFC.

I haven't read the legislation myself but I don't see why this couldn't still be done, I doubt the legislation specified _how_ to do it.

Izkata
0 replies
1d1h

"Reasonable age verification method" means any commercially reasonable method regularly used by government agencies or businesses for the purpose of age and identity verification.

So no, that wouldn't work right now.

cesarb
0 replies
1d1h

Sounds like you want PICS (though it works on the opposite direction, with the web site sending the flag, and the browser deciding whether to show the content based on it).

Terr_
0 replies
1d3h

Exactly, any design for this stuff requires parental-involvement because every approach without it is either (A) uselessly-weak or (B) creepy-Orwellian.

If we assume parents are involved enough to "buy the thingy that advertises a parental lock", then a whole bunch of less-dumb options become available... And more of the costs of the system will be borne by the people (or at least groups) that are utilizing it.

qwertox
14 replies
1d3h

Lately I'm repeatedly reminded of how in Ecuador citizens, when interviewed during a protest, see it as a normal thing to state their name as well as their personal ID number on camera while also speaking about their position on the protest. They stand by what they are saying without hiding.

For about half a year now I've watched the German Twitter section sink into hate posts, people disrespecting each other, ranting about politicians or ways of thinking, but being really hateful. It's horrible. I've adblocked the "Trending" section away, because it's the door to this horrible place where people don't have anything good to share anymore, only disrespect and hate.

This made me think that what we really need, at least here in Germany, is a Twitter alternative where people register using their eID and can only post under their real name. Have something mean to say? Say it, but attach your name to it.

This anonymity in social media is really harming German society, at least as soon as politics are involved.

I don't know exactly how it is in the US but apparently it isn't as bad as here, at least judging from the trending topics in the US and skimming through the posts.

Zak
5 replies
1d2h

The algorithm powering the trending section, which rewards angry replies and accusatory quote-tweets, is at least as good a candidate for the source of harm to political discourse as anonymity.

adaptbrian
4 replies
1d

Take it a step further: ban engagement-based algorithmic feeds. I've said this and I'll continue to say it: this type of behavioral science was designed at FB by a small group of people and needs to be outlawed. It never should have been allowed to take over the new monetization economy. There's so much human potential in the internet, and it's absolutely trainwrecked at the moment because of Facebook.

15457345234
3 replies
14h50m

I agree.

It's been deliberately designed to cause strife. They are genuinely anti-human companies that seem to want conflict and tension to occur.

https://www.theguardian.com/commentisfree/2014/jun/30/facebo...

This article was very on-the-nose, that really should have been the writing on the wall.

Zak
2 replies
9h29m

I think this is a partial attribution error; the goal is to make money by capturing attention and selling it to advertisers. The fact that strife is one of the most effective ways to do so, and the companies appear utterly unconcerned about the resulting damage to society makes it look like strife is the end goal, but it is merely a means to make money.

They would change their algorithms immediately if large advertisers applied financial incentives. We've seen some of this with Youtube's policies leading to videos with censored profanity where its use was previously normal and neologisms like "unalived" to mean killed.

Musk's Twitter may be a partial exception, since its decisions are now driven by a single man's preferences rather than an amoral mechanistic imperative to increase shareholder value. That doesn't seem to have improved things.

15457345234
1 replies
4h31m

the goal is to make money by capturing attention and selling it to advertisers

I mean, how do you know that?

How do you know that the goal isn't actually to perform social engineering on a huge scale and that the advertising is just the way that goal is being funded?

Zak
0 replies
2h36m

I suppose I don't. It could be that Facebook, Twitter, Youtube, and TikTok are all actively trying to create chaos, but greed adequately explains their behavior. One thing that points more strongly to greed is that the companies, aside from Twitter post-Musk, rapidly change their behavior when it impacts their revenue.

With TikTok, there's some chance geopolitics is a factor as well.

Kye
3 replies
1d3h

People have zero qualms about being absolute ghouls under their wallet names. The people with the most power in society don't need anonymity. The people with the least often can't safely express themselves without it.

Also:

https://theconversation.com/online-anonymity-study-found-sta...

> "What matters, it seems, is not so much whether you are commenting anonymously, but whether you are invested in your persona and accountable for its behaviour in that particular forum. There seems to be value in enabling people to speak on forums without their comments being connected, via their real names, to other contexts. The online comment management company Disqus, in a similar vein, found that comments made under conditions of durable pseudonymity were rated by other users as having the highest quality. "
qwertox
2 replies
1d3h

There are two points which matter:

- No more bots or fake propaganda accounts.

- Illegal content, such as insults or the like, will not get published. And if it does, it will have direct consequences.

I'm also not tending towards a requirement to have all social networks ID'd, but I think that a Twitter alternative which enables a more serious discussion should exist. A place where politicians and journalists or just citizens can post their content and get commented on it, without all that extreme toxicity from Twitter.

belval
0 replies
1d1h

The thing is, the political climate is very toxic and the absence of anonymity can have a real impact for things that are basically wrong think.

Say, for example, I held the opinion that immigration thresholds should be lower. No matter how many non-xenophobic justifications I put on that opinion, my colleagues, possibly on H1Bs, could look up my opinion on your version of Twitter, and it would have a real impact on my work life.

There is a reason why we hold voting in private: when boiled down to their roots, the principles that guide your opinions are usually irreconcilable with someone else's, and we preserve harmony by keeping everyone ignorant of their colleagues' political opinions. It's not a bad system, but it's one that requires anonymity.

DylanDmitri
0 replies
1d1h

Or, to post on a political forum you must have an ID. You can have and post from multiple accounts, but your id and all associated accounts can be penalized for bad behavior.

claytongulick
1 replies
1d

I wonder what percentage of the hate stuff is bots.

PeterisP
0 replies
20h16m

My experience with propaganda bots is that the really nasty hate is usually posted by actual, real people (perhaps as a result of being prodded by bot-provided outrage), while the bots themselves do all kinds of more subtle hinting and agenda-pushing. Bots are managed by semi-professionals who care about their bots not being blocked and (often) don't really care about the agenda they're pushing, while there is also a substantial minority of semi-crazy people who just don't care and will escalate from zero to Hitler in a few minutes.

simmerup
0 replies
1d3h

Attach their pictures too, so you can see the ghoul spouting hate is a basement dweller

pohuing
0 replies
1d3h

Plenty of hate under plain names on Facebook, been that way for a decade and I doubt it will change with ID verification.

paulddraper
13 replies
1d4h

destroying the anonymous nature of the internet (and probably unconstitutional under the First Amendment, to boot.)

The First Amendment guarantees free expression, not anonymous expression.

For example, there are federal requirements for identification for political messages. [1] These requirements do not violate the First Amendment.

[1] https://www.fec.gov/help-candidates-and-committees/advertisi...

o11c
12 replies
1d3h

In particular, "anonymous speech is required if you want to have free speech" is actually a very niche position, not a mainstream one. It just happens to be widely spammed in certain online cultures.

paulddraper
9 replies
1d3h

Correct.

I am staunch believer in the moral and societal good of free speech.

Anonymous speech is far more dubious.

Like, protests seem valuable. Protests while wearing robes and masks however...

TillE
6 replies
1d1h

Protests while wearing robes and masks however

This is absolutely permitted in America. It is illegal in countries like Germany which have no strong free speech protections.

jonathankoren
5 replies
1d

No. It is not "absolutely permitted in America".

They're Klan Acts, because a bunch of guys in masks and robes marching through town is obviously a threat of violence.

https://en.wikipedia.org/wiki/Anti-mask_law

jcranmer
3 replies
21h46m

Those laws are frequently overturned as unconstitutional, but may still remain on the books because we don't do a good job of clearing out laws that were ruled unconstitutional.

As a general rule of thumb, almost every time someone brings an edge case about whether or not speech is First Amendment-privileged before SCOTUS, SCOTUS rules in favor of the speech. (The main exception is speech of students in school.) SCOTUS hasn't specifically ruled on anti-mask laws to my knowledge, but I strongly doubt it would uphold those laws.

jonathankoren
2 replies
21h23m

The last court case was in 2004, and the New York anti-mask law was upheld.

You don’t have to pontificate. The link was right there.

whiddershins
1 replies
19h43m

Random aside, I haven’t seen NY enforce the anti mask legislation during the Halloween parade, various protests, or Covid. So I bet a new constitutional challenge could be erected.

jonathankoren
0 replies
18h33m

You're trolling right?

chasd00
0 replies
23h58m

There’s no issue protesting in masks. Antifa did it all the time with no repercussions.

protocolture
0 replies
21h18m

Protests while wearing robes and masks protect the protester from followup action, and it also protects their families.

If you assume a just government that doesnt demand revenge for every petty slight, you are living in a fantasy land.

harimau777
0 replies
19h12m

There are places where it wouldn't be safe to protest without masks. In that case people effectively would be losing their freedom of speech if it isn't anonymous.

TillE
1 replies
1d1h

America has a very long tradition of anonymity being part of free speech, going back to the Federalist Papers. This is not some new online issue.

unethical_ban
0 replies
21h26m

Wealthy, politically connected men with the ability to read and write about political philosophy and get it distributed is a bit different situation than a thousand Russian AI trollbots posting bad-faith "opinions" on American current events.

thinkingtoilet
11 replies
1d4h

effectively regulating and destroying the anonymous nature of the internet.

Not at all. Just the social media sites, which are objectively bad for kids. As an adult, you do what you want on the internet.

lukev
10 replies
1d4h

And what makes a site a social media site? Anywhere you can post interactive content?

You do realize that laws like this would apply to sites like HN, Reddit, the comment section of every blog, and every phpBB forum you ever used? It's not just Instagram and Tiktok.

thinkingtoilet
8 replies
1d4h

I think a perfectly clear line could be drawn that would separate out phpBB from TikTok very easily. I genuinely don't understand this comment, we shouldn't do it because it's hard or the results might be imperfect?

woodruffw
3 replies
1d3h

I think your comment would be much stronger if you laid out precisely what you think that line would be.

Laws do not have to be perfect to be good, but they do have to be workable. It's not clear that there's a working definition of "social media" that includes both TikTok and Reddit but doesn't include random forums.

thinkingtoilet
2 replies
1d3h

So if a random person on the internet doesn't have a perfect solution then it shouldn't be considered?

woodruffw
0 replies
1d

This is not a charitable reading. Nobody is asking for a perfect solution; it is reasonable to demonstrate some prior consideration for the ways in which most solutions are dangerously imperfect.

lazyasciiart
0 replies
1d1h

Their opinion that it would be straightforward shouldn’t be considered.

lukev
0 replies
1d4h

Kids want to communicate. Whether it's TikTok, Discord, phpBB, chatting in Roblox or Minecraft, they will if they can.

If we want to "ban social media" we'll need a consistent set of guidelines about what counts as social media and what doesn't, and what exactly the harms are so they can be avoided.

I don't believe that's as easy as you think.

chasd00
0 replies
23h55m

To me the issue is it’s a waste of government and waste of time. Parental controls already exist on all devices. The answer to every problem can’t be “more government, more laws”.

borski
0 replies
20h36m

I genuinely don't understand this comment, we shouldn't do it because it's hard or the results might be imperfect?

Yes. Imperfect solutions, when driven by the government, don’t change for decades, causing terrible consequences.

“SSH is a munition” comes to mind.

CalRobert
0 replies
1d4h

Any clear line would be gamed pretty quickly I imagine.

anonym29
0 replies
1d4h

Trying to force independently owned and operated forums to enforce laws that might not even be applicable in the country that the owners / admins live and work in is going to be about as effective as trying to force foreign VPS/server/hosting providers to delete copyrighted content from their server using laws that don't apply in their jurisdiction.

blitz_skull
8 replies
1d4h

You know, I'm not really sure that requiring IDs for access to porn / social media is a terrible idea. Sure it's been anonymous and free since the advent of the internet, but perhaps it's time to change that. After all, we don't allow a kid into a brothel or allow them to engage in prostitution (for good reasons), and porn is equally destructive.

But with the topic at hand being social media, I think a lot of the same issues and solutions apply. It's harmful to allow kids to interact with anyone and everyone at any given time. Boundaries are healthy.

Aaaaand, finally there's much less destruction of human livelihood by guns than both of the aforementioned topics if we measure "destruction" by "living a significantly impoverished life from the standard of emotional and mental wellbeing". I doubt we could even get hard numbers on the number of marriages destroyed by pornography, which yield broken households, which yield countless emotional and mental problems.

So, no, guns aren't something we should discuss first. Also, guns have utility including but not limited to defending yourself and your family. Porn has absolutely zero utility, and social media is pretty damn close, but not zero utility.

diputsmonro
5 replies
1d1h

The biggest problem with this is how we would define "porn". Some states are currently redefining the existence of a transgender person in public as an inherently lewd act equivalent to indecent exposure.

I have no doubt that if your proposal were to pass that there would be significant efforts from extremist conservatives to censor LGBT+ communities online by labeling sex education or mere discussion of our lives as pornographic. How are LGBT+ people supposed to live if our very existence is considered impolite?

Nevermind the fact that the existence of a government database of all the (potentially weird) porn you look at is a gold mine for anyone who wants to blackmail or pressure you into silence.

The horrors and dangers of porn are squarely a domestic and family issue. The government does not need to come into my bedroom and look over my shoulder.

lelanthran
2 replies
14h3m

The biggest problem with this is how we would define "porn". Some states are currently redefining the existence of a transgender person in public as an inherently lewd act equivalent to indecent exposure.

This is the first I've heard about this. Link?

mlrtime
1 replies
8h32m

This isn't really happening, the closest I can find is the controversial topic of drag shows in public or banning kids from drag shows.

AFAIK there is no real legislation banning a trans person in public.

ninnip
0 replies
6h47m

Agreed, it's just rhetoric. Same as all these claims of an ongoing 'trans genocide' in the USA. Absolute nonsense, but it gets the believers in this ideology all riled up, and so the purpose of this rhetoric is fulfilled.

ninnip
0 replies
2h56m

Nevermind the fact that the existence of a government database of all the (potentially weird) porn you look at is a gold mine for anyone who wants to blackmail or pressure you into silence.

If you don't want a record of you looking at it, then don't look at it. All you need to do is refrain from pornography consumption. It really is that easy and simple.

blitz_skull
0 replies
19h56m

The biggest problem with this is how we would define “porn”.

I wholeheartedly agree. And that's a problem we should lean into and solve. Its difficulty doesn't make it less worthy of solving.

The horrors and dangers of porn are squarely a domestic and family issue.

Therein lies the problem however. Every systemic issue in our world begins in a family or domestic situation of some form. While I am well aware and also concerned about the implications of government overreach here, I don’t think we can throw up our hands and say, “Meh”. At a minimum it can begin with education. We can teach people about the destructive nature of porn (and social media).

The fact that this impacts every family, domestic situation, and therefore indirectly or directly touches every single life in our society actually kinda makes it a great candidate for government oversight.

tempestn
1 replies
1d4h

You think watching porn is equally destructive to engaging in prostitution? I'd hate to see what kind of porn you're watching.

blitz_skull
0 replies
20h0m

I think you’re vastly underestimating the destructive nature of porn. I’m no longer watching porn, by the grace of God.

giancarlostoro
4 replies
1d3h

But the only way to do this is to require ID checks

COPPA has entered the building. If you're under 13 and a platform finds out, they'll usually ban you until you prove that you're not under 13 (via ID) or can provide signed forms from your parent / legal guardian.

I've seen dozens of people if not more over the years banned from various platforms over this. We're talking Reddit, Facebook, Discord and so on.

I get what you're saying, but it kind of is a thing already, all one has to do is raise the age limit from 13 to say... 16 and voila.

woodruffw
3 replies
1d3h

"Finds out" is the operative part. COPPA is not a proactive requirement; it's a reactive one. Proactive legislation is a newer kind of harm that can't easily be predicted from past experience with reactive laws.

giancarlostoro
2 replies
1d3h

Indeed, nothing is stopping said companies from scanning selfies and assessing the age of the user uploading them, which is allegedly something TikTok does. My point is that the framework is there, and when people actually report minors, the companies have to take it seriously or face serious legal consequences.

woodruffw
1 replies
1d

How do you know the selfie is from the "primary" user? And how do you know they're underage, versus being a chubby-faced 18 year old (like yours truly was?)

giancarlostoro
0 replies
6h43m

It doesn't matter to the platform, mind you, I've seen people abuse this. They will deactivate the account and require ID.

spogbiper
2 replies
1d1h

I propose the Leisure Suit Larry method. Just make users answer some outdated trivia questions that only olds will know when they sign up for an account.

badpun
1 replies
1d1h

In the Internet era, the answers will just be googleable. People will quickly compile a page with all possible questions and the answers to them.

jonathankoren
0 replies
1d

But with ChatGPT, all the answers will be wrong.

forgetfreeman
2 replies
23h6m

The internet has not been anonymous in fact or theory for decades now, and if you think the government can't get your complete browsing history on a whim I'm guessing you haven't paid any attention to the news about NSA buying user data bundles from online brokers. That said, "muh freedoms" is hardly a quality argument in the face of the widely documented pervasive harms caused to children by exposure to social media. The logical extreme of your position would be to declare smoking in public a form of self-expression and then demand age limits be removed for the sale of tobacco products because First Amendment. :P

hobs
1 replies
22h9m

As the OP said, if freedoms are not a quality argument then we can rid ourselves of millions of guns, but this is a non-starter for the freedom crowd.

forgetfreeman
0 replies
10h1m

Ironically the "freedom crowd" are also statistically significantly more likely to get shot by their own toddlers accidentally so I'm not convinced they represent a pool of quality decision-making or grounded worldview. It's interesting how quickly any discussion of potential solutions to real-world problems gets chucked out the window the second someone says "freedom".

einpoklum
2 replies
1d4h

destroying the anonymous nature of the internet

Aren't the really problematic social networks the ones where you've lost your privacy and anonymity long ago and are being tracked and mined like crazy?

lukev
1 replies
1d4h

That's like saying "80% of the internet has gone to shit, might as well destroy the remaining good 20%".

jf22
0 replies
1d1h

I don't think it's like saying that at all.

ZunarJ5
2 replies
23h13m

We say this is the only way, but what about regulating these companies properly???

zamadatix
1 replies
23h3m

"Regulating them properly" could mean a lot of things to a lot of people. Do you mean e.g. not allowing porn on the internet at all, or e.g. requiring identity verification? If neither, what way of regulating allows the distinction without banning content or identifying users?

ZunarJ5
0 replies
2h58m

Putting conditions on the companies so that they don't even create the risk in the first place: moderation, a requirement to build tools that protect children, and so on. There are a lot of different options, and prohibition is the least likely to work at any level. You can look at any prohibition in history and see what it amounts to.

stevage
1 replies
22h5m

It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.

Surely not.

Imagine: government sells proof of age cards. They contain your name, and a unique identifier.

Each time you sign up to a service, you enter your name and ID. The service can verify with the government that your age is what it needs to be for the service. There are laws that state that you can't store that ID or use it for any other purpose.

Doesn't seem impossible.
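A minimal sketch of the sign-up flow described above, with a stubbed government registry standing in for the verification service. All names here are hypothetical; a real system would be an authenticated API, and the law as proposed would forbid the service from storing the ID after the check:

```python
# Stub registry mapping proof-of-age card IDs to (name, age).
# In the real scheme this lookup lives with the government, and the
# service only ever receives a yes/no answer, never the exact age.
GOV_REGISTRY = {"A123": ("Alice", 34), "B456": ("Bob", 14)}

def verify_age(name: str, card_id: str, min_age: int) -> bool:
    """Ask the (stubbed) registry whether this name/ID pair meets the
    service's minimum age. Returns only a boolean."""
    record = GOV_REGISTRY.get(card_id)
    if record is None or record[0] != name:
        return False  # unknown ID or name mismatch
    return record[1] >= min_age

print(verify_age("Alice", "A123", 16))  # True
print(verify_age("Bob", "B456", 16))    # False
```

Whether the "can't store or reuse the ID" rule would actually be obeyed by services, or by the government side of the lookup, is the political question rather than the technical one.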

throwaway421967
0 replies
21h57m

- Would only work if the government does not have access to the reverse mapping (otherwise law enforcement will eventually expand to get its hands on it).

- It will likely be phished very quickly, and you'd have no way of knowing, since no one is storing it. (Making it short-lived means you'd have to announce to the government every time you want to watch porn.)

- Eventually there will be dumps of IDs, like there are dumps of premium porn accounts now.

Still doesn't do anything about foreign websites.

ttt3ts
0 replies
23h41m

effectively regulating and destroying the anonymous nature of the internet

The bulk of the internet has not been anonymous for a while. Facebook requires an ID already, Google tracks you through its services and the OS, Reddit is tightening controls on bots, Amazon requires a phone number.

Think about it. What portion of your activities day to day on the Internet are anonymous? Now try to do them anonymously. It isn't practical/possible anymore and the internet of yesteryear is gone.

terpimost
0 replies
23h48m

It is pretty hard for a kid to get access to YouTube with an account where their age is stated. Yes, kids can open a browser in private mode… but they rarely do, because it is a friction. If every social network were moved to the adult category, the current rules in operating systems would do a good job. I am not sure about 16 years (I would support it as a father)… but up to 13-14 feels appropriate; there is PG-13, after all.

slily
0 replies
1d3h

People can post all kinds of illegal things online and no one is suggesting that content should be approved before it can be visible on the Internet. It doesn't have to be strictly enforced to act as a deterrent. How effective of a deterrent it would be has yet to be seen.

pokstad
0 replies
19h56m

Alcohol and tobacco websites have been doing fine without checking IDs.

paulddraper
0 replies
1d4h

privacy-destroying

We ARE talking about social media.

Like, the least private software on the planet.

onion2k
0 replies
1d1h

effectively regulating and destroying the anonymous nature of the internet

Social media is on the internet. It is not the internet.

lelanthran
0 replies
15h33m

It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff,

Depends on the argument being made, on the ideology of the audience, on the current norms, etc.

I had an exchange here on HN some time back (the topic was schools removing certain books from their libraries), and very many people in support of those books, which dealt with gender identity and sexual orientation, also supported outright porn (the example I used was Pornhub) for kids of all ages, as long as those books with pictures (not photos) of male-male sexual intercourse could stay in the library.

Right now, if you made the argument "There are some things kids below $AGE shouldn't be exposed to", you'll still get some (vocal) minority disagreeing because:

1. They feel that what $AGE kids get exposed to should be out of the parent's hands ("Should we allow parents to hide evolution from their children?", "Should we allow parents to hide transgenderism from their children?")

2. They know that, especially with young children, they will lose their chance to imprint a norm on the child if they are prevented from distributing specific material to young children.

In the case of sex and sexual education, there is currently a huge push for specific views to be normalised, and unfortunately if that means graphic sexual depictions are shown to children, so be it.

The majority is rarely so vocal about things they consider "common sense", like no access to pornhub for 10 year olds.

jakderrida
0 replies
19h24m

It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff

I do. Anything to avoid "the talk", tbh. I grew up Catholic. I never had "the talk". I don't even know where to start. Blow jobs?

huytersd
0 replies
1d2h

Ah so be it. I don’t care much for the things that come from anonymous culture. I want gatekeepers. This tyranny of the stupid online is pretty tiresome.

giantg2
0 replies
1d3h

"probably unconstitutional under the First Amendment, to boot"

Probably not. Minors have all sorts of restrictions on rights, including first amendment restrictions such as in schools.

"(And if it is, let's start talking about gun ownership first...)"

Are you advocating for removing ID checks for this? If not, it seems that this point actually works against your argument.

Not saying that I agree with a ban, but your arguments against it don't really stand.

dathinab
0 replies
23h55m

effectively regulating and destroying the anonymous nature of the internet

Technically you can make that work without issues (you only need to prove your age, not your identity, which can reasonably be achieved without leaking your identity).

There are just two practical issues:

- companies, government and state (at least US police and spy agencies) will try to undermine any effort to create a reasonably anonymous system

- it only technically works if a "reasonable degree" of proof is "good enough", i.e. it must be fine that a hacker can create an (illegal?) tool with which a child could pretend to be 16+, e.g. by proxying the age check to a hacked device of an adult. Heck, it should be fine if a check can be tricked by using a parent's passport or phone. I mean, it's a 16+ check; there really isn't much of a reason why a system which is only "good enough" isn't okay. But lawmakers will try nonsense.

Interestingly, this is more of a problem for the US than for some other states, AFAIK, because 1) you can't expect everyone 18+ to have an ID and everyone 16+ to be able to easily get one (a bunch of countries have owning (not carrying) ID requirements without it being a privacy issue), and 2) terrible consumer protection makes it practically impossible to create a privacy-preserving system even if government and state agencies do not meddle.

Similarly, if there weren't the issue with passports in the US, it probably wouldn't touch the First Amendment, which in the end protects less than a lot of people believe it does.

bimguy
0 replies
18h1m

"Unconstitutional" arguments only go so far. I am not American (I'm a proud Australian), so I can easily see the incredibly obvious and ridiculous destruction that "freedom" in your country entails.

Anonymity services can still exist without fostering an environment to addict children and young adults to social media or a device... and without your precious "rights" being taken away.

beretguy
0 replies
10h27m

But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet

Hopefully this will reduce amount of people using social media.

Terr_
0 replies
1d4h

But the only way to do this is to require ID checks

Not necessarily, consider the counterexample of devices with parental-controls which--when locked--will always send a "this person is a minor" header. (Or "this person hits the following jurisdictional age-categories", or some blend of enough detail to be internationally useful and little-enough to be reasonably private and not-insane to obey.)

That would mostly put control into the hands of parents, at the expense of sites needing some kind of code library that can spit out a "block or not" result.
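The site-side check for such a device-declared header could be tiny. This is a sketch under stated assumptions: the header name `X-Age-Category` is hypothetical, and nothing like it is standardized today.

```python
# Sketch of the parental-controls idea above: a locked device attaches a
# header declaring the user's age category, and sites gate content on it.
# The "X-Age-Category" header name is hypothetical.
def should_block(request_headers: dict, site_min_age: int) -> bool:
    """Block when a locked device self-reports an age below the site's
    minimum. Absent the header, the user is treated as an adult: the
    device lock, not the site, is the enforcement point."""
    declared = request_headers.get("X-Age-Category")
    if declared is None:
        return False
    return int(declared) < site_min_age

print(should_block({"X-Age-Category": "13"}, 16))  # True
print(should_block({}, 16))                        # False
```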

RandallBrown
0 replies
1d4h

The definition of "social media" in this bill actually seems to exempt anonymous social networks since it requires the site "Allows an account holder to interact with or track other account holders".

NoMoreNicksLeft
0 replies
1d3h

But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet

Ban portable electronics for children. Demand that law enforcement intervene any time it's spotted in the wild. If you still insist that children be allowed phones, dumb flip phones for them.

It could be done if there was the will to do it, it just won't be done.

Beldin
0 replies
16h41m

But the only way to do this is to require ID checks,

No, it isn't. Check out Yivi [1]. Its fundamental premise is to not reveal your attributes. It's based on academic work into (a.o.) attribute-based encryption. The professor then took this a step further and spun off a (no profit) foundation to expand and govern this idea.

[1] https://privacybydesign.foundation/irma-explanation/
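
For a rough idea of what selective disclosure looks like at the data-flow level, here is a toy stand-in: a trusted issuer signs only the predicate "over 16", never the birthdate, and the site verifies the signature. This is an HMAC sketch for illustration only; Yivi/IRMA actually uses attribute-based credentials with unlinkable cryptographic proofs, not a shared-key scheme.

```python
import hmac, hashlib, json

# Toy stand-in for attribute-based disclosure: the issuer attests only
# the predicate, so the site never sees a birthdate. The shared key is
# a simplification; real systems use public-key credentials.

ISSUER_KEY = b"issuer-secret"  # hypothetical key for this sketch

def issue_claim(over16: bool) -> dict:
    payload = json.dumps({"over16": over16}).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def site_verifies(claim: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, claim["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["sig"]):
        return False
    return json.loads(claim["payload"])["over16"] is True

claim = issue_claim(True)
assert site_verifies(claim)                  # site learns only "over 16"
assert not site_verifies(issue_claim(False))
```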

1vuio0pswjnm7
0 replies
1d2h

How many social media users who create accounts and "sign in" are "anonymous"? How would targeted advertising work if the website did not "know" their ages and other demographic information about them? Are the social media companies lying to advertisers by telling them they can target persons in a certain age bracket?

andy99
59 replies
1d9h

I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.

ajhurliman
20 replies
1d5h

We have a ban on gambling for minors, so if you see social media as more harmful than gambling (personally, I do) it probably makes sense.

monkeynotes
16 replies
1d5h

Gambling is outright banned for a majority of regions in the US, not just kids. I don't think they are equally bad, just different bad. Gambling is addictive, and it destroys people. Social media is addictive and socially toxic, on the whole it erodes the very fabric of a society.

ribosometronome
11 replies
1d4h

Social media is addictive and socially toxic, on the whole it erodes the very fabric of a society.

It's interesting how every generation seems to decry new forms of media as eroding the fabric of society. They said it about video games, television, music, movies, etc. I'm sure we're right this time, though.

lumb63
5 replies
1d3h

… we’ve been right. Television, music, video games, movies, etc., have decimated social capital and communication in the western world. It is not uncommon to “hang out” with people who are your “friends” without ever interacting with them in any meaningful way because everyone is focused on, e.g., a television. That’s assuming everyone isn’t too busy playing video games to leave their houses.

Whether you agree or not that they’re “eroding the very fabric of a society” (I would argue they are), it should be acknowledged that almost all the downsides predicted have come to pass, and life has gone on not because these things didn’t happen, but in spite of them having happened.

viraptor
4 replies
1d2h

People are interacting through online games quite often though. You can't throw them all in one basket. You can make friends / longer term connections that way (I did) or keep in touch with people you don't live near to.

badpun
3 replies
1d1h

That's not community, though. You're not inhabiting the same space, sharing the same problems, and trying to work them out together. You're not helping each other out if someone falls into trouble.

viraptor
1 replies
17h59m

It's up to you what you build that way. You can build a real community, you can build a shallow friend group, the medium doesn't have to limit it. My boss met his wife in a MUD.

badpun
0 replies
15h16m

You can meet your wife on Tinder, via ads posted in a newspaper, or on a cruise. It does not mean that those are communities.

sapphicsnail
0 replies
14h40m

You can do all of those things without inhabiting the same space as people you meet online. I met my 2 girlfriends online and we support each other through everything even though we aren't in the same place most of the time. I'm part of a niche group on reddit that supports each other emotionally in a way I've struggled to find in physical spaces. I still meet up with people in physical space, but I absolutely get meaningful social connection from online spaces.

mynameisash
0 replies
1d2h

It seems that all data on the subject of social media point emphatically to, Yes! It's terrible for adolescents[0] (and probably society writ large?).

[0] https://jonathanhaidt.com/social-media/

monkeynotes
0 replies
1d4h

Yeah, none of those are comparable. TV never led to children bullying each other anonymously, which then leads to kids committing suicide.

bee_rider
0 replies
1d4h

Television clearly did something pretty significant to society. Cable news makes the country more polarized, that we didn’t do anything about it doesn’t mean it wasn’t a problem. We just missed and now the problem is baked into the status quo.

Video games are typically fiction, so the ability to pass propaganda through them is usually a little more limited. It isn’t impossible, just different.

Social media is a pretty bad news source for contentious issues. We should be pretty alarmed that people are getting their news there.

bcrosby95
0 replies
1d

Social media isn't new. It's over 20 years old by now. If you count things like forums you're looking at 30+ years old.

angra_mainyu
0 replies
1d4h

Apples-to-oranges comparison.

Plenty has been written already on the ravaging effects of social media on society and it's pretty plain to see.

https://www.mcleanhospital.org/essential/it-or-not-social-me...

Rates of sexual activity have been in decline for years, but the drop is most pronounced for adults under age 25 (Gen Z).

For Gen Z, a rise in sexlessness has coincided with a decline in mental health.

Sexual activity can boost mood and relieve stress and may serve as a protective factor against anxiety and depressive disorders.

https://www.psychologytoday.com/intl/blog/the-myths-sex/2022... missing-out-the-benefits-sex

rchaud
3 replies
1d4h

Not anymore. A 2018 Supreme Court decision opened the floodgates to legalized sports gambling, much of it online. The only states with existing bans are Hawaii and Utah, which combined have only 5 million residents.

https://en.m.wikipedia.org/wiki/Murphy_v._National_Collegiat...

alexb_
1 replies
1d4h

This is just blatantly not true, anyone who has tried to gamble on sports can tell you that companies go to quite incredible lengths to make sure that nobody outside of the few jurisdictions where they're legal can gamble online. I live in Nebraska, and I have to go travel across to Iowa before I can do anything.

ajhurliman
0 replies
1d1h

True, it's state by state, but a lot of states allow it. And even if you're not in the right state you can always go to sites from other countries (e.g. betonline.ag)

monkeynotes
0 replies
1d4h

Oof.

WindyLakeReturn
2 replies
1d2h

The ban we have on gambling seems weak. From trading card games to loot boxes to those arcade games that look to be skill-based but are entirely up to chance, children are allowed to do all of them. The rules feel very inconsistent, to the point that they appear arbitrary in nature.

ajhurliman
0 replies
1d1h

Agreed, the gaming industry has done its damnedest to undermine the restrictions on gambling.

Ferret7446
0 replies
22h4m

There's a pretty clear difference between gambling and "gambling": whether you're winning money.

matthewdgreen
15 replies
1d8h

Several governments have already effectively banned sites like Pornhub by creating regimes where people have to mail their ID to a central clearinghouse (which creates a huge chilling effect.) The article talks about “reasonable age verification measures” and so saying it’s unenforceable seems a little bit premature. Also, you can bet those measures won’t be in any way reasonable once the Florida legislature gets through with them.

55555
6 replies
1d7h

You're proving the argument that the parent set forth. Anyone who wants to visit Pornhub can just visit one of the many sites that isn't abiding by the new law. However, that's not due to a lack of legislation, but rather a lack of enforcement, or, perhaps, enforceability. If laws always worked I'd be for more of them. My argument is not that we should never make laws because it's futile, but rather that some laws are more futile than others; having laws go unenforced weakens government, and enforcing them inequitably is unjust.

monkeynotes
5 replies
1d4h

Also social policy enforcement is a generational thing. The UK is only just getting toward outright banning cigarettes by making it illegal for anyone born after X date from ever buying them. Eventually you have a whole generation that isn't exposed to smoking and on the whole thinks the habit is disgusting, which it is.

verall
4 replies
1d4h

Except that some people born after that date will still acquire them, get addicted, and then what? Prosecute them like drug possession?

It's infantilizing and dumb. Grown adults should be allowed to smoke tobacco if they so wish, and smoking rates are already way down due to marketing and alternatives. No one needs to be prosecuted.

zhivota
2 replies
1d3h

You don't need to prosecute any buyers at all though. All you need to do is make it illegal to sell in shops, and illegal to import. There will be a black market, sure, but how many people are going to go through the trouble and expense to source black market tobacco? Not that many. And everyone benefits because universal healthcare means everyone shares the cost of the health effects that are avoided.

monkeynotes
0 replies
1d3h

Should mention the govt. has to find budget to fill the gap left by tobacco tax revenue, but they've been slowly doing this as demand has slumped since 2008.

badpun
0 replies
1d1h

how many people are going to go through the trouble and expense to source black market tobacco? Not that many.

Just see how many people already go through that trouble to source illegal drugs...

monkeynotes
0 replies
1d3h

I think it's hyperbolic to look at tobacco like other drugs. Tobacco is a lifestyle thing, it doesn't get you high, it's a cultural habit. There are only upsides to getting rid of the social demand for it.

If you think taking tobacco away from consumers is infantilizing, why yes, yes it is. We are dealing with children's futures. Adults get to continue smoking; children are less likely to even want to smoke as social acceptance goes down, and with that there is less and less desire to smoke. Nicotine doesn't do much other than get you addicted; no one is chasing a pronounced high with it. People start smoking because it's perceived as cool.

I can't imagine an adult wanting to start smoking, most adults get addicted in their teens.

I think you can have an import ban, and a black market, and still see significant gains in eroding demand. I do not think people should be prosecuted for possession, and the UK will probably make some bad decisions there, but that doesn't mean the overall policy is bad.

rschneid
4 replies
1d8h

effectively banned

In my opinion, these governments haven't implemented 'effective' bans (though maybe chilling, as you say) but primarily created awkward new grey markets for the personal data that these policies rely on for theatrics. Remember when China 'banned' youth IDs from playing online games past 10PM? I think a bunch of grandparents became gamers around the same time...

https://www.nature.com/articles/s41562-023-01669-8

seanw444
1 replies
1d7h

Which is exactly what happens for markets that are desirable enough. We compare bans of things not enough people care about, to bans of things that people are willing to do crazy things for. They don't yield the same results.

monkeynotes
0 replies
1d5h

No policy is 100% effective. Kids still get into alcohol, but the policy is sound.

rft
0 replies
1d7h

Another example is some Korean games requiring essentially a Korean ID to play. A few years ago there was a game my guild was hyped about and we played the Korean version a bit. You more or less bought the identity of some Korean guy via a service that was explicitly setup for this. Worked surprisingly well and was pretty streamlined.

angra_mainyu
0 replies
1d4h

At least from personal experience, when there was a period where my ISP in the UK started requiring ID verification for porn, I literally ceased to watch it.

Making something difficult to do actually works to _curb_ behavior.

Taylor_OD
1 replies
1d7h

Does this actually work or does it just push those same people to sketchier websites?

angra_mainyu
0 replies
1d4h

Worked in my case when my ISP required ID for it in the UK.

I just noped out of it entirely.

__turbobrew__
0 replies
1d3h

My main issue is that the only effective way to ban access to a website is to also ban VPNs and any sort of network tunneling. A great firewall would have to be constructed, which I am very much against. Even China's firewall can be bypassed, and it is questionable how much it is worth operating given the massive costs incurred.

I think the government should invest in giving parents the tools to control their child’s online access. Tools such as DNS blocklists, open source traffic filtering software which parents could set up in their home, etc.
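
The DNS-blocklist idea could be as simple as the sketch below, which refuses to resolve a blocked domain or any of its subdomains. The domains listed are purely illustrative placeholders, not real blocklist entries.

```python
# Toy DNS-blocklist check of the kind a home filter might run before
# resolving a query. Domain names here are illustrative only.

BLOCKLIST = {"example-social.test", "example-video.test"}

def should_resolve(qname: str) -> bool:
    """Refuse to resolve a blocked domain or any of its subdomains."""
    qname = qname.rstrip(".").lower()
    parts = qname.split(".")
    # Check the name itself and every parent domain against the list.
    return not any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

assert should_resolve("news.example.test")
assert not should_resolve("example-social.test")
assert not should_resolve("m.example-social.test")  # subdomain also blocked
```

A real deployment would put the same logic in a resolver the parent controls (e.g. a Pi-hole-style box or router firmware) rather than application code.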

dylan604
5 replies
1d3h

There is a societal problem that is beyond just parenting. The peer pressure, and the feeling of being left out and ostracized as the only one not on the socials, is something a teen is definitely going to rebel against their parents over. It's part of being a teen. I'm guessing the other parents would even put pressure on the parents denying the social access.

To me, the only way out of this is exchanging one nightmare for another by giving the gov't the decision of allowing/denying access. Human nature is not a simple thing to regulate, since the desire for that regulation is part of human nature.

diputsmonro
4 replies
1d1h

Is talking to other people online really so bad that we need the government to step in and tell us who we can and can't talk to? How quickly will that power expand to what we can and can't talk about?

I agree that neither solution is perfect, but exchanging an imperfect but undoubtedly free system of communication for one that is explicitly state-controlled censorship is an obvious step backwards.

"Thinking about the children" should also involve thinking about what kind of a society you want to build for them. A cage is not the answer, especially not with fascism creeping back into our politics.

dylan604
3 replies
1d1h

I think you are willingly playing this down as "talking to people online" to make some point. However, it is beyond what one kid online says to another online. It is what predators say to those kids online. I don't just mean Chester and his panel van. I'm talking about anyone that is attempting to manipulate that kid regardless of the motive, they are all predators.

Social media has long since passed being just a means of communicating with each other, and you come across as very disingenuous for putting this out there.

diputsmonro
1 replies
1d1h

I think people are being incredibly disingenuous when they imagine that the government won't abuse this power to censor and harm marginalized communities. Many states are trying to remove LGBT books from school libraries for being "pornographic" right now, for example. All it takes is some fancy interpretations of "safety" and "social media" for it to become a full internet blackout, for fear of "the liberals" trying to "trans" their kids.

I don't deny that kids can get into trouble and find shocking or dangerous things online. But kids can also get in trouble walking down the street. We should not close streets or require ID checks for walking down them. Parents should teach their kids how to be safe online, set up network blocks for particularly bad sites, and have some kind of oversight for what their kids are doing.

Maybe these bills should mandate that sites have the ability to create "kid" accounts whose history can be checked and access to certain features can be managed by an associated "parent" account. Give parents better tools for keeping their kids safe, don't just give the government control over all internet traffic.

dylan604
0 replies
1d

when they imagine that the government won't abuse this power

I've suggested no such thing. In fact, I described putting regulations in place as a "nightmare". Parenting alone will not work. Self regulation from the socials will not work. The entire situation is a nightmare of our own making. There is no simple solution. These tendencies of human nature were present long before social media. It was just fuel for the fire for the worst qualities.

15457345234
0 replies
11h28m

I think you are willingly playing this down as "talking to people online" to make some point.

They won't engage on this topic in a fair and balanced manner. I've tried. They want unlimited ability to push agendas and incite strife and they just throw out keywords and thought-terminating cliches when criticised; diversity, marginalisation, fascism, they just gaslight and gaslight and gaslight.

I mean, don't stop trying, but you'll be very frustrated

arcticbull
2 replies
1d5h

It seems like you are in favor of something that requires coordination, but don't believe in coordination. Is there a different way you think this could be achieved?

kelseyfrog
1 replies
1d4h

I don't believe in the disbelief of coordination so it seems like we're at a bit on an impasse. Please expound.

m_mueller
0 replies
21h18m

I think GP means the coordination between parents, and I agree on that: if you can’t get a strong majority of parents to agree on keeping their children off of social media & smart phones, you as parents have the choice between two outcomes: enforcing isolation vs. letting social media slip through and just trying to delay as much as possible.

Almost all parents either don’t care or opt for the second option (which I also think is the better one), but the dynamics today are that between 10-12yo the pressure to get your kid a smartphone will mount and you have to give in at some point. Being able to wait till 16 would be much better IMO.

JoshTko
2 replies
1d9h

I used to want no govt intrusion for this. Then I understood how there are teams of PhDs tweaking feeds to maximize addiction at each social network. I think there could even be limits, or some sort of tax on gen pop.

protocolture
1 replies
21h15m

It would be less imposing on the populace if you just demanded a government hit squad kill those PhDs instead of demanding everyone submit to some onerous government crapware to access the internet.

JoshTko
0 replies
5h0m

There are plenty of middle-ground approaches. One possible one: the app default would be restricted time use. To get unrestricted use, you use a driver's license to authenticate your birthdate only with the OS. The mobile OS only confirms to the app that the user is above the age requirement (it does not share the birthdate). The app can only query whether the user is above 18 years and nothing else. Easy peasy.
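
The flow described could look roughly like the sketch below. All names are hypothetical; no shipping mobile OS exposes such an API today.

```python
from datetime import date

# Sketch of the proposed flow: the OS verifies a birthdate once (e.g.
# from a driver's license) and keeps it private; apps may only ask a
# yes/no question about an age threshold.

class MobileOS:
    def __init__(self):
        self._birthdate = None  # never exposed to apps

    def verify_id(self, birthdate: date):
        # A real OS would validate the physical document here.
        self._birthdate = birthdate

    def is_over(self, years: int, today: date) -> bool:
        if self._birthdate is None:
            return False  # unverified users default to restricted
        b = self._birthdate
        age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
        return age >= years

os_api = MobileOS()
os_api.verify_id(date(2000, 5, 1))
assert os_api.is_over(18, today=date(2024, 2, 1))     # app learns only a boolean
assert not os_api.is_over(18, today=date(2017, 2, 1))
```

The key property is that `is_over` returns a single bit; the birthdate itself never crosses the OS/app boundary.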

reaperducer
1 replies
1d

I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required.

People said the same thing about age restrictions for smoking, alcohol, movies, and on and on and on.

It's not some unsolvable new problem just because it's the ad-tech industry.

cjensen
0 replies
1d

Your comment begs the question of whether or not those age restrictions are a "solved problem." In the US, government age restrictions on movies DO NOT exist for the exact reason the government cannot impose age restrictions on social media: the First Amendment flatly forbids it.

loceng
1 replies
1d7h

So, maybe make it a law - but one requiring parents/guardians to enforce it, not the state? That sounds like a reasonable in-between.

qup
0 replies
21h41m

I think it should just be opt-out. A parent can opt-out without any stated reason, easily, and overrule the law.

So kids get a social media "license," maybe.

OJFord
1 replies
1d7h

We don't even have to speculate, isn't it already the case for <13yo? Or is just Europe? Anyway - yeah, of course they're still on it. Expect less compliance and harder enforcement the older they are, not more/easier.

samtho
0 replies
1d7h

The only protection in the US is technically "collection of personal information" via COPPA[0], which you can argue would kneecap social media. Any parent can provide consent for their child, however. Children themselves can also just click the button that says they are over 13 if it gets them what they want.

[0]: https://www.ftc.gov/legal-library/browse/rules/childrens-onl...

yew
0 replies
1d7h

There are degrees of enforceability.

When I was first getting online, the expectation was that you at least had to be bright enough to lie about your age. Now I have to occasionally prune my timeline after it fills up with "literally a minor." Even an annoyance tax might have some positive effect. Scare the pastel death-threats back into their hole...

soco
0 replies
1d9h

Is there an alternative? Self-control - as we have now - brought us here. If the government shouldn't step in, then the only other option left (that I can see) is magic. And we have a bad record with magic.

ribosometronome
0 replies
1d4h

Social media's existence is predicated on their algorithms being good at profiling you. Facebook's already got some level of ID verification for names, where they'll occasionally require people to submit IDs. No reason that similar couldn't be applied to age if society agreed it was worthwhile.

incomingpain
0 replies
1d5h

I'm in favor of kids not using porn, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.

I only changed 2 words for 1.

brightball
52 replies
1d9h

I’m normally anti-regulation as well, but as a parent I’m fully on board with this. The amount of peer pressure to be on social media is insane.

WarOnPrivacy
25 replies
1d9h

Are you asserting that parents should not have the rights to determine this for their own children?

brightball
11 replies
1d8h

Parents don’t have the right to get their kids a tattoo, vote or buy alcohol before a regulated age. Is this different?

WarOnPrivacy
5 replies
1d8h

So your answer is yes?

Your position is that decisions about youth access to social media should be fully taken from the parents and made by govs instead. Penalties can be assumed from your examples.

This is the reality that you want imposed on parents and children - yes?

somenameforme
2 replies
1d6h

There's actually a really simple and elegant penalty - forfeiture of the device used to access the social media. With all seized devices to then be wiped and donated to low income school districts/families. This gets more complex when using something like a school computer, but I think it's a pretty nice solution outside of that. That's going to be a tremendous deterrent, yet also not particularly draconian.

runsWphotons
1 replies
1d4h

Yep thats what low income families need: more cell phones.

foobarian
0 replies
1d4h

Now they too can get on social media!

samus
0 replies
1d7h

This is already the reality for alcohol and plenty of other things. Maybe not everywhere. Reality check: parents giving unrestricted access to these things are usually perceived as irresponsible.

brightball
0 replies
1d8h

Is your position that no age limits should exist for anything?

6yyyyyy
2 replies
1d8h

Actually, in many states there is no minimum age to get a tattoo with parental consent.

https://en.wikipedia.org/wiki/Legal_status_of_tattooing_in_t...

lupire
0 replies
1d6h

Likewise for cosmetic surgical modification of the penis by a doctor, or puncturing the earlobe by a non-medical-professional.

brightball
0 replies
1d7h

That is…shocking

20after4
1 replies
1d4h

Missouri law allows minors to consume alcohol if purchased by a parent or legal guardian and consumed on their private property.

edit: Apparently Missouri is not the only state. I had trouble finding a definitive list though. There are also other exceptions such as wine during religious service.

WindyLakeReturn
0 replies
1d2h

While there are exceptions, and in general exceptions seem pretty common, they still require businesses to officially get approval, and they give parents the power to enforce rules that would otherwise be hard to enforce. Even with the exceptions allowing children to legally drink, I would be surprised if that led to more kids drinking than alcohol obtained illegally. Which means the question should be back on how well the law works (obviously not perfectly, but there is a large gap between perfect and so poor that it is useless, purely from an efficiency perspective).

giarc
4 replies
1d7h

Are you a parent? It's not as easy as saying "no social media" to your kids. In this day and age, it's basically equivalent to saying "you can't have friends". Online is where kids meet, hang out, converse, etc. I'd LOVE to go back to the days before phones and social media, when kids played with neighbours and rode their bikes to their friends' houses, but that's slowly slipping away.

foobarian
3 replies
1d4h

We try pretty hard to get our kids to play with their friends in person (we invite them or give rides to playdates) but what do they do when they meet up? Sit on the couch with their tablets and play virtually in Roblox :-)

eitally
1 replies
1d

You organize playdates for middle & high schoolers?

giarc
0 replies
9h39m

They didn't say anything about the age of their kids. Roblox is played by almost all ages.

amalcon
0 replies
1d1h

When my friends would meet up in the 80's-90's, quite a bit of Nintendo happened. Is it really that different? The proportion of video games should eventually drop (not to zero), in favor of (if you're lucky) talking and whatever music bothers you the most.

ryandvm
3 replies
1d6h

Do you also think that children should be allowed to buy cigarettes? I'll be honest, I am not certain that social media is any less deleterious than tobacco.

I'm a pretty pro-market guy, but there are times when the interests of the market are orthogonal to the interests of mankind.

tstrimple
2 replies
1d4h

I can pretty confidently say that the half a million deaths a year attributable to smoking is a little more deleterious than getting bullied online and the suicides which follow. Many orders of magnitude more.

nkohari
1 replies
1d3h

Just because one thing is worse than the other doesn't mean the less-bad thing is suddenly good.

tstrimple
0 replies
1d2h

The original statement was:

"Do you also think that children should be allowed to buy cigarettes? I'll be honest, I am not certain that social media is any less deleterious than tobacco."

To which I pointed out that cigarettes kill far more people than social media. And your response was somehow that I'm implying that a less-bad thing is good? Are you sure you're following the conversation? It's really not clear that you're addressing anything I said, and it's unclear what your point is.

amerkhalid
1 replies
1d7h

Yes, parents should not have unlimited rights to determine what is good/bad for their children.

Social media is powerful, addictive, and dangerous. Pretty much anywhere on this earth, parents will end up in jail or lose custody of their kids if they give them harmful substances like drugs. Social media should be regulated like drugs, alcohol, and cigarettes are.

20after4
0 replies
1d4h

Because drug regulations are so effective with no collateral damage at all. /s

somenameforme
0 replies
1d7h

I'm almost invariably anti-regulation, but in this case - absolutely!

There's extensive evidence that social media is exceptionally harmful to people, especially children. And in this case there's also a major network effect. People want to be on social media because their friends are. It's like peer pressure, but bumped up by orders of magnitude, and in a completely socially acceptable way. When it's illegal that pressure will still be there because of course plenty of kids will still use it, but it'll be greatly mitigated and more like normal vices such as alcohol/smoking/drugs/etc. It'll also shove usage out of sight which will again help to reduce the peer pressure effects.

This will also motivate the creation/spread/usage of means of organizing/chatting/etc outside of social media. This just seems like a massive win-win scenario where basically nothing of value is being lost.

lupire
0 replies
1d6h

Parents can make accounts to use in collaboration with their children.

The law prevents the corporation from directly engaging with the child without parental oversight.

1shooner
25 replies
1d7h

Because regulation worked so well in eliminating peer pressure for drinking, smoking, and drugs.

ceejayoz
20 replies
1d7h

Regulation worked remarkably well on smoking.

anshumankmr
16 replies
1d7h

How so?

ceejayoz
14 replies
1d7h

A massive, decades-long decline in the habit? Stemming from ad restrictions, warning labels, media campaigns, taxation, legal action by states/Feds, etc.

LynxInLA
11 replies
1d7h

Have you not seen people vaping at a young age now?

ceejayoz
9 replies
1d6h

Vaping isn't the same thing as smoking.

lbhdc
5 replies
1d5h

They are consuming nicotine, and tobacco companies are invested in / are the companies producing products in that space. It seems functionally to be the same.

ceejayoz
2 replies
1d5h

Only if you ignore... a lot. Cancer rates, smoking sections in restaurants, the smell, the yellow grime and used butts sprinkled everywhere, the impact on asthmatics... Smoking a cigarette gets you a lot more than just the nicotine.

A smoker moving to vaping is an enormous benefit to health and society.

lbhdc
1 replies
1d4h

That sounds like you are being disingenuous. Smoking sections haven't been a thing in the US for a long time (I went to the last one I could find around 2009). Waste from single use vapes is also a huge problem. Similarly there are health effects specific to vaping, time will tell if cancer is among them.

ceejayoz
0 replies
1d4h

Smoking sections haven't been a thing in the US for a long time...

Yes, the regulations that made this happen are good. That's my point.

Waste from single use vapes is also a huge problem.

Nothing like the cigarette butts that used to be everywhere.

Similarly there are health effects specific to vaping, time will tell if cancer is among them.

We've plenty of data to safely conclude vaping is safer than smoking tobacco. That doesn't make it safe, but it's absolutely safer.

sneak
0 replies
1d4h

No, they are not remotely the same. Nicotine isn’t really that harmful, but combustion byproducts very much are. Also the effects on bystanders are orders of magnitude better.

Most of the harm from smoking comes from the smoke, not the nicotine or associated addiction.

huytersd
0 replies
1d2h

Nicotine by itself is harmless besides the addictiveness. A nicotine addiction is not going to drastically affect your mental state or cause socially disruptive behavior like domestic violence or armed robbery so it’s really nothing to be concerned about.

LynxInLA
1 replies
1d4h

I agree it is different, but the jury is out on whether it is better. Banning "social media" is likely to push users to a "lite" version of it. I'm not convinced that will be better.

bigfishrunning
0 replies
1d4h

I would call IRC "social media lite", and it is indeed better.

blitzar
0 replies
23h38m

Vaping is way cooler than smoking.

huytersd
0 replies
1d2h

Vaping non-flavored vapes has a basically negligible impact on your health.

akerl_
1 replies
1d7h

This must be why there’s not 50 vape shops in my town with big neon signs.

ceejayoz
0 replies
1d6h

Don't get me wrong, I'd love to see vaping turn into a prescription-only smoking cessation aid, but it's not smoking. I'm 100% happy with even a one-for-one replacement of smoking for vaping, even in kids, given the dramatically lower risk of resulting health problems.

brodouevencode
0 replies
1d5h

It's more influence than actual regulation enforcement. Smokers these days are seen as social pariahs in some circles.

jimt1234
1 replies
22h36m

Growing up in the 80s, no one I knew avoided cigarettes because of regulations. Cigarettes were easy to get your hands on as a kid, even though it was illegal to sell to children. We avoided cigarettes because we had a general understanding of the health risks, because we knew our parents would beat our asses if they caught us smoking, and because smoking makes your clothes smell like shit.

Jensson
0 replies
16h56m

Yet that wasn't enough to stop your generation from smoking; they added even more regulations, and then the generation after yours stopped.

ScoutOrgo
0 replies
1d4h

You also had to go in store to buy cigarettes, so the application of regulation would work a little differently in the case of social media.

scythe
0 replies
1d7h

Drinking, smoking and drugs don't depend on a central point of control. Social media companies of any significance can be counted on two hands, and are accountable to corporate boards.

huytersd
0 replies
1d2h

Absolutely. Almost no kids smoke cigarettes (vaping non-flavored varieties has almost no risk associated with it) and drunk driving is a shadow of what it used to be. Getting alcohol for someone under 21 is not child's play either.

DiggyJohnson
0 replies
1d5h

It absolutely did for smoking and drunk driving.

CJefferson
0 replies
1d3h

I'm fairly sure more 13 year olds are on social media, than are drinking, smoking, or on drugs.

tedajax
8 replies
1d9h

Answering a problem with unenforceable garbage like this doesn't seem like a very sound strategy.

soco
2 replies
1d9h

This discussion can be seen every time the EU decides on some regulation against the tech industry. A lot of people will jump in saying it won't be enforced; then, when we see the first fines, those people will jump in saying it won't move the needle; then, when the tech giants do change their course a bit... well, the tech bros will always find a reason to jump against doing anything to curb tech.

lupire
1 replies
1d6h

This is not a "tech bro" thing. It's not particular to tech nor bros. This a business thing. Phillip Morris weren't "tech bros".

soco
0 replies
1d5h

You are right. I had the HN crowd in mind when I commented.

o11c
2 replies
1d3h

It doesn't have to be perfectly enforceable to have a positive effect.

Even just making illegal the promotion of social media toward children would have a huge effect.

plorg
1 replies
21h6m

Seeing this opinion all over this thread. Unenforceable laws are one way to create the pretext for discriminatory enforcement.

And you have to be trolling with your second paragraph.

o11c
0 replies
20h6m
brodouevencode
1 replies
1d5h

Imperfect enforcement is a foregone conclusion, and when it comes to things like this it's somewhat expected. The same can be said for pornography, drugs and alcohol and tobacco (remember Joe Camel?), and anything else that would fall under blue laws.

The goal of this is to bring attention to the fact that it's a problem and should be seen as undesirable, like pornography or Joe Camel. The cancellation of Joe didn't prevent kids from getting cigarettes but it did draw attention to the situation and there has been a marked decline in youth smoking since the late 90s when the mascot was removed. It's correlative, for sure, but the outcomes are undeniable. The same happened with the DARE program and class 1 drugs (except for marijuana iirc).

20after4
0 replies
1d4h

Multiple studies have determined that the dare program is entirely or almost entirely ineffective.

Here's just one: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1448384/

WarOnPrivacy
5 replies
1d9h

You can be in favor but in the US it is unconstitutional for a gov to broadly restrict speech. It's why each of these age verification + social media laws eventually get tossed. Legislators know this (or are too dumb to) but it's not their own dollars that are getting burned during this vote-baiting performance.

spogbiper
1 replies
1d1h

honest question, is an american child granted the right of free speech? or do you get that right at a certain age?

gottorf
0 replies
20h46m

is an american child granted the right of free speech? or do you get that right at a certain age?

This isn't the right way to frame the question. The Constitution prohibits government actors from infringing upon the right to free speech[0], including that of children; meaning e.g. an attempt to censor a child's treatise on some subject would be as illegal as doing the same to an adult. It's not that you get a "free speech license" when you turn 18. The restrictions on government power apply whether the person being infringed upon is a minor or an adult.

[0]: This isn't an absolute, unqualified right. A child breaking into a profanity-laden tirade in the middle of class at a public (meaning taxpayer-funded and an extension of the government) school may believe they are exercising their First Amendment rights, but the school may still legally send that child home, for example, on the grounds that it disrupts the schooling of other children.

bilsbie
1 replies
1d7h

The FCC seems to get around that pretty easily by not allowing nipples on TV.

jcranmer
0 replies
1d5h

Only on broadcast TV, and the decision is fundamentally reliant on the nature that RF spectrum is a finite resource to be able to justify the restriction.

SCOTUS has routinely struck down prohibitions against the same things in other media, including explicitly the internet.

carabiner
0 replies
1d3h

Deprecating the Constitution is long overdue.

dathinab
3 replies
1d

IMHO the general idea isn't terrible, but the implementation is subpar. Hey, that's why it's good that it's not yet US-wide: it means there is time to make improvements.

- I'm not the biggest fan of hard cutoff

- addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used. Honestly, just ban most dark patterns: they are intentional, always-malicious consumer deception, not that far away from outright fraud. (And age-restrict some less dark but still problematic patterns.)

- I think this will likely make all MMORPGs (and Roblox, lol) and similar games 16+; I'm quite split about that. I have seen people between 14 and 18 get addicted to them and mess up their educational path. But I have also seen cases of people who might not be alive today if they hadn't found a refuge and companions in some MMORPG.

- I guess if it can make platforms like YT, Facebook, Instagram, Snapchat etc. implement a "teen" mode with fewer dark patterns and less tracking, it would be good.

- The balance between proving your age, keeping things available, and keeping privacy is VERY tricky (especially in the US), and companies, the government, and spy agencies will try to abuse the new age-verification requirements to spy more reliably on everyone 16+.

- It's interesting how this affects messengers. Many have fewer dark patterns; some do not track users, or can easily decide not to track children. They aren't social networks per se, but most have some social-network-like features. Even those which do not try to create compulsive use might still end up with it as long as there is "live" chatting.

trogdor
2 replies
23h18m

addictive dark patterns which cause compulsive use should be generally banned or age-restricted no matter where they are used. Honestly, just ban most dark patterns: they are intentional, always-malicious consumer deception, not that far away from outright fraud.

How would you write a law that accomplishes your goal?

vimax
0 replies
19h22m

I'm not a lawyer, but I imagine something like deceptive advertising regulations could serve as a starting point.

dathinab
0 replies
14h31m

The usage of user interface design patterns which take advantage of [..] to cause compulsive use is not allowed. In the context of this law, "user interface" refers to the mechanism by which a user interacts with software; this includes any kind of user interface, no matter which medium it uses for presenting information and allowing interactions, and no matter which form it takes in the software.

Where [..] is a rough definition of dark patterns, of which multiple exist in CS, and which you can base on factors like 1) deceptiveness, 2) intent to manipulate the user's behavior, 3) how the pattern interacts with various human feedback systems, etc.

The thing about law that people on HN often tend to forget or complain about is that laws intentionally do not need to be perfectly precise scientific definitions, predicate logic, or anything like that. This means you can reasonably describe dark patterns by their likely effect and whether it seems plausible that the effect was intended (without needing to actually determine intent), without going into _any_ technical details beyond making clear that any form of user interface is covered.

EasyMark
3 replies
1d2h

I can't agree. This teaches kids that the government is the answer to everything. This should 100% be the responsibility and decision of parents. Kids are different, and these one-size-fits-all authoritarian tactics that have become a signature of the current GOP Floridian government are just the beginning of the totalitarian christofascist laws that they want to implement. Before you ask, I am a parent, and my kid's devices all have this crap blocked and likely will remain that way until he's at least 15, give or take a year depending on what I determine when he gets to that age. He knows that there are severe ramifications if he tries to work around my decision, and will lose many, many privileges if such a thing happens.

protocolture
0 replies
21h14m

Yeah I hate that at the first sign of trouble people run to the government.

My kid, and I have told my wife this, is ready to view whatever he wants on the internet when he can circumvent me.

bloqs
0 replies
14h21m

The libertarian equivalent is the model we are currently running, and it hasn't worked. It's psychologically addictive and harmful, and has parallels with smoking in a way, even if the evidence for its harm is less robust.

Do you think simply labelling it as bad is sufficient? Parents have no idea.

11101010001100
0 replies
21h47m

If you ignore history, yes, it may teach kids that the gov is the answer. Don't ignore history.

reactordev
2 replies
19h5m

The answer to our problems is not less freedom. It should never be less freedom. I'm opposed to any law that restricts freedom of information, no matter the age. I think we need to do better at educating kids on online behaviors, and we should hold social media companies accountable for addictive features, but what we absolutely shouldn't do is blame little Jenny and take away her access to groups and social interaction online.

rrrrrrrrrrrryan
1 replies
18h58m

Yes. Little Jenny should also be free to purchase and drink a bottle of whiskey, but only if her parents are sufficiently negligent.

I would've backed your argument up until a few years ago, but the science is coming down pretty hard now showing that social media use is absolutely detrimental to still-developing minds.

reactordev
0 replies
18h50m

I’m not refuting that. Only I don’t think we should be reacting with laws to restrict access. Who’s to say “what” a social media platform is? Does this mean discord as well? Steam? Roblox? Where do we draw the line on what is social media and what is social?

jijijijij
2 replies
1d2h

I am an adult and I wish someone would take social media away from me. Honestly, I think social media has done more harm than good and I wish it would just cease to exist.

However, especially in Florida, social media may be the only way for some teens to escape political and religious lunacy and I fear for them. I think it's not wise to applaud them taking away means of communication to the "outside", in the context of legislation trends and events there.

trogdor
1 replies
23h11m

I am an adult and I wish someone would take social media away from me.

Why don’t you take it away from yourself? Just delete your account.

jijijijij
0 replies
14h20m

I tried but then got betrayed by myself.

Addiction mechanics are a real thing and sooner or later social media will pop into your life again. For me it's Reddit, not Instagram or Twitter. Luckily, HN makes me always ragequit at some point, cause y'all are a bit special.

huytersd
2 replies
1d2h

Right there with you. This is probably the only thing I’m on board with the republicans about.

mlrtime
1 replies
8h2m

Funny, conservative beliefs are like a slow feed as you get older and have kids... This is just the gateway.

It happened to me and millions of others.

huytersd
0 replies
3h53m

I’m pretty old and I don’t think anything else remotely appeals to me.

ben_w
2 replies
1d3h

I might be the only one here in favor of this, and wanting to see a federal rollout.

I'm not American, I think it's perfectly reasonable to ban kids from the internet just by applying the logic used for film classification. Even just the thumbnails for YouTube (some) content can go beyond what I'd expect for a "suitable for all audiences" advert.

This isn't an agreement with the specific logic used for film classification: I also find it really weird how 20th century media classification treated murder as a perfectly acceptable subject in kids' shows, while the mere existence of functioning nipples was treated as a sign of the end times (non-functional nipples, i.e. those on men, are apparently fine).

Also, I find it hilariously ironic that Florida also passed the "Stop Social Media Censorship Act". No self-awareness at all.

protocolture
1 replies
21h10m

Film classification is also dumb. In Australia, Margaret Pomeranz used to run banned-film viewing sessions that ended with cops wrestling her for the DVDs.

https://www.theguardian.com/film/2023/jul/31/margaret-pomera...

ben_w
0 replies
14h9m

Hence my second paragraph.

The first is saying that no matter what your rules are, the internet may at any time show you a clip from the highest rated category and should be treated as such.

The internet may also at any time show you outright banned content, but that's often treated as a separate battle no matter if that's Tiananmen Square (or "content promoting terrorism", although now I'm wondering if the Chinese Communist Party think the former is a subset of the latter?) or sexual acts you're not allowed to show in film (varies by jurisdiction).

wonderwonder
1 replies
1d5h

I find myself agreeing with this. Me 15 years ago would have raged at this. I have kids now and they are pressured to join all sorts of social media platforms. I still don't allow them to have it but I know they take a slight social hit for it.

There is zero positive to giving kids the ability to access social media sites designed to be addictive when they don't have the mental faculties to determine real from not real. Many adults seem to suffer from this as well. Plus, kids don't understand that the internet is forever; there's really no need for an adult looking for a job or running for office to be crippled by a questionable post they made as an edgy teen.

I'm against a lot of government regulation but in this case I am even more against feeding developing kids to an algorithm

Just remove the temptation and pressure all together.

runsWphotons
0 replies
1d1h

The internet is just as real as anything else. Out of touch.

tootie
1 replies
1d1h

I have two teens and have yet to see the negative effects of social media for them or any of their peers. Not to say it doesn't exist, but I sincerely doubt it's as awful as the doomsayers think. My personal observation from being raised in the 80s is that kids were far more awful to each other then than now.

kbelder
0 replies
23h15m

Fits my observation.

Every generation has this freakout about something or another. I expect modern kids will end up better at handling social media than their parents are.

blitz_skull
1 replies
1d4h

You're not the only one!

I also believe that this is a Big Deal™ that we need to take seriously as a nation. I have yet to see any HN commentator offer a robust pro-social-media argument that carries any weight in my opinion. The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of effort on the parents' part.

As an added bonus, this latest legislation removes the issue of "everyone is doing it". I mean, sure, a lot still will be—but then it's illegal and you get to have an entirely separate conversation with your kid. :)

lelanthran
0 replies
13h51m

The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of efforts on the parents' part.

This is so incorrect it makes the flat earth theory look good.

robotnikman
0 replies
1d2h

Same here. With the way the internet is nowadays, its probably best to keep kids off the internet until they are older. One just has to look at whats on places like Youtube 'Kids' to see all the stuff that is not kid friendly and probably detrimental to their mental health.

mmvasq
0 replies
22h55m

16 seems too young. Why not tie it to the drinking age? There are way too many people who have gotten better at online manipulation in the past few decades.

meroes
0 replies
21h24m

social media replaced the old internet. this is like banning kids from the internet. Talk about pulling the ladder up behind yourself

kcrwfrd_
0 replies
1d4h

Yes. Rather than mandating verification, can we just mandate that there be a registry, or that websites are legally required to include a particular HTTP header, combined with opt-in infrastructure for parents to use?

e.g. You could set up a restricted account on a device with a user birthdate specified. Any requests for websites that return an AGE_REQUIRED header that don’t validate would be rejected.
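A minimal sketch of this hypothetical scheme, assuming the `AGE_REQUIRED` header name and the birthdate-check rule from the comment (neither is a real standard):

```python
# Hypothetical client-side check for the AGE_REQUIRED header idea above.
# Header name and semantics are the commenter's invention, not a real spec.
from datetime import date

def allowed(headers: dict, birthdate: date, today: date) -> bool:
    """Return True if a restricted account may view this response."""
    required = headers.get("AGE_REQUIRED")
    if required is None:
        return True  # site declares no age gate
    # Age in whole years, accounting for whether the birthday has passed.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= int(required)

# A 12-year-old's restricted browser rejecting a 16+ site:
kid = date(2011, 6, 1)
print(allowed({"AGE_REQUIRED": "16"}, kid, date(2024, 2, 1)))  # False
print(allowed({}, kid, date(2024, 2, 1)))                      # True
```

In this model the enforcement burden sits in the restricted account on the device, and the site only has to label itself, which is what makes the idea lightweight compared to ID-based verification.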

jimt1234
0 replies
22h31m

One thing I've found interesting with social media and children is that almost every parent I know recognizes the impact social media has on their children, but they willfully ignore it or feel powerless to avoid it. I hear stuff like, "It's impossible for kids to not be on social media. It's required these days.", and "The social consequences will be worse for them if they're not a social media."

dmitrygr
0 replies
1d3h

I might be the only one here in favor of this

Not even close. I am with you

chankstein38
0 replies
1d3h

No I definitely agree. I'm a little skeptical of how they'll enforce this but ultimately I think less kids on the internet and social media will be a positive and I agree that it doesn't seem like parents have managed to figure out how to address this.

bdangubic
0 replies
22h4m

we have laws against drinking and smoking / vaping… for kids under age of 16 and that has been working so GREAT that we should get more of these laws in place (federal preferably) so that our kids can be even healthier adults. moar laws please - need as much of federal government in our families and parenting as we can get that I’d suggest 1 federal agent be assigned to each child born to ensure healthy adulthood

babyshake
0 replies
20h48m

I'm in favor of this if the only enforcement actions are against social media companies for being predatory, and not against families for breaking the law and allowing their kids on social media. And it's useful for indicating that social media on the balance is not good for kids.

adamredwoods
0 replies
21h26m

I am against government regulation of websites and classification of websites. If we allow government to do this, it will be politicized at some point.

We need to find a better way, for example (just a quick idea), social websites run and protected by a school-by-school basis. This way, it can be regulated and controlled.

In other words, government should regulate what they already have control over, not impose new control measures over things they don't.

Spooky23
0 replies
20h30m

Can they use email? Text? The telephone?

It’s dumb policy, because Florida GOP. The smart move is to target advertising for kids. If you attack the ability to advertise to the underage audience using mom’s iPad, social media will self police.

Bluescreenbuddy
0 replies
7h17m

You and I both know they're doing this for two reasons. To put a heavy burden that will be almost impossible to enforce on tech companies so they can easily punish them for political reasons and to stop teens from learning early how shitty the GOP is.

Andrex
0 replies
19h49m

Random ideas!

- Make this an ISP level thing? Somehow? They already know the makeup of a household. If they know a house has kids, something something ToS "You as the parents are liable..." Then maybe repeat those scary RIAA letters but "for good" when someone in that household hits a known adult IP?

- Maybe browsers send an "I'm an adult" flag similar to "Do Not Track," and to turn it on, the user has to enter a not-to-be-shared-with-kids PIN? If the browser and OS can coordinate, OSes would be able to tell the browser if the user is an adult, and skip the PIN entering.

- Force kids to use a list of Congress-approved devices that gate access to the wider Internet? YouTube Kids but for everything. Yes, hacker kids will be able to get by, but this being Hacker News, they'd deserve the fruits of that particular labor.

Just spitballing. Anything obvious I'm missing?

PS- I am neither for nor against the Florida-type legislation as of this comment.

Aeolun
0 replies
23h28m

I don’t think so. There’s nothing about social media that makes me feel like kids need it. Hell, if we could ban it for adults that would be an unmitigated good.

goda90
50 replies
1d9h

I'm still on the fence about government doing a parent's job here, especially for kids under 13, but I can't stand that no one pushing these bills has come up with an actually reasonable age verification method.

Goronmon
18 replies
1d9h

I'm still on the fence about government doing a parent's job here, especially for kids under 13, but I can't stand that no one pushing these bills has come up with an actually reasonable age verification method.

How do you anonymously verify someone's age?

goda90
13 replies
1d9h

Anonymous credentials. A central authority with verified age information of each person grants credentials that verify the age to third parties, but the authentication tokens used with the third party can't be used by the third party nor the central authority to identify anything else about the credential holder.

thomastjeffery
11 replies
1d7h

Then there is a data breach, and every person in the country is de-anonymized.

No thank you.

tzs
10 replies
1d6h

A data breach where?

The central authority should be someplace that already has your non-anonymous ID data, so using your ID for age verification doesn't give them any new ID information. The only new thing that them doing age verification adds is that they might keep a list of verification tokens they have issued.

Someone who obtained copies of the verification tokens you requested might go to various social media sites and ask them who used those tokens, allowing matching up your social media identities with your real identity.

That's fixed by making it so the token that is given to the social media site is not the token that came from site that checked your ID. You give the social media site a transformed token that you transform in such a way that the social media site can recognize that it was made from a legitimate token from the ID checker but does not match anything on the list of tokens that the ID checker has for you.

hn_acker
7 replies
1d4h

The central authority should be someplace that already has your non-anonymous ID data, so using your ID for age verification doesn't give them any new ID information. The only new thing that them doing age verification adds is that they might keep a list of verification tokens they have issued.

But the central authority, a third party, will get a heads-up every time someone - whether child or adult - logs into the social media site. That's a privacy violation. Even if the verification system were set up in such a way that the third party wouldn't be able to know which exact website I'm trying to visit, the third party would be able to track how frequently I visit websites that require age verification. With just this law, it would be "you visited social media during X, Y, and Z times." With extensions of this law to other kinds of websites, it would be "you visited social media or porn or violent video games or alcohol sites during X, Y, and Z times", which obfuscates the kind of website I visit but also makes the internet into something I have to whip out an ID for just to use.

That's fixed by making it so the token that is given to the social media site is not the token that came from site that checked your ID. You give the social media site a transformed token that you transform in such a way that the social media site can recognize that it was made from a legitimate token from the ID checker but does not match anything on the list of tokens that the ID checker has for you.

Is it possible to transform the token such that the social media site would be able to link it to your identity but an attacker who gains access to the social media site's data wouldn't? If so, I'd appreciate an example of a transformation for such a purpose. But it doesn't wipe out my privacy concern, that I - or anyone else - wouldn't be able to log in to a social media site without letting a third party know against my will.

kaibee
2 replies
1d4h

But the central authority, a third-party, will get a heads-up every time someone - whether child or adult - logs into the social media site. That's a privacy violation.

Why would you do age verification on login? It only needs to happen once on account creation.

tzs
0 replies
11h50m

I think you’d want to also reverify now and then. People only rarely create accounts, which I think would make de-anonymizing someone from simultaneous breaches of site and verifier logs easier.

If you have to verify often enough, and age verification is required on enough sites that are widely used by the general public so that the mere fact that you are using sites that require age verification is not something you might need to hide, I think it would make it much harder to get useful information from log comparisons.

hn_acker
0 replies
1d4h

Why would you do age verification on login? It only needs to happen once on account creation.

Oops. That slipped my mind. For sites that require log-in, my previous comment is wrong.

I had unconsciously assumed that at least one site would implement the age verification system without requiring users to make accounts to browse the site. In this comment, I will explicitly make that assumption. For sites without log-in walls but with government-mandated age verification, the concerns in my previous comment would apply. But sites with log-in walls have their own privacy problems independent of age verification, chief among them being that having to log in means letting the first-party site track how often I use the site. A different problem (not necessarily privacy-related, but it can be) of log-in walls is that I would be forced to create accounts. If I don't wish to deal with the burden of making accounts, then I won't browse the website. If the website put up a log-in wall in response to an age verification mandate from a government, then my First Amendment right to access the speech the website wished to provide would have been chilled.

jlokier
1 replies
1d3h

But the central authority, a third party, will get a heads-up every time someone - whether child or adult - logs into the social media site. That's a privacy violation. Even if the verification system were set up in such a way that the third party wouldn't be able to know which exact website I'm trying to visit, the third party would be able to track how frequently I visit websites that require age verification.

It doesn't have to work like this.

It's technically possible to do verification such that the authority (probably the government which already has a database with your age), doesn't get any communication when verification takes place. They'd have no idea which sites you visit or join, or how often.

And the site which receives the verification token doesn't learn anything about you other than your age is enough. They don't even learn your age or birthday. They couldn't tell the government about you even if subpoenaed.

(But if you tell them on your birthday that you are now old enough, having been unable to the day before, they'll be able to guess of course so it's not perfect in that way.)

Using modern cryptography, you don't send the authority-issued ID to anyone, as that would reveal too much. Instead, on your own device you generate unique, encrypted proofs that say you possess an ID meeting the age requirement. You generate these as often as you like for different sites, and they cannot be correlated among sites. These are called zero-knowledge proofs.

They work for other things than age too. For example, to show you are an approved investor, or have had specific healthcare or chemical safety training, or possess a certain amount of credit without revealing how much, or are citizen with voting rights, or are a shareholder with voting rights, without revealing anything else about who you are.

hn_acker
0 replies
1d1h

Do you mean that I can get a permanent age-verification key from the third-party authority, then never have to contact the authority again (unless I want a new key)? If so (and assuming that zero knowledge proofs, which I'm not very familiar with, work), then my privacy concerns are resolved. (Well, I don't want the authority to keep a copy of my verification key, but FOSS code and client-side key generation should be feasible.)

tzs
0 replies
1d3h

An example of the kind of token transformation I'm thinking of follows.

Assume RSA signatures from the site that looks at your ID having public key (e,m) where e is the exponent and m is the modules, and private key d. The signature s of a blob of data, b, that you give them is b^d mod m.

To verify the signature one computes s^e mod m and checks if that matches b.

Here's the transformation. You generate a random r from 1 to m-1 such that r is relatively prime to m. Compute r' such that r r' = 1 mod m.

Instead of sending b to be signed, send b r^e mod m.

The signature s of b r^e is (b r^e)^d mod m = b^d r mod m.

You take that signature and multiply by r'. That gives you b^d mod m. Note that this is the signature you would have gotten had you sent them b to sign instead of b r^e.

Net result: you've obtained the signature of b, but the signing site never saw b. They just saw b r^e mod m.

That gives them no useful information about b, due to r being a random number that you picked (assuming you used a good random number generator!).

For any possible b, as long as it is relatively prime to m, there is some r that would result in b r^e having the same signature as your b, so the signing site has no way to tell which is really yours.

In practice b is almost certain to be relatively prime to m. If m is the product of two large primes, as is common, b is relatively prime to it unless one of those primes divides b. We can ensure that b is relatively prime to m by simply limiting b to be smaller than either of the prime factors of m. Since those factors are likely to each be over a thousand bits, this is not hard. In practice b would probably be something like a random 128 bits.
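As a sanity check, the algebra above runs end to end in a few lines of Python (my own toy sketch using the classic textbook RSA parameters p=61, q=53; real blind signatures use large keys and proper padding):

```python
# Blind RSA signature demo: the signer produces a valid signature of b
# while only ever seeing the blinded value b * r^e mod m.
import math
import secrets

p, q = 61, 53
m = p * q            # modulus, 3233
e = 17               # public exponent
d = 2753             # private exponent: e*d = 46801 = 1 (mod phi(m) = 3120)

b = 123              # the blob to sign; the signer must never see it

# Pick a random r coprime to m and compute r' with r * r' = 1 (mod m).
while True:
    r = secrets.randbelow(m - 2) + 2
    if math.gcd(r, m) == 1:
        break
r_prime = pow(r, -1, m)

blinded = (b * pow(r, e, m)) % m    # send b * r^e mod m instead of b
s_blind = pow(blinded, d, m)        # signer returns (b * r^e)^d = b^d * r
s = (s_blind * r_prime) % m         # unblind: multiply by r'

assert s == pow(b, d, m)            # identical to signing b directly
assert pow(s, e, m) == b            # and it verifies as a signature of b
```

The first assertion is exactly the comment's claim: unblinding yields b^d mod m, the signature the signer would have produced had it been shown b.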

bigfishrunning
0 replies
1d3h

But the central authority, a third party, will get a heads-up every time someone - whether child or adult - logs into the social media site.

Why? I imagine this could be a "they've signed my key" situation, with no requests needing to go up the tree further than necessary...

thomastjeffery
1 replies
1d6h

Would you care to elaborate?

tzs
0 replies
1d4h

Say you have a user U who wishes to demonstrate to site S that they are at least 16, and we have a site G that already has a copy of U's government ID information.

Here's one way to do it, with an important security measure omitted for now for simplicity.

• S gives U a token.

• U gives G that token and shows G their ID.

• G verifies that U is at least 16, and then signs the token with a key that they only use for "over 16" age verifications. The signed token is given back to U.

• U gives the signed token back to S.

If G saves a list of tokens it signs and who it signed them for, and S saves a list of tokens it issues and what accounts it issued them for, then someone who gets both of those lists could look for tokens that appear on both in order to match up S accounts with real IDs.

To prevent that we have to make an adjustment. G has to sign the token using a blind signature. A blind signature is similar to a normal digital signature, except that the signer does not see the thing they are signing. All they see is an encrypted copy of the thing.

With that change a breach of G just reveals that you had your age verified and gives the encrypted token associated with that verification. These no longer match what is in the records of the sites you proved your age to since they only have the non-encrypted tokens.

Someone with both breaches might be able to match up timestamps, so even though they can't match the tokens from S directly with the encrypted tokens from G they might note that you had your age verified at time T, and so infer that you might be the owner of one of the S accounts that had a token created before T and returned after T.

This would be something people trying to stay anonymous would have to be careful with. Don't go through the full signup as fast as possible--wait a while before getting the token signed, and wait a while before returning the signed token. Then someone who is looking at a particular anonymous S account will have a much larger list of items in the G breach that have a consistent timestamp.

Also note that to G it is just being asked to sign opaque blobs. Occasionally have G sign random blobs. If your G data shows that you are getting your age verified a few times a month, then it is even more likely that if one of those verifies is at about the same time as a particular social media signup it is just a coincidence.
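A minimal sketch of this flow (my own illustration: the classes `G` and `S` are hypothetical stand-ins for the sites in the comment, the RSA parameters are toy-sized, and the timestamp precautions are omitted):

```python
# Runnable sketch of the S / G / U token flow using RSA blind signatures.
import math
import secrets

class G:
    """Government site: holds your ID, signs blindly with its 'over 16' key."""
    def __init__(self):
        self.m, self.e, self._d = 3233, 17, 2753   # textbook RSA demo key
    def blind_sign(self, blinded_token, id_document):
        assert id_document["age"] >= 16            # checks the real ID
        return pow(blinded_token, self._d, self.m) # never sees the raw token

class S:
    """Social media site: issues tokens, later verifies G's signature."""
    def __init__(self, g_public_key):
        self.e, self.m = g_public_key
        self.issued = set()
    def issue_token(self):
        t = secrets.randbelow(self.m - 2) + 2
        self.issued.add(t)
        return t
    def accept(self, token, signature):
        return token in self.issued and pow(signature, self.e, self.m) == token

g = G()
s = S((g.e, g.m))

# U's side: get a token from S, blind it, have G sign it, unblind, return it.
token = s.issue_token()
while True:
    r = secrets.randbelow(g.m - 2) + 2
    if math.gcd(r, g.m) == 1:
        break
blinded = (token * pow(r, g.e, g.m)) % g.m
sig = (g.blind_sign(blinded, {"age": 34}) * pow(r, -1, g.m)) % g.m

assert s.accept(token, sig)   # S learns only "over 16"; G never saw `token`
```

Note how the breach analysis from the comment falls out of the code: G's logs hold only blinded blobs, S's logs hold only raw tokens, and neither set matches the other.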

bonton89
0 replies
1d9h

This is technically possible but politically impossible. Any system you make like this will get special government peeking exceptions added, making it non-anonymous, and rank corruption from industry lobbying will probably add some sort of user tracking, for sale as poorly anonymized data. Once the sham system is in place they'll probably expand the requirement to other things.

ip26
3 replies
1d9h

Third party, perhaps. You sign up for service A. Service A queries service B, which knows who you are and provides a one-time ack of your age.

tnbp
2 replies
1d9h

That sounds nightmarish. I don't want the verifier to know what porn sites I visit. Someone else proposed the following system to me: a third party authority issues a certificate which I can then use to prove I'm 18. The CA cannot see where I use the certificate, though.

Tadpole9181
1 replies
1d

Er... the third party needs to contact the CA to verify the certificate, ergo the CA will know the websites.

It's virtually impossible to make a verification system that's anonymous. Somewhere the third party and authenticator will need to share a secret that you cannot touch.

Furthermore, you would need the government to agree to this system and mandate this system universally and pay for the authentication services to exist. That's not what Florida is doing.

tnbp
0 replies
14h30m

I can show my government-issued ID to any third party without the government knowing about whom I've shown this ID to. The third party needs to trust the government and the authenticity of the ID.

The problem is that my ID contains too much information; I would prefer a document (i.e. digital certificate) that only certifies my age, not my name, address etc.

zo1
11 replies
1d9h

While people are on the fence about it, our children are having their youth, innocence and brains destroyed by tiktok et al. Those platforms are cancer to adults even, let alone impressionable kids... yet here we are still debating it and faffing around about "1st amendment yaddi yadda".

Clubber
10 replies
1d9h

children are having their youth, innocence and brains destroyed by tiktok

For one, ease up on the hyperbole if you want to be taken seriously. I'll give you the benefit of the doubt because the news is nothing but hyperbole these days, so it's easy to pick up the habit. Second, most kids aren't having "their youth, innocence and brains destroyed." The news takes the edge cases, amplifies them and presents them as the norm to peddle fear, because fear sells. Nothing is ever as bad as the news makes it out to be, but they gotta make a dollar; you see how bad the news business has been since the internet?

FWIW, my kid uses social media and just connects with her friends. Nothing overly malicious goes on, they just goof off. I've checked.

You really wanna protect the kids from anxiety and whatnot, block the news and all the talking heads trying to manipulate the next generation to their political opinions.

light_hue_1
3 replies
1d9h

Don't minimize what's going on.

There's a massive rise in depression in young children. The teen suicide rate has almost doubled.

The idea that you know what's going on, on their social media is pretty funny. That's certainly what every adult always assumed about me. And now that I have kids, I can see how easily other kids fool their parents all the time.

And what's overly malicious? It may be social media itself without anything bad driving this. Merely seeing a sanitized version of people's lives over and over again, without anyone bullying you, that leads to depression because your life isn't as good.

No. Don't block the news. Because then you miss important things like this. https://www.hhs.gov/surgeongeneral/priorities/youth-mental-h...

Clubber
2 replies
1d7h

I think if you wanted to reduce teen suicide a significant amount, banning social media isn't going to do it. It certainly isn't responsible for half. Of course banning it doesn't cost the government any money, so it's top of the list as opposed to any real solutions.

You also cherry picked your stats. If you open a larger window, the current teen suicide rate is not as abnormal as you are making it out to be.

https://www.cdc.gov/mmwr/volumes/66/wr/mm6630a6.htm

Perhaps all the fear mongering is having an effect on their mental state, no? Teens don't yet realize that most of the world is phony bullshit.

medvezhenok
1 replies
1d5h

Umm, that chart is 10 years out of date - it ends in 2015; the beginning of the social media era.

The current teen suicide rate is ~62 / 100K, which is just about double (or triple!) the last value in that chart. And is also an anomaly over the last 40 years.

Clubber
0 replies
9h36m

I stand corrected on the current stats. I went with what the CDC had on a google search. It's aggravating that most sources don't show the entire picture.

Here's a chart showing that this trend is mostly in the mountain states and Native Americans are the largest demographic affected by this trend by nearly triple. Both these stats disprove the theory that social media has much of an impact on teen suicide across the entire nation, otherwise why wouldn't states like California and Florida have a higher rate? Residents of those states obviously use social media too.

https://www.charliehealth.com/research/the-us-teen-suicide-r...

2015; the beginning of the social media era.

The social media era had been in full swing for 10 years by 2015. Facebook was established in 2004 and blew up by 2006. Twitter blew up a few years later.

The narrative that social media is the cause in the rise of teen suicide across the country is simply false. Native Americans in mountain states are bearing the brunt of it and causing the national average to spike. Instead of "tilting" to social media, we should try to understand why Native American teens are having such a difficult time and solve that problem. That doesn't draw headlines though, does it?

ditto664
1 replies
1d6h

For one, ease up on the hyperbole if you want to be taken seriously.

I disagree that this is hyperbole. It's a huge problem among kids. Literacy rates are dropping. Listen to the stories you hear from teachers.

mrguyorama
0 replies
1d1h

Teachers have ALWAYS made those complaints. My mother, a French teacher, always complained that before she could teach kids French, she had to teach them how to read a clock, how to do math, and how the days of the week work (these were fifth graders, mind you). She blamed education policy, but this is nothing more than what happens when 30% of your students are in poverty.

The reality is that some percentage of students will always fall through the cracks, and the human brain loves to blame whatever is "new" for problems that are "new" to you. This has been a problem for teachers since at least the No Child Left Behind policy, and even goes as far back as Socrates bemoaning his students being terrible because books meant they didn't have to have perfect memories.

Students suffered because covid was both a huge disruption to their education, and parents freaked out instead of trying to handle it (and plenty of people literally could not handle it anyway). It doesn't help that half the country openly cries that education is nothing more than liberal indoctrination, and openly downplay the value of even basic education, like the three Rs, and claims that anything higher than a high school education is also liberal indoctrination, is "woke", and is valueless.

I 100% hate tiktok, but I don't think it is (currently) being used to mentally attack the US. Maybe someday if we are ever at war with China, but right now they are content believing that inclusivity is toxic on its own. I don't think tiktok changes people's brains significantly. I do think it is an extremely low-value way to spend time, and that it is addictive, two serious issues when taken together, but then again I spent my life watching several hours of TV a day. I especially don't like how tiktok seems to purposely direct new male users to what is basically softcore porn.

dang
1 replies
1d5h

For one, ease up on the hyperbole if you want to be taken seriously.

That is the kind of swipe the HN guidelines ask people to edit out of their comments (https://news.ycombinator.com/newsguidelines.html: "Edit out swipes."), so please don't post like this.

Your comment would be fine without that sentence and the one after it.

(I'm not saying the GP comment is particularly good—it was pretty fulminatey—but it wasn't quite over the line, whereas yours was, primarily because yours got personal.)

Clubber
0 replies
11h7m

so please don't post like this.

Ok.

because yours got personal

In my defense, I made an effort to attack the person's statement, not the person themselves.

zo1
0 replies
1d9h

I exaggerate for a reason, because I feel strongly about it and think that it does do those things in a literal sense.

Besides, I've consumed tiktok and seen the content. It is nowhere near appropriate for children, it's night and day.

bryan_w
0 replies
9h59m

You can tell by how many different replies need you to believe the hysteria that there's something inorganic about this topic.

bamboozled
11 replies
1d9h

You could say the same about smoking I guess ?

For kids under 13 to see any of the content, ask them to enter a credit card?

LeifCarrotson
10 replies
1d9h

Adults should not have to enter a credit card to, say, read HN. But kids should. And therein lies the problem...

thsksbd
9 replies
1d9h

Why not? Honestly asking here.

Let's assume opposition to the law is a "progressive" position:

If there is a constitutional right to absolutely 100% friction-free access to information, then what happens to all the barriers the government has erected to accessing covid, Trump, Russia and other "disinformation", which progressives pushed for?

(You can invert this example for a right-wing position if you want)

nickthegreek
6 replies
1d8h

Not everyone has a credit card. Some people cannot obtain a credit card. People under the age of 18 can also have a credit card. I do not trust random sites with my credit card.

thsksbd
5 replies
1d6h

So these are objections in practice, not principle. Important but consider:

- Most states give free IDs

- Your safety concern is addressed by other commenters here (see the verifying age anonymously)

Tadpole9181
2 replies
1d4h

How about this: I don't want every little thing I do on the internet tracked and tied to my real identity?

Is the CCP conducting a psyop on HN right now or something? Since when were we all for every tiny interaction you have on the internet requiring you to look in the scanner and say "My name is X and I love my government and McDonalds"?

CaptainFever
1 replies
1d4h

HN seems to have become a lot more mainstream, in comparison to the old cyberlibertarian "privacy and piracy" days.

Tadpole9181
0 replies
1d3h

Where's the new hangout?

nickthegreek
0 replies
1d4h

You changed the argument from credit card to government ID (an even worse idea imo).

You seem to want the law to be that I need to show ID to enter most internet establishments. I will never, ever, ever hold that opinion.

djfdat
0 replies
1d4h

Gut check was that the "Most states give free IDs" statement wouldn't hold up. So I checked real quick.

"At least eight states issue free or discounted IDs to low-income or homeless residents and at least 10 states waive ID fees for seniors."

That's far from most states, and even then it comes with stipulations.

jMyles
0 replies
1d4h

If there is a constitutional right to absolutely 100% friction free access to information then what happens to all the barriers the government has erected to access covid, Trump, Russia and other "disinformation" progressive pushed for?

...those barriers go away. They never really existed in the first place in any real way. Like the Great Firewall, they were a polite fiction defining what people are allowed to know, but were trivially circumvented from minute zero.

This is one of the most reliable and desirable features of the internet in the first place.

frumper
0 replies
1d7h

Growing up I remember the trope about showing papers to do anything in the USSR. It was in direct contrast to the display of freedom in America.

soared
1 replies
1d4h

https://developers.google.com/privacy-sandbox/protections/pr...

Private State Tokens enable trust in a user's authenticity to be conveyed from one context to another, to help sites combat fraud and distinguish bots from real humans—without passive tracking.

An issuer website can issue tokens to the web browser of a user who shows that they're trustworthy, for example through continued account usage, by completing a transaction, or by getting an acceptable reCAPTCHA score. A redeemer website can confirm that a user is not fake by checking if they have tokens from an issuer the redeemer trusts, and then redeeming tokens as necessary. Private State Tokens are encrypted, so it isn't possible to identify an individual or connect trusted and untrusted instances to discover user identity.

Tadpole9181
0 replies
1d

This system clearly and trivially deanonymizes the internet. Even worse than a centralized system, it uses a simple "just trust me bro" mentality that issuers would never injure users for personal gain and would never keep logs or have data leaks, which would expose the Internet traffic of a real person.

safog
1 replies
1d7h

We're barreling towards an internet that requires an id before you can use it.

It's a bit upsetting but I don't harbor the early 2000s naiveté about the free internet where regulation doesn't exist, the data exchange happens over open formats and connecting people from across the world is viewed as an absolute positive.

Govt meddling on social media platforms, the filter bubble, platforms locking data in, teenage depression stats post Instagram, doom scrolling on tiktok have flipped me the other way.

Internet Anonymity is going to die - let's see if that makes this place any better.

Tadpole9181
0 replies
1d1h

Govt meddling on social media platforms

And the government having unfettered knowledge of every site you visit - in particular the more salacious ones - is how we solve that? Surely that won't be used as a political cudgel to secure power at any point, nor will it ever be used to target specific demographics or accidentally get leaked.

mbitsnbites
0 replies
1d8h

government doing a parent's job

The problem here is that it's pretty much out of the hands of the parents. If your kids' friends have social media, your kids will absolutely need it too in order to not be left out. I've witnessed the pressure, and it's not pretty. Add to that the expectation from society that children shall have access to social media.

Regulation is pretty much the only way to send the right signals to parents, schools, media companies (e.g. Swedish public service TV has a kids app that until recently was called "Bolibompa Baby", but it's now renamed to "Bolibompa Mini"), app designers, and so on.

insane_dreamer
0 replies
1d9h

government doing a parent's job

the govt already set the bar at 13, so what's different about setting it at 16?

KaiserPro
0 replies
1d2h

I'm still on the fence about government doing a parent's job here

The issue is, as a parent who is not very technical, how do they _safely_ audit their child's social media use?

I am reasonably confident that I could control my kid's social media habit, but only up to a point. there isn't anything really stopping them getting their own cheap phone/signing in on another person's machine.

The problem is, to safely stop kids getting access requires strong authentication to the ISP, i.e., to get an IP you need 2FA to sign in. But that's also how censorship/de-anonymisation happens.

pfdietz
35 replies
1d9h

I can't see how this could pass 1st amendment scrutiny.

goda90
16 replies
1d9h

How would it be different than age restricting voting, driving, or alcohol and tobacco sales? It seems there's precedent for treating minors differently than adults in many ways.

nickjj
11 replies
1d9h

age restricting voting, driving, or alcohol and tobacco sales

I think those things are a bit more black and white.

How do you define social media? Is HN social media? You engage with others through comments, posts are ranked and it has voting elements as well as your profile has a gamified score for upvotes. Is a Disqus comment on any website social media if that's how we broadly define social media? Where do we draw the line?

You could make a case that leaving reviews on a site are a form of social media too. You can post something there and feel like you need to check back in hopes someone leaves a like or replies. If it were a wearable item you might take a picture of yourself and now hope people engage with it.

bhpm
8 replies
1d9h

The social media platforms the bill would target include any site that tracks user activity, allows children to upload content or uses addictive features designed to cause compulsive use.

mccrory
5 replies
1d9h

Then they can say goodbye to most online gaming. I wonder how people will feel about this when they realize their kids can’t play games anymore?

Ekaros
1 replies
1d9h

Pretty good, pretty good. I would go so far as to argue that online games do more harm than good to children and teenagers. In many ways, banning them from playing might steer them to more productive uses of time.

serialNumber
0 replies
1d6h

Must children's use of time be "productive"? They have their whole lives to be productive - outright banning video games is not the solution in my eyes

Dalewyn
1 replies
1d9h

According to another comment[1], video games are exempted.

[1]: https://news.ycombinator.com/item?id=39176380

mccrory
0 replies
1d6h

This exempts almost everyone and everything. lol

mbreese
0 replies
1d9h

At the time, Fortnite and Minecraft (and more) were my son’s way to socialize with friends. That was how they hung out (pre Covid). I can honestly say it would have been detrimental to him from a mental health perspective if those outlets didn’t exist or if kids were blocked from them.

Drawing the lines between what types of media are and aren’t allowed is a major issue with this type of law, regardless of if you think it’s a good idea or not.

nickjj
1 replies
1d9h

That covers a massive range.

"I hope someone likes my review" is not that much different than "I hope someone likes my tweet" or "I hope someone replies to my IG post" or "I hope someone replies to my HN comment".

All 4 scenarios trigger the same thing which is setting up a future expectation that's hopefully met while you wait with anticipation of the event. Is that the process they are trying to get a handle on?

Personally I don't know how any of that could get enforced. Even making the internet read-only wouldn't work because that wouldn't stop people from internally comparing themselves to someone else who is allowed to post. Although that type of thing has been going on since advertising existed.

bhpm
0 replies
1d8h

What products are children reviewing?

zo1
0 replies
1d9h

You could come up with corner cases and odd delineations for those former examples too. Yet we all know the gist of the laws and manage to somehow, on the aggregate, prevent kids from engaging in said activity.

E.g. alcohol. How do we stop kids from drinking alcohol? How do you define alcohol? What about kids' medicine that includes alcohol? What about medical procedures that insert substances with alcohol content? What about shops that get conned by kids with fake IDs? What if the label gets scratched off and the kid doesn't know it had alcohol? What if a parent gives their child a sip of beer? What about old home remedies that include whiskey? What about colic medicine that has alcohol? Okay, what about carving exceptions for babies, etc. etc.

Granted some of those are contrived, but it's not as black and white as you think.

internet101010
0 replies
1d7h

I would define it as a platform where the primary focus is for users to post visual content of themselves.

pfdietz
1 replies
1d9h

There's no constitutional right to drive, smoke, or drink alcohol.

As for voting, it's constitutionally mandated that 18 or older can vote, which implies there's no constitutional mandate for anyone younger.

cooper_ganglia
0 replies
1d4h

Minors do not have full constitutional rights when it comes to free speech. We've had Supreme Court precedent for this for 50+ years, thanks to Ginsberg v. New York.

"In Ginsberg v. New York, 390 U.S. 629 (1968), the Supreme Court upheld a harmful to minors, or “obscene as to minors,” law, affirming the illegality of giving persons under 17 years of age access to expressions or depictions of nudity and sexual content for “monetary consideration.”

Judge Fuld of the Nassau County District Court had convicted Sam Ginsberg, who owned a small convenience store in Long Island, New York, where he sold “adult magazines” and was accused of selling them to a 16-year-old boy on two occasions."

It's not good for kids to have unrestricted access to pornography in the same way that it's not good for them to have access to unrestricted social media.

summerlight
0 replies
1d

The legal restriction needs to be very specific and demonstrate a good trade-off between societal gain and harm to individual rights; it's not the clean cut that many folks assume.

Alcohol and tobacco are pretty straightforward in the public health context. And you probably don't want your toddlers to drive a car given the risk. The voting age is an exception as it's defined in the 26th amendment. I consider it a political compromise though.

erellsworth
0 replies
1d9h

I think the difference would be in enforcement and what that would imply. How do you verify someone's age online without exposing that person to unwarranted tracking? Do we want to just say social media anonymity is dead?

gnicholas
4 replies
1d4h

Minors don't have the same First Amendment rights as adults. For example, government-run schools can have speech codes that students must obey. Although I have not read this bill, the general notion blurs the line between speech and action a bit, which would make it easier to pass muster.

piperswe
3 replies
1d4h

My understanding is that minors _do_ have first amendment rights, and school restrictions on speech can only apply to instances which disrupt the learning environment.

gnicholas
2 replies
1d4h

Yes, they do have First Amendment rights, just not to the same degree as adults. This is the same as other rights — they can't buy guns, vote, etc. either. The point is that it's definitely not a slam dunk case as claimed. I'd love to hear a constitutional lawyer weigh in, if there are any here. I used to be a lawyer, but this was never my specialty.

shrimp_emoji
1 replies
1d4h

No way. They're essentially property.

If someone could extralegally ground me with no devices if I say something they don't like (provided that it's speech protected under the First Amendment -- since not all speech is), I essentially have no First Amendment right.

gnicholas
0 replies
1d3h

I’m not sure what you mean. Can you explain?

mrstumpy
3 replies
1d9h

Children don't have the same standing with the constitution as adults. I don't remember the exact term but children have their rights restricted all the time. They do not have free speech in school for instance. They certainly have limits to their second amendment freedoms. So I don't think this bill will have constitutional issues. Personally, I'm all for a federal restriction on addictive social media for kids.

pfdietz
2 replies
1d9h

So, a law preventing children from receiving religious instruction would also be constitutional, in your mind?

etiennebausson
1 replies
1d7h

Spirituality (and religion) are personal choice. There is no "religious education" for children, only indoctrination. It may be legal, but it is in no way moral.

pfdietz
0 replies
1d4h

This is not an answer to the question I asked.

Ekaros
2 replies
1d9h

The 2nd amendment seems to have already been overturned. Children cannot own, for example, fully automatic weapons. And in some states, firearms in general.

lsiunsuex
0 replies
1d9h

The 2nd amendment is very much alive and kicking. Those of us that would like to defend ourselves will fight for 2A.

To your point, children cannot own weapons. Most states set a minimum of 18 years old, if not 21.

Fully automatic weapons have been banned from civilian production for decades now. Semi-automatic weapons are not weapons of war; they are not the death machines non-gun owners think they are.

I was very anti gun until we (the wife and I) had a need to defend ourselves. We were literally put in a self defense position. We're both all in now and actively train ourselves and educate ourselves. More so, friends that are anti gun are no longer anti gun once they go to the range with me and are educated on what a semi automatic weapon actually is.

TL;DR - anyone anti gun is simply uneducated in the matter.

_dark_matter_
0 replies
1d9h

The 2nd amendment seems to have already been overturned. Children cannot own, for example, fully automatic weapons.

Just as the founding fathers intended, children brandishing fully automatic firearms. James Madison is turning in his grave!

oblio
1 replies
1d9h

I'm confused, what does this have to do with freedom of speech?

Also, do kids even <<have>> freedom of speech?

alistairSH
0 replies
1d9h

Freedom of speech has ~2 definitions in the US... first, the legal definition (with the Constitution and related case law). Second, the general notion of being literally free to make speech however/whenever one wants without intrusion from anybody at all (government or otherwise).

In this case, we're sort of at the edge of the legal definition. Social media can be viewed as the modern version of the town square (can be, isn't 100% proven out in law yet). If one takes that statement as valid, then the government cannot regulate speech through social media without a very good reason.

But, minors don't have the exact same rights as adults for various reasons (guns, alcohol, privacy at school, etc).

My general impression (IANAL)... the government can likely limit minors' speech on media platforms, however that limit would have to be very specific and tightly defined so as not to deny speech to adults. The devil is in the details (implementation)... the legality probably hinges on what method is required to verify age.

perihelions
0 replies
1d8h

Here's the current status of state cases, if anyone's curious:

- "FIRE’s lawsuit asks Utah to halt enforcement of the law and declare it invalid. Other states — including New Jersey and Louisiana — are proposing and enacting similar laws that threaten Americans’ fundamental rights. In Arkansas and Ohio, courts have blocked enforcement of these laws."

https://www.thefire.org/news/lawsuit-utahs-clumsy-attempt-ch... ("Lawsuit: Utah’s clumsy attempt to childproof social media is an unconstitutional mess")

I.e. at least four states' legislatures have passed laws like this, federal injunctions have paused two. Two others (Utah, Louisiana) aren't in effect yet.

insane_dreamer
0 replies
1d9h

I'm not sure 1st amendment legally applies to minors since they are legally under their parent's jurisdiction.

dpkirchner
0 replies
1d9h

There's no downside to passing unconstitutional bills. They just need to become law and remain in effect long enough to be useful in electoral campaigns. When the courts strike the law down, they'll be painted as activist justices and yadda yadda we've seen all this before.

2OEH8eoCRo0
0 replies
1d9h

I can think of a few reasons. This doesn't prevent children from criticizing the government. The 1st amendment doesn't guarantee a platform. Children aren't legally the same as adults.

akerl_
29 replies
1d10h

This makes me think of how, as a child, every site asked “are you over 13”, and I diligently clicked “yes”. Some more clever sites asked for my birth year… forcing me to do the arduous work of taking the current year and subtracting 14.

Though I suppose the real plan here is to pass the law and then have the government selectively prosecute social media companies for having users under 16.

ok123456
5 replies
1d4h

A small transaction would cover 99% of cases (e.g., pay a dollar that's immediately refunded). It would stop kids from casually creating accounts. The kids who can do this are already precocious enough to bypass any other verification steps you could come up with.

Maybe if they use a profile pic that you algorithmically determine is someone underage, you could do some additional checks. The smart ones would learn not to utilize a profile pic of themselves, which would ultimately be better.

rndmnflnce
2 replies
1d2h

You could even shave a fraction of a penny off with each transaction and they would never notice!

bennettnate5
1 replies
1d

I understand the point you're trying to make with this (social media will definitely abuse the additional knowledge/opportunities they get by having compulsory credit card info), but chargebacks are actually a pretty effective incentive against this. Given that chargeback fees are ~$20-$100 per incident, you'd only need 5% or less of users calling out the social media site's false charge for that company to be netting a loss.

I would relish the opportunity to cost Facebook $20 because they gave back a couple cents less than they should have.
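A back-of-envelope check of that claim (the skim amount, fee, and dispute rate below are illustrative assumptions, not data):

```python
# Hypothetical numbers: skim one cent per user; chargeback fee at the
# low end of the ~$20-$100 range quoted above.
users = 1_000_000
skim_per_user = 0.01        # dollars quietly kept per user
chargeback_fee = 20.00      # dollars the company pays per disputed charge
dispute_rate = 0.05         # 5% of users notice and charge back

gain = users * skim_per_user                  # revenue from the skim
loss = users * dispute_rate * chargeback_fee  # chargeback fees paid
break_even = skim_per_user / chargeback_fee   # dispute rate where gain == loss

print(f"gain=${gain:,.0f}  loss=${loss:,.0f}  break-even rate={break_even:.2%}")
```

At a 5% dispute rate the fees swamp the skim by 100x; in this toy setup the break-even dispute rate is only 0.05%.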

yuliyp
0 replies
17h10m

I think you completely missed the point of the post you replied to (a reference to the plot of the movie Office Space).

zamadatix
1 replies
22h48m

I wonder if it'd really cover anything remotely close to 99% of cases. Even if 100% of parents knew about it and watched their credit cards closely enough to notice a $1 refunded transaction, it just takes one friend in high school with a credit card to sign up all their little brother's friends. It might even cause more credit cards to be shared around than it stops kids from reaching the sites they want.

Then there'd be even more unintended consequences. Instead of sites you don't want kids creating accounts on, you'd have sites selling 5 minutes of ads to create an account for them, or increasingly shady stuff. Preventing that kind of site is the same problem as the original issue.

throwup238
0 replies
20h19m

Reminds me of a scene in the South Park episode The Scoots where they needed a credit card for those e-scooters.

All the kids had memorized several of their parents' CC numbers, including backups in case someone didn't accept AmEx.

tnbp
3 replies
1d9h

  Some more clever sites asked for my birth year… forcing me to do the arduous work of taking the current year and subtracting 14.
But why? You could have just picked a year that worked and stuck with it. Obviously, there's no way of telling which year works, but you could have brute-forced that just once.

m2fkxy
1 replies
1d9h

probably the thrill of living on the edge.

shermantanktop
0 replies
1d7h

Arithmetic under stress is a helluva drug.

justinclift
0 replies
1d9h

Obviously, there's no way of telling which year works

Um, it's simple maths. Guessing you're meaning something else though?

ratg13
3 replies
1d9h

Just a guess .. but I get the feeling the 'law' is more to get this in the mind of the parents.

It gives parents tools and guidelines that can help them direct their children.

Whether this is a good approach or not, is a whole other argument.

Consultant32452
1 replies
1d9h

It also gives the parents an excuse to limit their children.

"I'm not being mean, it's the law."

mrguyorama
0 replies
1d2h

Anyone who turned on a dome light in a car during the night knows you don't need an actual law on the books to do that

akerl_
0 replies
1d7h

The legislature is claiming out loud that the law is because parents don’t have the ability to do this on their own.

AlecSchueler
3 replies
1d8h

I don't think enforcement actually needs to be very tightly controlled. The barriers that are put in place like the one you describe are already enough to create a social milieu where parents and kids will think twice about these things and understand that there is a recognised harm potential.

There's nothing stopping you from pouring your youngsters a glass of wine with dinner, but as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.

WarOnPrivacy
1 replies
1d8h

as a society we've made the dangers of alcohol and similar things so well understood that no parent wants to.

Unfortunately, as a society, we have a much harder time grasping social media threat data. I suppose some of that is due to how news orgs consistently+bizarrely+hugely overstate the actual harms in the data.

https://www.techdirt.com/2024/01/08/leading-save-the-kids-ad...

AlecSchueler
0 replies
1d8h

Oh totally! What I'm saying is that laws like this, even when not enforceable per se, can help move society towards that understanding.

aidenn0
0 replies
1d

Judging by how many of my kid's friends under 13 have various accounts already, I'm not sure that's the case.

wrsh07
2 replies
1d4h

Lol I just always added 10 to my birth year

umvi
1 replies
1d

Why would you want to make yourself 10 years younger?

postalrat
0 replies
23h32m

Why would you want to be truthful?

idontwantthis
1 replies
1d9h

I think it’s better to give the media companies a requirement and let them figure it out rather than government mandate how they do it.

akerl_
0 replies
1d7h

What if we didn’t give them a mandate and then they didn’t figure it out? The fix for social media is not “wait til 17 to expose kids to it”.

Dalewyn
1 replies
1d9h

have the government selectively prosecute social media companies for having users under 16.

The US government is already legally mandated to prosecute companies known to harbor information, collected online, concerning minors less than 13 years old without consent from their parents or legal guardians.[1]

It's why Youtube blocks comments and doesn't personalize ads on videos published for kids, to pick out a prominent example.

[1]: https://en.wikipedia.org/wiki/Children's_Online_Privacy_Prot...

Obligatory IANAL.

lupire
0 replies
1d6h

Laws are getting stricter. Around the world, there is increasing regulatory requirement for businesses to actively investigate user behavior (tracking!) to identify and exclude underage users who are concealing their age.

rtkwe
0 replies
1d5h

If the year was a scroll box, I was always about a flick and a half old, which often ended up in the '60s, so they had some odd age metrics.

riffic
0 replies
1d7h

it was always appropriate to lie and make up a fake age / date of birth on those signup pages.

dotnet00
0 replies
1d2h

Yeah, similarly I had just gotten used to entering an elder sibling's birthday whenever asked. Adding these arbitrary age restrictions does nothing but make it increasingly obvious to kids how little our leaders and other supporters of these arbitrary age restrictions actually care.

Lerc
0 replies
23h44m

I remember my daughter, at an astonishingly young age, encountering an age-login screen, turning to me and asking "How would they be able to tell?" then merrily telling the system she was 18.

https://fingswotidun.com/images/MelissaQuake3.jpg

lsiunsuex
14 replies
1d9h

As a Floridian and someone in IT - I'm curious how this will be implemented

I can't remember the last time I signed up for a new social network; do they ask age? Is it an ask to Apple / Google to add stronger parental approval? Verify drivers license #?

We heard about this days ago on local news, and I've been struggling to figure out how, short of an "are you 16 years or older?" prompt, this is going to get done, and how you fine someone if it's breached.

WarOnPrivacy
5 replies
1d8h

I'm curious how this will be implemented

The only way to determine age is to compile a database of gov-issued IDs and related data. Which is an unconstitutional barrier to speech. Which is why this will get struck down like each similar law.

The part about ID data eventually being shared with 3rd parties, agencies - and/or leaked - is a bonus.

tzs
4 replies
1d7h

It sounds like you are envisioning age verification that involves just two parties: the user and the site that they need to prove their age to. The user shows the site their government issued ID and the site uses the information on the ID to verify the age.

That would indeed allow the site to compile a database of government-issued IDs and give that information (willfully or via leaks) to third parties.

Those issues can be fixed by using a three party system. The parties are the user, the site that they need to prove their age to, and a site that already has the information from the user's government ID.

Briefly, the user gets a token from the social media site and presents that token, along with their government ID, to the site that already has their ID information; that site signs the token if the user meets the age requirement. The user then presents the signed token back to the social network, which sees that it was signed by the ID-checking site and therefore that the user meets the age requirement.

By using modern cryptographic techniques (blind signatures or zero knowledge proofs) the communication between the user and the third site can be done in a way that keeps the third site from getting any information about which site they are doing the age check for.

With some additional safeguards in the protocol and in what sites are allowed to be the ID checking sites it can even be made so that someone who gets records of both the social media site and the third site can't use timing information to match up social media accounts with verifications and so could work with sites that allow anonymous accounts.
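A sketch of the blind-signature variant of this flow (toy RSA with tiny primes, purely illustrative; a real deployment would use a vetted library with a padded, full-size scheme):

```python
import math
import secrets

# The ID-checking site's toy RSA key pair (tiny primes, demo only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

def random_unit(n):
    """Random value in [2, n) that is invertible mod n."""
    while True:
        x = secrets.randbelow(n - 2) + 2
        if math.gcd(x, n) == 1:
            return x

# 1. User obtains a fresh token from the social media site.
token = random_unit(n)

# 2. User blinds it, so the ID checker never sees the token itself.
r = random_unit(n)
blinded = (token * pow(r, e, n)) % n

# 3. ID checker verifies the user's government ID, then signs blindly.
blind_sig = pow(blinded, d, n)

# 4. User unblinds: this yields an ordinary signature on the token.
signature = (blind_sig * pow(r, -1, n)) % n

# 5. Social media site checks the signature against the public key (n, e).
assert pow(signature, e, n) == token
```

Because the ID checker only ever saw `blinded`, it cannot later match the signed token the social network redeems back to any particular issuance.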

bilsbie
1 replies
1d7h

I don’t think anyone in government is smart enough to enable or allow this.

throwaway-blaze
0 replies
1d4h

That is literally how the age verification for porn works in Louisiana and Virginia among other states.

hn_acker
0 replies
1d4h

With some additional safeguards in the protocol and in what sites are allowed to be the ID checking sites it can even be made so that someone who gets records of both the social media site and the third site can't use timing information to match up social media accounts with verifications and so could work with sites that allow anonymous accounts.

I'm assuming that there will be some kind of way to prevent matching of logged IP addresses between the social media site and the verification site. Is there really a method for preventing matches of timing without requiring the user to bear the burden of requesting tokens from the sites at different times?

As I hinted at in a different comment [1] though, there remains a tradeoff of letting the verification party know how frequently I visit a single type of website vs. avoiding the first problem but needing my ID for multiple types of websites i.e. more of the internet.

[1] https://news.ycombinator.com/item?id=39180203

WarOnPrivacy
0 replies
1d3h

It sounds like you are envisioning age verification that involves just two parties: the user and the site that they need to prove their age to. ... Those issues can be fixed by using a three party system.

Okay. That sounds promising.

However the method of collecting childrens' private data isn't what makes these laws unconstitutional. It's a government erecting broad, restrictive barriers to speech.

ref: https://reason.com/2023/09/19/federal-judge-blocks-californi...

ref: https://www.theverge.com/2023/8/31/23854369/texas-porn-age-v...

ref: https://www.techdirt.com/2023/09/13/you-cant-wish-away-the-1...

ref: http://mediashift.org/2009/01/u-s-supreme-court-finally-kill...

ref: https://netchoice.org/district-court-halts-unconstitutional-...

Utah caught a glimpse of reality and stayed their own unconstitutional law. They seem to be looking for a way to retool it so it won't be quite so trivial to strike down.

ref: https://kslnewsradio.com/2073740/utahs-social-media-child-pr...

thsksbd
3 replies
1d9h

(Putting aside if the law is good or bad and the constitutionality of it.)

Put criminal penalties on the directors if no reasonable attempt is made to keep kids out.

Plus corporate death penalty if they purposely target kids.

Then how they enforce it doesn't really matter as long as there are periodic investigations. The personal risks are too great and the companies will figure it out.

djfdat
1 replies
1d4h

Excessive punishment with arbitrary enforcement? What could go wrong?

a_victorp
0 replies
1d3h

You forgot selective punishment as well

throwaway-blaze
0 replies
1d4h

The FTC already implements a "corporate death penalty" in the form of massive fines if an organization collects data on kids and uses it to target advertising (see COPPA)

AlecSchueler
1 replies
1d9h

Yes, they always ask your date of birth and generally won't allow sign ups for under 13s, it's been that way for almost 20 years.

internet101010
0 replies
1d7h

Yep. Twitch automatically bans any chatter who says that they are under that age.

brightball
0 replies
1d9h

My guess is that it will be most easily enforced in school. After school is another story entirely.

alistairSH
0 replies
1d9h

Yes, some (all?) ask for DOB/age at sign-up.

If I remember correctly, at one time Google even tried to enforce it and there were usability problems with typos and wrong dates and things - there was no verification and no easy way to fix an error. I.e., if a mid-40s adult accidentally entered 1/1/2024, they'd be locked out. And if a kid entered 1/1/1977, they'd have an account (but no way to correct that date when they eventually turned 18).

nerdjon
13 replies
1d4h

I feel like I can almost guarantee that this bill has nothing to do with protecting children and has more to do with brainwashing children and restricting their access to opposing viewpoints, especially given this is Florida.

That being said, I am not strictly opposed to a bill like this. But 16 is way too old. Somewhere in the 10-to-13 range would likely be fine, since most platforms don't allow under-13s anyway. But then, if they all block under-13s, what is the point of the bill?

mrangle
6 replies
1d3h

Restricting social media use is tantamount to brainwashing? I don't see the connection.

As for restricting access to "opposing" (opposition to what?) viewpoints, what children can be exposed to has long been restricted.

But since there isn't a syllabus for what children will be presented on social media, I don't see the viewpoint restriction angle either.

In fact, that position is illogical to the point that it raises the question of whether or not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of. Under the radar of supervision.

Going only on my experience with social media, a valid and more plausible reason for this restriction would be that social media seeks to optimize users' feeds for engagement, in a manner that "hacks" psychology in a way that makes it difficult for even adults to disengage. Given that minors do not have fully developed brains, the ability to disengage may be even more hindered.

Television programming has long sought this goal as well, and with some success. While that use isn't restricted, there is theoretically a red line. Florida may see it in social media use.

nerdjon
3 replies
1d3h

opposition to what?

This is coming from the state that is trying to ban books based on some backwards concern of a white kid feeling bad. (for the record I am white).

I don't care what side of politics you are on... you know what opposition I am talking about. Whether you are for or against it.

not people concerned with it have an agenda to expose kids to "viewpoints" that their parents would disapprove of.

Yes! Because otherwise the parents are brainwashing their kids into their viewpoint, not allowing them to see the real world.

This isn't a hard concept to understand here.

I mean, I am liberal and atheist. But even I have wondered, if I ever have kids, whether not exposing them to the choice of religion is brainwashing of its own. I may try to justify it with the harm that religion has caused, but I am still denying my kid another perspective that is different from my own.

Edit:

To clarify: if this were a state that was not actively removing opposing viewpoints from its libraries and teaching, then I might buy that they actually care about their kids. But it's not; it's Florida, the beacon of being scared of its kids knowing anything about the real world and daring to have compassion for someone different from them.

evantbyrne
2 replies
1d2h

I don't mean to turn this into a religious discussion, but adding onto what you are saying here with a personal anecdote: having access to the internet allowed me to see viewpoints besides the religious right-wing perspective I had been conditioned with (for lack of a better term), but the internet did not start the process of leaving religion. I made the decision to seek other perspectives and did so through many avenues. Blocking teens from generally interacting with other people online certainly looks like a hamfisted attempt at ideological preservation. It won't work.

nerdjon
0 replies
1d2h

If you don't mind me asking what was the thing that prompted it? Tbh I was in a similar boat growing up and for me it was the internet that really first exposed me to things because I grew up in a very conservative area. I mean part of it for me is also being gay and at that time being pushed out in a way so I was also seeking that.

I consider myself very fortunate that I had gone to talk to the school guidance counselor and she was able to get me a book to help as well.

But Florida in particular is going multiple routes which is why I said what I said. Some kids will still seek it out, but some may just not be exposed to other viewpoints if they are more sheltered or live in a certain type of community.

And I know that this really isn't going to work, especially in today's age. But there is also something unique about seeing someone else's life that is different from yours (admittedly through the filtered lens of social media) vs just seeing it on tv/movies or even the news.

mrangle
0 replies
6h59m

There's nothing hamfisted about monitoring what other adults say to teens (and younger) when alone. It falls under basic supervision of minors, in order to prevent abuse.

Including abuse that abusers try to shoehorn in under the guise of noble cause. For example, like sexual grooming, cultic induction, political radicalization, etc.

Anything that anyone can say to a child, can be said to an adult. I mean that in two fashions.

The first is that it can be said in the presence of a guardian. If it can't be, that's a red flag for abusive and grooming behavior. Children have what is known as "guardians" for a reason. They are charged with making decisions for the child until that child is an adult. Which includes making decisions about steering them around abusive adults. Trying to circumvent that firewall is red flag behavior. Everyone who aims to do this should know how they are framing themselves. If an adult has something to say to children that is outside of the known school curriculum, then there is zero reason that it can't be said to their guardian at the same time.

The second fashion that I mean the assertion that "anything that anyone can say to a child, can be said to an adult" is the following:

If someone believes that what a parent teaches a child is misguided, then there is no reason whatsoever that their alternate presentation of facts cannot wait until the child is an adult, assuming that their guardian wants to shield them from that alternative view.

Needing to present alternative views to a child, instead of to an adult who is more capable of weighing what they say, falls in the same category of suspicious red flag behavior.

You may have appreciated the alternative views that you sought out as a child, but the truth is that you circumvented your parent's guardianship. And if your current views are valid or more valid, then they would have remained so once presented to you as an adult.

netsharc
0 replies
1d1h

How TikTok users trolled the Trump campaign: https://www.cbsnews.com/news/trump-rally-tiktok-crowds-tulsa...

Well, true, TikTok probably has more negatives than positives, but I have a feeling the American Talibans[1] in power don't like teens organizing, and where do they organize? On social media...

[1] Yes this is an apt comparison. Suppression of opposing viewpoints, growing voter suppression and not even accepting results of democratic elections, and then the whole anti-Abortion movement.

Vegenoid
0 replies
23h23m

Restricting social media use is tantamount to brainwashing? I don't see the connection.

The idea is that social media exposes kids to viewpoints that they wouldn't otherwise be exposed to, so parents who want their kids to be a certain way would not want this, as they cannot easily control what viewpoints their kids are exposed to online.

Of course, every parent wants their kid to be a certain way, whether or not this is negative is dependent on how narrow that certain way is. The same applies to restricting what kids are exposed to: it is good to restrict exposure to some things, but too much restriction becomes bad.

The Florida legislature has recently been restricting the education system's ability to talk about gender and race, and pushing for more Christianity in schools. This makes some people feel there is an implied extension to the apparent "This is to protect kids" message: This is to protect kids (by making them conservative and Christian)

holoduke
2 replies
1d3h

It should be categorized in the same way as gambling. It's addictive and useless in any form. The whole world would be better off without any social media, including the most antisocial people.

nerdjon
0 replies
1d3h

I disagree, what social media has turned into thanks to algorithms and engagement is a problem.

But in its purest form Social media isn't a bad thing and is a good way to actually keep in contact with friends. Also a good way to keep up on events happening around the world without relying on the news for everything.

RoyalHenOil
0 replies
1d

Do you realize that this is a social media site?

dylan604
2 replies
1d3h

what is the point of the bill?

This is a valid question for pretty much all legislation. It lets the congress critters toot their horns about doing something, for those who only pay attention to news bites, while doing no harm by doing nothing.

throwup238
0 replies
1d2h

I feel like the Australian TV show Utopia should be mandatory viewing for anyone who wants to understand government, even though it is ostensibly a comedy.

nerdjon
0 replies
1d3h

I really hate how right you are. And that meaningless thing will be all over ads, or "your opponent voted against it" (since it was meaningless and shouldn't be on the books) turns into an attack on them.

soared
10 replies
1d4h

Technical implementation looks like a good use case for this, morals/etc aside -

https://developers.google.com/privacy-sandbox/protections/pr...

Private State Tokens enable trust in a user's authenticity to be conveyed from one context to another, to help sites combat fraud and distinguish bots from real humans—without passive tracking.

An issuer website can issue tokens to the web browser of a user who shows that they're trustworthy, for example through continued account usage, by completing a transaction, or by getting an acceptable reCAPTCHA score. A redeemer website can confirm that a user is not fake by checking if they have tokens from an issuer the redeemer trusts, and then redeeming tokens as necessary. Private State Tokens are encrypted, so it isn't possible to identify an individual or connect trusted and untrusted instances to discover user identity.

solarpunk
8 replies
1d3h

You've posted this in a few threads, but I don't think I understand the scenario it would be used in.

every user of social media in florida now has to visit a third party (who?) that sets a cookie (private state token?) on their browser that verifies their age?

soared
7 replies
1d2h

Correct - ISP requires you to visit Florida.gov (or realistically a company the government trusted to set up verification) to set your token if you’re an adult. Then each social media site checks whether a visitor is from Florida, and then if they have a valid token. If valid, load like normal. If not valid, don’t load the site.

Tadpole9181
6 replies
1d1h

And now the state of Florida has a receipt of every website you ever visit. That will surely never be an issue when the Governor's private law enforcement arm looks through it or the inevitable data leak happens.

soared
5 replies
1d

The intention of the API is for that to not be possible.

The privacy of this API relies on that fact: the issuer is unable to correlate its issuances on one site with redemptions on another site. If the issuer gives out N tokens each to M users, and later receives up to N*M requests for redemption on various sites, the issuer can't correlate those redemption requests to any user identity (unless M = 1). It learns only aggregate information about which sites users visit.

https://github.com/WICG/trust-token-api?tab=readme-ov-file#p...
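The "can't correlate" property can be illustrated with toy blind-signature math (tiny RSA parameters, demo only, not the actual Private State Token construction): from the issuer's side, every blinded value it signed at issuance is mathematically consistent with every token it later sees redeemed, so no pairing stands out.

```python
import math
import secrets

# Toy issuer RSA key (tiny primes, illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def random_unit(n):
    """Random value in [2, n) that is invertible mod n."""
    while True:
        x = secrets.randbelow(n - 2) + 2
        if math.gcd(x, n) == 1:
            return x

# Two users blind their tokens with independent random factors.
tokens = [random_unit(n) for _ in range(2)]
blinds = [random_unit(n) for _ in range(2)]
blinded = [(t * pow(r, e, n)) % n for t, r in zip(tokens, blinds)]

# Issuer's view at redemption: for EVERY (blinded value, token) pair there
# exists some blinding factor r' linking them, so all pairings are equally
# plausible and nothing reveals which user redeemed which token.
for b in blinded:
    for t in tokens:
        r_candidate = pow((b * pow(t, -1, n)) % n, d, n)  # an e-th root
        assert (t * pow(r_candidate, e, n)) % n == b
```

Note the caveat raised later in the thread: with a single user (M = 1), or per-client signing keys, this argument collapses.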

solarpunk
2 replies
23h16m

would this mean if the issuer experiences an outage you'd be unable to access sites?

soared
0 replies
19h48m

I’m not sure tbh

Tadpole9181
0 replies
14h32m

Yes. Also if you run out of issued tokens.

Tadpole9181
1 replies
14h33m

Someone has to hand the browser the token. And that token has to be validated by someone's backend. You now have an issuer with knowledge of who a token belongs to, and a visited site with a record of where they were. They go over this on that very page:

(unless M = 1)...

If the server uses different values for their private keys for different clients, they can de-anonymize clients at redemption time and break the unlinkability property...

If the issuer is able to use network-level fingerprinting or other side-channels to associate a browser at redemption time with the same browser at token issuance time, privacy is lost.

This is why Mozilla rejects the proposal. We just have to trust issuers to be good and then trust that neither issuers nor websites will "accidentally" log these tokens where a data leak creates a papertrail to real-world identities.

soared
0 replies
8h55m

It would be pretty simple to determine if tokens are unique per person - I agree that for many of the use cases listed in their documentation it's not amazing, but with specific government oversight and watchdogs for the specific Florida use case I think it technically makes sense. Morally, still not a fan.

jvanderbot
0 replies
1d4h

Yep - one of those "it's possible but do we want this" situations. Something feels a bit slimy about government-approved browser tokens. Like,

"We're sorry, your revocation appeal is taking longer than expected due to ongoing unrest in your area - please refrain from using internet enabled services like ordering food, texting friends, uploading livestream videos of police, giving legal advice, finding directions to your employer - a nonprofit legal representation service, or contacting high-volume providers like the ACLU. Have a nice day!"

But it could just be "Please execute three pledges of allegiance to unlock pornhub"

captainmuon
9 replies
1d8h

I guess for me it depends on what the law considers "social media".

Is something like the bulletin boards we used to have around the late 90s/early 2000s social media? What about chat rooms? Local social web sites for the school or your city? I think a lot of these things can even be beneficial, if I think about my own experiences as a somewhat introverted teenager.

And what about things like Netflix, YouTube, and podcasts? They can be just as harmful as TikTok and Instagram. Especially on YouTube you have a lot of similar content.

I've found accounts that claim to be official accounts of children's shows - maybe they even are - and which are full of nonsensical videos, just randomly cut together action scenes of multiple episodes. It's like crack for children. Of course YouTube doesn't do anything, they want you to pay for YouTube kids. And the rights holders want you to buy the content, so they leave the poor quality stuff up.

The thing is, exploitative content is always going to be created as long as there are incentives to do so. You can ban stuff, but it's whack-a-mole, and you are going to kill a lot of interesting stuff as collateral damage. The alternative is much harder, change the incentives so we can keep our cool technology and people are not awarded for making harmful stuff with it. But that would require economic and political changes, and people don't like to think about it.

scythe
8 replies
1d7h

I guess for me it depends on what the law considers "social media".

It's a bill written by the Florida House of Representatives, so there's a definition there. Mind you, it's the Florida House, which has put out some extremely bad laws in its current session -- from "Parental Rights in Education" to the Disney speech retaliation. But given that this is a less ostensibly partisan issue, there are reasons for hope.

The definition seems narrowly tailored. I think that part (d)1d is a questionable choice, since most social media platforms will probably argue that they are not really "designed" to be addictive (for various definitions of "designed" and "addictive"). It appears that specific exemptions were made for YouTube, Craigslist and LinkedIn (without mentioning those companies by name), and algorithmic content selection is part of the definition. This is one of the better versions of this law I could imagine being written by a state legislature, though it isn't without its faults. It's nice to see my home state in the news for something good for once.

I agree that YouTube is a particularly difficult case. But part of the problem comes from using it as a digital pacifier, rather than peer pressure. There's no particular reason why the technology market should produce a free stream of child-appropriate videos. Ad-supported media has its ups and downs, but when the targets of those ads are young children, it's much harder to defend. And parents have more control over the behavior of their 4-year-olds than their 14-year-olds.

Here's the definition:

(d) "Social media platform:"

1. Means an online forum, website, or application offered by an entity that does all of the following:

a. Allows the social media platform to track the activity of the account holder.

b. Allows an account holder to upload content or view the content or activity of other account holders.

c. Allows an account holder to interact with or track other account holders.

d. Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.

e. Allows the utilization of information derived from the social media platform's tracking of the activity of an account holder to control or target at least part of the content offered to the account holder.

2. Does not include an online service, website, or application where the predominant or exclusive function is:

a. Electronic mail.

b. Direct messaging consisting of text, photos, or videos that are sent between devices by electronic means where messages are shared between the sender and the recipient only, visible to the sender and the recipient, and are not posted publicly.

c. A streaming service that provides only licensed media in a continuous flow from the service, website, or application to the end user and does not obtain a license to the media from a user or account holder by agreement to its terms of service.

d. News, sports, entertainment, or other content that is preselected by the provider and not user generated, and any chat, comment, or interactive functionality that is provided incidental to, directly related to, or dependent upon provision of the content.

e. Online shopping or e-commerce, if the interaction with other users or account holders is generally limited to the ability to upload a post and comment on reviews or display lists or collections of goods for sale or wish lists, or other functions that are focused on online shopping or e-commerce rather than interaction between users or account holders.

f. Interactive gaming, virtual gaming, or an online service that allows the creation and uploading of content for the purpose of interactive gaming, edutainment, or associated entertainment, and the communication related to that content.

g. Photo editing that has an associated photo hosting service, if the interaction with other users or account holders is generally limited to liking or commenting.

h. A professional creative network for showcasing and discovering artistic content, if the content is required to be non-pornographic.

i. Single-purpose community groups for public safety if the interaction with other users or account holders is generally limited to that single purpose and the community group has guidelines or policies against illegal content.

j. To provide career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.

k. Business to business software.

l. A teleconferencing or videoconferencing service that allows reception and transmission of audio and video signals for real time communication.

m. Shared document collaboration.

n. Cloud computing services, which may include cloud storage.

o. To provide access to or interact with data visualization platforms, libraries, or hubs.

p. To permit comments on a digital news website, if the news content is posted only by the provider of the digital news website.

q. To provide or obtain technical support for a platform, product, or service.

r. Academic, scholarly, or genealogical research where the majority of the content that is posted or created is posted or created by the provider of the online service, website, or application and the ability to chat, comment, or interact with other users is directly related to the provider's content.

s. A classified ad service that only permits the sale of goods and prohibits the solicitation of personal services or that is used by and under the direction of an educational entity, including:

(I) A learning management system;

(II) A student engagement program; and

(III) A subject or skill-specific program.
jcranmer
4 replies
1d4h

The definition seems narrowly tailored

The fact that there are well over a dozen exceptions carved out strongly suggests that the definition is anything but narrowly tailored, and that the authors of the bill preferred to add exceptions for everyone who objected rather than rethink their broad definitions.

1a-c will be trivially satisfied by anything that "has user accounts" and "allow users to comment". 1e is clearly meant to cover "algorithmic" recommendations, but it's worded so broadly that a feature that includes "threads you've commented on" would satisfy this prong. 1d is problematic; it can be interpreted so narrowly that nothing applies, or so broadly that everything applies. IANAL, but I think you'd have a decent shot of going after this for unconstitutionally vague on this prong for sure.

Discounting 1d, this means that virtually every website in existence qualifies as social media sites, at least before you start applying exceptions. Not just Facebook or Twitter, but things like Twitch, Discord, Paradox web forums, Usenet, an MMO game, even news sites and Wikipedia are going to qualify as social media platforms.

Actually, given that it's not covered by any of the exceptions, Wikipedia is a social media platform according to Florida, and I guess would therefore be illegal for kids to use. Even more hilariously, Blackboard (the software I had to use in school for all the online stuff at school) qualifies as a social media platform that would be illegal for kids to use.

scythe
2 replies
1d3h

Discounting 1d, this means that virtually every website in existence qualifies as social media sites

Most websites would not satisfy 1e. Hacker News, for example. Traditional forums do not satisfy 1e.

1e is clearly meant to cover "algorithmic" recommendations, but it's worded so broadly that a feature that includes "threads you've commented on" would satisfy this prong.

There could be some haggling over this, but I don't think that reading it in the least reasonable possible way is likely to fly in court. In particular, 1e stipulates "content offered". If "threads you've commented on" is content that the user has to request, e.g. by viewing a profile page or an inbox, that might not be considered "offering". It also says "control or target", but content with a simple bright-line definition like that is probably not controlled and certainly not targeted.

The fact that there are well over a dozen exceptions carved out strongly suggests that the definition is anything but narrowly tailored

at least before you start applying exceptions.

Yes, the definition is excessively broad if you ignore the majority of the text in the definition. This is a circular argument.

Actually, given that it's not covered by any of the exceptions, Wikipedia is a social media platform

Exception 2m, shared document collaboration. But I don't think Wikipedia satisfies 1e either.

Blackboard (the software I had to use in school for all the online stuff at school) qualifies as a social media platform

Probably qualifies under 2s or 2m. I'm not familiar enough with the platform to know if it satisfies 1e.

jcranmer
1 replies
1d2h

Most websites would not satisfy 1e. Hacker News, for example. Traditional forums do not satisfy 1e.

Hacker News has a page that lets me see all of the replies from comments I've posted. Posting is clearly "activity of an account holder", and that means there is "at least part of the content" being "control[led]" by that activity.

If "threads you've commented on" is content that the user has to request, e.g. by viewing a profile page or an inbox, that might not be considered "offering".

You're the one who's criticizing me for "least reasonable possible way", and you're trying to split hairs like this? (FWIW, an example that would qualify under your more restrictive definition is that the downvote button is not shown until you receive enough karma.)

But at the end of the day, it doesn't matter what you think, nor what I think, nor even what a court will think. What matters is how much it will constrain the government using this law to harass a website. And 1e isn't going to provide a barrier for that.

Yes, the definition is excessively broad if you ignore the majority of the text in the definition. This is a circular argument.

No, it's not. The definition boils down to "everything on the internet is social media, except for these categories we've thought of" (or more likely, lobbyists pointed out that the definition included their clients, so the authors threw them into the list of exceptions). That the majority of the text of the definition is a list of exceptions doesn't make it a narrow definition; indeed, it just highlights that the original definition sans exceptions is overly broad.

Probably qualifies under 2s or 2m

Definitely not 2s; it's not "a classified ad service". I'm not sure there are any classified ad services that are "used by or under the direction of an educational agency", but that's what you have to be to qualify under 2s. (This makes me think someone did a bad job of splicing in the exception, and the second half of 2s is supposed to be 2t. Just goes to show you the level of quality being displayed in the drafting of this bill, I suppose.)

scythe
0 replies
1d1h

Hacker News has a page that lets me see all of the replies from comments I've posted. Posting is clearly "activity of an account holder", and that means there is "at least part of the content" being "control[led]" by that activity.

It doesn't say "control" by the activity, it says control by the platform. Control is a little bit difficult to define, but one reasonable necessary condition is that if someone is in control of something, then someone else might do it differently. "Show me threads I've commented on" should produce the same result regardless of platform. "Show me a random page" should at least have the same probability distribution. But "my recommendations" is fully under the platform's control.

I grant that trying to interpret "offer" was a bit of a reach on my part. On the other hand, "control" can be interpreted in a pretty reasonable way to imply some form of meaningful choice.

You're the one who's criticizing me

I'm not criticizing you, I'm criticizing your argument. It's important to keep a safe emotional distance in this kind of discussion.

it doesn't matter [...] even what a court will think. What matters is how much it will constrain the government using this law to harass a website. And 1e isn't going to provide a barrier for that.

It certainly does matter what a court will think. Once a couple of precedents are set, it should be possible to identify what legal actions are meritless, and governments bringing frivolous suits may find themselves voted out of office. Unfairly targeted websites can bring a claim of malicious prosecution.

Now, if you don't trust the voters and the courts, that's a different issue, but it's going to affect every law, good or bad. That's just how government works.

That the majority of the text of the definition is a list of exceptions doesn't make it a narrow definition; indeed, it just highlights that the original definition sans exceptions is overly broad.

If we assume that ad targeting and algorithmic content recommendation are profitable, the original definition clearly constrains the ability of sites to make money while offering user accounts for minors. Lobbyists probably don't want profits constrained for their employers, even if the targeting aspects aren't necessary.

But just because it targets features that can be added to every website, it isn't reasonable to say that it targets every website. Most of the Internet functions just fine without needing to create accounts, and when accounts are necessary, they're for buying stuff.

soared
0 replies
1d4h

Agreed - they’re effectively banning most commonly used websites and then carving out exceptions.

ramblenode
0 replies
1d4h

Thanks for posting the details.

f. Interactive gaming, virtual gaming, or an online service

This bill is already out of date. The new generation's social media are games like Roblox. And these are as addictive as the old social media.

Good luck with this whack-a-mole. A comprehensive bill would stop this at the source: kids owning smartphones. But addressing smartphones would upset too many parents and too much business, so it won't get done.

merdaverse
0 replies
9h44m

The problem with this definition is

Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.

Many platforms can argue that they're not engaging in this behavior. Do Mastodon and Lemmy count as addicting? They look like Twitter and Reddit on the surface, but they don't have a sorting algorithm that maximizes for engagement. So would they be included in the definition or not?

And if they don't, what's stopping big companies from claiming the same, since you can't actually see their source code for news feed sorting?

cloverich
0 replies
18h31m

Thank you for sharing all the context; very interesting.

1d does stand out. I can guess what they were going for. I wonder if it could be somehow scoped to gamified or feed-based algorithmic sites. As a random example, Reddit definitely underwent such feed-based boosting in the last few years. I'm constantly getting suggested content that is some form of region-based outrage event, person X doing horrible thing to person Y, etc., and it's nauseating. You click one such thing, it knows, and it just hits you again and again, until eventually you have to get out. Which sucks, because every time I pick up a new interest it's an easy place to find more people who are into it; but you can't get that without the BS.

rpmisms
7 replies
1d7h

This is good. Seriously, the damage it does is massive, kids should not be on Instagram, Snapchat, or Tiktok.

anshumankmr
3 replies
1d7h

What about an Instagram for Kids type thing?

Like there is YouTube for kids,

Make it ad-free (I know this is a pipe dream; at the very least it should only have ads from companies that do not sell gambling apps), add scrolling time limits that actually work (a bare minimum), and use a chronological feed.

ryandvm
0 replies
1d6h

Great idea. We could also give them clove cigarettes until they're old enough to smoke!

rpmisms
0 replies
1d7h

No. I'm not talking about the content itself, I'm talking about the firehose of useless garbage, appropriate or not, that's frying their dopamine receptors.

dartharva
0 replies
1d4h

NO. The base concept of Social Networks by itself is harmful. It does not matter how much you "improve" those things.

ibejoeb
2 replies
1d7h

I have a hunch that you're right, but there are a few other things that coincided with the rise of social media, so it's hard to tell what has driven the change in kids' psyches. Among them is the explosion of pharmaceuticals that are routinely prescribed to kids.

rpmisms
0 replies
1d7h

I think these things are linked. Either way, kids are having serious issues, so let's roll back the clock and see what innovation causes it. Kids were fine before social media, arguably better.

owisd
0 replies
1d7h

The pharmaceuticals thing is unique to the US, but every country has seen a simultaneous rise in kids' mental health issues, so that rules out pharmaceuticals.

brocklobsta
5 replies
1d4h

This is a very complex problem. Its not just social media, but porn and other content that is not intended for young eyes.

One issue I have is with the age verification system. This will either be totally ineffective or overly effective and invasive. I feel legislation is drifting towards the latter with the requirement of ID.

One idea I had is a managed dns blacklist of inappropriate content. The government can have a requirement that a website register their site in this list to operate, otherwise they are subject to litigation for their content. At the same time have isps and network gear support this list in a 1 click type fashion. I have multiple dns blacklists I use at home. I know this may be a little more technical for the parents and guardians, but that is the world we are living in.
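The suffix-matching part of such a registry is simple to sketch. Below is a minimal, hypothetical illustration of the lookup a resolver or home DNS filter could do against a self-registered list; the registry contents and domain names are entirely made up, and a real deployment would live in resolver software rather than application code:

```python
# Hypothetical registry of self-declared adult/inappropriate domains.
# Contents are made up for illustration.
REGISTERED_RESTRICTED_DOMAINS = {"example-adult-site.test", "another-site.test"}

def is_blocked(hostname: str) -> bool:
    """Return True if hostname or any parent domain is in the registry."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c" against the registry,
    # so subdomains of a registered domain are also caught.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in REGISTERED_RESTRICTED_DOMAINS:
            return True
    return False

print(is_blocked("www.example-adult-site.test"))  # True (parent domain registered)
print(is_blocked("news.test"))                    # False
```

This only shows the matching logic; the hard parts the comment raises (who maintains the list, what compels registration, public Wi-Fi) are policy problems, not code.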

Limitations being:

Section 230 - user posts explicit content and the site isn't in the blacklist.

Network scope - This blacklist will have to be added to all networks accessed by children. What about public wifi? coffee shops?

IDK, I love being able to be anonymous online, but I do see the negative effects of social media, porn, and explicit content on our youth. I don't really trust the government to solve this effectively.

cheschire
2 replies
1d4h

What qualifies for the blacklist? It's a moral question. What happens when the blacklist maintainer's morals differ from your own? Sure, in the U.S. it seems fairly uniform that most people do not want children having access to porn. But what about women having access to information about abortion? Or information about suicide? The use of drugs for psychological conditions? Vaccination efficacy?

Really sucks when someone that controls the blacklist decides you're on the moral fringes of society.

brocklobsta
1 replies
1d4h

Agreed. Some are easy to classify, like nude content, but what about a website about war history? That content is simultaneously factual and explicit. This is why I say it's up to the website owner to register for the blacklist. That in itself creates an incentive to reduce the website's surface of liability.

Social media is a hard problem. What is exactly the issue? Is it creating a larger social hierarchy than children can cope with? Is it meeting and interacting with strangers? Is it reinforcing dopaminergic pathways from superficial digital content and approvals?

djfdat
0 replies
9h54m

Even worse, what happens when the deciding party decides that their opposition's gathering sites or news sites should be blacklisted?

kayodelycaon
0 replies
1d2h

I don't even want the blacklist idea floated.

In a few states, you'll immediately see any information about LGBTQ+ people, including mental health resources, on that blacklist. (This has already been the case for decades in various school districts.)

And I'm not even trying to exaggerate. Ohio is working to eliminate any transgender medical treatment from the state. They've already succeeded in making it nearly impossible for minors, and now they are working on preventing adults from receiving hormone treatments.

2OEH8eoCRo0
0 replies
1d4h

I think that online anonymity is overrated (and yes, I'm aware of my username). Social media platforms ought to require traceability and age verification.

bedhead
5 replies
1d9h

Not sure how I feel about this, but I don't hate it, theoretically. I'm sure it's hopeless in practice. It might be a worthwhile experiment if nothing else. But a piece of legislation will never be an adequate substitute for good parenting.

stronglikedan
3 replies
1d9h

I'd like to see more science proving that social media is bad for kids before I see lawmakers enacting laws around the theory. That said, I think 16 is too high an age. Kids go into high school at ~13, so I think that would have been a more reasonable age to consider. The hard truth is that most kids don't seem to have access to "good parenting" by any reasonable standard, and parenting is hard with good parenting being harder (not an excuse).

mbitsnbites
0 replies
1d8h

I'd like to see more science proving that social media is bad for kids

Smoking tobacco was allowed for children for hundreds of years before it was regulated. Today we can hardly fathom how stupid it would be to allow children to smoke. Social media is much more accessible than cigarettes, far more addictive, and rather than messing with your lungs it messes with your brain and personality.

There are many problems with "waiting for the science". One is that it takes many, many decades to get reliable longitudinal studies on things like addiction and how the brain is affected.

There are so many indications that social media is bad for children (including scientific studies), in many different ways. It really should not be controversial to limit use for children. It's not like it is something that they need.

This book, "Smartphone brain: how a brain out of sync with the times can make us stressed, depressed and anxious" by Swedish MD psychiatrist Anders Hansen, brings up one of the aspects (unfortunately I don't know of an English translation): https://www.amazon.se/Sk%C3%A4rmhj%C3%A4rnan-hj%C3%A4rna-str...

Here's a short video with Anders Hansen: https://youtu.be/DwAx2kRwCRI?t=112

Then there's is of course the question of how to limit the use. But that's another issue.

dmm
0 replies
1d9h

I'd like to see more science proving that social media is bad for kids before I see lawmakers enacting laws around the theory

Is there science proving that porn is harmful for children? If not, do you see that as an argument for legalizing it?[0]

The first restrictions on tobacco purchases for children were in the 1880s, long before the science was settled on the harms tobacco use causes.

Science is slow, contentious, and produces limited results[1]. If we can only ban things that are scientifically proven to be harmful, what is stopping TikTok from slightly modifying their app and rereleasing it?

I can easily buy products that are BPA free, but that just means clever chemists have come up with even worse plasticizers like BPS to use as substitutes.

And software can be adapted way faster than chemistry.

[0] I don't. [1] It's definitely still worth studying.

brightball
0 replies
1d9h

I’d imagine that 16 is the age when most kids finally get a state ID.

M4v3R
0 replies
1d9h

Definitely not as a substitute, but it might be something that helps push more parents to consider preventing their children from using social media. It’s much easier for a parent to explain to the child that the reason they’re not allowed to use it is because it’s illegal, instead of trying to explain how social media negatively affects their brains.

tokai
4 replies
1d8h

It's impressive how people here, who should know better, are cheering this on just because they are parents themselves. Even disregarding the legality, it's not going to work technically. The perceived safety of one's kids really crosses some wires in parents.

syrgian
1 replies
1d6h

It would definitely work if a parent could report the accounts of all the kids involved and their tiktok/whatever accounts got deleted + their phone numbers and emails got block-listed for the service.

Even more if the system worked in a centralized way: this email/phone is used exclusively by a kid, so now all on-boarded companies must delete their accounts and not allow them to register again.

voidwtf
0 replies
1d4h

Herein lies the rub: every time I've seen this happen in the past, companies just applied blanket bans to accounts, sometimes retroactively for accounts that were illegal at the time of their creation, regardless of the user's current age. Both Google and Twitter did this to users who created accounts before they were 13 but were adults by then.

If you're going to introduce legislation like this, then it needs to include provisions that it will not permanently bar those users access to the services once they are of age. I manage my children's social media interaction (near 0 with the exception of YouTube), if their accounts get permanently disabled that would be unfortunate in the future when they are old enough.

semiquaver
0 replies
1d3h

I’m a parent and think this is bullshit. It won’t work and it isn’t designed to work, it’s just political posturing destined to be struck down by courts.

efields
0 replies
1d7h

It’s good to have this conversation. I don’t think anyone here wants the bill enacted as-is (did anyone actually read it? I didn’t, and I don’t live in FL).

Of course this is technically unenforceable. There will always be workarounds. You could smoke as an 11 year old if you were determined enough.

But we need more pushback and dialogue on social media’s role in the common discourse. For a while, nobody talked about smoking being bad and it became normalized while killing a lot of people. Seatbelts. Dumping waste in the river… most people go with the flow of common consensus until that consensus is scrutinized.

tempestn
4 replies
1d3h

16 seems ridiculously old for such a ban. Especially if sites like YouTube count as social media (and I can't imagine how it wouldn't, given most of the content is identical between the short video platforms).

ChrisLTD
3 replies
1d2h

Why does 16 seem ridiculously old?

tempestn
2 replies
1d1h

Because 15 year-olds are the prime audience for a lot of social media. And while much of it is a waste of time at best or actively harmful at worst, a lot of it is also engaging or even educational content that high school students should be allowed to access. And will access regardless; it'll just be a hassle for them or their parents to have to jump through hoops to get to it.

Izkata
1 replies
1d

or their parents

There doesn't appear to be a parental approval exception, it's just a straight ban.

So kids will apparently be responsible enough to drive a car but not make a Facebook account.

tempestn
0 replies
1d

Yes, I mean the parents will help kids circumvent the ban.

spdustin
4 replies
1d2h

I truly don't understand how the party of small government and parental rights of determination can justify passing this.

Perhaps more relevant to HN, it seems to me that any solution here would be a dramatic loss of agency, privacy, and anonymity.

djfdat
2 replies
9h46m

They've never been the party of small government and parental rights. That's just their sanctimonious tagline they use when convenient.

mlrtime
1 replies
7h47m

The reddit /r/politics trolls are entering HN. If you want to criticize something, better to understand it first.

Small Federal Government, more local government control. Let local communities decide for themselves what is important. Now I know your first thought is to ignore what I'm saying to find examples that prove this wrong. But this is the conservative approach in general.

NY and CA are never going to pass a bill like this, if having your kids on TikTok is important, move somewhere where you are with like minded people.

reducesuffering
0 replies
6h21m

more local government control. Let local communities decide for themselves what is important.

Do you hear yourself, when commenting on a bill that is a ban in an area of 21 million people?

Hey, I think this bill is bad and should be legislated at the county level. I'm now more conservative than you.

Smeevy
0 replies
22h50m

I'm in agreement with you. I'm amazed at how many people who do what I do can look at this myopic, sanctimonious rage bait and immediately start figuring out how to implement it with an encrypted token exchange.

This is still awful no matter how much crypto you throw at it. The end result of solving this little puzzle of a problem is that everything is worse shortly afterwards. Congratulations to them, I guess.

newsclues
4 replies
1d9h

Government needs modern digital first ID/authentication services.

alistairSH
2 replies
1d9h

Probably, but that's a tough nut to crack. A significant chunk of the American public is insanely anti-federal ID.

newsclues
1 replies
1d7h

Doesn’t need to be federal, because it’s digital it can be distributed and portable instead of centralized.

If it respects privacy, is secure and convenient and optional, people would love it.

alistairSH
0 replies
1d6h

That's true, however I'm not sure I trust "Big Tech" to self-manage this system. After all, "Big Tech" are the ones that forced us into needing this legislation in the first place. And they don't have a great track record at protecting PII.

The federal government already has all the info it needs to run an ID program.

But, I'd be open to debate on who should do it.

mindslight
0 replies
1d7h

First, we need privacy regulation (eg a US port of the GDPR) that stops the existing widespread abuses of identification and personal data, especially the abuses being facilitated by the current identification systems. Only after this is fixed does it make sense to talk about increasing the technical strength of identification.

geogra4
4 replies
1d4h

I remember when I was 10 years old at a computer camp during the summer at a local college. They had me set up my first email account with hotmail. They all asked us to lie about our age. I think even then they had restrictions that you had to be 13 years old.

But - that was over 25 years ago. The internet was a much different place.

mixmastamyk
2 replies
1d

Fast forward to today. Ours came home with a google account in the 5th grade I think. Something I explicitly did not want. They didn't send a permission slip home like they do for everything else either.

Another teacher around the time had the kids set up on GoodReads. They were under 13 and there was a TOS at the time restricted to 13+. Mostly adults on that site.

Not happy with the school to say the least.

judge2020
1 replies
21h30m

Fast forward to today. Ours came home with a google account in the 5th grade I think. Something I explicitly did not want. They didn't send a permission slip home like they do for everything else either.

Google Workspace accounts, especially those for education[0], have Web & App Activity, as well as Location History, automatically turned off. It's just a tool for schools to get free/cheap email, storage, and classroom tools. For your child under 13 to be able to use it compliant with COPPA[1], your school must have either used some level of blanket consent, or the school didn't bother to actually get the parental consent Google requires.

0: https://edu.google.com/intl/ALL_us/workspace-for-education/e...

1: https://cloud.google.com/security/compliance/coppa

mixmastamyk
0 replies
20h2m

Not good enough. Didn’t want google to know the kid exists. Too late.

CalRobert
0 replies
1d4h

Might be thinking of this? https://en.wikipedia.org/wiki/Children's_Online_Privacy_Prot...

I remember joining ebay (well, auctionweb - aw.com/ebay, IIRC) and it not even being an issue that I was around 14, we mostly trusted each other, and just mailed money orders around. A different time.

bentt
4 replies
1d2h

I am here to report as a parent of children that I have been able to keep them off of social media by not getting them phones, controlling the Internet in the house, and paying close attention.

zamadatix
1 replies
22h30m

My (non-technically inclined) nephew, age 14, managed to get caught with an old Wii U from a friend, using it to browse the net at 3am via the friend's (a neighbor's) Wi-Fi. I don't think paying close attention, as much as it should be done, is really the be-all and end-all way for everyone to know their kid is never on social media or sites they shouldn't be on. Beyond this example, unless you're homeschooling them in remote isolation with regular prison-level searches, there are going to be other ways your kids do things you wouldn't like them to. If you are talking about that kind of setup, I'm more worried about that in society than Facebook.

That doesn't mean parents should not pay attention or give them easy access but I'm not sure the proud proclamation is really solving or responding to the conversation at hand. Most likely, IMO, it's a problem without a single golden solution.

bentt
0 replies
5h18m

My point really is that people get their kids' phones because they think they have to, but that's the root of the problem. We don't prevent our kids from being online - what we prevent them from doing is being online on their own, away from the house, with the internet in their pocket all the time. They each have computers.

munificent
1 replies
1d2h

How has it impacted their social life?

My experience with my kids (middle and high school age) is that online is "where" most kids socialize today and if I don't let my kids go there, I am socially isolating them.

bentt
0 replies
5h20m

We allow my son to be online via his computers. He has a Macbook and a desktop PC. He's on Discord and a Slack. He can iMessage friends via his Macbook.

The thing about a phone is that there's a data plan and it's out of your sight a lot. But to be fair, my son is mostly fine with sticking to these channels for talking to friends. So, it is particular to the kid.

It doesn't hurt that we live in a tight-knit community where he can walk to friends' houses, school, etc.

My daughter is younger and probably will want to get a phone more than he does, when she is old enough.

wan23
3 replies
23h58m

I'd like to see a federal law mandating that any future law or regulation that restricts what children are allowed access to makes the parents solely liable if the children gain access. There should be a standard way to declare pages are age restricted, and the browser on the child's phone or computer should check it. But if the child bypasses that somehow, that's on the parents.
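A "standard way to declare pages are age restricted" already half-exists: the RTA ("Restricted To Adults") label, a fixed string that sites embed in a meta tag and that client-side filters scan for. A toy sketch of the client-side check described above, assuming RTA-style labeling (the sample page markup is made up):

```python
# The RTA label is a fixed, well-known string embedded in page markup
# so that parental-control software can detect it without any registry.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def page_is_age_restricted(html: str) -> bool:
    """A child's browser/filter would refuse pages carrying the label."""
    return RTA_LABEL in html

labeled_page = '<meta name="rating" content="RTA-5042-1996-1400-1577-RTA">'
print(page_is_age_restricted(labeled_page))   # True
print(page_is_age_restricted("<html></html>"))  # False
```

Under the scheme proposed above, liability would then hinge on whether the site emitted the label and whether the parent configured a client that honors it.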

seattle_spring
0 replies
17h52m

So some loon bans Call of Duty for anyone under 18, but 16-year-old Jonny figures out how to log in anyway at his friend's house. You're championing for a way to indict Jonny's parents on federal charges?

If I'm off base, can you clarify exactly what you would like to see instead?

mlrtime
0 replies
7h34m

Let me guess, you don't have children... It's painfully obvious the people commenting here who have vs don't have kids.

The most vocal [and the most absurd ideas] are sure from people whom this will not affect at all as they don't have kids.

djfdat
0 replies
9h50m

So, we don't trust parents to monitor their children's online activity the way the government finds acceptable.

So now we will punish parents for their inability to monitor their children's online activity the way the government finds acceptable?

standardUser
3 replies
22h57m

We've got to stop crafting a special world for children that is disconnected from the adult world. We should be moving in the opposite direction, lowering the voting age for example, allowing toplessness in appropriate places and allowing children in non-sexual adult spaces when with a guardian (like bars). How have we not yet learned that repressing kids is counterproductive?

insamniac
1 replies
21h41m

To what end exactly?

standardUser
0 replies
6h7m

To create well-rounded human beings who don't grow up with weird issues around nudity and drinking, for example. And who feel like they have some skin in the game and are actually part of society, not victims of it.

reactordev
0 replies
22h56m

Religion… it makes certain people think the world is full of bad people. While there are bad people, the vast majority of people are good people. Our policy here is to protect the children from X. Whatever X is at the time. Right now it’s wokeism.

shuntress
3 replies
1d

Everyone here seems to be forgetting the root of this problem:

Algorithms that select content (including ads) need to be treated as publishers rather than platforms.

"facebook.com/users/my_friend/posts_in_cronological_order" is a platform

"facebook.com/my_feed" is a publisher

2OEH8eoCRo0
1 replies
22h23m

Wasn't this how things were before Section 230? If you act as a dumb pipe then you're a provider, and if you curate then you're a publisher.

shuntress
0 replies
4h59m

There wasn't much Internet before Section 230. At that time, it was still kind of an open question.

stavros
0 replies
1d

That doesn't sound like a bad distinction. If you're shuffling the stories around to maximize some metric, you're no longer an impartial carrier, you're actively biasing what someone sees.

joshuaheard
3 replies
1d7h

I support this. Normally, I think government intervention is bad and parents should be in control. But, as a parent myself, it was hard not to allow my child to have a phone when she kept saying, "Everyone has one. I'll be the only one without a phone". No one wants their child left out or left behind. This will remove that rationale.

tstrimple
1 replies
1d2h

Just to be clear, because parenting is hard you want to legislate how other parents are able to raise their children so that it's easier for you to get the behavior you want? I think all parents should take this approach.

  * Having trouble getting your children to go to church because their friends don't? Let's just legislate mandatory church attendance so that will remove the rationale for kids whose friends don't attend! 

  * Having trouble getting your kids to eat healthy because all their friends get to eat and drink whatever they want? Let's outlaw sodas so kids won't have to feel peer pressure!

  * You think your kid is playing too many video games? Why not just pass legislation that restricts all video game usage so your kid doesn't feel left out! 
Telling your kids no is part of being a parent. Explaining to your children why they aren't allowed to do some things that other kids do is part of being a parent. It seems we have an abundance of parents who don't want to actually be a parent and would rather legislation was passed so they don't have to say no to their kids.

joshuaheard
0 replies
1d1h

It's not my deficiencies as a parent that make me support this law. It's the need for a reverse network effect. That's how social media works. If everyone else has it, your kid wants it. If no one else has it, your kid doesn't want it. Social media has been found harmful to children, like smoking or alcohol. For many reasons, it should be limited for children.

frumper
0 replies
1d6h

This wouldn't stop your kid from wanting a phone. They'll all still want phones for games, chatting, cameras, videos, comment sections, forums. Of course many parents won't care if their kids sign up to social media sites through some workaround, so the pressure will still be there to join social media sites.

dvngnt_
3 replies
1d7h

i don't know why people are cheering for this?

the government shouldn't take the place of the parents.

I think the real reason is they don't like how tiktok is creating activists out of the next generation of voters.

we should demand better from big tech but banning does nothing to improve platforms.

so many adults here were on forums, 4chan, mmos, ventrilo growing up, even Facebook which was useful keeping in touch as a kid.

causi
1 replies
1d7h

All those things were horrible for us. I can't imagine the nightmare of combining that with the use of my real name. Like yeah, I want a free for all, but how bad does something have to be before we say no way?

tstrimple
0 replies
1d3h

All those things were horrible for us.

Except that they weren't. My life would actually be substantially worse now without most of those apps / websites. I was just another loser growing up in a rural trailer park with no real prospects before I got interested in programming and taught myself employable skills in those online forums and chat apps. It's insane to me that anyone could call themselves a part of a "hacker" community and complain about kids having access to information and wanting legislation to restrict it.

mminer237
0 replies
1d6h

I agree it's the parents' responsibility, but I don't know that the government can't prevent things it deems harmful just because parents are okay with it. It also often makes things easier for parents by being able to require service providers to work with parents and limit their sales to children.

I'm fine with the government enforcing curfews, smoking, and drinking laws on children even if their parents disagree.

I don't think any of the things you listed, except probably Facebook would qualify as a social medium under this law. I'm honestly very thankful more manipulative social media didn't exist when I was a teen. My life probably would have been better if I didn't have access to as many as I did even.

tapoxi
2 replies
1d7h

Is Hacker News considered social media?

tzs
0 replies
1d7h

For this bill, no. It does not meet conditions (d) and (e) of the 5 conditions a site must meet in order to be a social media platform. I've quoted the relevant part of the latest bill text in this prior comment [1].

[1] https://news.ycombinator.com/item?id=39176380

riffic
0 replies
1d7h

yes

neilv
2 replies
21h26m

What is "social media", and how much are they actually banning communications, speech, association, organizing, political activity, etc. by young people?

Plus it's a backdoor way to promote surveillance tracking of communications for everyone, via the mechanisms likely to be used in practice for excluding those under 16.

jackvalentine
1 replies
21h19m

Definition with inclusions and exclusions is in the bill: https://www.flsenate.gov/Session/Bill/2024/1/BillText/e1/PDF

neilv
0 replies
18h40m

Interesting. To meet their definition of "social media platform" requires that it "Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform."

Also, they have a large list of exclusions. Some of the phrasing sounds like under-16 people are mostly limited to person-to-person messaging, consuming content produced by others, and being able to access certain services that got exclusions.

Being able to speak publicly is sometimes expressly not allowed in these exclusions, but it is at least implied in other exclusions (which I guess might have originated to allowlist specific companies).

Teens who care to observe the law at all (many will not) will quickly realize that they can figuratively drive a truck right through some of those loopholes. (Example: a "photo editing & hosting" service with accounts and likes and comments is pretty much Twitter or Facebook, once an exodus of kids from "social media" starts congregating there. Bonus rebellious vibe, and the freedom/coolness of a place your parents, grandparents, and mean school principal haven't yet discovered.)

If I had an online service that had under-16 among its users who were doing public expression/communication/content, and my business wasn't already clearly excluded, I would be scrambling to lobby for an exclusion. (I would also be discouraging this particular formulation of law, in the public interest, but remiss if I didn't have a backup plan for my immediate business in the case that it passed.)

nanolith
2 replies
22h55m

The bill is pretty broad, with oddly specific carve-outs. I did not think that I would need to pay for a VPN service to avoid weird legislation in my state, but here we are.

The sad thing is that, with the exception of the occasional comment on Slashdot / HN and my LinkedIn profile, I'm barely active on social media anymore. Still, I'm not giving PII to sites that this bill considers "social media" sites, just to prove that I'm over 16. That's absurd.

seattle_spring
1 replies
18h8m

I did not think that I would need to pay for a VPN service to avoid weird legislation in my state

Why not? Were the book bans not a sufficient warning signal?

nanolith
0 replies
16h42m

A warning signal for what? To need a VPN? I was hoping that our current legislators would be unable to spell "Internet", to be honest.

Unfortunately, Florida has gone crazy over the past twenty years. I'd vote with my feet, but this is where my family lives. So, I have little choice but to hunker down and contribute what I can to destabilize the dominant QAnon aligned caucus in the legislature, and hope that in 2026, we get a governor who doesn't play to this nonsense.

manicennui
2 replies
1d7h

Perhaps a better solution is to allow children on social media, but drastically limit how companies are allowed to interact with such accounts. Random ideas: no ads, no messages from people they don't follow, limit ability to follow other accounts in some way.

djfdat
0 replies
1d4h

Can I have that as an adult please?

dartharva
0 replies
1d4h

Children are not so much at risk from falling for online advertisements as they are from the overall detrimental effects of social media in general. Social networks are inherently bad for kids; they are addictive and directly harm children by constantly dosing them with dumb entertainment and shortening their attention spans.

jshaqaw
2 replies
1d9h

Does YouTube count? Yes I think TikTok is largely a hellscape for my daughters around this age. But one of them learns all sorts of crafting projects via YouTube and the other has taught herself an incredible amount on how to draw. Would be a shame to throw away access to resources like this with the bath water.

pavlov
0 replies
1d9h

The law has a list of applications that are specifically excepted. User tzs posted it in this thread.

It seems like YouTube would be covered by the ban because it doesn't fall under any of the exceptions. The closest one is this:

"A streaming service that provides only licensed media in a continuous flow from the service, website, or application to the end user and does not obtain a license to the media from a user or account holder by agreement to its terms of service."

But of course YouTube does "obtain a license to the media from a user or account holder", so it's not covered by this exception.

AlecSchueler
0 replies
1d9h

Does she need her own account in her name where she can upload though?

insane_dreamer
2 replies
1d9h

As the parent of an 11 year old, I wholeheartedly agree. The science is pretty clear that social media has had a very detrimental effect on teens' mental health. We should treat it like we do other substances that are harmful for teens; once they're older they are better able to make wiser decisions as to if, when and how they want to consume social media.

It may be impossible to enforce outside school, but so is the 13-year old limit on opening accounts (my kid's classmates all have accounts; they just lie about their age). But that's not a reason not to have it on the books, as it sets a social standard, and more importantly puts pressure on social media companies.

jeffbee
0 replies
1d7h

The evidence is not all that solid. The most demonstrable link is between use of portable devices at bedtime and poor sleep quality. Everything else has mixed evidence.

coolbreezetft24
0 replies
1d4h

Seems like it does even more harm to adults; so much of the "content" is just a cesspool of conspiracies and vitriol.

financypants
2 replies
1d1h

This is such a silly issue on a governmental level. Shouldn't the parents, who spend more time around their children than the Florida House of Representatives does, worry about and monitor their own children?

it_citizen
0 replies
1d1h

Should we apply the same reasoning for banning kids from buying alcohol or firearms then?

arijun
0 replies
1d1h

Off the top of my head, I can think of two reasons why it might be preferable to have the government intervene.

1) We are happy to have the government intervene in other cases for the sake of children; I would be pretty upset at any politician who espoused removing age restrictions on cigarettes. I don’t know that social media is as bad but it certainly has some of the addictive properties

2) An argument from the tragedy of the commons: if all kids' social lives currently revolve around social media, unilaterally disallowing one child from using it could result in alienation from their peer group, which might be worse for the kid than social media. A government mandate would remove this issue.

bilsbie
2 replies
1d7h

I don’t know where I saw this idea but instead of banning just force these companies to make their feed algorithms open source.

It would be much less heavy-handed and would be freedom-increasing instead of decreasing.

It would work because there would be enough outrage over seeing nefarious topics being pushed that the companies would refrain.

mindslight
0 replies
1d7h

I agree with where you're coming from, but publicly documenting their feed algorithms (which is what a call for "open source" effectively is) wouldn't change much. What is actually needed are open API access to the data models, so that competitive non-user-hostile clients can flourish.

I believe this would be legally straightforward by regulating based on the longstanding antitrust concept of bundling - just because someone chooses to use Facebook's data hosting product, or their messaging products, does not mean they should also be forced to use any of Facebook's proprietary user interface products.

This would not solve the collective action problem where it's hard for an individual parent(/kid) to individually reject social media, but I also don't see this bill doing much besides making it so that kids have to make sure their fake birthday is three years earlier than the one they already had to make up due to COPPA. Of course the politicians pushing this bill are likely envisioning strict identity verification to stop that, but such blanket totalitarianism should be soundly rejected by all.

Unfortunately the larger problem here is that the digital surveillance industry has been allowed to grow and fester for decades with very few constraints. Now it's gotten so big and its effects so pervasive that none of the general solutions to reigning it in (like similarly, a US GDPR) are apparent to politicians. It's all just lashing out at symptoms.

dartharva
0 replies
1d4h

Is the feed algorithm the only problem harming children? Not the concept of a social network in general, whose entire point is to publicise lives and keep its users stuck to their screens for as much time as possible?

The entire fault of social networks is that it is hampering children's development by keeping them online. Trying to improve those services by making the algorithms better will worsen the situation. You want to make children lose the appeal of social media, not increase it!

tzs
1 replies
1d9h

The social media platforms the bill would target include any site that tracks user activity, allows children to upload content or uses addictive features designed to cause compulsive use.

That does not appear to be correct. It says the law applies if any of 3 conditions hold, but the bill text says all of the conditions must hold (and there are 5, not 3). Here is what the current text says "social media platform" means:

< Means an online forum, website, or application offered by an entity that does all of the following:

< a. Allows the social media platform to track the activity of the account holder.

< b. Allows an account holder to upload content or view the content or activity of other account holders.

< c. Allows an account holder to interact with or track other account holders.

< d. Utilizes addictive, harmful, or deceptive design features, or any other feature that is designed to cause an account holder to have an excessive or compulsive need to use or engage with the social media platform.

< e. Allows the utilization of information derived from the social media platform's tracking of the activity of an account holder to control or target at least part of the content offered to the account holder.

There's also a huge list of exceptions. It says that it:

< Does not include an online service, website, or application where the predominant or exclusive function is:

< a. Electronic mail.

< b. Direct messaging consisting of text, photos, or videos that are sent between devices by electronic means where messages are shared between the sender and the recipient only, visible to the sender and the recipient, and are not posted publicly.

< c. A streaming service that provides only licensed media in a continuous flow from the service, website, or application to the end user and does not obtain a license to the media from a user or account holder by agreement to its terms of service.

< d. News, sports, entertainment, or other content that is preselected by the provider and not user generated, and any chat, comment, or interactive functionality that is provided incidental to, directly related to, or dependent upon provision of the content.

< e. Online shopping or e-commerce, if the interaction with other users or account holders is generally limited to the ability to upload a post and comment on reviews or display lists or collections of goods for sale or wish lists, or other functions that are focused on online shopping or e-commerce rather than interaction between users or account holders.

< f. Interactive gaming, virtual gaming, or an online service, that allows the creation and uploading of content for the purpose of interactive gaming, edutainment, or associated entertainment, and the communication related to that content.

< g. Photo editing that has an associated photo hosting service, if the interaction with other users or account holders is generally limited to liking or commenting.

< h. A professional creative network for showcasing and discovering artistic content, if the content is required to be non-pornographic.

< i. Single-purpose community groups for public safety if the interaction with other users or account holders is generally limited to that single purpose and the community group has guidelines or policies against illegal content.

< j. To provide career development opportunities, including professional networking, job skills, learning certifications, and job posting and application services.

< k. Business to business software.

< l. A teleconferencing or videoconferencing service that allows reception and transmission of audio and video signals for real time communication.

< m. Shared document collaboration.

< n. Cloud computing services, which may include cloud storage and shared document collaboration.

< o. To provide access to or interacting with data visualization platforms, libraries, or hubs.

< p. To permit comments on a digital news website, if the news content is posted only by the provider of the digital news website.

< q. To provide or obtain technical support for a platform, product, or service.

< r. Academic, scholarly, or genealogical research where the majority of the content that is posted or created is posted or created by the provider of the online service, website, or application and the ability to chat, comment, or interact with other users is directly related to the provider's content.

< s. A classified ad service that only permits the sale of goods and prohibits the solicitation of personal services or that is used by and under the direction of an educational entity, including:

< (I) A learning management system;

< (II) A student engagement program; and

< (III) A subject or skill-specific program.

I hope they add 8 more exceptions. I want to see what they do when they run out of letters for labeling the exceptions.

insane_dreamer
0 replies
1d9h

a lot of loopholes there

proc0
1 replies
1d

I'm calling it now, this is like the War on Drugs. Banning it will only create an underground environment with even less oversight, not to mention the burning desire kids have for doing things they should not be doing, which is guaranteed because this is not even remotely enforceable.

seattle_spring
0 replies
17h50m

It's so, so obvious. Also strongly reminiscent of the "satanic panic" and the war on violent video games.

notbeuller
1 replies
22h0m

One possible result of this that sounds dystopian cool - script kiddy kids < 16 spinning up mastodon server instances and creating their own very leaky insecure rolling social networks. Unless it suddenly becomes illegal to run a server without a license.

Buttons840
0 replies
21h52m

Kids could use a social network hosted in another country. Imagine how quickly the US will erect a national firewall, especially since they can plausibly say it's to protect the children.

neutralino1
1 replies
21h39m

I often draw the parallel with cigarettes and alcohol. Kids need to produce an ID to purchase them. Sure they can fake it, but then they are breaking the law, and that still raises the barrier.

But that's likely not enough. In addition, there should be public health campaigns to warn against the risks.

Cigarette use has plummeted since the 90's, so something must be working.

zmgsabst
0 replies
21h36m

I like to compare it to gambling — another heavily gated industry, due to using professional psychology to engineer super stimuli for the purposes of addiction.

Social media should be regulated for the same reasons gambling is regulated.

iosystem
1 replies
1d3h

The tech industry needs an association similar to the American Medical Association, which doctors join, where members can collectively agree on ethics and guidelines that must be followed. Any person in tech is behaving unethically if they assist in implementing software to restrict children in Florida from accessing information on the internet that their peers in other states can access.

Florida shows little concern for the potential harm to children resulting from information restrictions. Kids in abusive environments greatly benefit from the social connections online communities provide, as well as from the diverse information and perspectives of other people.

Florida has created this bill as a means to censor content it deems immoral, whether it be abortion information for girls, understanding sexuality, the existence of trans kids, or any other topic arbitrarily designated as immoral by the ruling political party. It is disconcerting to target the rights of children, who have the lowest chance of having the resources needed to challenge something like this bill in court, which should happen under the First Amendment.

tmpz22
0 replies
1d2h

The AMA has many problems, but I'll just mention one: they artificially restrict the number of graduates each year WHILE reporting nationwide shortages. It's not a silver bullet for ethical behavior or efficient economics.

bandyaboot
1 replies
23h51m

I’m no fan of some of the things that have been happening in FL, and I’m not sure that a simple outright ban on “social media” for <16 is the way to go, but at the same time, I think it’s a good thing they’re pushing this because the conversation needs to happen and with more urgency.

djfdat
0 replies
9h45m

We really need to find a better way to have these conversations as a society.

apapapa
1 replies
19h36m

I think that instead of banning it altogether, they should ban practices within those platforms... Wtf Florida, this sounds like something CA would do... It's almost as if they banned all speech because some speech is hurtful.

djfdat
0 replies
9h49m

How does this sound like something CA would do? This sounds exactly like something Florida would do.

andreygrehov
1 replies
8h4m

Whatever happens in Florida, HN doesn't like it. Why? Because of the distorted perception of FL driven by the mass media.

A couple years ago I made a decision to move to FL from NY. I’ll say it out loud, FL is ahead of NY. They just have bad marketing skills.

zaccusl
0 replies
7h57m

I feel like there is no good way to market the fact that some school districts are banning the dictionary to comply with all the "word" and "thought" ban laws.

Like how do you spin that so it's a good thing that some teachers can't even have books in their classrooms to avoid running afoul of the law?

How do you spin book bans as a good thing in any circumstance?

How do you spin the laws that remove parents rights to make medical decisions for their children when the medical decisions conform with the current state of the art and evidence based treatments?

Other than that, I think most people just treat Florida as a meme thanks to the Florida man stereotype.

zoklet-enjoyer
0 replies
23h34m

Yet they just eased restrictions on child labor

yieldcrv
0 replies
1d7h

control behavior by regulating the intermediary

this is a strategy that works under any governance system on the planet. to actually make an enforceable law, don't try to impose restrictions on the action, its provider, or its users; instead, think about the things they rely on and restrict those.

this law doesn't do that. but it would be fun to think of things that would. can we take something away from social media users? can we incentivize non-social media users? maybe we can leverage our disdain of data brokers into a partnership where data brokers can't serve companies that have children's data

just spitballing, I don't actually care about this law or any of those proposals, just noticing the current state of the discussion lacks... inspiration.

xbar
0 replies
19h28m

"Florida Sneaks into Requiring Everyone to Have an Internet License"

whatasaas
0 replies
23h5m

I don’t want to verify myself and I don’t trust the government to not work with Facebook to say they are doing some “zero knowledge proof” and still map us all. Implement a fine for parents in Florida. Let it stay Florida’s problem. If the Amish were in charge we would all be in trouble. Raise your own kids.

tamimio
0 replies
1d5h

This is idiotic; they will find a way to watch and be on social media regardless. Also, I don't see how social media is bad while the MSM that brainwashed generations is any good. Are they going to enforce the same rules on other forms of media? Or is it because "we can't censor XY social media" that we are gonna ban them all?

swozey
0 replies
1d1h

I wonder what this would affect culturally. There is a LOT more to this that will happen than just keeping children off of social media.

The USA exports its culture/pop/etc all over the world. I don't follow teenager arts/music/etc sources but a lot of musicians start in middle school and have so many mix tapes online and get known around their cities from using social media. Artists find other artists, learn other styles, etc.

I got into programming through IRC as a kid; maybe that's like TikTok nowadays, I don't know a good comparison. I learned so much through sites/apps where I could "upload content" and that "track user activity."

So, what happens when every kid and teenager in the nation that's the world's biggest culture exporter isn't getting their culture out?

I can't believe this got 106 to 13 with the "Regardless of parental approval."

spacebacon
0 replies
1d9h

I would like to see regulation on notifications to address reaction driven addictions as opposed to an outright ban. Classical conditioning is clearly the issue at hand but opponents are not referencing the proven science enough.

If we don’t teach children how to use these platforms in moderation now they will certainly not be educated on how to use them responsibly in adulthood. I’m not against an outright ban totally but we are missing educational opportunities with what is likely an unenforceable attack on the problem.

seattle_spring
0 replies
18h15m

An extremely transparent attempt to ensure Florida children do not have access to news and information that goes against a very particular narrative. It's all about the control of information, which sure is ironic coming from "the party of small government" and self-proclaimed "free speech absolutists." Absolutely disgusting.

It would be fantastic if social media companies instead simply blocked all access to IPs located in Florida regardless of age.

redder23
0 replies
1d2h

I am all for it, but the question is: how will this be enforced? If they use this to require government ID on every other website, crack down on semi-anonymous accounts, and go full surveillance, I do not like it.

You might say "they do know you are when you make any account" and it might be true, well if I would use a VPN all the time and really no let any info slip maybe not. I just to not like the total deletion of privacy.

plorg
0 replies
20h53m

Excited to see how Gab users react to having to provide a government ID to use it.

paxys
0 replies
1d

Is this a real bill or another one of those performative ones that they know will get deemed unconstitutional by a court?

notnmeyer
0 replies
23h40m

impossible to enforce, but i dont hate it. im thoroughly convinced that history will see social media as a net negative for society.

nojvek
0 replies
1d4h

What if the social network is created by kids?

I guess I need to write to my representatives.

It's funny that the Republicans tout that they are the party of "freedom", yet restrict liberties.

Why not let parents decide how they want to raise their kids?

mullingitover
0 replies
1d5h

Florida House essentially breaking out the classic: “We’re from the government and we’re here to help.”

mnky9800n
0 replies
9h56m

What happens if a 14 year old goes to Florida for summer vacation, posts TikToks, then somehow is found responsible for violating this ban?

maxslch
0 replies
9h1m

govts are soo behind the tech curve and everything they do now only decelerates our progress and creates more problems

kmeisthax
0 replies
1d2h

My impression of these bills was that none of them had survived contact with SCOTUS. What is going to make this bill any different?

And why aren't we just passing a proper damned privacy law already? All of the addictive features of social media are specifically enabled by their ability to collect unlimited amounts of data. Yes, I know, the NSA really likes that data thank-you-very-much, but SCOTUS isn't going to be OK with "censor children" to protect that data flow.

k12sosse
0 replies
1d9h

Does this mean Florida will be the first state off the Internet?

If traffic source is Florida, redirect to null

Easier than implementing an ID verification platform that isn't a massive tracking anklet for a 3rd party or government.

jokethrowaway
0 replies
1d2h

I don't like that it's mandated by law (ID checks on the internet are an identity fraud disaster waiting to happen), but I'm copying this for my kids.

jay_kyburz
0 replies
1d3h

A much better move in my local state: they banned all phones in school up to year 10.

https://www.act.gov.au/our-canberra/latest-news/2023/decembe...

int0x21
0 replies
1d

Ahh yes. When you can't parent, let the government do it.

exabrial
0 replies
1d7h

I fail to see how anyone under 18 can legally agree to any kind of contract without a parental co-signature. This should be enforceable without new laws, but I'm glad to make it explicit: progress over perfection.

einpoklum
0 replies
1d4h

I'm willing to buy an argument that certain kinds of "social media" have negative impact on kids under 16; but I'm absolutely not willing to buy an argument for a world in which the government is able to ban your communications with other people because you're under 16.

ecocentrik
0 replies
1d4h

This might be the first bill approved by the Florida legislature in the last 8 years that I agree with. I like it in spite of all the reasons these people voted for it, and in spite of the absolute horror show of an enforcement dilemma it's going to impose on the residents of Florida. Will it force all Florida residents, including children, to use VPNs to use the internet? Yes it will.

deadbabe
0 replies
1d4h

I have no idea how this would be enforced but I agree with the spirit of the law.

We either live in a world where children are hopelessly pressured into joining social media early in life and suffering its effects, or we ban them from it altogether and allow them to have something that still looks like a childhood.

carabiner
0 replies
1d4h

It's crazy how in 2017, YC was proposing a social network for kids as a startup idea:

Social Network for Children: This is a super tough nut to crack. You have to be cool and offer a safe and private environment for kids to communicate with each other while enabling trusted adults to peer in/interact, etc… The company that can build something that is used and useful for all parties can build something of great value throughout a person’s entire life. - Holly Liu, Kabam/Y Combinator

https://www.ycombinator.com/blog/13-startup-ideas

There was very little notion that a social network, no matter how safe, was inherently detrimental to childhood development. Like cigarettes initially, it just seemed to be mostly positive with some niggling annoyances. I wonder what other current YC ideas will be considered horrible 7 years from now.

billfor
0 replies
23h26m

And we still have film ratings. A kid isn't supposed to go to R-rated movies if they're under 17....

arrosenberg
0 replies
1d4h

Social media exclusively uses a predatory pricing model, and the companies should be forced to stop subsidizing their products with targeted ads. The algorithms drive max engagement because that drives max impressions and CPCs. The algorithm doesn't care that it's making people angry and divisive; it's optimizing revenue!

All of the other evils stem from this core issue: Meta et al. make money off of their customers' misery. It should hardly surprise anyone that children are affected much more strongly by industrialized psychology.

aktuel
0 replies
1d1h

I would ban it for everyone under 21. It is certainly not safer than alcohol.

TriangleEdge
0 replies
1d3h

What's an enforcement mechanism that works for this? My concern is that teenagers are going to learn yet another way to be dishonest.

TheCaptain4815
0 replies
1d7h

I don't agree with any type of "outright ban". However, having EXTREME restrictions on these social media sites for children seems so obvious. I'd prefer a complete restriction on any content outside of friends groups, any algorithm restriction, etc.

MrYellowP
0 replies
16h12m

The current generation is already heavily manipulated by it, which means the next generation - their children - will also be heavily manipulated by it regardless of consumption or lack thereof.

Mindless parents -> mindless children.

HumblyTossed
0 replies
1d4h

The social media platforms the bill would target include any site that tracks user activity, allows children to upload content or uses addictive features designed to cause compulsive use.

So, is ClassLink exempt? This seems pretty broad.

Ekaros
0 replies
1d9h

I would prefer it to be a straight 18. It's a much more reasonable limit, and I think 16-18 is a very vulnerable group when it comes to the effects of social media.

BadHumans
0 replies
20h16m

Go after advertising and data collection and the problem solves itself.

0xbadcafebee
0 replies
1d2h

Great! Now can we ban it for those over 16?