I might be the only one here in favor of this, and wanting to see a federal rollout.
It is not reasonable to expect parents to spontaneously agree on a strategy for keeping kids off social media, and that kind of coordination is what it would take, because the kids and the social media companies have more than enough time to coordinate workarounds. Have the law put the social media companies on the parents' side, or these kids may never be given the chance to develop into healthy adults themselves.
But the only way to do this is to require ID checks, effectively regulating and destroying the anonymous nature of the internet (and probably unconstitutional under the First Amendment, to boot.)
It's the same problem with requiring age verification for porn. It's not that anyone wants kids to have easy access to this stuff, but that any of these laws will either be (a) unenforceable and useless, or (b) draconian and privacy-destroying.
The government doesn't get to know or regulate the websites I'm visiting, nor should it. And "protecting the children" isn't a valid reason to remove constitutional rights from adults.
(And if it is, let's start talking about gun ownership first...)
That seems intuitive, but it's not actually true. I suggest looking up zero-knowledge proofs.
Using modern cryptography, it is easy to send a machine-generated proof to your social media provider that your government-provided ID says your age is ≥ 16, without revealing anything else about you to the service provider (not even your age), and without having to communicate with the government either.
The government doesn't learn which web sites you visit, and the web sites don't learn anything about you other than you are certified to be age ≥ 16. The proofs are unique to each site, so web sites can't use them to collude with each other.
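To make "zero-knowledge proof" concrete, here is a toy Schnorr proof of knowledge in Python, made non-interactive with Fiat-Shamir: the prover convinces a verifier that it knows the secret x behind a public value y without ever revealing x. Real age credentials combine primitives like this with a signature from the ID issuer; the tiny group parameters below are purely illustrative and completely insecure.

    import hashlib, secrets

    # Toy group: p = 2q + 1 with q prime; g generates the subgroup of order q.
    p, q, g = 23, 11, 2

    def challenge(*ints):
        data = b"|".join(str(i).encode() for i in ints)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Prover: knows a secret x (think "credential attribute") with y = g^x mod p.
    x = 7
    y = pow(g, x, p)

    k = secrets.randbelow(q - 1) + 1   # one-time nonce
    r = pow(g, k, p)                   # commitment
    c = challenge(g, y, r)             # Fiat-Shamir challenge
    s = (k + c * x) % q                # response; reveals nothing about x by itself
    proof = (r, s)

    # Verifier: sees only y and (r, s), recomputes the challenge, learns nothing about x.
    r, s = proof
    c = challenge(g, y, r)
    assert pow(g, s, p) == (r * pow(y, c, p)) % p
    print("verified: prover knows x, and x was never revealed")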
That kind of "smart ID" doesn't have to be with the government, although that's often a natural starting point for ID information. There are methods which do the same based on a consensus of people and entities that know you, for example. That might be better from a human rights perspective, given how many people do not have citizenship rights.
If it would be unconstitutional to require identity-revealing or age-revealing ID checks for social media, that's all the more reason to investigate modern technical solutions we have to those problems.
I'm not a cryptographer so I might be missing something, but I have the impression that
- either a stolen card can be reused thousands of times, meaning it's so easy to get a fake that it's not worth the implementation cost,
- or there is a way to uniquely identify a card, and then it becomes another identifier like tracking IDs.
Assuming you can make active queries to the verifier, you could do something like
- Have your backend generate a temporary AES key, and create a request to the verifier saying "please encrypt a response using AES key A indicating that the user coming from ip X.Y.Z.W is over 16". Encrypt it with a known public key for the verifier. Save the temporary AES key to the user's session store.
- Hand that request to the user, who hands it to the verifier. The verifier authenticates the user and gives them the encrypted okay response.
- User gives the response back to your backend.
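A minimal sketch of that flow in Python, assuming the verifier's RSA public key is already known to the service. The field names, session store, and JSON framing are all made up, and a real deployment would also need replay protection and an authenticated channel for the user-to-verifier step:

    import json, os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Stand-in for the verifier's published key pair.
    verifier_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

    session_store = {}

    def service_build_request(session_id, user_ip):
        # Temporary AES key A, remembered server-side only.
        aes_key = AESGCM.generate_key(bit_length=128)
        session_store[session_id] = aes_key
        request = json.dumps({"key": aes_key.hex(), "ip": user_ip, "claim": "age>=16"})
        # Only the verifier can open this; the user just relays it.
        return verifier_key.public_key().encrypt(request.encode(), OAEP)

    def verifier_answer(encrypted_request, user_is_over_16):
        # (The verifier authenticates the user out of band before answering.)
        req = json.loads(verifier_key.decrypt(encrypted_request, OAEP))
        aesgcm = AESGCM(bytes.fromhex(req["key"]))
        nonce = os.urandom(12)
        answer = json.dumps({"ip": req["ip"], "over_16": user_is_over_16}).encode()
        return nonce + aesgcm.encrypt(nonce, answer, None)

    def service_check_answer(session_id, user_ip, blob):
        aesgcm = AESGCM(session_store.pop(session_id))
        answer = json.loads(aesgcm.decrypt(blob[:12], blob[12:], None))
        return answer["over_16"] and answer["ip"] == user_ip

    # Round trip: service -> user -> verifier -> user -> service.
    blob = verifier_answer(service_build_request("sess-1", "203.0.113.7"), True)
    print(service_check_answer("sess-1", "203.0.113.7", blob))   # True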
Potentially the user could still get someone to auth for them, but it'd at least have to be coming from the same IP address that the user tried to use to log into the service. The verifier could become suspicious if it sees lots of requests for the same user coming from different IP addresses, and the service would become suspicious if it saw lots of users verifying from the same IP address, so reselling wouldn't work. You could still find an over-16 friend and have them authenticate you without raising suspicions though, much like you can find an over-21 friend to buy you beer and cigarettes.
Since you use a different key with each user request, the verifier can't identify the requesting service. Both the service and the verifier know the user's IP, so that's not sensitive. If you used this scheme for over-16 vs. over-18 vs. over-21 services, the verifier does learn what level of service you are trying to access (i.e. are you buying alcohol, looking at porn, or signing up for social media). Harmonizing all age-restricted vices to a single age of majority can mitigate that. Or, you could choose to reveal the age bucket to the service instead of the verifier by having the verifier always send back the maximum bucket you qualify for instead of the service asking whether the user is in a specific bucket.
If you can make active queries to the verifier, so can any adversarial party. These kinds of ZK-with-oracle schemes need to be very carefully gamed to ensure they're truly ZK, and not just "you learn nothing if you only query once."
This implodes under CGNAT, cafe internet, hotel internet, etc.
You can make active queries, with the user's involvement. The verifier can potentially have a prompt with e.g. "The site you were just on would like to know that you are over 21. Would you like to share that with them?"
We do need to get people onto ipv6 so CGNAT can die. Restricted services could potentially disallow signups or require more knowledge (e.g. full ID) if coming from shared IPs as a risk mitigation strategy, depending on how liable we want to hold them to properly validate age. If you've already signed up for facebook at home, obviously you don't need to validate your age again at the cafe.
Fake IDs exist in the real world. The system doesn't have to be perfect, and we can say that there's some standard of reasonable verification that they should do for these sorts of cases.
Personally I'm more in favor of an approach where sites label their content in a way where parents can configure filters (ideally using labels that are descriptive enough that we don't get into fights over what's "adult", and instead leave that decision to individual families), but if we're going to go an ID-based route, there are at least more private ways we could do it, and I think technologists should be discussing that, and perhaps someone at one of these big companies can propose it.
There is no way my 94-year-old neighbor can successfully do any of that.
That's the protocol for the computer, similar to oauth. From the user perspective, your 94-year-old neighbor would have an account with id.gov that they've somehow previously established (potentially the DMV or the post office does this for them), and the user flow works much like "Sign in with Google" buttons do today.
Addendum: you can actually preserve the privacy of which bucket the user is in to all parties if this is sufficiently standardized that it goes through a browser API.
- Have the service generate the request as above, but now the request is "Please encrypt a response with key A for the user coming from ip X.Y.Z.W".
- Service calls a standard browser API with the request, telling the browser it would like to know the user is in the over 16 bucket. Browser prompts the user to verify that they want to let the service know they are over 16. Browser sends the request to the verifier.
- Verifier responds with a token for each bucket the user is in. So a 22 year old gets an over-16 token, an over-18 token, and an over-21 token.
- Browser selects the appropriate response token and gives it back to the service.
So the service only ever learns you are over the age limit they care about, and the verifier only ever learns that you asked for some token, but not which one.
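A toy version of the bucket-selection step in Python. The HMAC "tokens" are just placeholders for whatever the verifier actually signs or encrypts (e.g. under the service's temporary key as above); the point is that the verifier hands back every bucket the user qualifies for, and the browser forwards only the one that was asked about:

    import hashlib, hmac, os

    AGE_BUCKETS = {"over_16": 16, "over_18": 18, "over_21": 21}
    verifier_signing_key = os.urandom(32)   # held by the verifier only

    def verifier_issue_tokens(user_age, request_nonce):
        # One token per bucket the user is in; a 22-year-old gets all three.
        return {
            name: hmac.new(verifier_signing_key, request_nonce + name.encode(),
                           hashlib.sha256).hexdigest()
            for name, minimum in AGE_BUCKETS.items() if user_age >= minimum
        }

    def browser_select(tokens, bucket_requested_by_service):
        # Forward only the requested bucket, so the verifier never learns which
        # one was actually used and the service never learns the others.
        return tokens.get(bucket_requested_by_service)

    nonce = os.urandom(16)
    tokens = verifier_issue_tokens(22, nonce)
    print(browser_select(tokens, "over_18") is not None)   # True, nothing else disclosed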
It would be neat if some authority like the passport office or social security office also provided a virtual ID that includes the features OP described and allowed specific individual attributes to be shared or not shared, revoked any time, much like when you authenticate a 3rd-party app to Gmail and the like.
Putting on my conspiracy hat for a minute: They don't want to make it easy for you to authenticate anonymously. They obtain their surveillance data from the companies that tie you, individually, to your data. They'd be shooting themselves in the feet.
These are the things that the post office should be handling.
Yeah, hell no.
You can also use fake ID to buy booze.
Making it illegal is, by itself, enough to discourage a lot.
You can't run a for loop on buying booze with a fake ID.
Those need some kind of face to face interaction. The perceived risk of being caught is much higher.
A wide-scale mDL rollout would work by using the trusted computing element that is part of your phone, and enrollment would be the same as obtaining a driver's license in the first place.
There is no physical card - there is an attestation that only an enrolled device can hand out with revocation support in case of security flaws.
Is it going to be absolutely secure? No. The cost just needs to be high enough that it becomes inaccessible to the vast majority of adolescents.
Theft of your parent's phone becomes a much easier attack vector, but phone biometrics/password requirements will thwart that for most parents.
This doesn't need to be 100% foolproof.
There's a unique identifier, but it's your secret and can't be used for tracking. Sites needing verification don't learn anything except that you "have" a token matching the condition they are checking. This includes not learning your unique identifier, so they can't use it for tracking. The issuer also doesn't learn anything about your verification queries.
You have an incentive to keep the secret token to yourself, and would probably use existing mechanisms for that: you might manage it like you manage your phone number, private email, and other personal accounts today. Not perfect, but effective most of the time for most people.
You might decide to share it with someone you trust, like your sibling. That's up to you, but you wouldn't share it widely or with people you don't trust, even under pressure, because:
To prevent mass reuse of stolen tokens, it's possible to use more cryptography to detect when the same token is reused in too many places, either on the same site or across many sites, without revealing tokens that don't meet the mass-reuse condition, so they still can't be used for tracking. If mass-reused tokens auto-revoke, they can't be reused thousands of times by anyone, and that also provides an incentive to avoid sharing with people you don't trust.
I won't pretend this is trivial. It's fairly advanced stuff. But the components exist these days. The last paragraph above requires combining zero-knowledge proofs (ZKP) with other branches of modern cryptography, called multi-party computation (MPC) and perhaps fully homomorphic encryption (FHE).
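To make that a bit more concrete, here is just the counting-and-revocation logic from the previous paragraph in plain Python. In the real scheme these counts would be maintained under MPC/FHE so that tokens below the threshold are never revealed to anyone; the threshold and the hashed "tag" here are stand-ins I made up for illustration:

    import hashlib
    from collections import defaultdict

    REUSE_THRESHOLD = 20          # distinct accounts before we call it mass reuse
    seen = defaultdict(set)       # token tag -> set of (site, account) it was used for
    revoked = set()

    def tag(token):
        # Stand-in for the blinded/committed form sites would actually report.
        return hashlib.sha256(token.encode()).hexdigest()

    def report_use(token, site, account):
        """A site reports a token presentation; returns False if the token is revoked."""
        t = tag(token)
        if t in revoked:
            return False
        seen[t].add((site, account))
        if len(seen[t]) > REUSE_THRESHOLD:
            # Same token behind too many accounts: auto-revoke, so sharing it
            # widely (or a stolen dump) stops working for everyone.
            revoked.add(t)
            return False
        return True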
If the zero-knowledge proof doesn't communicate anything other than the result of an age check, then the trivial exploit is for 1 person to upload an ID to the internet and every kid everywhere to use it.
It's not sufficient to check if someone has access to an ID where the age is over a threshold. Implementing a 1:1 linkage of real world ID to social media account closes the loophole where people borrow, steal, or duplicate IDs to bypass the check.
As I mentioned elsewhere, you're falling into the trap of letting perfect be the enemy of good. The ZKP + phone biometrics only needs to raise the cost of bypass above what adolescents have access to. And no, you can't just share the same ID, because there's revocation support in the mDL and it's difficult to extract the raw data once it's stored on the trusted element. This is very similar to how credit cards on phones work, which are generally very difficult to steal.
Sorry, you’re not thinking like a group of 15 year olds trying to get online.
The revocation list means nothing when they can get ahold of someone’s older sibling’s ID and sign up for social media.
Did everyone just forget what it's like to be an ambitious kid who wants to get online?
Do people really think a platform that needs people to jump through these hoops and use this imaginary international ID architecture is feasible?
Does anyone really think that kids won’t just set their location to Estonia and/or use a VPN to circumvent all of this?
You’re thinking like a group of technically proficient 15 year olds and their friends. That’s a small minority. The vast majority of teens are likely to be stymied.
Revocations are not for the individual ID but for when an exploit is found compromising the IDs stored on a trusted element. Your older sibling's ID can't be used to sign up for millions of accounts - just for those whom the older sibling lets borrow the phone that has their ID (and that's assuming there isn't some kind of uniqueness cookie that can be used to prevent multiple accounts under a single ID). That's a much different and more manageable problem (fake IDs via older siblings have been a thing forever).
This. The parties this will hurt are all older people. Kids will simply bypass it.
No, this line of reasoning deserves nothing but absolute contempt when it comes to laws. We are not talking about getting a finicky API to work at your job. Too often laws have had unintended consequences as a result of loopholes or small peculiarities. If the damn law doesn't even work on a fundamental level, then it should be opposed on principle.
You’ve just described literally every single law. Congrats. You’re now appreciating what it’s like to live in a law-based society.
It doesn't have to be perfect, but it needs to have some way to do spot checks.
If there is no risk involved, everyone will jump on doing it.
There are technical methods to detect and revoke large-scale reuse of an uploaded id. I wrote more detail in another comment.
That only covers large-scale reuse. It doesn't cover lending your id to your younger sibling if you want to, or if they find a way. Maybe that should be acceptable anyway. Same as you can lend your phone or computer to someone to use "as you", or you can buy them cigarettes and alcohol. Your responsibility.
I don't want government crapware on my device to access the internet.
I also don't want third-party crapware on my device to access the internet.
"wowee it can be done without revealing my identity to anyone but the government or the corp running the chain"
No thanks
Identity is not revealed.
Do you think the NSA would balk at that challenge?
It doesn't have to be based on crypto designed by NSA, does it?
Design is far from the only threat vector. Any implementation that is less than perfect is prone to all kinds of attacks. A few years ago, there was a report that the NSA could decrypt a double-digit percentage of encrypted web traffic thanks to precomputation against a larger-than-expected set of commonly reused Diffie-Hellman primes they keep handy.
Great story, but are you claiming that the NSA can infer the inputs to zero-knowledge proofs, maybe map cryptographic hashes back to plain input text, or something of that nature?
No, but I bet a dollar that the NSA isn't just going to collectively fold its hands and say, "These schemes and implementations are too good and too secure for us to break. We'll ignore the metadata, network analysis, huge data centers and side channels; we'll give up and focus on defensive security only."
Definitely, we can use a government-issued ID, or we can create our own. Social graphs, I call them. Zero-knowledge proofs have so many groundbreaking applications. I made a comment in the past about how a social graph could be built without the need for any government [1]. We can effectively create one million new governments to compete with existing ones.
[1] https://news.ycombinator.com/item?id=36421679
Governments monopolize violence. At least at the foundational level. When too many of them compete at once it can get very messy very quickly.
Let's suppose that 1 million new governments are founded, and the monopoly on violence still rests with the existing ones. The new governments will be in charge of IDs, signatures, property ownership and reputation. Governments of Rust programmers, or Python programmers, or football players, or pool players, or truck drivers will be created.
When a citizen of the Rust programmers' social graph uploads code, he can prove his citizenship via his ID. We may not even know his name, but he can prove his citizenship. He can sign his code via his signature, even pull requests in other projects. He can prove his ownership of an IT company, as its CEO, the stock shares and what not. And he will be tied to a reputation system, so when a supply-chain attack happens, his reputation will be tainted. Other citizens of Rust's social graph will be able to single out the ID of the developer, and future code from him will be rejected, as well as code from non-citizens.
Speaking of supply chains, how about the king of supply chains of products and physical goods? By transferring products around in a more trustworthy way, handled by random people tied to reputation, Amazon might get a little bit of competition, wouldn't it?
see also an older comment of mine https://news.ycombinator.com/item?id=38800744
Government is not the correct word to use for this idea.
Alright, social graphs then. I use social graphs and e-gov interchangeably, but social graphs might be better.
I've been thinking a lot lately about decentralized moderation.
All we need to do is replace the word "moderate" with "curate". Everything else is an attestation.
We don't really need a blockchain, either. Attestations can be asserted by a web of trust. Simply choose a curator (or collection of curators) to trust, and you're done.
Yeah, blockchain is not needed at all. A computer savvy sheriff might do it, an official person of some kind. Or even private companies, see also "Fido alliance".
Additionally, the map of governments which accept the Estonian passport might be of some relevance here [1].
[1] https://passports.io/programs/EE1
The government need not know what sites you visit. It is damaging enough that the government knows you are visiting sites that require age verification. You can then be flagged for parallel construction if you should, I don't know, start a rival political party.
Not if this were widespread. I wouldn’t be too bothered if the government knew that I either watched an R-rated movie or rented a car or purchased alcohol or created a Facebook account.
Now do need an abortion or sought out gender affirming care. Today’s “no big deal” can become tomorrow’s privacy nightmare.
You missed the "either" in GP's comment. I.e., they know you did one of those things because you requested an over-18 token, but not which one. The more covered activities there are, the more uncertainty they have about why you might have asked for a token.
This isn't really my area of expertise; is there a way to know for sure that those are all the same token? Or could the government just lie and say they are all the same when in reality it can differentiate them?
The government would have to document the API for requesting tokens for anyone to use it. I suggested a scheme here[0] where it's clear that the government doesn't get any information about the service (unless the service re-uses AES keys) and the service doesn't get any information about the user other than whether they're in the appropriate age group.
Potentially there could be coordination between .gov and the service to track users by having each side store the temporary AES key and reconcile out-of-band. But .gov has other ways they could get that information anyway if they have cooperation from businesses (e.g. asking your ISP for your IP address, and asking the service provider for a list of user IPs).
[0] https://news.ycombinator.com/item?id=39183486
A super interesting example of this is the proof-of-passport project.
https://github.com/zk-passport/proof-of-passport
Today you can scan your passport with your phone, and get enough digitally signed material chained up to nation level passport authorities to prove anything derived from the information on your passport.
You could prove to an arbitrary verifier that you have a US passport, that your first name starts with the letter F, and that you were born in July before 1970, and literally share zero other information.
The selective disclosure is super cool. I wonder how it works, since something like a hash of DG1 is what is actually signed; how can you selectively disclose verified data from "inside" the hashed area? It does not sound very feasible to me, but I am not an expert in zk-SNARKs etc.
There are some wrinkles that prevent passport data being used more broadly - technically it is a TOS violation to verify passports / use the ICAO pkd without explicit permission from ICAO or by direct agreement with the passport holder's CSCA (country signing certificate authority). Some CSCAs allow open use but many do not.
Also, without being too pedantic about it, what you are able to prove is more like possession of a document. An rfid passport (or rfid dump & mrz) - or in fact any kind of identity document - does not prove that you are the subject - you need some kind of biometric bind for that.
ZK circuits have gotten really fancy lately, to the point where full-blown ZK virtual machines are a thing, which means you can write a program in Rust or whatever, compile it to RISC-V, and then run it on the RISC Zero zkVM. (https://github.com/risc0)
This means you can literally just write a rust program that reads in the private data, verifies the signature, reads the first byte in the name string and confirms that it matches what you expect, and then after everything looks good, it returns "true", otherwise it returns "false". This all would happen on your phone when you scan a QR code or something that makes the request, then you send the validity proof you generated to the verifier, they can see that the output was true, and nothing else.
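For illustration, here is the kind of check such a guest program performs, written as ordinary Python rather than the Rust that would actually be compiled for the zkVM, and with a throwaway Ed25519 key standing in for the real passport signing chain. Inside the zkVM, only the boolean result (plus the proof that this exact program produced it) would ever leave your device:

    import json
    from datetime import date
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def passport_predicate(signed_blob, signature, issuer_public_key):
        # 1. Is this really data signed by the issuing authority?
        try:
            issuer_public_key.verify(signature, signed_blob)
        except InvalidSignature:
            return False
        # 2. Check only the facts being disclosed, e.g. "first name starts with F,
        #    born in July before 1970"; everything else stays private.
        data = json.loads(signed_blob)
        born = date.fromisoformat(data["birth_date"])
        return (data["given_name"].startswith("F")
                and born.month == 7 and born.year < 1970)

    # Toy issuer and document standing in for the passport authority and DG1 data.
    issuer = Ed25519PrivateKey.generate()
    doc = json.dumps({"given_name": "Frida", "birth_date": "1965-07-12"}).encode()
    print(passport_predicate(doc, issuer.sign(doc), issuer.public_key()))   # True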
In theory, the private data would be stored on a trusted device you own, like your phone or something, so someone who steals your phone would have a hard time using your identity. Using fancy blockchain stuff you could even do a one-time registration of your passport such that even if someone steals your passport, they wouldn't be able to import it as a usable ZK credential. Presumably there would be some logic around it so you can re-register after a delay period or something, giving the current credential holder a chance to revoke new enrollment requests or whatever. So, yes, proving your exact identity to a website isn't perfect, but it's easy enough to make it really noisy if someone is trying to tamper with your identity, and maybe that's good enough.
If you want to go the trusted hardware route, you could make someone take a picture of their face with some sort of trusted hardware camera on their phone or laptop, and then use some zkml magic to make sure it kinda looks like the face on the passport data. Given the right resources, trusted hardware is never that hard to tamper with, so I don't like that solution very much.
What's often more important in an online context is that your credential is unique. It doesn't matter who you are, it matters that you've never used this credential to sign up for a twitter account, or get past a cloudflare captcha, or any other captcha use case. If you steal 10 passports, maybe you can set up a bot that will automatically vote for something 10 times, but at least you can't vote millions of times. This is sybil resistance, and it's massively important for a ton of things.
Thanks! I have a big rabbit hole to go down now :)
I don't get what causes the proof to fail if I provide the wrong bytes to the zkvm when it tries to read from inside the hashed area after the hash & signature are verified (this might not be directly sequential I guess, I think it has to be part of the same proof).
Put another way, I get that we have to zk-prove a) that I know a message M that hashes to H (I can see from googling that this is doable), but also b) that a particular byte range M[A-B] is part of M, in a way that the verifier can trust I'm not lying, and I don't see how the second bit is accomplished. It feels like there are also details in proving that the data comes from the right "field" in the DG1.
This stuff is such black magic! EDIT: will try this out in ZoKrates...
Sure, but is the Florida legislature actually looking into stuff like this?
Why would they, when it is not in the government's interest?
You can send a proof that someone's government-provided ID says that their age is ≥ 16.
That's not enough proof to levy a requirement.
It'd be cool if any of the proposed bills actually suggested something like this. They do not. They specify an ID check.
There's just one problem. How does the machine proving your age know that you are who you say you are? Modern cryptography doesn't have any tools whatsoever that can prove anything about the real body currently operating the machine--it can never have such a tool. And the closest thing that people can think of to a solution is "biometrics," which immediately raises lots of privacy concerns.
So I hash some combination of ID, name and birthday and send it to Facebook to create an account. Facebook relays that hashed info to a government server which responds with a binary yes/no.
Of course you need to trust that the hash is not reversible.
That doesn’t stop kids from using Facebook, but it stops kids’ ID from being used to create an account.
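A bare-bones sketch of that hashed lookup in Python, with made-up field formats. As the comment above says, everything hinges on the hash not being reversible; with inputs this low-entropy, that is the weak point:

    import hashlib

    def id_fingerprint(id_number, name, birthday):
        return hashlib.sha256(f"{id_number}|{name}|{birthday}".encode()).hexdigest()

    # Platform side: forward only the fingerprint, never the raw fields.
    fp = id_fingerprint("A1234567", "Jane Doe", "2001-04-09")

    # Government side: compare against its own precomputed table, answer yes/no.
    government_table = {id_fingerprint("A1234567", "Jane Doe", "2001-04-09"): True}
    print(government_table.get(fp, False))   # binary answer, nothing else returned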
We did a hackathon at work and one of the guys from one of my project teams covered this stuff as his project.
I trust that it _would_ work 100%, but what I don't trust is that a government would implement it properly and securely, because no government works like that lmao (even NZ's great one).
I mean, living in the UK now, I've got like a dozen different fucking gov numbers for all manner of things - DVLA, NHS, NIN, other tax numbers, visa, etc. Why isn't there just one number or identity? Gov.uk sites are mostly pretty stellar besides.
And as with electronic voting, the contract will go to the lowest bidder with the worst security, not the company that's got the CS chops to do it right.
Yes, this isn't the right solution. The power needs to be given to the users.
A better solution is more robust device management, with control given to the device owner (read: the parent). The missing legislative piece is mandating that social media companies need to respond differently when the user agent tells them what to send.
I should be able to take my daughter's phone (which I own), set an option somewhere that indicates "this user is a minor," and with every HTTP request it makes it sets e.g. an OMIT_ADULT_CONTENT header. Site owners simply respond differently when they see this.
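Site-side, the check could be as small as this Python sketch; the header name is the hypothetical one from the comment above, and what counts as "adult content" is left entirely to the site:

    ADULT_SECTIONS = {"/explore", "/trending"}   # whatever the site deems adult

    def should_serve(path, request_headers):
        minor = request_headers.get("OMIT_ADULT_CONTENT", "").lower() in {"1", "true"}
        return not (minor and path in ADULT_SECTIONS)

    print(should_serve("/trending", {"OMIT_ADULT_CONTENT": "1"}))   # False: filtered
    print(should_serve("/trending", {}))                            # True: header absent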
Already exists, simply include the header
Rating: RTA-5042-1996-1400-1577-RTA
in HTTP responses that include adult content, and every parental controls software in existence will block it by default, including the ones built into iPhones/etc and embedded webviews. As far as I know all mainstream adult sites include this (or the equivalent meta tag) already.
In general, I don’t think communicating to every site you visit that you are a minor and asking them to do what they will with that information is a good idea. Better to filter on the user’s end.
This is not a response header. It's a meta tag that's added to a website's head element to indicate it's not kid-friendly. The individual payloads returned from an adult site don't include this as a header.
That'd be the "equivalent meta tag" I mentioned. And this site claims the header works too, though I haven't tested it myself. https://davidwalsh.name/rta-label
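For what it's worth, serving the label both ways is nearly a one-liner on the site side. Flask here is just for brevity, and the header form is taken from that article rather than something I've verified either:

    from flask import Flask, make_response

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"
    app = Flask(__name__)

    @app.route("/")
    def adult_page():
        html = (f'<html><head><meta name="RATING" content="{RTA_LABEL}"></head>'
                "<body>adult content here</body></html>")
        resp = make_response(html)
        resp.headers["Rating"] = RTA_LABEL   # header form, per the linked article
        return resp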
That's a much better approach in general.
It's much easier to regulate and enforce that websites must expose these headers so that UAs can do their own filtering. Adult Content = headers in response, no ifs, ands, or buts
Response headers are encrypted in the context of HTTPS, so there's no real sacrifice in privacy. Implementation effort about as close to trivial as can be. No real free speech implications (unless you really want to argue that those headers constitute compelled speech). All in all, it's a pretty decent solution.
I honestly wasn't aware of this, and it sounds like a great solution for "adult content." Certainly, the site specifying this is better than the user agent having to reveal any additional details about its configuration.
Emancipation of children is also a thing, where a minor may petition the court to be treated as an adult. This also falls afoul of a blanket age restriction.
https://jeannecolemanlaw.com/the-legal-emancipation-of-minor...
I think there's an ambiguity here. Even on this page, talking about "for all purposes". This seems to mostly refer to parental decision making and rights.
i.e. even as an emancipated minor, being treated as a "legal adult" does not mean you can buy alcohol or tobacco.
Being an actual legal adult does not mean that you can buy alcohol or tobacco.
Understood, but being emancipated could mean that you need to be able to use LinkedIn, perhaps as part of a job search. An argument could be made that an emancipated minor should have access to some social media.
Another option that allows for better privacy and versatility is the website setting a MIN_USER_AGE header based on IP geolocation.
Geolocation to that degree is not that reliable, and it's not necessarily 1:1 with jurisdiction or parental intent.
If we're already trusting a parental-locked device to report minor-status, then it's trivial to also have it identify what jurisdiction/ruleset exists, or some finer-grained model of what shouldn't work.
In either case, we have the problem of how to model things like "in the Flub province of the nation of Elbonia children below 150.5 months may not see media containing exposed ankles". OK, maybe not quite that bad, but the line needs to be drawn somewhere.
then propose the RFC.
I haven't read the legislation myself but I don't see why this couldn't still be done, I doubt the legislation specified _how_ to do it.
So no, that wouldn't work right now.
Sounds like you want PICS (though it works on the opposite direction, with the web site sending the flag, and the browser deciding whether to show the content based on it).
Exactly, any design for this stuff requires parental-involvement because every approach without it is either (A) uselessly-weak or (B) creepy-Orwellian.
If we assume parents are involved enough to "buy the thingy that advertises a parental lock", then a whole bunch of less-dumb options become available... And more of the costs of the system will be borne by the people (or at least groups) that are utilizing it.
Lately I'm repeatedly reminded of how in Ecuador citizens, when interviewed during a protest, see it as a normal thing to state their name as well as their personal ID number on camera while also speaking about their position regarding the protest. They stand by what they are saying without hiding.
For about half a year now I've noticed the German Twitter section getting sunk in hate posts, people disrespecting each other, ranting about politicians or ways of thinking, but being really hateful. It's horrible. I've adblocked the "Trending" section away, because it's the door to this horrible place where people don't have anything good to share anymore, just disrespect and hate.
This made me think that what we really need, at least here in Germany, is a Twitter alternative where people register using their eID and can only post under their real name. Have something mean to say? Say it, but attach your name to it.
This anonymity in social media is really harming German society, at least as soon as politics are involved.
I don't know exactly how it is in the US but apparently it isn't as bad as here, at least judging from the trending topics in the US and skimming through the posts.
The algorithm powering the trending section, which rewards angry replies and accusatory quote-tweets, is at least as good a candidate for the source of harm to political discourse as anonymity is.
Take it a step further: ban engagement-based algorithmic feeds. I've said this and I'll continue to say it: this type of behavioral science was designed at FB by a small group of people and needs to be outlawed. It never should have been allowed to take over the new-age monetization economy. There's so much human potential with the internet, and it's absolutely trainwrecked at the moment because of Facebook.
I agree.
It's been deliberately designed to cause strife. They are genuinely anti-human companies that seem to want conflict and tension to occur.
https://www.theguardian.com/commentisfree/2014/jun/30/facebo...
This article was very on-the-nose, that really should have been the writing on the wall.
I think this is a partial attribution error; the goal is to make money by capturing attention and selling it to advertisers. The fact that strife is one of the most effective ways to do so, and the companies appear utterly unconcerned about the resulting damage to society makes it look like strife is the end goal, but it is merely a means to make money.
They would change their algorithms immediately if large advertisers applied financial incentives. We've seen some of this with Youtube's policies leading to videos with censored profanity where its use was previously normal and neologisms like "unalived" to mean killed.
Musk's Twitter may be a partial exception, since its decisions are now driven by a single man's preferences rather than an amoral mechanistic imperative to increase shareholder value. That doesn't seem to have improved things.
I mean, how do you know that?
How do you know that the goal isn't actually to perform social engineering on a huge scale and that the advertising is just the way that goal is being funded?
I suppose I don't. It could be that Facebook, Twitter, Youtube, and TikTok are all actively trying to create chaos, but greed adequately explains their behavior. One thing that points more strongly to greed is that the companies, aside from Twitter post-Musk, rapidly change their behavior when it impacts their revenue.
With TikTok, there's some chance geopolitics is a factor as well.
People have zero qualms about being absolute ghouls under their wallet names. The people with the most power in society don't need anonymity. The people with the least often can't safely express themselves without it.
Also:
https://theconversation.com/online-anonymity-study-found-sta...
There are two points which matter:
- No more bots or fake propaganda accounts.
- Illegal content, such as insults or the like, will not get published. And if it does, it will have direct consequences.
I'm also not leaning towards a requirement that all social networks be ID'd, but I think a Twitter alternative which enables a more serious discussion should exist. A place where politicians and journalists or just citizens can post their content and get comments on it, without all that extreme toxicity from Twitter.
The thing is, the political climate is very toxic and the absence of anonymity can have a real impact for things that are basically wrong think.
Say, for example, I held the opinion that the immigration threshold should be lower. No matter how many non-xenophobic justifications I can put on that opinion, my colleagues, possibly on H-1Bs, can and would look up my opinion on your version of Twitter, and it would have a real impact on my work life.
There is a reason why we hold voting in private: when boiled down to its roots, there are principles that guide your opinions, they are usually irreconcilable with someone else's, and we preserve harmony by keeping everyone ignorant of their colleagues' political opinions. It's not a bad system, but it's one that requires anonymity.
Or, to post on a political forum you must have an ID. You can have and post from multiple accounts, but your id and all associated accounts can be penalized for bad behavior.
I wonder what percentage of the hate stuff is bots.
My experience with propaganda bots is that the really nasty hate stuff will usually be posted by actual, real people (perhaps as a result of being prodded by bot-provided outrage), and bots will rather have all kinds of more subtle hinting and agenda-pushing - because bots are managed by semi-professionals who care about bots not being blocked and (often) don't really care about the agenda they're pushing, while there also is a substantial minority of semi-crazy people who just don't care and will escalate from zero to Hitler in a few minutes.
Attach their pictures too, so you can see the ghoul spouting hate is a basement dweller
Plenty of hate under plain names on Facebook, been that way for a decade and I doubt it will change with ID verification.
The First Amendment guarantees free expression, not anonymous expression.
For example, there are federal requirements for identification for political messages. [1] These requirements do not violate the First Amendment.
[1] https://www.fec.gov/help-candidates-and-committees/advertisi...
In particular, "anonymous speech is required if you want to have free speech" is actually a very niche position, not a mainstream one. It just happens to be widely spammed in certain online cultures.
Correct.
I am staunch believer in the moral and societal good of free speech.
Anonymous speech is far more dubious.
Like, protests seem valuable. Protests while wearing robes and masks however...
This is absolutely permitted in America. It is illegal in countries like Germany which have no strong free speech protections.
No. It is not "absolutely permitted in America".
They're Klan Acts, because a bunch of guys in masks and robes marching through is obviously a threat of violence.
https://en.wikipedia.org/wiki/Anti-mask_law
Those laws are frequently overturned as unconstitutional, but may still remain on the books because we don't do a good job of clearing out laws that were ruled unconstitutional.
As a general rule of thumb, almost every time someone brings an edge case about whether or not speech is First Amendment-privileged before SCOTUS, SCOTUS rules in favor of the speech. (The main exception is speech of students in school.) SCOTUS hasn't specifically ruled on anti-mask laws to my knowledge, but I strongly doubt it would uphold those laws.
The last court case was in 2004, and the New York anti-mask law was upheld.
You don’t have to pontificate. The link was right there.
Random aside, I haven't seen NY enforce the anti-mask legislation during the Halloween parade, various protests, or Covid. So I bet a new constitutional challenge could be mounted.
You're trolling right?
There’s no issue protesting in masks. Antifa did it all the time with no repercussions.
Protests while wearing robes and masks protect the protester from followup action, and it also protects their families.
If you assume a just government that doesn't demand revenge for every petty slight, you are living in a fantasy land.
There are places where it wouldn't be safe to protest without masks. In that case people effectively would be losing their freedom of speech if it isn't anonymous.
America has a very long tradition of anonymity being part of free speech, going back to the Federalist Papers. This is not some new online issue.
Wealthy, politically connected men with the ability to read and write about political philosophy and get it distributed is a bit different situation than a thousand Russian AI trollbots posting bad-faith "opinions" on American current events.
Not at all. Just the social media sites, which are objectively bad for kids. As an adult, you do what you want on the internet.
And what makes a site a social media site? Anywhere you can post interactive content?
You do realize that laws like this would apply to sites like HN, Reddit, the comment section of every blog, and every phpBB forum you ever used? It's not just Instagram and Tiktok.
I think a perfectly clear line could be drawn that would separate out phpBB from TikTok very easily. I genuinely don't understand this comment, we shouldn't do it because it's hard or the results might be imperfect?
I think your comment would be much stronger if you laid out precisely what you think that line would be.
Laws do not have to be perfect to be good, but they do have to be workable. It's not clear that there's a working definition of "social media" that includes both TikTok and Reddit but doesn't include random forums.
So if a random person on the internet doesn't have a perfect solution then it shouldn't be considered?
This is not a charitable reading. Nobody is asking for a perfect solution; it is reasonable to demonstrate some prior consideration for the ways in which most solutions are dangerously imperfect.
Their opinion that it would be straightforward shouldn’t be considered.
Kids want to communicate. Whether it's TikTok, Discord, phpBB, chatting in Roblox or Minecraft, they will if they can.
If we want to "ban social media" we'll need a consistent set of guidelines about what counts as social media and what doesn't, and what exactly the harms are so they can be avoided.
I don't believe that's as easy as you think.
To me the issue is that it's a waste of government effort and a waste of time. Parental controls already exist on all devices. The answer to every problem can't be "more government, more laws".
Yes. Imperfect solutions, when driven by the government, don’t change for decades, causing terrible consequences.
“SSH is a munition” comes to mind.
Any clear line would be gamed pretty quickly I imagine.
Trying to force independently owned and operated forums to enforce laws that might not even be applicable in the country that the owners / admins live and work in is going to be about as effective as trying to force foreign VPS/server/hosting providers to delete copyrighted content from their server using laws that don't apply in their jurisdiction.
You know, I'm not really sure that requiring IDs for access to porn / social media is a terrible idea. Sure it's been anonymous and free since the advent of the internet, but perhaps it's time to change that. After all, we don't allow a kid into a brothel or allow them to engage in prostitution (for good reasons), and porn is equally destructive.
But with the topic at hand being social media, I think a lot of the same issues and solutions apply. It's harmful to allow kids to interact with anyone and everyone at any given time. Boundaries are healthy.
Aaaaand, finally there's much less destruction of human livelihood by guns than both of the aforementioned topics if we measure "destruction" by "living a significantly impoverished life from the standard of emotional and mental wellbeing". I doubt we could even get hard numbers on the number of marriages destroyed by pornography, which yield broken households, which yield countless emotional and mental problems.
So, no, guns aren't something we should discuss first. Also, guns have utility including but not limited to defending yourself and your family. Porn has absolutely zero utility, and social media is pretty damn close, but not zero utility.
The biggest problem with this is how we would define "porn". Some states are currently redefining the existence of a transgender person in public as an inherently lewd act equivalent to indecent exposure.
I have no doubt that if your proposal were to pass that there would be significant efforts from extremist conservatives to censor LGBT+ communities online by labeling sex education or mere discussion of our lives as pornographic. How are LGBT+ people supposed to live if our very existence is considered impolite?
Nevermind the fact that the existence of a government database of all the (potentially weird) porn you look at is a gold mine for anyone who wants to blackmail or pressure you into silence.
The horrors and dangers of porn are squarely a domestic and family issue. The government does not need to come into my bedroom and look over my shoulder.
This is the first I've heard about this. Link?
This isn't really happening, the closest I can find is the controversial topic of drag shows in public or banning kids from drag shows.
AFAIK there is no real legislation banning trans people from being in public.
Agreed, it's just rhetoric. Same as all these claims of an ongoing 'trans genocide' in the USA. Absolute nonsense, but it gets the believers in this ideology all riled up, and so the purpose of this rhetoric is fulfilled.
If you don't want a record of you looking at it, then don't look at it. All you need to do is refrain from pornography consumption. It really is that easy and simple.
I wholeheartedly agree. And that's a problem we should lean into and solve. Its difficulty doesn't make it less worthy of solving.
Therein lies the problem however. Every systemic issue in our world begins in a family or domestic situation of some form. While I am well aware and also concerned about the implications of government overreach here, I don’t think we can throw up our hands and say, “Meh”. At a minimum it can begin with education. We can teach people about the destructive nature of porn (and social media).
The fact that this impacts every family, domestic situation, and therefore indirectly or directly touches every single life in our society actually kinda makes it a great candidate for government oversight.
You think watching porn is equally destructive to engaging in prostitution? I'd hate to see what kind of porn you're watching.
I think you’re vastly underestimating the destructive nature of porn. I’m no longer watching porn, by the grace of God.
COPPA has entered the building. If you're under 13 and a platform finds out, they'll usually ban you until you prove that you're not under 13 (via ID) or can provide signed forms from your parent / legal guardian.
I've seen dozens of people if not more over the years banned from various platforms over this. We're talking Reddit, Facebook, Discord and so on.
I get what you're saying, but it kind of is a thing already, all one has to do is raise the age limit from 13 to say... 16 and voila.
"Finds out" is the operative part. COPPA is not a proactive requirement; it's a reactive one. Proactive legislation is a newer harm that can't easily be predicted based on past experiences with reactive laws.
Indeed, nothing is stopping said companies from scanning selfies and assessing the age of the user uploading them, though. This is allegedly something that TikTok does. My point being, the framework is there, and when people actually report minors, the companies have to take it seriously or face serious legal consequences.
How do you know the selfie is from the "primary" user? And how do you know they're underage, versus being a chubby-faced 18 year old (like yours truly was?)
It doesn't matter to the platform, mind you, I've seen people abuse this. They will deactivate the account and require ID.
I propose the Leisure Suit Larry method. Just make users answer some outdated trivia questions that only olds will know when they sign up for an account.
In the Internet era, the answers will just be googleable. People will quickly compile a page with all possible questions and the answers to them.
But with ChatGPT, all the answers will be wrong.
The internet has not been anonymous in fact or theory for decades now, and if you think the government can't get your complete browsing history on a whim I'm guessing you haven't paid any attention to the news about NSA buying user data bundles from online brokers. That said, "muh freedoms" is hardly a quality argument in the face of the widely documented pervasive harms caused to children by exposure to social media. The logical extreme of your position would be to declare smoking in public a form of self-expression and then demand age limits be removed for the sale of tobacco products because First Amendment. :P
As the OP said, if freedoms are not a quality argument then we can rid ourselves of millions of guns, but this is a non-starter for the freedom crowd.
Ironically the "freedom crowd" are also statistically significantly more likely to get shot by their own toddlers accidentally so I'm not convinced they represent a pool of quality decision-making or grounded worldview. It's interesting how quickly any discussion of potential solutions to real-world problems gets chucked out the window the second someone says "freedom".
Aren't the really problematic social networks the ones where you've lost your privacy and anonymity long ago and are being tracked and mined like crazy?
That's like saying "80% of the internet has gone to shit, might as well destroy the remaining good 20%".
I don't think it's like saying that at all.
We say this is the only way, but what about regulating these companies properly???
"Regulating them properly" could mean a lot of things to a lot of people. Do you mean e.g. just not allowing porn on the internet or do you mean e.g. just not requiring identification verification? If neither, what way of regulation that allows distinction without banning content or identifying users?
Putting conditions on the companies so that they do not even risk this in the first place: e.g. moderation, a requirement to build tools to protect children. There are a lot of different options, and prohibition is the least likely to work at any level. You can look at any issue in history with prohibition and see what that amounts to.
Surely not.
Imagine: government sells proof of age cards. They contain your name, and a unique identifier.
Each time you sign up to a service, you enter your name and ID. The service can verify with the government that your age is what it needs to be for the service. There are laws that state that you can't store that ID or use it for any other purpose.
Doesn't seem impossible.
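A bare-bones sketch of that check, with invented field names; the "can't store or reuse the ID" part is a legal rule rather than something code can enforce, so here the service simply discards it after the lookup:

    GOV_REGISTRY = {("1234-5678", "Jane Doe"): 17}   # card ID + name -> age on record

    def government_verify(card_id, name, min_age):
        age = GOV_REGISTRY.get((card_id, name))
        return age is not None and age >= min_age

    def service_signup(card_id, name):
        ok = government_verify(card_id, name, 16)   # a yes/no is all the service keeps
        card_id = None                              # not stored or used for anything else
        return ok

    print(service_signup("1234-5678", "Jane Doe"))   # True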
- Would only work if the government does not have access to the reverse mapping (otherwise law enforcement will eventually expand to get its hands on it).
- It will likely be phished very quickly (and you'd have no way of knowing, since no one is storing it). (Making it short-lived means you'd have to announce to the government every time you want to watch porn.)
- Eventually there will be dumps of IDs, like there are dumps of premium porn accounts now.
Still doesn't do anything about foreign websites.
The bulk of the internet has not been anonymous for a while. Facebook requires an id already, Google tracks you using Google and the OS, reddit is tightening to control bots, amazon requires a phone number.
Think about it. What portion of your activities day to day on the Internet are anonymous? Now try to do them anonymously. It isn't practical/possible anymore and the internet of yesteryear is gone.
It is pretty hard to give access to YouTube to your kid with an account where the age is stated. Yes, kids can open a browser in private mode... but they rarely do, because it adds friction. If all social media were moved to the adult category, the current rules in operating systems would do a good job. I am not sure about 16 years (I would support it as a father)... but up to 13-14 feels appropriate; there is PG-13, after all.
People can post all kinds of illegal things online and no one is suggesting that content should be approved before it can be visible on the Internet. It doesn't have to be strictly enforced to act as a deterrent. How effective of a deterrent it would be has yet to be seen.
Alcohol and tobacco websites have been doing fine without checking IDs.
We ARE talking about social media.
Like, the least private software on the planet.
"effectively regulating and destroying the anonymous nature of the internet"
Social media is on the internet. It is not the internet.
Depends on the argument being made, on the ideology of the audience, on the current norms, etc.
I had an exchange here on HN some time back (topic was about schools removing certain books from their libraries), and very many people in support of those books, which dealt with gender-identity and sexual orientation, also supported outright porn (the example I used was pornhub) for kids of all ages as long as those books with pictures (not photos) of male-male sexual intercourse could stay in the library.
Right now, if you made the argument "There are some things kids below $AGE shouldn't be exposed to", you'll still get some (vocal) minority disagreeing because:
1. They feel that what $AGE kids get exposed to should be out of the parent's hands ("Should we allow parents to hide evolution from their children?", "Should we allow parents to hide transgenderism from their children?")
2. They know that, especially with young children, they will lose their chance to imprint a norm on the child if they are prevented from distributing specific material to young children.
In the case of sex and sexual education, there is currently a huge push for specific thoughts to be normalised, and unfortunately if it means that graphic sexual depictions are made to children, so be it.
The majority is rarely so vocal about things they consider "common sense", like no access to pornhub for 10 year olds.
I do. Anything to avoid "the talk", tbh. I grew up Catholic. I never had "the talk". I don't even know where to start. Blow jobs?
Ah so be it. I don’t care much for the things that come from anonymous culture. I want gatekeepers. This tyranny of the stupid online is pretty tiresome.
"probably unconstitutional under the First Amendment, to boot"
Probably not. Minors have all sorts of restrictions on their rights, including First Amendment restrictions such as in schools.
"(And if it is, let's start talking about gun ownership first...)"
Are you advocating for removing ID checks for this? If not, it seems that this point actually works against your argument.
Not saying that I agree with a ban, but your arguments against it don't really stand.
Technically you can make that work without issues (you only need to prove your age, not your identity, something which can reasonably be achieved without leaking your identity).
There are just two practical issues:
- companies, the government, and state agencies (at least US police & spy agencies) will try to undermine any effort to create a reasonably anonymous system
- it only technically works if a "reasonable degree" of proof is "good enough", i.e. it must be fine that a hacker can create an (illegal?) tool with which a child could pretend to be 16+, e.g. by proxying the age check to a hacked device of an adult. Heck, it should be fine if a check can be tricked by using a parent's passport or phone. I mean, it's a 16+ check; there really isn't much of a reason why it isn't okay to have a system which is only "good enough". But lawmakers will try nonsense.
Interestingly, this is more of a problem for the US than for some other states, AFAIK, due to how 1) you can't expect everyone 18+ to have an ID and everyone 16+ to be able to easily get one (a bunch of countries have ID-owning (not carrying) requirements without it being a privacy issue), and 2) terrible consumer protection makes it practically nearly impossible to create a privacy-preserving system even if government and state agencies do not meddle.
Similarly, if it weren't for the ID issue in the US, it probably wouldn't touch the First Amendment, as that in the end protects less than a lot of people believe it does.
"Unconstitutional" arguments only go so far. I am not America (I'm a proud Australian) so I can easily see the incredibly obvious and ridiculous destruction "freedom" in your country entails.
Anonymity services can still exist without fostering an environment to addict children and young adults to social media or a device... and without your precious "rights" being taken away.
Hopefully this will reduce the number of people using social media.
Not necessarily, consider the counterexample of devices with parental-controls which--when locked--will always send a "this person is a minor" header. (Or "this person hits the following jurisdictional age-categories", or some blend of enough detail to be internationally useful and little-enough to be reasonably private and not-insane to obey.)
That would mostly put control into the hands of parents, at the expense of sites needing some kind of code library that can spit out a "block or not" result.
The definition of "social media" in this bill actually seems to exempt anonymous social networks since it requires the site "Allows an account holder to interact with or track other account holders".
Ban portable electronics for children. Demand that law enforcement intervene any time it's spotted in the wild. If you still insist that children be allowed phones, dumb flip phones for them.
It could be done if there was the will to do it, it just won't be done.
No, it isn't. Check out Yivi [1]. Its fundamental premise is to not reveal your attributes. It's based on academic work into (a.o.) attribute-based encryption. The professor then took this a step further and spun off a (no profit) foundation to expand and govern this idea.
[1] https://privacybydesign.foundation/irma-explanation/
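To give a feel for the idea (this is not the actual Yivi/IRMA protocol, just a toy illustration of selective attribute disclosure; it lacks the unlinkability and zero-knowledge properties the real system has, and every name in it is invented):

    # Toy Python sketch: the issuer certifies attributes separately, so the holder can
    # reveal only the one attribute a verifier asks for (e.g. "over16") and nothing else.
    # A real system would use public-key signatures (or ZK proofs) so the verifier never
    # needs the issuer's secret; HMAC is used here only to keep the sketch short.
    import hmac, hashlib

    ISSUER_KEY = b"issuer-secret-demo-key"  # stand-in for the issuer's signing key

    def issue_credential(attributes: dict) -> dict:
        return {name: hmac.new(ISSUER_KEY, f"{name}={value}".encode(), hashlib.sha256).hexdigest()
                for name, value in attributes.items()}

    def disclose(attributes: dict, signatures: dict, requested: str) -> dict:
        # Only the requested attribute and its signature leave the wallet.
        return {"name": requested, "value": attributes[requested], "sig": signatures[requested]}

    def verify(d: dict) -> bool:
        expected = hmac.new(ISSUER_KEY, f"{d['name']}={d['value']}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, d["sig"])

    attrs = {"over16": "yes", "name": "Alice", "birthdate": "2005-01-01"}
    sigs = issue_credential(attrs)
    print(verify(disclose(attrs, sigs, "over16")))  # True; "name" and "birthdate" were never shown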
How many social media users who create accounts and "sign in" are "anonymous"? How would targeted advertising work if the website did not "know" their ages and other demographic information about them? Are the social media companies lying to advertisers by telling them they can target persons in a certain age bracket?
I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.
We have a ban on gambling for minors, so if you see social media as more harmful than gambling (personally, I do) it probably makes sense.
Gambling is outright banned for a majority of regions in the US, not just kids. I don't think they are equally bad, just different bad. Gambling is addictive, and it destroys people. Social media is addictive and socially toxic, on the whole it erodes the very fabric of a society.
It's interesting how every generation seems to decry new forms of media as eroding the fabric of society. They said it about video games, television, music, movies, etc. I'm sure we're right this time, though.
… we’ve been right. Television, music, video games, movies, etc., have decimated social capital and communication in the western world. It is not uncommon to “hang out” with people who are your “friends” without ever interacting with them in any meaningful way because everyone is focused on, e.g., a television. That’s assuming everyone isn’t too busy playing video games to leave their houses.
Whether you agree or not that they’re “eroding the very fabric of a society” (I would argue they are), it should be acknowledged that almost all the downsides predicted have come to pass, and life has gone on not because these things didn’t happen, but in spite of them having happened.
People are interacting through online games quite often though. You can't throw them all in one basket. You can make friends / longer term connections that way (I did) or keep in touch with people you don't live near to.
That's not community, though. You're not inhabiting the same space, sharing the same problems, and trying to work them out together. You're not helping each other out if someone falls into trouble.
It's up to you what you build that way. You can build a real community, you can build a shallow friend group, the medium doesn't have to limit it. My boss met his wife in a MUD.
You can meet your wife on Tinder, via ads posted in a newspaper, or on a cruise. It does not mean that those are communities.
You can do all of those things sans inhabit the same space with people you meet online. I met my 2 girlfriends online and we support each other through everything even though we aren't in the same place most of the time. I'm part of a niche group on reddit that supports each other emotionally in a way I've struggled to find in physical spaces. I still meet up with people in physical space, but I absolutely get meaningful social connection from online spaces.
It seems that all data on the subject of social media point emphatically to, Yes! It's terrible for adolescents[0] (and probably society writ large?).
[0] https://jonathanhaidt.com/social-media/
Yeah, none of those are comparable. TV never led to children bullying each other anonymously, which then leads to kids committing suicide.
Television clearly did something pretty significant to society. Cable news makes the country more polarized; that we didn't do anything about it doesn't mean it wasn't a problem. We just missed, and now the problem is baked into the status quo.
Video games are typically fiction, so the ability to pass propaganda through them is usually a little more limited. It isn’t impossible, just different.
Social media is a pretty bad news source for contentious issues. We should be pretty alarmed that people are getting their news there.
Social media isn't new. It's over 20 years old by now. If you count things like forums you're looking at 30+ years old.
Apples-to-oranges comparison.
Plenty has been written already on the ravaging effects of social media on society and it's pretty plain to see.
https://www.mcleanhospital.org/essential/it-or-not-social-me...
https://www.psychologytoday.com/intl/blog/the-myths-sex/2022... missing-out-the-benefits-sex
Not anymore. A 2018 Supreme Court decision opened the floodgates to legalized sports gambling, much of it online. The only states with existing bans are Hawaii and Utah, which combined have only 5 million residents.
https://en.m.wikipedia.org/wiki/Murphy_v._National_Collegiat...
This is just blatantly not true; anyone who has tried to gamble on sports can tell you that companies go to quite incredible lengths to make sure that nobody outside of the few jurisdictions where they're legal can gamble online. I live in Nebraska, and I have to travel across to Iowa before I can do anything.
True, it's state by state, but a lot of states allow it. And even if you're not in the right state you can always go to sites from other countries (e.g. betonline.ag)
Oof.
The ban we have on gambling seems weak. From trading card games to loot boxes to those arcade games that look to be skill-based but are entirely up to chance, children are allowed to do it all. The rules feel so inconsistent that they appear arbitrary in nature.
Agreed, the gaming industry has done its damnedest to undermine the restrictions on gambling.
There's a pretty clear difference between gambling and "gambling": whether you're winning money.
Several governments have already effectively banned sites like Pornhub by creating regimes where people have to mail their ID to a central clearinghouse (which creates a huge chilling effect.) The article talks about “reasonable age verification measures” and so saying it’s unenforceable seems a little bit premature. Also, you can bet those measures won’t be in any way reasonable once the Florida legislature gets through with them.
You're proving the argument that the parent set forth. Anyone who wants to visit Pornhub can just visit one of the many sites that isn't abiding by the new law. However, that's not due to a lack of legislation, but rather a lack of enforcement, or, perhaps, enforceability. If laws always worked I'd be for more of them. My argument is not that we should never make laws because it's futile, but rather that some laws are more futile than others, that laws going unenforced weaken government, and that enforcing them inequitably is unjust.
Also, social policy enforcement is a generational thing. The UK is only just getting toward outright banning cigarettes by making it illegal for anyone born after X date to ever buy them. Eventually you have a whole generation that isn't exposed to smoking and on the whole thinks the habit is disgusting, which it is.
Except that some people born after that date will still acquire them, get addicted, and then what? Prosecute them like drug possession?
It's infantilizing and dumb. Grown adults should be allowed to smoke tobacco if they so wish, and smoking rates are already way down due to marketing and alternatives. No one needs to be prosecuted.
You don't need to prosecute any buyers at all though. All you need to do is make it illegal to sell in shops, and illegal to import. There will be a black market, sure, but how many people are going to go through the trouble and expense to source black market tobacco? Not that many. And everyone benefits because universal healthcare means everyone shares the cost of the health effects that are avoided.
Should mention the govt. has to find budget to fill the gap left by tobacco tax revenue, but they've been slowly doing this as demand has slumped since 2008.
Just see how many people already go through that trouble to source illegal drugs...
I think it's hyperbolic to look at tobacco like other drugs. Tobacco is a lifestyle thing, it doesn't get you high, it's a cultural habit. There are only upsides to getting rid of the social demand for it.
If you think taking tobacco away from consumers is infantilizing, why yes, yes it is. We are dealing with children's futures. Adults get to continue smoking; children are less likely to even want to smoke as social acceptance goes down, and with that there is less and less desire to smoke. Nicotine doesn't do much other than get you addicted; no one is chasing a pronounced high with it, people start smoking because it's perceived as cool.
I can't imagine an adult wanting to start smoking, most adults get addicted in their teens.
I think you can have an import ban, and a black market, and still see significant gains in eroding the demand. I do not think people should be prosecuted for possession, and the UK will probably make some bad decisions there, but that doesn't mean the overall policy is bad.
In my opinion, these governments haven't implemented 'effective' bans (though maybe chilling, as you say) but primarily created awkward new grey markets for the personal data that these policies rely on for theatrics. Remember when China 'banned' youth IDs from playing online games past 10PM? I think a bunch of grandparents became gamers around the same time...
https://www.nature.com/articles/s41562-023-01669-8
Which is exactly what happens for markets that are desirable enough. We compare bans of things not enough people care about, to bans of things that people are willing to do crazy things for. They don't yield the same results.
No policy is 100% effective. Kids still get into alcohol, but the policy is sound.
Another example is some Korean games requiring essentially a Korean ID to play. A few years ago there was a game my guild was hyped about and we played the Korean version a bit. You more or less bought the identity of some Korean guy via a service that was explicitly setup for this. Worked surprisingly well and was pretty streamlined.
At least from personal experience, when there was a period where my ISP in the UK started requiring ID verification for porn, I literally ceased to watch it.
Making something difficult to do actually works to _curb_ behavior.
Does this actually work or does it just push those same people to sketchier websites?
Worked in my case when my ISP required ID for it in the UK.
I just noped out of it entirely.
My main issue is that the only effective way to ban access to a website is to also ban VPNs and any sort of network tunneling. A great firewall would have to be constructed, which I am very much against. Even China's firewall can be bypassed, and it is questionable how much it would be worth operating given the massive costs which would be incurred.
I think the government should invest in giving parents the tools to control their child’s online access. Tools such as DNS blocklists, open source traffic filtering software which parents could set up in their home, etc.
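As a rough idea of what such a tool could look like (the domains and file path here are placeholders, and dnsmasq is just one common home-router option, not a recommendation):

    # Python sketch: turn a plain-text blocklist into dnsmasq-style DNS rules a parent
    # could drop onto a home router to block the listed sites (and their subdomains).
    BLOCKED_DOMAINS = [
        "socialsite.example",
        "videosite.example",
    ]

    def to_dnsmasq_rules(domains):
        # "address=/example.com/0.0.0.0" makes dnsmasq resolve the domain and all of
        # its subdomains to an unroutable address, effectively blocking it locally.
        return "\n".join(f"address=/{d}/0.0.0.0" for d in domains)

    if __name__ == "__main__":
        print(to_dnsmasq_rules(BLOCKED_DOMAINS))
        # e.g. write the output to /etc/dnsmasq.d/kids-blocklist.conf and restart dnsmasq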
There is a societal problem that is beyond just parenting. The peer pressure on kids, the feeling of being left out and ostracized because they are the only ones not on the socials, is something a teen is definitely going to rebel against their parents over. It's part of being a teen. I'm guessing the other parents would even put pressure on the parents denying the social access.
To me, the only way out of this is exchanging one nightmare for another: giving the gov't the decision of allowing/denying access. Human nature is not a simple thing to regulate, since the desire for that regulating is part of human nature.
Is talking to other people online really so bad that we need the government to step in and tell us who we can and can't talk to? How quickly will that power expand to what we can and can't talk about?
I agree that neither solution is perfect, but exchanging an imperfect but undoubtedly free system of communication for one that is explicitly state-controlled censorship is an obvious step backwards.
"Thinking about the children" should also involve thinking about what kind of a society you want to build for them. A cage is not the answer, especially not with fascism creeping back into our politics.
I think you are willingly playing this down as "talking to people online" to make some point. However, it is beyond what one kid online says to another online. It is what predators say to those kids online. I don't just mean Chester and his panel van. I'm talking about anyone that is attempting to manipulate that kid regardless of the motive, they are all predators.
Social media has long since passed just being a means of communicating with each other, and you come across as very disingenuous for putting this out there.
I think people are being incredibly disingenuous when they imagine that the government won't abuse this power to censor and harm marginalized communities. Many states are trying to remove LGBT books from school libraries for being "pornographic" right now, for example. All it takes is some fancy interpretations of "safety" and "social media" for it to become a full internet blackout, for fear of "the liberals" trying to "trans" their kids.
I don't deny that kids can get into trouble and find shocking or dangerous things online. But kids can also get in trouble walking down the street. We should not close streets or require ID checks for walking down them. Parents should teach their kids how to be safe online, set up network blocks for particularly bad sites, and have some kind of oversight for what their kids are doing.
Maybe these bills should mandate that sites have the ability to create "kid" accounts whose history can be checked and access to certain features can be managed by an associated "parent" account. Give parents better tools for keeping their kids safe, don't just give the government control over all internet traffic.
I've suggested no such thing. In fact, I described putting regulations in place as a "nightmare". Parenting alone will not work. Self regulation from the socials will not work. The entire situation is a nightmare of our own making. There is no simple solution. These tendencies of human nature were present long before social media. It was just fuel for the fire for the worst qualities.
They won't engage on this topic in a fair and balanced manner. I've tried. They want unlimited ability to push agendas and incite strife and they just throw out keywords and thought-terminating cliches when criticised; diversity, marginalisation, fascism, they just gaslight and gaslight and gaslight.
I mean, don't stop trying, but you'll be very frustrated
It seems like you are in favor of something that requires coordination, but don't believe in coordination. Is there a different way you think this could be achieved?
I don't believe in the disbelief of coordination, so it seems like we're at a bit of an impasse. Please expound.
I think GP means the coordination between parents, and I agree on that: if you can’t get a strong majority of parents to agree on keeping their children off of social media & smart phones, you as parents have the choice between two outcomes: enforcing isolation vs. letting social media slip through and just trying to delay as much as possible.
Almost all parents either don’t care or opt for the second option (which I also think is the better one), but the dynamics today are that between 10-12yo the pressure to get your kid a smartphone will mount and you have to give in at some point. Being able to wait till 16 would be much better IMO.
I used to want no govt intrusion for this. Then I understood how there are teams of PhDs tweaking feeds to maximize addiction at each social network. I think there could even be limits, or some sort of tax on gen pop.
It would be less imposing on the populace if you just demanded a government hit squad kill those PhDs instead of demanding everyone submit to some onerous government crapware to access the internet.
There are plenty of middle ground approaches - one possible one: app default would be restricted time use. To get unrestricted use, you authenticate your birthdate with the OS using a driver's license, and only with the OS. The mobile OS only confirms to the app that the user is above the age requirement (it does not share the birthdate). The app can only query whether the user is above 18yrs and nothing else. Easy peasy.
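Roughly like this, with made-up names (just a sketch of the shape of the API, not any real OS feature): the OS verifies the birthdate once and afterwards only answers yes/no age queries, never handing the birthdate to apps.

    # Python sketch of an OS-level age gate: apps can only ask "is the user over N?".
    from datetime import date

    class MobileOSAgeGate:
        def __init__(self):
            self._birthdate = None  # held by the OS only, never returned to apps

        def enroll_from_license(self, birthdate: date) -> None:
            """One-time verification step (the license scan itself is out of scope here)."""
            self._birthdate = birthdate

        def is_over(self, years: int) -> bool:
            """The only query an app is allowed to make."""
            if self._birthdate is None:
                return False  # unverified users stay in the restricted default mode
            today = date.today()
            age = today.year - self._birthdate.year - (
                (today.month, today.day) < (self._birthdate.month, self._birthdate.day))
            return age >= years

    gate = MobileOSAgeGate()
    gate.enroll_from_license(date(2004, 5, 1))
    print(gate.is_over(18))  # the app sees only True/False, never the birthdate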
I'm in favor of kids not using social media, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required.
People said the same thing about age restrictions for smoking, alcohol, movies, and on and on and on.
It's not some unsolvable new problem just because it's the ad-tech industry.
Your comment begs the question of whether or not those age restrictions are a "solved problem." In the US, government age restrictions on movies DO NOT exist for the exact reason the government cannot impose age restrictions on social media: the First Amendment flatly forbids it.
So, maybe it a law - but requiring parents-guardians to enforce it, and not the state - sounds like a reasonable inbetween?
I think it should just be opt-out. A parent can opt-out without any stated reason, easily, and overrule the law.
So kids get a social media "license," maybe.
We don't even have to speculate; isn't it already the case for <13yo? Or is that just Europe? Anyway - yeah, of course they're still on it. Expect less compliance and harder enforcement the older they are, not more/easier.
The only protection in the US is technically "collection of personal information" via COPPA[0], which you can argue would kneecap social media. Any parent can provide consent for their child, however. Children themselves can also just click the button that says they are over 13 if it gets them what they want.
[0]: https://www.ftc.gov/legal-library/browse/rules/childrens-onl...
There are degrees of enforceability.
When I was first getting online, the expectation was that you at least had to be bright enough to lie about your age. Now I have to occasionally prune my timeline after it fills up with "literally a minor." Even an annoyance tax might have some positive effect. Scare the pastel death-threats back into their hole...
Is there an alternative? Self-control - as we have now - brought us here. If the government shouldn't step in, then the only other option left (the only one I can see) is magic. And we have a bad record with magic.
Social media's existence is predicated on their algorithms being good at profiling you. Facebook's already got some level of ID verification for names, where they'll occasionally require people to submit IDs. No reason that similar couldn't be applied to age if society agreed it was worthwhile.
I'm in favor of kids not using porn, but not of the government forcing this on people nor spinning up whatever absurd regulatory regime is required. And the chance of actually enforcing it is zero anyway. It's no more realistic to expect this to work than to expect all parents to do it as you say. It's just wasted money plus personal intrusion that won't achieve anything.
I only changed 2 words for 1.
I’m normally anti-regulation as well, but as a parent I’m fully on board with this. The amount of peer pressure to be on social media is insane.
Are you asserting that parents should not have the rights to determine this for their own children?
Parents don’t have the right to get their kids a tattoo, vote or buy alcohol before a regulated age. Is this different?
So your answer is yes?
Your position is that decisions about youth access to social media should be fully taken from the parents and made by govs instead. Penalties can be assumed from your examples.
This is the reality that you want imposed on parents and children - yes?
There's actually a really simple and elegant penalty - forfeiture of the device used to access the social media. With all seized devices to then be wiped and donated to low income school districts/families. This gets more complex when using something like a school computer, but I think it's a pretty nice solution outside of that. That's going to be a tremendous deterrent, yet also not particularly draconian.
Yep thats what low income families need: more cell phones.
Now they too can get on social media!
This is already the reality for alcohol and plenty of other things. Maybe not everywhere. Reality check: parents giving unrestricted access to these things are usually perceived as irresponsible.
Is your position that no age limits should exist for anything?
Actually, in many states there is no minimum age to get a tattoo with parental consent.
https://en.wikipedia.org/wiki/Legal_status_of_tattooing_in_t...
Likewise for cosmetic surgical modification of the penis by a doctor, or puncturing the earlobe by a non-medical-professional.
That is…shocking
Missouri law allows minors to consume alcohol if purchased by a parent or legal guardian and consumed on their private property.
edit: Apparently Missouri is not the only state. I had trouble finding a definitive list though. There are also other exceptions such as wine during religious service.
While there are exceptions, and in general exceptions seem pretty common, they still require businesses to officially get approval, and the law gives parents power to enforce rules that would otherwise be hard to enforce. Even with the exceptions that allow children to legally drink, I would be surprised if that led to more kids drinking than alcohol obtained illegally, which means the question should come back to how well the law works (obviously not perfectly, but there is a large gap between perfect and so poor that it is useless purely from an efficiency perspective).
Are you a parent? It's not as easy as saying "no social media" to your kids. In this day and age, it's basically equivalent to saying "you can't have friends". Online is where kids meet, hang out, converse, etc. I'd LOVE to go back to the days before phones and social media, when kids played with neighbours and rode their bikes to their friends' houses, but that's slowly slipping away.
We try pretty hard to get our kids to play with their friends in person (we invite them or give rides to playdates) but what do they do when they meet up? Sit on the couch with their tablets and play virtually in Roblox :-)
You organize playdates for middle & high schoolers?
They didn't say anything about the age of their kids. Roblox is played by almost all ages.
When my friends would meet up in the 80's-90's, quite a bit of Nintendo happened. Is it really that different? The proportion of video games should eventually drop (not to zero), in favor of (if you're lucky) talking and whatever music bothers you the most.
Do you also think that children should be allowed to buy cigarettes? I'll be honest, I am not certain that social media is any less deleterious than tobacco.
I'm a pretty pro-market guy, but there are times when the interests of the market are orthogonal to the interests of mankind.
I can pretty confidently say that the half a million deaths a year attributable to smoking is a little more deleterious than getting bullied online and the suicides which follow. Many orders of magnitude more.
Just because one thing is worse than the other doesn't mean the less-bad thing is suddenly good.
The original statement was:
To which I pointed out that cigarettes kill far more people than social media. And your response was somehow that I'm implying that a less-bad thing is good? Are you sure you're following the conversation? It's really not clear that you're addressing anything I said, and it's unclear what your point is.
Yes, parents should not have unlimited rights to determine what is good/bad for their children.
Social media is powerful, addictive, and dangerous. Pretty much anywhere on this earth, parents will end up in jail or lose custody of their kids if they give them harmful substances like drugs. Social media should be regulated the way drugs, alcohol, and cigarettes are regulated.
Because drug regulations are so effective with no collateral damage at all. /s
I'm almost invariably anti-regulation, but in this case - absolutely!
There's extensive evidence that social media is exceptionally harmful to people, especially children. And in this case there's also a major network effect. People want to be on social media because their friends are. It's like peer pressure, but bumped up by orders of magnitude, and in a completely socially acceptable way. When it's illegal that pressure will still be there because of course plenty of kids will still use it, but it'll be greatly mitigated and more like normal vices such as alcohol/smoking/drugs/etc. It'll also shove usage out of sight which will again help to reduce the peer pressure effects.
This will also motivate the creation/spread/usage of means of organizing/chatting/etc outside of social media. This just seems like a massive win-win scenario where basically nothing of value is being lost.
Parents can make accounts to use in collaboration with their children.
The law prevents the corporation from directly engaging with the child without parental oversight.
Because regulation worked so well in eliminating peer pressure for drinking, smoking, and drugs.
Regulation worked remarkably well on smoking.
How so?
A massive, decades-long decline in the habit? Stemming from ad restrictions, warning labels, media campaigns, taxation, legal action by states/Feds, etc.
Have you not seen people vaping at a young age now?
Vaping isn't the same thing as smoking.
They are consuming nicotine, and tobacco companies are invested in / are the companies producing products in that space. It seems functionally to be the same.
Only if you ignore... a lot. Cancer rates, smoking sections in restaurants, the smell, the yellow grime and used butts sprinkled everywhere, the impact on asthmatics... Smoking a cigarette gets you a lot more than just the nicotine.
A smoker moving to vaping is an enormous benefit to health and society.
That sounds like you are being disingenuous. Smoking sections haven't been a thing in the US for a long time (I went to the last one I could find around 2009). Waste from single use vapes is also a huge problem. Similarly there are health effects specific to vaping, time will tell if cancer is among them.
Yes, the regulations that made this happen are good. That's my point.
Nothing like the cigarette butts that used to be everywhere.
We've plenty of data to safely conclude vaping is safer than smoking tobacco. That doesn't make it safe, but it's absolutely safer.
No, they are not remotely the same. Nicotine isn’t really that harmful, but combustion byproducts very much are. Also the effects on bystanders are orders of magnitude better.
Most of the harm from smoking comes from the smoke, not the nicotine or associated addiction.
Nicotine by itself is harmless besides the addictiveness. A nicotine addiction is not going to drastically affect your mental state or cause socially disruptive behavior like domestic violence or armed robbery so it’s really nothing to be concerned about.
I agree it is different, but the jury is out on whether it is better. Banning "social media" is likely to push users to a "lite" version of it. I'm not convinced that will be better.
I would call IRC "social media lite", and it is indeed better.
Vaping is way cooler than smoking.
Vaping non-flavored vapes has a basically negligible impact on your health.
This must be why there’s not 50 vape shops in my town with big neon signs.
Don't get me wrong, I'd love to see vaping turn into a prescription-only smoking cessation aid, but it's not smoking. I'm 100% happy with even a one-for-one replacement of smoking for vaping, even in kids, given the dramatically lower risk of resulting health problems.
It's more influence than actual regulation enforcement. Smokers these days are seen as social pariahs in some circles.
Growing up in the 80s, no one I knew avoided cigarettes because of regulations. Cigarettes were easy to get your hands on as a kid, even though it was illegal to sell to children. We avoided cigarettes because we had a general understanding of the health risks, because we knew our parents would beat our asses if they caught us smoking, and because smoking makes your clothes smell like shit.
Yet that wasn't enough to stop your generation from smoking; they added even more regulations, and then the generation after yours stopped.
You also had to go in store to buy cigarettes, so the application of regulation would work a little differently in the case of social media.
Drinking, smoking and drugs don't depend on a central point of control. Social media companies of any significance can be counted on two hands, and are accountable to corporate boards.
Absolutely. Almost no kids smoke cigarettes (vaping non-flavored varieties has almost no risk associated with it) and drunk driving is a shadow of what it used to be. Getting alcohol for someone under 21 is not child's play either.
It absolutely did for smoking and drunk driving.
I'm fairly sure more 13 year olds are on social media, than are drinking, smoking, or on drugs.
Answering a problem with unenforceable garbage like this doesn't seem like a very sound strategy.
This discussion can be seen every time the EU decides on some regulation against the tech industry. A lot of people will insist it won't be enforced; then, when we see the first fines, those same people will insist it won't move the needle; then, when the tech giants do change their course a bit... well, the tech bros will always find a reason to argue against doing anything to curb tech.
This is not a "tech bro" thing. It's not particular to tech nor bros. This a business thing. Phillip Morris weren't "tech bros".
You are right. I had the HN crowd in mind when I commented.
It doesn't have to be perfectly enforceable to have a positive effect.
Even just making it illegal to promote social media to children would have a huge effect.
Seeing this opinion all over this thread. Unenforceable laws are one way to create the pretext for discriminatory enforcement.
And you have to be trolling with your second paragraph.
Are you aware of e.g. https://www.nbcnews.com/tech/social-media/facebook-documents... ?
Imperfect enforcement is a foregone conclusion, and when it comes to things like this it's somewhat expected. The same can be said for pornography, drugs, alcohol and tobacco (remember Joe Camel?), and anything else that would fall under blue laws.
The goal of this is to bring attention to the fact that it's a problem and should be seen as undesirable, like pornography or Joe Camel. The cancellation of Joe didn't prevent kids from getting cigarettes, but it did draw attention to the situation, and there has been a marked decline in youth smoking since the late 90s when the mascot was removed. It's correlative, for sure, but the outcomes are undeniable. The same happened with the DARE program and Schedule I drugs (except for marijuana, iirc).
Multiple studies have determined that the dare program is entirely or almost entirely ineffective.
Here's just one: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1448384/
You can be in favor but in the US it is unconstitutional for a gov to broadly restrict speech. It's why each of these age verification + social media laws eventually get tossed. Legislators know this (or are too dumb to) but it's not their own dollars that are getting burned during this vote-baiting performance.
Honest question: is an American child granted the right of free speech? Or do you get that right at a certain age?
This isn't the right way to frame the question. The Constitution prohibits government actors from infringing upon the right to free speech[0], including that of children; meaning e.g. an attempt to censor a child's treatise on some subject would be as illegal as doing the same to an adult. It's not that you get a "free speech license" when you turn 18. The restrictions on government power applies whether or not the person being infringed upon is a minor or an adult.
[0]: This isn't an absolute, unqualified right. A child breaking into a profanity-laden tirade in the middle of class at a public (meaning taxpayer-funded and an extension of the government) school may believe they are exercising their First Amendment rights, but the school may still legally send that child home, for example, on the grounds that it disrupts the schooling of other children.
The FCC seems to get around that pretty easily by not allowing nipples on TV.
Only on broadcast TV, and the decision is fundamentally reliant on the nature that RF spectrum is a finite resource to be able to justify the restriction.
SCOTUS has routinely struck down prohibitions against the same things in other media, including explicitly the internet.
Deprecating the Constitution is long overdue.
IMHO the general idea isn't terrible, but the implementation is subpar. But hey, that's why it's good that it's not yet US-wide, as it means there is time to make improvements.
- I'm not the biggest fan of a hard cutoff
- addictive dark patterns which cause compulsive use should be generally banned or age restricted no matter where they are used; honestly, just ban most dark patterns, they are intentional, always-malicious consumer deception not that far away from outright committing fraud. (And age restrict some less dark but still problematic patterns.)
- I think this likely will make all MMORPGs (and Roblox, lol) and similar 16+, and I'm quite split about that. I have seen people between 14-18 get addicted to them and mess up their education path. But I have also seen cases of people who might not be alive today if they hadn't found a refuge and companions in some MMORPG.
- I guess if it can make platforms like YT, Facebook, Instagram, Snapchat etc. implement a "teen" mode with fewer dark patterns and less tracking, it would be good.
- The balance between proving your age (to make things available) and keeping privacy is VERY tricky (especially in the US), and companies, the government and spy agencies will try to abuse the new requirements for age verification to spy more reliably on everyone 16+.
- It's interesting how that affects messengers. Many have fewer dark patterns, some do not track users, or can easily decide not to track children. They aren't social networks per se, but most have some social-network-like features. Even those which do not try to create compulsive use might still end up with it as long as there is "live" chatting.
How would you write a law that accomplishes your goal?
I'm not a lawyer, but I imagine something like deceptive advertising regulations could serve as a starting point.
The usage of user interface design patterns which take advantage of [..] to cause compulsive use is not allowed. In the context of this law, "user interface" refers to the mechanism by which a user interacts with software; this includes any kind of user interface, no matter which medium it uses for presenting information and allowing interaction, and no matter in which form it is presented in the software.
Where [..] is a rough definition of dark patterns, of which multiple exist in CS, and which you can base on factors like 1) deceptiveness, 2) intent to manipulate the user's behavior, 3) how it interacts with various human feedback systems, etc.
The thing about laws that people on HN often tend to forget or complain about is that they intentionally do not need to be perfectly precise scientific definitions, predicate logic, or anything like that. This means you can reasonably describe them by likely effect and whether it seems plausible that the effect was intended (without bothering to actually determine intent), without needing to go into _any_ technical details beyond making clear that indeed any form of user interface is covered.
I can't agree. This teaches kids that the government is the answer to everything. this should 100% be the responsibility and decision of parents. Kids are different, and these one-size-fits-all authoritarian tactics that have become a signature of the current GOP Floridian government are just the beginning of the totalitarian christofascist laws that they want to implement. Before you ask, I am a parent, and my kid's devices all have this crap blocked and likely will remain that way until he's at least 15, give or take a year depending on what I determine when he gets to that age. He knows that there are severe ramifications if he tries to work around my decision, and will lose many, many privileges if such a thing happens.
Yeah I hate that at the first sign of trouble people run to the government.
My kid, and I have told my wife this, is ready to view whatever he wants on the internet when he can circumvent me.
The libertarian equivalent is the model we are currently running, and it hasn't worked. It's psychologically addictive and harmful, and has parallels with smoking in a way, even if the evidence for its harm is less robust.
Do you think simply labelling it as bad is sufficient? Parents have no idea.
If you ignore history, yes, it may teach kids that the gov is the answer. Don't ignore history.
The answer to our problems is not less freedom. It should never be less freedom. I'm opposed to any law that restricts freedom of information, no matter the age. I think we need to do better at educating kids on online behaviors, and we should hold social media companies accountable for addictive features, but what we absolutely shouldn't do is blame little Jenny and take away her access to groups and social interaction online.
Yes. Little Jenny should also be free to purchase and drink a bottle of whiskey, but only if her parents are sufficiently negligent.
I would've backed your argument up until a few years ago, but the science is coming down pretty hard now showing that social media use is absolutely detrimental to still-developing minds.
I’m not refuting that. Only I don’t think we should be reacting with laws to restrict access. Who’s to say “what” a social media platform is? Does this mean discord as well? Steam? Roblox? Where do we draw the line on what is social media and what is social?
I am an adult and I wish someone would take social media away from me. Honestly, I think social media has done more harm than good and I wish it would just cease to exist.
However, especially in Florida, social media may be the only way for some teens to escape political and religious lunacy and I fear for them. I think it's not wise to applaud them taking away means of communication to the "outside", in the context of legislation trends and events there.
Why don’t you take it away from yourself? Just delete your account.
I tried but then got betrayed by myself.
Addiction mechanics are a real thing and sooner or later social media will pop into your life again. For me it's Reddit, not Instagram or Twitter. Luckily, HN always makes me ragequit at some point, cause y'all are a bit special.
Right there with you. This is probably the only thing I’m on board with the republicans about.
Funny, conservative beliefs are like a slow feed as you get older and have kids... This is just the gateway.
It happened to me and millions of others.
I’m pretty old and I don’t think anything else remotely appeals to me.
I'm not American, I think it's perfectly reasonable to ban kids from the internet just by applying the logic used for film classification. Even just the thumbnails for YouTube (some) content can go beyond what I'd expect for a "suitable for all audiences" advert.
This isn't an agreement with the specific logic used for film classification: I also find it really weird how 20th century media classification treated murder perfectly acceptable subject in kids shows while the mere existence of functioning nipples was treated as a sign of the end times (non-functional nipples, i.e. those on men, are apparently fine).
Also, I find it hilariously ironic that Florida also passed the "Stop Social Media Censorship Act". No self-awareness at all.
Film classification is also dumb. In Australia Margaret Pomeranz used to run banned film viewing sessions that ended up in cops wrestling them for dvds.
https://www.theguardian.com/film/2023/jul/31/margaret-pomera...
Hence my second paragraph.
The first is saying that no matter what your rules are, the internet may at any time show you a clip from the highest rated category and should be treated as such.
The internet may also at any time show you outright banned content, but that's often treated as a separate battle no matter if that's Tiananmen Square (or "content promoting terrorism", although now I'm wondering if the Chinese Communist Party think the former is a subset of the latter?) or sexual acts you're not allowed to show in film (varies by jurisdiction).
I find myself agreeing with this. Me 15 years ago would have raged at this. I have kids now and they are pressured to join all sorts of social media platforms. I still don't allow them to have it but I know they take a slight social hit for it.
There is zero positive to giving kids the ability to access social media sites designed to be addictive when they don't have the mental faculties to determine real from not real. Many adults seem to suffer from this as well. Plus, kids don't understand that the internet is forever; there's really no need for an adult looking for a job or running for office to be crippled by a questionable post they made as an edgy teen.
I'm against a lot of government regulation but in this case I am even more against feeding developing kids to an algorithm
Just remove the temptation and pressure all together.
The internet is just as real as anything else. Out of touch.
I have two teens and have yet to see the negative effects of social media for them or any of their peers. Not to say it doesn't exist, but I sincerely doubt it's as awful as the doomsayers think. My personal observation from being raised in the 80s is that kids were far more awful to each other then than now.
Fits my observation.
Every generation has this freakout about something or another. I expect modern kids will end up better at handling social media than their parents are.
You're not the only one!
I also believe that this is a Big Deal™ that we need to take seriously as a nation. I have yet to see any HN commentator offer a robust pro-social-media argument that carries any weight in my opinion. The most common "they'll be isolated from their peers" argument seems pretty superficial and can easily be worked around with even a tiny amount of effort on the parents' part.
As an added bonus, this latest legislation removes the issue of "everyone is doing it". I mean, sure, a lot still will be—but then it's illegal and you get to have an entirely separate conversation with your kid. :)
This is so incorrect it makes the flat earth theory look good.
Same here. With the way the internet is nowadays, it's probably best to keep kids off the internet until they are older. One just has to look at what's on places like YouTube 'Kids' to see all the stuff that is not kid friendly and probably detrimental to their mental health.
16 seems too young. Why not tie it to drinking age. There are way too many people who have gotten better at online manipulation in the past few decades.
Social media replaced the old internet, so this is like banning kids from the internet. Talk about pulling the ladder up behind yourself.
Yes. Rather than mandating verification, can we just mandate that there be a registry, or that websites are legally required to include a particular HTTP header, combined with opt-in infrastructure for parents to use?
e.g. You could set up a restricted account on a device with a user birthdate specified. Any requests for websites that return an AGE_REQUIRED header that don’t validate would be rejected.
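Something like this on the device side, assuming the AGE_REQUIRED header above (the header name is taken from my comment; everything else is made up, and real header names would likely use hyphens):

    # Python sketch: a restricted account's network layer drops responses whose
    # advertised age requirement exceeds the account holder's age.
    from datetime import date
    from urllib.request import urlopen

    ACCOUNT_BIRTH_YEAR = 2012  # set by the parent when creating the restricted account

    def allowed(url: str) -> bool:
        age = date.today().year - ACCOUNT_BIRTH_YEAR  # approximate age is fine for a sketch
        with urlopen(url) as resp:  # in practice this check would live in the OS/network stack
            required = resp.headers.get("AGE_REQUIRED")
        if required is None:
            return True   # unlabeled sites: a policy decision; here we allow them
        try:
            return age >= int(required)
        except ValueError:
            return False  # malformed header: fail closed

    # allowed("https://example.com/")  # True unless the site advertises an age requirement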
One thing I've found interesting with social media and children is that almost every parent I know recognizes the impact social media has on their children, but they willfully ignore it or feel powerless to avoid it. I hear stuff like, "It's impossible for kids to not be on social media. It's required these days." and "The social consequences will be worse for them if they're not on social media."
Not even close. I am with you
No, I definitely agree. I'm a little skeptical of how they'll enforce this, but ultimately I think fewer kids on the internet and social media will be a positive, and I agree that it doesn't seem like parents have managed to figure out how to address this.
we have laws against drinking and smoking / vaping… for kids under the age of 16, and those have been working so GREAT that we should get more of these laws in place (federal preferably) so that our kids can be even healthier adults. moar laws please - we need as much of the federal government in our families and parenting as we can get; I'd suggest 1 federal agent be assigned to each child born to ensure healthy adulthood
I'm in favor of this if the only enforcement actions are against social media companies for being predatory, and not against families for breaking the law and allowing their kids on social media. And it's useful for indicating that social media on the balance is not good for kids.
I am against government regulation of websites and classification of websites. If we allow government to do this, it will be politicized at some point.
We need to find a better way; for example (just a quick idea), social websites run and protected on a school-by-school basis. This way, it can be regulated and controlled.
In other words, government should regulate what they already have control over, not impose new control measures over things they don't.
Can they use email? Text? The telephone?
It’s dumb policy, because Florida GOP. The smart move is to target advertising for kids. If you attack the ability to advertise to the underage audience using mom’s iPad, social media will self police.
You and I both know they're doing this for two reasons. To put a heavy burden that will be almost impossible to enforce on tech companies so they can easily punish them for political reasons and to stop teens from learning early how shitty the GOP is.
Random ideas!
- Make this an ISP level thing? Somehow? They already know the makeup of a household. If they know a house has kids, something something ToS "You as the parents are liable..." Then maybe repeat those scary RIAA letters but "for good" when someone in that household hits a known adult IP?
- Maybe browsers send an "I'm an adult" flag similar to "Do Not Track," and to turn it on, the user has to enter a not-to-be-shared-with-kids PIN? If the browser and OS can coordinate, OSes would be able to tell the browser if the user is an adult and skip the PIN entry. (A rough sketch of a site-side check for such a flag is below.)
- Force kids to use a list of Congress-approved devices that gate access to the wider Internet? YouTube Kids but for everything. Yes, hacker kids will be able to get by, but this being Hacker News, they'd deserve the fruits of that particular labor.
Just spitballing. Anything obvious I'm missing?
PS- I am neither for nor against the Florida-type legislation as of this comment.
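On the "I'm an adult" flag idea: a tiny sketch of how a site might consult such a flag, with an invented header name (nothing here is a real standard; it's modeled loosely on how "DNT: 1" is sent):

    # Python sketch: serve restricted content unless a hypothetical adult flag is present.
    ADULT_FLAG_HEADER = "Sec-Adult-Content-OK"  # made-up name, in the spirit of "DNT: 1"

    def serve_feed(request_headers: dict) -> str:
        if request_headers.get(ADULT_FLAG_HEADER) == "1":
            return "full feed"
        return "restricted feed"  # default whenever the flag is absent or not "1"

    print(serve_feed({ADULT_FLAG_HEADER: "1"}))  # "full feed"
    print(serve_feed({}))                        # "restricted feed"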
I don’t think so. There’s nothing about social media that makes me feel like kids need it. Hell, if we could ban it for adults that would be an unmitigated good.