All that theatre, and still no option to disable YouTube shorts for child accounts.[0]
[0] https://news.ycombinator.com/item?id=44436960
I don't think YouTube really wants to. They don't really care what kids see, so long as they see the advertisements. Everything to protect kids has come from the outside, not from these companies.
Yeah, it's BS. Unhook works great though, if you're on web.
> If the new system incorrectly identifies a user as under 18 when they are not, YouTube says the user will be given the option to verify their age with a credit card, government ID, or selfie.
Given the numerous security vulnerabilities that make verification data publicly accessible, this is a reason for me to stop using the platform. As soon as the platform classifies me as a minor based on my preference for, say, low-quality memes and cartoons as background noise, I will never visit it again.
This is another reason to never put your photo or video of yourself online.
I think a lineup of specifically branded Android phones and Chromebooks that are locked into using Google Family Link (at least initially), and perhaps more importantly a marketing campaign educating parents on how to set it up and use it, would be a huge win from a PR standpoint.
It'd be a lot more deterministic than this AI approach. More importantly, it would completely avoid the digital privacy issues this solution presents (where normal adult users get blocked or have to provide government ID).
As a user of Family Link (and Apple Screen Time), I think all the pieces are finally there from a usability perspective. Google (and Apple) leadership just needs to coordinate the push among the various internal groups.
What signals could possibly ID teenagers without having hopelessly high false positives?
False positives are a feature, not a bug:
"If the new system incorrectly identifies a user as under 18 when they are not, YouTube says the user will be given the option to verify their age with a credit card, government ID, or selfie. "
https://blog.youtube/news-and-events/extending-our-built-in-...
> We will use AI to interpret a variety of signals that help us to determine whether a user is over or under 18. These signals include the types of videos a user is searching for, the categories of videos they have watched, or the longevity of the account
For example, account created less than N years ago searching for skibidi -> probably teen.
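To make that heuristic concrete, here is a toy sketch. Everything in it (the term list, the thresholds, the function name) is invented for illustration; YouTube's actual model is obviously far more complex and undisclosed.

```python
from datetime import datetime, timezone

# Hypothetical "teen-coded" search terms; purely illustrative.
TEEN_QUERY_TERMS = {"skibidi", "brainrot", "fortnite"}


def likely_teen(account_created: datetime, recent_searches: list[str],
                min_account_years: int = 5) -> bool:
    """Toy heuristic: young account + teen-coded searches -> probably under 18."""
    age_years = (datetime.now(timezone.utc) - account_created).days / 365.25
    teen_hits = sum(
        any(term in query.lower() for term in TEEN_QUERY_TERMS)
        for query in recent_searches
    )
    return age_years < min_account_years and teen_hits >= 2
```

Even this toy version shows the false-positive problem: an adult with a new account who watches the same memes trips it immediately.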
Google probably has enough pictures taken with the person's phone camera and recordings from the phone's microphone, along with all the other standard data sources like browsing history, searches, and so on. There are also likely similar recordings from friends, allowing Google to profile people properly, along with their respective positions in their networks.
So I think they can do it pretty accurately.
> So I think they can do it pretty accurately.
Yes, they can, but they just want more.
Enter numeric code emailed to you by rotary dial.
Ask them to define long distance call or what a Save icon depicts.
TC can't even be bothered to link to the YouTube announcement...
https://blog.youtube/news-and-events/extending-our-built-in-...
Funny. The story here is that YouTube is asking every customer for legal ID, presumably to enhance their adtech stack and target those customers more accurately. Obviously this isn't a great story, so they've screamed "won't somebody think of the children".
I do admire how much they dress it up, though: they've developed tools to stop 17-year-olds from repeatedly viewing body dysmorphia videos that lead to anorexia. Sure, they could've tweaked the algorithm to stop paying people to create anorexia content, but then how would they monetize that 18-year-old anorexic girl?
One random thought/observation: perhaps this might incentivize some youth to seek out higher-brow material?
Overall this seems pretty wild to me and hard to weigh. But in terms of its impact on user behavior: maybe there's a little upside?
I seriously doubt that. They'll either verify to watch slop with the occasional "stop watching slop" popup, or they'll go to a worse platform without verification, e.g. TikTok.
Are YouTube accounts opened more than 18 years ago presumed to be "of age"?
That would decrease the amount of juicy personal data Google collects on you, so unlikely.
While I mostly hope that their tech works...another part of me wonders how easy it would be to build a browser plugin that searches for videos about Frank Sinatra, and clicks on ads for denture adhesives, and ...
My Google account is 20 years old. Think that's another heuristic they can use to guess I'm an adult.