The thing that everybody expected to happen, happened. At least the kids are safe.
Why were these images not encrypted, and why were they retained for longer than was necessary?
> At least the kids are safe.
Are they any safer? Roadblocks rarely stopped me as a kid. These kinds of impediments mostly just pushed what I was doing somewhere out of sight of the gatekeepers, which usually made things less safe. Where do most kids learn to play with fire in modern society? In very, very dangerous places.
This reminds me of a small but fond memory of mine. One of my friends in high school, who I'd known since elementary school, was a bit of a troublemaker. But not terribly so. One day, we found ourselves sitting at the same lunch table. He occasionally smoked, I did not (I still don't). This meant that he had a lighter and I, at the time, did not (I now carry a lighter with me at all times for unrelated reasons).
He made a comment about how good orange peels smelled when you burned them. I leaned into this comment with curiosity and personal ignorance on the matter.
He said yeah, then looked around, made the shush signal, leaned in, and invited me to do the same. He took an orange peel and brushed it across his open lighter flame. Nobody caught us, and I smelled firsthand what he was talking about. Nobody got into trouble over this innocent demonstration. But you sure as hell would have gotten into trouble for this unsanctioned demonstration of fire usage.
That was sarcasm, a satire on the situation and the ostensible purpose of burdening everyone with this.
How does it make kids any safer?
My kids had an honest conversation with me about the possible Wikipedia ban and VPNs maybe a week in. Their classmates were already using them.
https://news.ycombinator.com/item?id=45552348
https://news.ycombinator.com/item?id=45552382
Why were the files asked for in the first place?
> and why were they retained for longer than was necessary?
it's stated in the article. In most cases they weren't, the data breach only affected people who disputed the result of their age verification.
Of course in principle Discord or any third party should never need any photographic identity themselves to begin with if countries would bother to implement a proper trusted identity system where the data stays with an authority and they simply sign off on requests. Like in South Korea or the eID features you have on most European national ID cards.
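For a rough illustration of that "data stays with the authority, they just sign off" model, here is a hypothetical sketch in Python (the field names, nonce handling, and key distribution are all made up for illustration; this is not any real eID protocol). The authority inspects the document once and signs a minimal claim; the service only ever verifies the signature.

    # Hypothetical sketch: the authority signs an "over 18" claim; the service never sees the ID.
    # Assumes the 'cryptography' package; field names and key handling are illustrative only.
    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # At the authority: check the ID document, keep it there, and sign only a minimal claim.
    authority_key = Ed25519PrivateKey.generate()
    claim = json.dumps({"nonce": "abc123", "over_18": True, "exp": int(time.time()) + 600}).encode()
    signature = authority_key.sign(claim)

    # At the service (e.g. Discord): verify the authority's signature; never handle the document.
    authority_public = authority_key.public_key()
    try:
        authority_public.verify(signature, claim)
        payload = json.loads(claim)
        ok = payload["over_18"] and payload["exp"] > time.time()
    except InvalidSignature:
        ok = False
    print("age check passed:", ok)

The only thing a breach on the service side could then leak is a boolean and a signature, not a passport scan.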
So they process 70k disputes per day? If not, why were 70k IDs stolen?
It’s a flawed design. No reason to retain the personal info for longer than the processing time, i.e. the duration of the dispute process itself (not however long it sits in the queue).
The principal engineer who signed it off should go to jail.
It's not 70k per day. A dispute takes longer than a day; this was their entire ongoing dispute queue.
So they were retaining data that they were not actively processing. It was just sitting there waiting to be processed.
Aka, the system design was wrong. The buck has to stop somewhere. Somebody signed it off.
I'm not sure how you're coming to that conclusion. If, for example, the id verification says "your id appears to be fake" and the user disputes it, what happens next? A dispute usually has several back-and-forth steps where one party is waiting for the other to respond.
As simple as: “We are processing your request; if we need more evidence we will contact you.” When their turn comes, ask them to upload their personal data again. Process the request, then delete the data within 24 hours.
If you don’t hear back, even better, less private data to worry about.
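To sketch what that retention rule could look like (all names here are hypothetical; this is not Discord's actual system), once a dispute step has been handled the uploaded image gets deleted after a short window, so the queue itself holds nothing worth stealing:

    # Hypothetical sketch of "process, then delete within 24 hours"; Ticket and delete_blob are stand-ins.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    RETENTION = timedelta(hours=24)

    @dataclass
    class Ticket:
        evidence_key: Optional[str]       # pointer to the uploaded ID image, if any
        processed_at: Optional[datetime]  # set when this dispute step was resolved

    def purge_processed_evidence(tickets, delete_blob):
        """Drop ID images for any dispute step handled more than RETENTION ago."""
        now = datetime.now(timezone.utc)
        for t in tickets:
            if t.evidence_key and t.processed_at and now - t.processed_at > RETENTION:
                delete_blob(t.evidence_key)  # remove the image itself from storage
                t.evidence_key = None        # nothing personal left to leak from the queue

    # Example: one stale processed ticket, one still waiting (no upload requested yet).
    tickets = [Ticket("blob/123", datetime.now(timezone.utc) - timedelta(days=3)),
               Ticket(None, None)]
    purge_processed_evidence(tickets, delete_blob=lambda key: print("deleting", key))

A periodic job like this is cheap; the real cost is the re-upload friction mentioned in the reply below.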
This is not a tradeoff-less scenario. Most users will be pretty irritated if, for example, you ask them to re-upload the front and back of the id in question at a later date because you deleted it last time for their protection.
I personally think doing ID verification of physical documents over the internet is just a non-starter. I've unfortunately had to support such systems for years at a time, and I'm thankful I don't do it anymore.
You're asking for accountability? Nobody has time for that, stop being silly.
> The principal engineer who signed it off should go to jail.
Indeed.
> it's stated in the article. In most cases they weren't, the data breach only affected people who disputed the result of their age verification.
Saying this only affected disputes doesn't answer the question. It also makes it clear they knew deleting IDs was important, but did they not have proper deletion in their dispute system? If this was only new active disputes, I would expect Discord to say so, but it sounds like the data in the leak goes back a lot further.
> Of course in principle Discord or any third party should never need any photographic identity themselves to begin with if countries would bother to implement a proper trusted identity system where the data stays with an authority and they simply sign off on requests.
Indeed. But in the UK the only really loud voices against the porn age laws are also the same voices against the latest digital ID proposals.
It's logical to say "we don't need either of these two things".
But the status quo of ID verification of all kinds (for things like finance agreements, some online purchases, KYC, checking into some hotel chains if you're not the card holder who paid, etc.) is horrifying and involves uploading scans of paper documents. Every time someone says "I don't need a digital ID thanks" I ask them how many times they've let someone take a flatbed or photocopier scan of their passport or driving licence in real life (it's usually not zero), and then I ask them to explain how they would do the same thing online, and whether they ever asked how long those scans are retained.
I mostly agree, but your list of situations is places you want your actual identity to be verified. For age checks, a core feature should be not identifying yourself.
Yes, but a core feature of contemporary digital ID is age-only digital attestation -- that is, yes this unnamed person is old enough.
The absence of such means that there are few ways for people to verify their ages without handing over scans of their IDs to far too many organisations.
In the UK we do have one means to do this that is not widely used yet: since all mobile phone providers attempt to block adult content by default until the owner proves they are an adult (a pretty long-standing pre-existing child safety/parental control initiative by PAYG providers that has evolved to be standard across all contract types), the question of "can you prove you are 18" can now be delegated to the MNOs. But not all the age verification agencies are doing it.
Encrypted? Encrypted how? How would the employees tasked with age verification access them if they were encrypted?
By decrypting them with a hardware token or passphrase or memorized password or timeboxed token of another kind.
But honestly just delete them ASAP, that's the issue
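For completeness, a minimal sketch of the at-rest half of that, with Fernet symmetric encryption standing in for whatever a real KMS/HSM or hardware-token setup would actually do (the key handling below is deliberately simplified and illustrative only):

    # Minimal sketch: image encrypted at rest, key held elsewhere; key handling is illustrative only.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in reality this lives in a KMS/HSM, not next to the data
    box = Fernet(key)

    id_scan = b"...raw bytes of the uploaded ID image..."
    stored_blob = box.encrypt(id_scan)   # what actually sits on disk / in object storage

    # Only a reviewer who can obtain the key (hardware token, passphrase, short-lived grant) gets this far.
    recovered = Fernet(key).decrypt(stored_blob)
    assert recovered == id_scan

Which is also why the deletion point matters more: if the reviewer tooling can decrypt on demand, anyone who compromises a reviewer gets the plaintext anyway.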
And if all the employees have access to this hardware token or passphrase or memorized password or timeboxed token of some kind, does that actually prevent a hack, or does it just let you bullet point "encrypted"?
The main thing encryption prevents is someone that steals a physical device getting access to the data inside. It doesn't do much about unauthorized access to live servers.
Check out Defense in Depth as a security concept
It's not defense in depth, it's defense against a different threat entirely.
You want to have encryption, but I doubt their encryption or lack thereof has anything to do with this attack. Do we even have evidence the data wasn't encrypted?
If someone gets access to a ticketing system they shouldn't have, talking about encryption is about as useful as talking about seatbelts. Important for general safety but irrelevant to the problem at hand.
I mean, this is the problem for all companies with sensitive data (ensuring that "ex" employees no longer have access to <stuff>).
Generally it's done via accessing some 3rd party secret storage system where employees need to verify themselves to get access (eg. Vault, or AWS secrets or what have you)
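As a hedged sketch of the AWS Secrets Manager flavour of that (the secret name and region below are placeholders; Vault's hvac client follows the same pattern), the fetch is gated by the caller's own revocable credentials rather than a shared password:

    # Hypothetical example: fetch a secret at runtime using the caller's own (revocable) credentials.
    # Secret name and region are placeholders, not anything real.
    import boto3

    client = boto3.client("secretsmanager", region_name="us-east-1")
    resp = client.get_secret_value(SecretId="prod/ticketing/db-password")
    db_password = resp["SecretString"]   # never baked into the app or written to disk

Revoke the person's credentials and their access to the secret goes with them.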
Do you think this breach had anything to do with ex-employees retaining access? That also sounds like solving the wrong problem.
I mean this is posted on this page too.
> nomilk:
> The hacker claims an outsourced worker was compromised through a $500 bribe
Also interesting:
> The hacker claims government IDs were just sitting there for months or even years... I have spoken to people familiar with Discord's Age Verification system, and they said after some period of time Discord will delete (the copies of IDs), but they should be deleting them the second they're done
Source (pinned comment, and 7m20s respectively): https://www.youtube.com/watch?v=NnuyT8FgSpA
> The hacker claims an outsourced worker was compromised through a $500 bribe
Also interesting:
> The hacker claims government IDs were just sitting there for months or even years... I have spoken to people familiar with Discord's Age Verification system, and they said after some period of time Discord will delete (the copies of IDs), but they should be deleting them the second they're done
Source (pinned comment, and 7m20s respectively): https://www.youtube.com/watch?v=NnuyT8FgSpA
Didn't they only start doing age verification this summer? Why do they have years' worth of IDs?
I think you must be 13+ to use Discord, so the IDs were required for Discord’s own age verification.
Disputes over hacked/stolen accounts I guess?
Related : https://www.youtube.com/watch?v=NnuyT8FgSpA
The hacker contacted a well-known YouTuber who talks about Discord; they provided the contents of the YouTuber's support tickets to prove they really were the hacker.
Like this would actually stop any politician from pushing actively malicious legislation just because their kid doesn't love them and it's all the damn phone's fault.
It's only the beginning, right?
I already bought a VPS in Turkey and installed a VPN on it. It costs 10€ a year, but that's a small price to pay to not have my ID stolen.
I'm grateful for the timing.
I don't understand. Weren't we told that these age checks are "privacy-preserving"? So why was there anything for hackers to steal? Or do they mean "privacy-preserving" only against other random users of a service, but not against the service itself, the corporation running it, its subsidiaries and parent conglomerate, their "trusted partners", the process of legal discovery if that corporation ever gets sued, legal subpoena by the police and intelligence agencies of every jurisdiction that conglomerate conducts business in, local councils [1], every government agency you can think of including ambulance service providers [2], and of course data breaches?
"Privacy."
[1] https://www.ibtimes.co.uk/british-councils-used-ripa-conduct...
[2] https://en.wikipedia.org/wiki/Investigatory_Powers_Act_2016#...
Anyone with insight into this kind of thing know if it’s reasonable to doubt Discord’s claims about what the hackers have? I can see motives for both parties to stretch the truth in opposite directions. But maybe there’s some legal risk for Discord to lie about what was compromised, in the event they get found out?
I don’t understand why we need age verification in Discord. Why should people who play games have to prove they’re old enough to talk to others? It’s not like anyone ever forced anybody else to join your Discord community, it’s all opt in!
If parents don’t want their kids playing certain games, or if a community is more adult in nature, then don’t buy those games for them. If they don’t want their kids exposed to bad influences, they can move the computer into a shared space or—better yet—just engage with their kids on a human level. That’s called parenting.
Politicians shouldn’t be meddling in this kind of personal interaction. It didn’t work when Nancy Reagan or Tipper Gore tried to police music, and it’s not working now. Modern authoritarians are just running the same tired playbook.
Age verification doesn’t make kids safer. It adds bureaucracy, harvests private data, and pretends to solve a problem that only families can actually fix. The result is more surveillance, less trust, and the illusion of protection.
I agree with you, but:
> I don’t understand why we need age verification in Discord. Why should people who play games have to prove they’re old enough to talk to others? It’s not like anyone ever forced anybody else to join your Discord community, it’s all opt in!
Discord doesn't require age verification for voice chat; it requires it for access to "sensitive media", or when you try to access a channel that has opted in as age-restricted [0].
[0] https://support.discord.com/hc/en-us/articles/30326565624343...
A lot of servers have the equivalent of a #nsfw channel where you post dank stuff. I don't agree with the age verification approach, but I see why it concerns people. Discord naturally attracts a very diverse crowd, of which many are quite young. Walking into a random channel in your random all-ages jrpg server and finding horse porn might concern a parent. (This is a concrete example that I have experienced, not a theoretical one.)
The random porn in a JRPG chat is concerning but how does age verification prevent that?
Channels have to opt in and participants have to follow the rules, right?
Isn’t the real issue that you don’t know and trust all the participants personally?
And almost all of those servers have those channels marked as such. But when I set it as an NSFW channel, I didn't agree to demand my users' privacy be invaded. Now, I just remove the NSFW flag from those channels. ¯\_(ツ)_/¯
Yeah. I did the same.
> Politicians shouldn’t be meddling in this kind of personal interaction.
Broadly I agree. I think there is room for good regulation here, though. Specifically, a legal obligation to hook into parental control systems to enable effective parenting in our increasingly complex digital world. While it would be nice if everyone were individually responsible enough to put in the effort to figure out the specifics of what their kids might be exposed to and the control mechanisms available to them, realistically that's probably expecting too much. There's no perfect solution, but intervention focused on obligating (especially large) organizations to empower users and make safety easy to understand and act on is infinitely preferable to obligating companies to restrict and police their users.
It's similar to needing ID for purchasing alcohol. You could use the same excuse that parents shouldn't buy alcohol for their kids, but there is the obvious workaround of kids buying it themselves.
Yes it’s similar, which is the point. Age restrictions have been normalized regardless of effectiveness.
> Age restrictions have been normalized regardless of effectiveness.
For the record.
A law doesn't stop anything.
All a law does is say "If some behaviour meets definition X AND the state becomes aware of it, then consequence Y will be applied by the state"
The hope is that people will see that and make a choice that ensures that they aren't liable for the consequence.
It's also, like everything, as effective as the enforcement. If it's not enforced well, nobody will abide by it.
> All a law does is say "If some behaviour meets definition X AND the state becomes aware of it, then consequence Y will be applied by the state"
Might be applied, and the terms are negotiable.