Hi, everyone. I'm at the design stage of a project for a new web app (desktop + mobile), and I'm worried about Online Safety Act compliance. So - two questions:
1) is there an easy way to figure out whether age verification will be needed for my service?
2) are there any age verification services that aren't dumpster fires (horrifically intrusive, security issues, privacy issues, etc.)? Non-US solutions strongly preferred.
#onlinesafetyact #osa #privacy #security
-
@gsquirrel To answer 1, we need more information on what you’re building. As a rough rule of thumb for the UK Online Safety Act, if you answer yes to any of these, you should assume you’ll need real age assurance:
1. Your service is reachable from the UK and can expose users to content harmful to children (pornography, self-harm/suicide, eating disorders, violent/abusive or bullying content, dangerous stunts, etc.).
2. You’re in an obviously high-risk vertical (pornography, explicit sexual content, gambling, etc.) where Ofcom expects "highly effective" age checks.
3. You have user-generated content or messaging and it’s realistic that kids will use it, even if you "target adults." User-to-user and search-like services have extra duties here.
For something practical and non-US, https://www.yoti.com/ is worth a look: UK-based, supports a "double-blind" mode. Just be careful to stick to their double-blind options, because some features do require more data sharing.
-
@gsquirrel Also worth saying: if you’re thinking, "Doesn’t this technically apply to Meta, TikTok, etc. too?" yes, it does. They’re in scope just like everyone else; they’re simply not being pushed into hard age verification (due to politics and money), while smaller or new services are being told to implement strong age checks or block UK users.
-
@kstrlworks Thank you so much! It's point 3 that I'm worried about - it's travel-related, so not explicitly adult in nature, but I want to allow sharing of user-generated content, at least privately (sharing content with specific users), but preferably also publicly.
-
@gsquirrel In that case, monitor for harmful material and remove it or make it private when it comes up: nudity, gambling, suicide-related content, etc. Have moderation in place, and include terms and conditions that explicitly state you're not targeting kids.
You don't need forced ID verification as long as you're not targeting harmful material: it's already disallowed or kept private, and you're making a best effort to filter it off your platform.
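For what it's worth, the "default-private until it passes moderation" flow described above can be sketched roughly like this. All names here (`Post`, `is_probably_harmful`, `publish`) are hypothetical, and the keyword check is a stand-in: a real service would use a proper moderation model or vendor API plus human review, not string matching.

```python
# Hypothetical sketch: user-generated content stays private unless it
# passes a harmful-content filter; flagged posts go to human review.
from dataclasses import dataclass

# Stand-in for a real classifier or third-party moderation API.
HARMFUL_TOPICS = {"nudity", "gambling", "suicide"}

@dataclass
class Post:
    author: str
    text: str
    visibility: str = "private"  # default-private until checked
    flagged: bool = False

def is_probably_harmful(text: str) -> bool:
    """Toy filter; illustrative only."""
    lowered = text.lower()
    return any(topic in lowered for topic in HARMFUL_TOPICS)

def publish(post: Post, requested_visibility: str) -> Post:
    """Grant the requested visibility only if the content passes the
    filter; anything flagged stays private pending human review."""
    if is_probably_harmful(post.text):
        post.flagged = True
        post.visibility = "private"
    else:
        post.visibility = requested_visibility
    return post
```

The point is the shape, not the filter: content is private by default, publication is gated on a check, and anything flagged is quarantined rather than deleted outright, which gives moderators a review queue and gives you an audit trail for "best effort" compliance.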
-
@kstrlworks OK, fantastic - thank you!