I’m obviously not a lawyer, but I’m trying to wrap my head around California AB 1043 (Ch. 675). This isn’t a request for legal advice, and I’m not Californian; I’m just trying to make sure I understand the law and its implications.
-
Under Section 1798.501(b)(4)(A), wouldn’t this make collection of almost any system information illegal? It seems like it would make something as simple as reading the user’s chosen theme illegal. I’ve got to be missing something, right?
-
Since Section 1798.501(b)(2)(A) seems to require that developers who receive this age flag assume it is true, this would apply at least to the CCPA and the California Civil Code, right? If so, wouldn’t that mean that even for an adult who uses this flag, sharing their data is illegal, and they must be given access to extra rights, like the right to have their data deleted?
-
Would Section 1798.501(b)(2)(A) also apply to COPPA? I know this is state versus federal law, but are they allowed to regard the age value as truthful under state law, but not under federal law?
Edit: Answered by DomeGuy@lemmy.world


Not a lawyer, answers based on https://legiscan.com/CA/text/AB1043/2025
No, because the terms are defined in 1798.500. They can ask your system directly whatever they want; they just can’t ask Microsoft, Apple, or Google for correlating specifics.
Yes, but only insofar as laws that protect minors impose additional constraints on those who have “actual knowledge” that a user is actually a child.
It doesn’t mean they need to trust the OS flag if they have superior knowledge as to someone’s actual age. If I asked a child to contact Imgur to delete my account, they’d block out my porn stash but otherwise treat the request like any other “delete an adult’s account” request.
State law can expand upon federal law but not contradict it. And it smells like AB 1043 is more “add a more explicit signal of user age” than anything affecting data retention relating to children.
What part do you think is contradictory?
So, if I understand right, they basically assume it’s correct unless given significant evidence otherwise? So if this flag is enabled and I visit a website and don’t directly provide personal information, then they have to assume I am a child under the CCPA and thus can’t share my data. Right?
I was wondering more whether they could just argue that it isn’t a reliable metric and thus ignore it for COPPA if it ever came up in federal court, especially if adults end up using the flag for CCPA or Civil Code protections. As opposed to under California law, where it is assumed to be true unless shown otherwise.
That’s how it reads to me this morning. Assuming by “given” you meant “they have at all”.
Based on the CA AG’s page at https://www.oag.ca.gov/privacy/ccpa , I don’t see how “the browser reports the user as a child” imposes a substantial additional burden on website developers. Presumably, the most they’d have to do to comply is use the flag to change “do you agree for yourself” to “PARENT OR GUARDIAN: Do you agree for the user of this account…”
I’m missing the part where an adult setting their age category incorrectly for themselves would do more than get a stronger porn block and a bunch of “go get your parent” pop-ups instead of “click here if you’re over 18.”
Presumably, if Microsoft and Google and Apple don’t get the Digital Age Assurance Act blocked in court, we could see broad adoption of it as a way to skip paying for third-party age validation on sites like Reddit, BlueSky, and Lemmy. All of the porn sites on the internet would just ask for the flag in lieu of their current “do we have a cookie where this user clicked that they’re at least 18” code.
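If that adoption happened, the swap-in logic might look something like this sketch. To be clear, everything here is hypothetical: AB 1043 doesn’t define a browser API name, so the `AgeSignal` type, its bracket labels, and the function and parameter names are all invented for illustration. The point is just the precedence: consult the OS-reported signal first, and fall back to today’s self-attestation cookie only when no signal is available.

```typescript
// Hypothetical age-check logic. AB 1043 does not specify a browser-facing
// API; "ageFlag" stands in for whatever bracket signal the OS/browser would
// expose, and the cookie fallback mirrors the current "clicked 18+" pattern.

// Invented bracket labels, loosely modeled on the bill's age ranges.
type AgeSignal = "under13" | "13to15" | "16to17" | "adult" | null;

function treatAsMinor(ageFlag: AgeSignal, selfAttestedAdultCookie: boolean): boolean {
  // If the OS reports an age bracket, prefer it over self-attestation.
  if (ageFlag !== null) {
    return ageFlag !== "adult";
  }
  // No signal available: fall back to the cookie-based self-attestation check.
  return !selfAttestedAdultCookie;
}
```

Note that the OS signal wins in both directions here: a reported child bracket overrides an old “I’m 18+” cookie, and a reported adult bracket means the cookie prompt never has to be shown at all.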
Thank you for the help understanding this.