
I’m a millennial parent. I don’t need convincing that the internet can be harmful to kids. I grew up with largely unfettered access, before we understood the risks. We don’t have that naïveté anymore.
Now we know what’s out there: self-harm content amplified by algorithms, harassment that follows a child home through a screen, and design features engineered to keep young people scrolling long after they should be asleep.
Parents are overwhelmed. Lawmakers feel pressure to act. That instinct is understandable, but urgency does not excuse imprecision. Speed without clarity is how bad law gets made. And if we get this wrong, the consequences won’t be theoretical.
We could end up creating more harm than we prevent.
The Kids Online Safety Act (KOSA) is built around what sounds like a simple premise: platforms should have a “duty of care” to prevent harm to minors. On its surface, that sounds entirely reasonable. Who wouldn’t support protecting kids?
The problem isn’t what it promises. It’s what it actually does.
KOSA creates a broad, ill-defined obligation for platforms to mitigate harms to minors, and it empowers state attorneys general to enforce that obligation. That mix of vague standards and political enforcement is where civil liberties alarms start ringing.
Organizations like the Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) have warned that this structure won’t produce careful, nuanced moderation. It will incentivize overcorrection, encouraging platforms to suppress lawful speech and expand surveillance in order to limit liability.
When liability standards are vague, Big Tech won’t take chances. Platforms will err on the side of deletion, because legal departments drive those decisions, not public debate. The result is predictable: lawful speech that sits outside the mainstream—political dissent, identity-based discussions, controversial viewpoints—becomes the first casualty.
That’s where the real danger lies.
In several states, public officials have already characterized LGBTQ identity as “harmful to minors.” Some have described discussions of gender identity as exploitation and sexual orientation as “grooming.”
Now imagine those same officials empowered to decide whether platforms are meeting a federal “duty of care” to protect minors. What do you think platforms will do?
They won’t litigate fifty different interpretations of “harm.” They’ll standardize to the safest legal position, which may end up being the most restrictive one. In that environment, a federal “duty of care” mandate ceases to be a child safety measure and becomes a lever for the suppression of entire categories of lawful speech.
If you’re a teenager trying to figure out who you are or looking for community because you don’t have support at home, online spaces can be lifesaving. If platforms start broadly suppressing LGBTQ-related content to avoid legal risk in hostile states, those young people won’t just lose content. They’ll lose connection.
They’ll risk losing the only safe space they have.
When harm is defined politically, marginalized communities are almost always the first to pay the price. That’s not a speculative position; we’ve seen it throughout modern history.
Supporters of KOSA often fall back on a familiar line: we have to do something.
I agree. We absolutely do.
But “do something” is not the same as “do the right thing.”
The First Amendment doesn’t disappear because the internet makes us uncomfortable. When government pressure leads platforms to suppress lawful speech to avoid liability, courts rightly take that seriously.
We can rein in harmful business practices without turning lawful speech into a legal risk. We can restrict targeted advertising to minors, limit data collection on children, increase transparency around recommendation systems, and crack down on exploitative design features. We can invest in digital literacy and mental health support. We can enforce existing laws against abuse and exploitation more aggressively.
Those reforms target the incentives that drive harm. A vague, content-based “duty of care” does not.
The hardest civil liberties tests are the ones where the objective feels noble. Protecting children is about as noble as it gets. That’s precisely why we have to be disciplined in our approach.
Once Congress establishes a broad, content-based “duty of care” enforced by political actors, the power it creates won’t belong to this moment alone. Laws rooted in vague definitions of harm are inevitably molded by whoever holds office next. Technology evolves quickly, and so does political extremism. That’s too much uncertainty to build into a speech regime.
If we truly care about kids, our solutions must be durable, constitutional, and resistant to abuse.
We can build a safer internet, but we shouldn’t do it by creating a framework that risks silencing vulnerable communities in the name of protecting them. There is a line between responsible regulation and speech control. KOSA, as currently structured, gets too close to that line for comfort.
Because when civil liberties are at stake, “close” isn’t good enough.
This article was originally published on Ethan Wechtaluk’s Substack. Republished on TANTV News with permission.

