I grew up on Zoom and social media. Protect Illinois kids, teens from the dangers of AI
My generation didn't get a grace period. We were the kids doing active-shooter drills in elementary school. We spent a chunk of our high school years staring at Zoom screens during the pandemic. We watched wildfires turn the sky orange while adults debated whether climate change was real. We grew up under a social media ecosystem that was running experiments on our mental health. Now we're entering a job market that expects us to be grateful for the chance to compete over a shrinking slice of what’s left.
The pattern goes like this: A powerful industry emerges, moves fast, promises transformation and asks the public to trust that self-regulation will be enough. For a while, maybe it is. Then the harms start showing up — in our lungs, our app feeds and our schools. By the time anyone in Washington, D.C., actually does something about it, the kids who grew up breathing that air or scrolling those feeds are already carrying the consequences with them. We lived this with fossil fuels and social media, and now we are living it with AI.
Fortunately, there is a common-sense bill in Illinois that can proactively protect us before the harms snowball. The Artificial Intelligence Public Safety and Child Protection Transparency Act isn't trying to kill AI or go after the companies building it. The point is simple: Get basic transparency and accountability in place now, while we still can.
California understood this need when it enacted SB 53 in September 2025, becoming the first state to require standardized safety disclosures from the developers of the most advanced AI systems. New York followed with the RAISE Act, creating its own reporting and governance framework for frontier models. Illinois now has the opportunity to join that growing coalition of states that decided the public deserves to know what the most powerful AI companies know and are doing about the risks of their own technology.
The core idea behind the Illinois bill is one that we already accept in virtually every other industry where the stakes are high. Pharmaceutical companies don't get to decide for themselves whether their drugs are safe. They’re mandated to publish clinical trial data, submit to independent audits and even report adverse events to an entire federal agency dedicated to exactly that. Airline companies don't get to investigate crashes in private — they, too, answer to an agency responsible for that. It’s a clear precedent of oversight: Industries with the capacity to cause large-scale harm need external accountability structures.
If we demand this level of accountability from industries that have had decades to prove themselves, it's hard to see why we wouldn't expect at least the same from a technology that has been in the public's hands for less than three years, yet carries the greatest potential to reshape how we live.
None of what the proposed legislation calls for is particularly radical. It simply says that the largest developers operating at the frontier of AI technology should document how their companies are thinking about catastrophic risks, publish a safety plan, report serious incidents to the attorney general and let independent auditors check their work. A company running a chatbot that a million minors can access needs a documented strategy for protecting them.
And if none of this is enough to make the case for acting now, then by default, we would be opting to repeat what has not worked for my generation — waiting while harms proliferate. Wait for another incident that makes the front page of the New York Times, wait for another tech CEO to testify at congressional hearings that they'll do better, wait for the legislation that arrives years too late and applies to a landscape that has already moved on. We have lived through enough of those cycles to know that waiting is a choice, and a bad one.
More than anyone else, my generation will experience the ramifications of whatever AI becomes. We are the ones who will raise children in a world where these systems are likely integrated into every institution.
It's reasonable to ask our representatives to pass a bill that requires the most powerful AI companies to be honest about what they know, protect the kids using their products and let someone check their safety work. For a generation that grew up as the test subjects of the "move fast and figure it out later" approach, it might be one of the most reasonable things our legislators can do.
Kashyap Rajesh, 19, grew up in Buffalo Grove and attends Cornell University. He is a member of Design It For Us, a youth-led advocacy organization working to counter Big Tech's influence in policy.
