SACRAMENTO — Sneha Revanur has been called the “Greta Thunberg of AI,” which, depending on your politics, is an insult or, as the youngs would say, means she’s eating.
That’s good.
Either way, Revanur, a 20-year-old Stanford University senior who grew up in Silicon Valley, isn’t worried about personal attacks, though she’s been getting more of them lately — especially from some big tech bros who wish she’d shut up about artificial intelligence and its potential to accidentally (or purposefully) destroy us all.
Instead of fretting about invoking the ire of some of the most powerful men on the planet, she’s staying focused on the breakneck speed with which AI is advancing; the utter ignorance, even resistance, of politicians when it comes to putting in place the most basic of safety measures to control it; and what all that will mean for kids who will grow up under its influence.
“Whatever long-term future AI creates, whether that’s positive or negative, it’s [my generation] that’s going to experience that,” she told me. “We’re going to inherit the impacts of the technology we’re building today.”
This week, California will make a big decision about that future, as legislators vote on Senate Bill 53.
Because I am a tech idiot who struggles to even change the brightness on my phone’s display, I will use the simplest of metaphors, which I am sure will make engineers wince.
Imagine lighting your gas stove, then leaving on vacation. Maybe it will all be fine. Maybe it will start a fire and burn your house down. Maybe it will blow up and take out the neighborhood.
Do you cross your fingers and hope for the best? Do your neighbors have a right to ask you to pretty please turn it off before you go? Should you at least put up a smoke alarm, so there’s a bit of warning if things go wrong?
The smoke alarm in this scenario is SB 53.
The bill is a basic transparency measure and applies only to the big-gun developers of “frontier” AI models — these are the underlying, generic AI creatures that may later be honed for a specific purpose, like controlling our nuclear weapons, curing cancer or writing term papers for cheating students.
But right now, companies are just seeing how smart and powerful they can make them, leaving any concerns about what they will actually do for the future — and for people like Revanur, whose lives will be shaped by them.
If passed, the law would require these developers to have safety and security protocols and make them public.
It would require that they also disclose if they are aware of any ways that their product has indicated it may in fact destroy us all, or cause “catastrophic” problems, defined as ones with the potential to kill or seriously injure more than 50 people or cause more than $1 billion in property damage.
It requires the companies to report those risks to the state Office of Emergency Services, and also to report if their models try to sneakily get around commands not to do something — by lying, for example — a first-of-its-kind requirement in law.
And it creates a whistleblower protection so that if, say, an engineer working on one of these models suddenly finds herself receiving threats from the AI (yes, this has happened), she can, if the company won’t, give us a heads-up about the danger before it’s unleashed.
There are a couple other rules in there, but that’s the gist of it. Basically, it gives us a tiny glimpse inside the companies that quite literally hold the future of humanity in their hands but are largely driven by the desire to make oodles of money.
Big Tech has lobbied full force against the bill (and has been successful in watering it down some). Enter Revanur and the AI safety organization she started when she was 15: Encode.
The California Capitol is nothing if not a mean high school, so maybe Revanur was more prepared than the suits expected. But her group of “backpack kids,” as they have been derogatorily called, has lobbied in favor of government oversight of AI with such force and effect that SB 53 actually has a chance of passing. This week, it is likely to receive final votes in both the Assembly and Senate, before potentially heading to the governor’s desk.
I’m not huge on quoting lobbyists, but Lea-Ann Tratten summed it up pretty well.
Revanur and her group have gone from being dismissed with a “who are you, you’re nothing” attitude from lawmakers to having “an equal seat at the table” with the clouty tech bros and their billions, Tratten said. And they’ve done it through sheer persistence (though they are not the only advocacy group working on the bill).
Tratten was hired by Encode last year when Revanur was backing a much stronger piece of legislation by the same author, Sen. Scott Wiener. That bill, SB 1047, would have regulated the AI industry, not just watched over it.
Gov. Gavin Newsom vetoed that bill, basically saying it went too far, but still acknowledging that a “California-only approach may well be warranted especially absent federal action by Congress.” He also set up a commission to recommend how to do that, which released its report recently — much of which is incorporated into the current legislation.
But since that veto, Congress has indicated approximately zero interest in taking on AI. And last week, Trump hosted a formal dinner for the titans of AI where they sucked up to the businessman-in-chief, leaving little hope of any federal curbs on their aspirations.
Shortly after that meal, the White House sent out a press release entitled, “President Trump, Tech Leaders Unite to Power American AI Dominance.”
In it, OpenAI CEO Sam Altman gushed, “Thank you for being such a pro-business, pro-innovation President. It’s a very refreshing change. We’re very excited to see what you’re doing to make our companies and our entire country so successful.”
That left me wondering who exactly will be dominated. OpenAI recently sent a subpoena to Encode, digging around to see if it’s being funded by competitor Elon Musk (who is in a notoriously nasty legal battle with Altman), and demanding all of the group’s emails and communications about the bill. Revanur said Encode has no affiliation with Musk other than having filed an amicus brief in his lawsuit, and that the claims are “ridiculous and baseless.”
“Like, we know for a fact that we have no affiliation with Elon,” she said.
Still, “people expect us to sort of hide in the corner and stop what we’re doing,” because of the pressure, she said.
But that’s not going to happen.
“We’re going to keep doing what we’re doing,” she said. “Just being a balanced, objective, thoughtful third party that’s able to be this watchdog, almost, as the most powerful technology of all time is developed. I think that’s a really important role for us.”
Right now, AI is in its toddler stages, and it’s already outsmarting us in dangerous ways. The New York Times documented how it may have pushed a teen to suicide.
In July, Musk’s AI tool Grok randomly started calling itself “MechaHitler” and began making antisemitic comments, according to the Wall Street Journal. Another AI model apparently resorted to blackmailing its maker when it was threatened with being turned off.
An AI safety researcher familiar with that blackmail incident, Aengus Lynch, warned it wasn’t a one-off, according to the BBC.
“We see blackmail across all frontier models — regardless of what goals they’re given,” he said.
So here we are in the infancy of a technology that will profoundly change society, and we already know the genie is out of the bottle, has stolen the car keys and is on a bender.
Before we get to the point of having to choose who will go back in time to save Sarah Connor from Skynet and the Terminator, maybe we just don’t go there. Maybe we start with SB 53, and listen to smart, young people like Revanur who have both the knowledge to understand the technology and a real stake in getting it right.
Maybe we put up the smoke alarm, whether the billionaire tech bros like it or not.
You’re reading the L.A. Times Politics newsletter
Anita Chabria and David Lauter bring insights into legislation, politics and policy from California and beyond. In your inbox three times per week.
What else you should be reading
The must-read: ICE Agents Are Wearing Masks. Is That Un-American?
The Sign of the Times: In face of extreme heat, L.A. may require landlords to keep their rentals cool
The L.A. Times Special: A ‘Roomba for the forest’ could be SoCal’s next wildfire weapon
Stay Golden,
Anita Chabria
—