Gossips Today
Business & Entrepreneurship

California spiked a landmark AI regulation. But that doesn’t mean the bill is going away

By gossipstoday · October 3, 2024 · 6 min read

With the veto of California’s AI bill, the idea of regulating frontier models may be in jeopardy.

The bill, SB 1047, would have required developers of the largest AI models (OpenAI, Anthropic, and the like) to set up and report on a safety framework, and submit to outside safety audits. The bill also included a whistleblower protection clause, and required developers to build a “kill switch” into models in case they began acting on their own in harmful ways. 

Most of the tech industry came out against the bill, saying its passage would shift the focus from innovation to compliance in AI research. It’s worth noting, however, that much of the public supported the bill’s protections, as did a number of respected AI researchers.

Nonetheless, Governor Gavin Newsom vetoed the bill this week, saying it fails to assess the risk of AI models based on where and how they’re deployed. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047—at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good,” Newsom wrote. 

So, what comes next? SB 1047’s main author and champion, State Senator Scott Wiener, hasn’t ruled out the possibility of introducing the bill again in some form next session, a source close to the matter says. AI researcher Dan Hendrycks, who helped shape the bill, says his organization, the Center for AI Safety (CAIS), which sponsored SB 1047, intends to fight on.

“We’re taking some time to plan, to determine what’s next,” Hendrycks wrote in an email to Fast Company. “There has been a broad bipartisan coalition that came together to support this bill, so we’re incredibly optimistic about future opportunities to coauthor, advance, and advocate for sensible AI safety regulation.”

Time for working groups

One of Newsom’s main complaints about the bill was that it didn’t cover enough types of AI models and applications. As part of his veto, the governor called for the formation of a working group to develop a set of sensible guardrails for AI model developers, and potentially new legislation. The working group will be led by Stanford professor Fei-Fei Li, according to a source with knowledge of the matter. Li, who came out against SB 1047, is an AI pioneer best known for leading Stanford’s Institute for Human-Centered AI, but she also has a new AI company called World Labs, which is reportedly valued at $1 billion. One of her investors is Andreessen Horowitz, perhaps the loudest critic of SB 1047.

For its part, Andreessen Horowitz plans to hold “blueprint sessions” to help guide legislators in AI regulation. Wiener’s office says the senator has been invited to participate, but the two sides aren’t likely to find much common ground. Indeed, SB 1047’s proponents and critics have fundamentally different ideas on how to regulate AI safety. 

Wiener’s bill sought to put regulatory oversight on the frontier models developed by labs like OpenAI and Anthropic. Wiener and his allies reason that these huge models could potentially enable an AI app to cause catastrophic harms (shutting down the power grid, for example).

Andreessen Horowitz and others in the industry believe that regulation should not focus on the model’s capacity for causing catastrophic harms, but rather on the application that actually does a specific thing using the model. For example, if a frontier model-powered medical app causes deaths in a hospital, the app maker (sometimes called the “deployer”) would be held liable.

But Wiener’s staff points out that such an application-focused law would only be additive to tort liability that already exists in the law. There is no law in California, nor at the federal level, that mandates specific safety guardrails and transparency standards for companies developing frontier models. 

SB 1047 and Congress

California Representative Anna Eshoo believes regulation should focus on requiring AI labs to be transparent about their models and their risks, not on prescribing specific safeguarding requirements and penalties for not using them, as SB 1047 does. Eshoo’s 2023 Foundation Model Transparency Act (with Virginia Democrat Don Beyer), which did not become law, required foundation model developers to disclose facts about training and training data to third-party app developers and the public. 

A legislative aide in her office says SB 1047 wasn’t a major topic of conversation in the halls of Congress. And the lawmakers who were aware of it were mainly interested in how the legislation might integrate with a similar bill at the federal level. 

Eshoo and three other California representatives sent a letter to Newsom urging him to veto SB 1047. The Congresswoman was concerned that the bill might stifle AI research at places like Stanford, which could affect the rest of the country. 

Congress has grown more thoughtful about regulating AI, the aide says. When ChatGPT was released almost two years ago, many lawmakers rushed to get up to speed on generative AI and potential regulatory approaches. But that sense of urgency has faded with the realization that generative AI isn’t going to transform the world overnight. In fact, applying generative AI in useful ways has proved a slow and complex process for many organizations.

If AI is poised to change the world, it’s just getting started. Not only is the research into frontier models pushing the state of the art forward quickly, but research into steering and safeguarding models is evolving rapidly too, explains Navrina Singh, CEO of the AI governance platform Credo AI. Asking lawmakers to prescriptively regulate something so fluid is asking a lot.

“The problem is, as the director of [the National Institute of Standards and Technology] said recently, we don’t yet have a science of AI safety,” says Neil Chilson, former FTC chief technologist and current Head of AI Policy at the Abundance Institute, in an email to Fast Company. Chilson says we don’t even understand the risks that safety guardrails should target. “[W]e lack good evidence on the risk profile of AI models or how to mitigate that risk, if any. Until we have more evidence, we simply don’t know if model-level regulation will help or hurt on net.”

Others believe that SB 1047’s focus on imposing safety guidelines was misguided. If lawmakers want to stop frontier models from enabling catastrophic harms, they should focus on transparency around the data used to train them, says Appian CEO Matt Calkins.

“AI is a function of its data,” he says. “If we don’t want a model to create a killer virus we have to make sure it’s not been trained on data explaining how to make a killer virus. You would prevent the usage of that gain-of-function data.”
