How do you know when AI is powerful enough to be dangerous? Regulators try to do the math | Canada News Media
How do you know if an artificial intelligence system is so powerful that it poses a security danger and shouldn’t be unleashed without careful oversight?

For regulators trying to put guardrails on AI, it’s mostly about the arithmetic. Specifically, an AI model trained using 10 to the 26th floating-point operations must now be reported to the U.S. government and could soon trigger even stricter requirements in California.

Say what? Well, if you’re counting the zeroes, that’s 100,000,000,000,000,000,000,000,000, or 100 septillion, calculations, using a measure known as flops (floating-point operations).
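As a sanity check on all those zeroes, the threshold can be written out and compared directly in a few lines of Python (a minimal illustration of the arithmetic, not part of any regulatory text):

```python
# The reporting threshold: 10 to the 26th floating-point operations,
# written out as an exact integer.
threshold = 10 ** 26

# 100 septillion is a 1 followed by 26 zeroes.
assert threshold == 100_000_000_000_000_000_000_000_000

# Count the zeroes after the leading 1.
print(str(threshold).count("0"))  # prints 26
```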

What it signals to some lawmakers and AI safety advocates is a level of computing power that might enable rapidly advancing AI technology to create or proliferate weapons of mass destruction, or conduct catastrophic cyberattacks.

Those who’ve crafted such regulations acknowledge they are an imperfect starting point to distinguish today’s highest-performing generative AI systems — largely made by California-based companies like Anthropic, Google, Meta Platforms and ChatGPT-maker OpenAI — from the next generation that could be even more powerful.

Critics have pounced on the thresholds as arbitrary — an attempt by governments to regulate math.

“Ten to the 26th flops,” said venture capitalist Ben Horowitz on a podcast this summer. “Well, what if that’s the size of the model you need to, like, cure cancer?”

An executive order signed by President Joe Biden last year relies on that threshold. So does California’s newly passed AI safety legislation — which Gov. Gavin Newsom has until Sept. 30 to sign into law or veto. California adds a second metric to the equation: regulated AI models must also cost at least $100 million to build.

Following in Biden’s footsteps, the European Union’s sweeping AI Act uses the same measure of floating-point operations, but sets the bar 10 times lower, at 10 to the 25th power. That covers some AI systems already in operation. China’s government has also looked at measuring computing power to determine which AI systems need safeguards.

No publicly available models meet the higher California threshold, though it’s likely that some companies have already started to build them. If so, they’re supposed to be sharing certain details and safety precautions with the U.S. government. Biden employed a Korean War-era law to compel tech companies to alert the U.S. Commerce Department if they’re building such AI models.

AI researchers are still debating how best to evaluate the capabilities of the latest generative AI technology and how it compares to human intelligence. There are tests that judge AI on solving puzzles, logical reasoning or how swiftly and accurately it predicts what text will answer a person’s chatbot query. Those measurements help assess an AI tool’s usefulness for a given task, but there’s no easy way of knowing which one is so widely capable that it poses a danger to humanity.

“This computation, this flop number, by general consensus is sort of the best thing we have along those lines,” said physicist Anthony Aguirre, executive director of the Future of Life Institute, which has advocated for the passage of California’s Senate Bill 1047 and other AI safety rules around the world.

Floating-point arithmetic might sound fancy “but it’s really just numbers that are being added or multiplied together,” making it one of the simplest ways to assess an AI model’s capability and risk, Aguirre said.

“Most of what these things are doing is just multiplying big tables of numbers together,” he said. “You can just think of typing in a couple of numbers into your calculator and adding or multiplying them. And that’s what it’s doing — ten trillion times or a hundred trillion times.”
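To see roughly how those totals arise, one widely cited rule of thumb from the scaling-law literature (an assumption for illustration, not a figure from the article or any regulation) estimates training compute as about 6 floating-point operations per model parameter per training token. A hypothetical sketch:

```python
def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D
    rule of thumb. The factor of 6 is a heuristic assumption, not a
    number taken from the article or from any regulatory text."""
    return 6.0 * n_params * n_tokens

US_REPORTING_THRESHOLD = 1e26  # Biden executive order / California bill
EU_THRESHOLD = 1e25            # EU AI Act, 10 times lower

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.1e}")                   # 8.4e+23 -- below both thresholds
print(flops >= EU_THRESHOLD)            # False
print(flops >= US_REPORTING_THRESHOLD)  # False
```

Under this heuristic, even a large present-day model lands well below the 10^26 bar, which is consistent with the article’s observation that no publicly available models yet meet California’s threshold.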

For some tech leaders, however, it’s too simple and hard-coded a metric. There’s “no clear scientific support” for using such metrics as a proxy for risk, argued computer scientist Sara Hooker, who leads AI company Cohere’s nonprofit research division, in a July paper.

“Compute thresholds as currently implemented are shortsighted and likely to fail to mitigate risk,” she wrote.

Venture capitalist Horowitz and his business partner Marc Andreessen, founders of the influential Silicon Valley investment firm Andreessen Horowitz, have attacked the Biden administration as well as California lawmakers for AI regulations they argue could snuff out an emerging AI startup industry.

For Horowitz, putting limits on “how much math you’re allowed to do” reflects a mistaken belief there will only be a handful of big companies making the most capable models and you can put “flaming hoops in front of them and they’ll jump through them and it’s fine.”

In response to the criticism, the sponsor of California’s legislation sent a letter to Andreessen Horowitz this summer defending the bill, including its regulatory thresholds.

Regulating at over 10 to the 26th flops is “a clear way to exclude from safety testing requirements many models that we know, based on current evidence, lack the ability to cause critical harm,” wrote state Sen. Scott Wiener of San Francisco. Existing publicly released models “have been tested for highly hazardous capabilities and would not be covered by the bill,” Wiener said.

Both Wiener and the Biden executive order treat the metric as a temporary one that could be adjusted later.

Yacine Jernite, who leads policy research at the AI company Hugging Face, said the flops metric emerged in “good faith” ahead of last year’s Biden order but is already starting to grow obsolete. AI developers are doing more with smaller models requiring less computing power, while the potential harms of more widely used AI products won’t trigger California’s proposed scrutiny.

“Some models are going to have a drastically larger impact on society, and those should be held to a higher standard, whereas some others are more exploratory and it might not make sense to have the same kind of process to certify them,” Jernite said.

Aguirre said it makes sense for regulators to be nimble, but he characterizes some opposition to the flops threshold as an attempt to avoid any regulation of AI systems as they grow more capable.

“This is all happening very fast,” Aguirre said. “I think there’s a legitimate criticism that these thresholds are not capturing exactly what we want them to capture. But I think it’s a poor argument to go from that to, ‘Well, we just shouldn’t do anything and just cross our fingers and hope for the best.'”



Whitehead becomes 1st CHL player to verbally commit to playing NCAA hockey

Braxton Whitehead said Friday he has verbally committed to Arizona State, making him the first member of a Canadian Hockey League team to attempt to play the sport at the Division I U.S. college level since a lawsuit was filed challenging the NCAA’s longstanding ban on players it deems to be professionals.

Whitehead posted on social media he plans to play for the Sun Devils beginning in the 2025-26 season.

An Arizona State spokesperson said the school could not comment on verbal commitments, citing NCAA rules. A message left with the CHL was not immediately returned.

A class-action lawsuit filed Aug. 13 in U.S. District Court in Buffalo, New York, could change the landscape for players from the CHL’s Western Hockey League, Ontario Hockey League and Quebec Maritimes Junior Hockey League. NCAA bylaws consider them professional leagues and bar players from there from the college ranks.

Online court records show the NCAA has not made any response to the lawsuit since it was filed.

“We’re pleased that Arizona State has made this decision, and we’re hopeful that our case will result in many other Division I programs following suit and the NCAA eliminating its ban on CHL players,” Stephen Lagos, one of the lawyers who launched the lawsuit, told The Associated Press in an email.

The lawsuit was filed on behalf of Riley Masterson, of Fort Erie, Ontario, who lost his college eligibility two years ago when, at 16, he appeared in two exhibition games for the OHL’s Windsor Spitfires. It also lists 10 Division I hockey programs, selected to show they follow the NCAA’s bylaws in barring current or former CHL players.

CHL players receive a stipend of no more than $600 per month for living expenses, which is not considered income for tax purposes. College players receive scholarships and now can earn money through endorsements and other use of their name, image and likeness (NIL).

The implications of the lawsuit could be far-reaching. If successful, the case could increase competition for college-age talent between North America’s two top producers of NHL draft-eligible players.

“I think that everyone involved in our coaches association is aware of some of the transformational changes that are occurring in collegiate athletics,” Forrest Karr, executive director of the American Hockey Coaches Association and Minnesota Duluth athletic director, said last month. “And we are trying to be proactive and trying to learn what we can about those changes.”

Karr was not immediately available for comment on Friday.

Earlier this year, Karr established two committees — one each overseeing men’s and women’s hockey — to respond to various questions on eligibility submitted to the group by the NCAA. The men’s committee was scheduled to go over its responses two weeks ago.

Former Minnesota coach and Central Collegiate Hockey Association commissioner Don Lucia said at the time that the lawsuit provides the opportunity for stakeholders to look at the situation.

“I don’t know if it would be necessarily settled through the courts or changes at the NCAA level, but I think the time is certainly fast approaching where some decisions will be made in the near future of what the eligibility will look like for a player that plays in the CHL and NCAA,” Lucia said.

Whitehead, a 20-year-old forward from Alaska who has developed into a point-a-game player, said he plans to play again this season with the Regina Pats of the Western Hockey League.

“The WHL has given me an incredible opportunity to develop as a player, and I couldn’t be more excited,” Whitehead posted on Instagram.

His addition is the latest boon for Arizona State hockey, a program that has blossomed in the desert far from traditional places like Massachusetts, Minnesota and Michigan since entering Division I in 2015. It has already produced NHL talent, including Seattle goaltender Joey Daccord and Josh Doan, the son of longtime Coyotes captain Shane Doan, who now plays for Utah after that team moved from the Phoenix area to Salt Lake City.

The Canadian Press. All rights reserved.



Calgary Flames sign forward Jakob Pelletier to one-year contract

CALGARY – The Calgary Flames signed winger Jakob Pelletier to a one-year, two-way contract on Friday.

The contract has an average annual value of US$800,000.

Pelletier, a 23-year-old from Quebec City, split last season with the Flames and American Hockey League’s Calgary Wranglers.

He produced one goal and two assists in 13 games with the Flames.

Calgary drafted the five-foot-nine, 170-pound forward in the first round, 26th overall, of the 2019 NHL draft.

Pelletier has four goals and six assists in 37 career NHL games.

This report by The Canadian Press was first published Sept. 13, 2024.


Kingston mayor’s call to close care hub after fatal assault ‘misguided’: legal clinic

A community legal clinic in Kingston, Ont., is denouncing the mayor’s calls to clear an encampment and close a supervised consumption site in the city following a series of alleged assaults that left two people dead and one seriously injured.

Kingston police said they were called to an encampment near a safe injection site on Thursday morning, where they allege a 47-year-old male suspect wielded an edged or blunt weapon and attacked three people. Police said he was arrested after officers negotiated with him for several hours.

The suspect is now facing two counts of second-degree murder and one count of attempted murder.

In a social media post, Kingston Mayor Bryan Paterson said he was “absolutely horrified” by the situation.

“We need to clear the encampment, close this safe injection site and the (Integrated Care Hub) until we can find a better way to support our most vulnerable residents,” he wrote.

The Kingston Community Legal Clinic called Paterson’s comments “premature and misguided” on Friday, arguing that such moves could lead to a rise in overdoses, fewer shelter beds and more homelessness.

In a phone interview, Paterson said the encampment was built around the Integrated Care Hub and safe injection site about three years ago. He said the encampment has created a “dangerous situation” in the area and has frequently been the site of fires, assaults and other public safety concerns.

“We have to find a way to be able to provide the services that people need, being empathetic and compassionate to those struggling with homelessness and mental health and addictions issues,” said Paterson, noting that the safe injection site and Integrated Care Hub are not operated by the city.

“But we cannot turn a blind eye to the very real public safety issues.”

When asked how encampment residents and people who use the services would be supported if the sites were closed, Paterson said the city would work with community partners to “find the best way forward” and introduce short-term and long-term changes.

Keeping the status quo “would be a terrible failure,” he argued.

John Done, executive director of the Kingston Community Legal Clinic, criticized the mayor’s comments and said many of the people residing in the encampment may be particularly vulnerable to overdoses and death. The safe injection site and Integrated Care Hub save lives, he said.

Taking away those services, he said, would be “irresponsible.”

Done said the legal clinic represented several residents of the encampment when the City of Kingston made a court application last summer to clear the encampment. The court found such an injunction would be unconstitutional, he said.

Done added there’s “no reason” to attach blame while the investigation into Thursday’s attacks is ongoing. The two people who died have been identified as 38-year-old Taylor Wilkinson and 41-year-old John Hood.

“There isn’t going to be a quick, easy solution for the fact of homelessness, drug addictions in Kingston,” Done said. “So I would ask the mayor to do what he’s trained to do, which is to simply pause until we have more information.”

The concern surrounding the safe injection site in Kingston follows a recent shift in Ontario’s approach to the overdose crisis.

Last month, the province announced that it would close 10 supervised consumption sites because they’re too close to schools and daycares, and prohibit any new ones from opening as it moves to an abstinence-based treatment model.

This report by The Canadian Press was first published Sept. 13, 2024.
