Examining AI pioneer Geoffrey Hinton’s fears about AI

When prominent computer scientist and Turing Award winner Geoffrey Hinton retired from Google, citing concerns that AI technology is spiraling beyond human control and becoming a danger to humans, it triggered a frenzy in the tech world.

Hinton, who worked part-time at Google for more than a decade, is known as the “godfather of AI.” The AI pioneer has made major contributions to the development of machine learning, deep learning, and the backpropagation technique, a process for training artificial neural networks.
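For readers unfamiliar with the term, backpropagation applies the calculus chain rule to work out how much each connection weight contributed to a network’s prediction error, so the weights can be adjusted to shrink that error. Here is a minimal, hypothetical sketch in Python/NumPy (toy data and made-up layer sizes, not Hinton’s original formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # 4 samples, 3 input features (toy data)
y = rng.normal(size=(4, 1))   # regression targets
W1 = rng.normal(size=(3, 5))  # input -> hidden weights
W2 = rng.normal(size=(5, 1))  # hidden -> output weights

for step in range(100):
    # Forward pass
    h = np.tanh(X @ W1)               # hidden activations
    pred = h @ W2                     # network output
    loss = ((pred - y) ** 2).mean()   # mean squared error

    # Backward pass: push the error gradient back through each layer
    g_pred = 2 * (pred - y) / y.size        # dLoss/dPred
    g_W2 = h.T @ g_pred                     # dLoss/dW2
    g_h = g_pred @ W2.T                     # dLoss/dHidden
    g_W1 = X.T @ (g_h * (1 - h ** 2))       # tanh'(z) = 1 - tanh(z)^2

    # Gradient descent update
    W1 -= 0.1 * g_W1
    W2 -= 0.1 * g_W2
```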

In his own words

While Hinton attributed part of his decision to retire on May 1 to his age, the 75-year-old also said he regrets some of his contributions to artificial intelligence.

During a question-and-answer session at MIT Technology Review’s EmTech Digital 2023 conference on May 3, Hinton said he has changed his mind about how AI technology works. He said he now believes that AI systems can be much more intelligent than humans and are better learners.

“Things like GPT-4 know much more than we do,” Hinton said, referring to the latest iteration of research lab OpenAI’s large language model. “They have sort of common sense knowledge about everything.”

The more technology learns about humans, the better it will get at manipulating humans, he said.

Hinton’s concerns about the risks of AI technology echo those of other AI leaders who recently called for a pause in the development of AI.

While the computer scientist does not think a pause is possible, he said the misuse of AI technology by criminals and other wrongdoers, particularly those who would use it for harmful political ends, poses a danger to society.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial for us,” Hinton said. “We need to try and do that in a world with bad actors who want to build robot soldiers that kill people.”

AI race and need for regulation

While Hinton clarified that his decision to leave Google was not prompted by any specific irresponsibility on the tech giant’s part, he joins a line of notable Google employees who have sounded the alarm about AI technology.

Last year, ex-Google engineer Blake Lemoine claimed that the vendor’s AI chatbot LaMDA is sentient, able to hold spontaneous conversations and experience human feelings. Lemoine also said Google acted with caution and slowed down development after he presented his findings.

Even if some consider Google to have been suitably responsible in its AI efforts, the pace at which major tech vendors, particularly Google archrival Microsoft, have introduced new AI systems, such as integrating ChatGPT into Azure and Office applications, has spurred Google to move faster in what has become a frantic AI race.

However, the frenetic pace at which both Google and Microsoft are moving may be too fast to assure enterprise and consumer users that these AI innovations are safe and ready for effective use.

“They’re putting things out at a rapid pace without enough testing,” said Chirag Shah, a professor in the information school at the University of Washington. “We have no regulations. We have no checkpoints. We have nothing that can stop them from doing this.”

But the U.S. federal government has taken note of problems with AI and generative AI technology.

On May 4, the Biden administration invited CEOs from AI vendors Microsoft, Alphabet, OpenAI and Anthropic to discuss the importance of responsible and trustworthy innovation.

The administration also said that developers from leading AI companies, including Nvidia, Stability AI and Hugging Face, will participate in public evaluations of the AI systems.

But the near total lack of checkpoints and regulation makes the technology risky, especially as generative AI is a self-learning system, Shah said.

Unregulated and unrestrained generative AI systems could lead to disaster, particularly when people with unscrupulous political intentions or criminal hackers misuse the technology.

“These things are so quickly getting out of our hands that it’s a matter of time before either it’s bad actors doing things or this technology itself doing things on its own that we cannot stop,” Shah said. For example, bad actors could use generative AI for fraud, to incite terrorist attacks, or to perpetuate and instill biases.

However, as with many technologies, regulation follows when there’s mass adoption, said Usama Fayyad, professor and executive director at the Institute for Experiential AI at Northeastern University.

And while ChatGPT has attracted more than 100 million users since OpenAI released it last November, most of them use it only occasionally rather than relying on it daily the way they do on other popular AI tools such as Google Maps or Google Translate, Fayyad said.

“You can’t do regulation ahead of understanding the technology,” he continued. Because regulators still don’t fully understand the technology, they are not yet able to regulate it.

“Just like with cars, and with guns and with many other things, [regulation] lagged for a long time,” Fayyad said. “The more important the technology becomes, the more likely it is that we will have regulation in place.”

Therefore, regulation will likely come when AI technology becomes embedded into every application and helps most knowledge workers do their jobs faster, Fayyad said.

AI tech’s intelligence

Fayyad added that just because it “thinks” quickly doesn’t mean AI technology will be more intelligent than humans.

“We think that only intelligent humans can sound eloquent and can sound fluent,” Fayyad added. “We mistake fluency and eloquence with intelligence.”

Because large language models are stochastic (they follow common patterns in their training data but also include a bit of randomization), they are built to tell a story, which means they may end up telling the wrong story. In addition, their nature is to sound smart, which can make humans see them as more intelligent than they really are, Fayyad said.
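To make that “bit of randomization” concrete: at each step, a language model assigns a score (logit) to every candidate next token and then samples from the resulting probability distribution rather than always picking the top choice. Below is a minimal, hypothetical sketch in Python/NumPy; the logits are invented for illustration, and real systems add refinements such as top-k or nucleus sampling.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a next-token id from model scores (logits).

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random). This sampling step
    is the randomization that lets the same prompt produce different
    stories on different runs.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Toy example: four candidate tokens with invented scores.
logits = np.array([2.0, 1.0, 0.5, 0.1])
print([sample_next_token(logits, temperature=0.8) for _ in range(5)])
```

Because the draw is random, two runs with identical input can diverge, and once a model commits to an unlikely token, it keeps building a fluent story around it, right or wrong.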

Moreover, the fact that machines are good at discrete tasks doesn’t mean they’re smarter than humans, said Sarah Kreps, John L. Wetherill Professor in the department of government and an adjunct law professor at Cornell University.

“Where humans excel is on more complex tasks that combine multiple cognitive processes that also entail empathy, adaptation and intuition,” Kreps said. “It’s hard to program a machine to do these things, and that’s what’s behind the elusive artificial general intelligence (AGI).”

AGI, which does not yet exist, is software that possesses the general cognitive abilities of a human, which would theoretically enable it to perform any task a human can do.

Next steps

For his part, Hinton has said that he is bringing the problem to the forefront to spur people to find effective ways to confront the risks of AI.

Meanwhile, Kreps said Hinton’s decision to speak up now, decades after he first worked on the technology, could seem hypocritical.

“He, of all people, should have seen where the technology was going and how quickly,” she said.

On the other hand, she added that Hinton’s position may make people more cautious about AI technology.

Using AI for good requires that users be transparent and accountable, Shah said. “There will also need to be consequences for people who misuse it,” he said.

“We have to figure out an accountability framework,” he said. “There’s still going to be harm. But if we can control a lot of it, we can mitigate some of the problems much better than we are able to do right now.”

For Hinton, the best course may be to help the next generation learn to use AI technology responsibly.

“What people like Hinton can do is help create a set of norms around the appropriate use of these technologies,” Kreps said. “Norms won’t preclude misuse but can stigmatize it and contribute to the guardrails that can mitigate the risks of AI.”

Esther Ajao is a news writer covering artificial intelligence software and systems.

Ottawa orders TikTok’s Canadian arm to be dissolved


The federal government has ordered the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but it stopped short of ordering people to stay off the app.

Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.

“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.

The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.

However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”

Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.

A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.

“We will challenge this order in court,” the spokesperson said.

“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”

The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.

At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.

A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”

Even before the review, ByteDance and TikTok were a lightning rod for privacy and safety concerns, because Chinese national security laws compel organizations in the country to assist with intelligence gathering.

Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.

Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.

Canada’s review was carried out under the Investment Canada Act, which allows the government to investigate any foreign investment that could harm national security.

While cabinet can force investors to sell parts of a business or divest their shares, Champagne has said the act doesn’t allow him to disclose details of the review.

Wednesday’s dissolution order was made in accordance with the act.

The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.

— With files from Anja Karadeglija in Ottawa

This report by The Canadian Press was first published Nov. 6, 2024.


Here is how to prepare your online accounts for when you die


LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?

It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.

Here’s how you can prepare your digital life for your survivors:

Apple

The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system: iPhones and iPads need iOS or iPadOS 15.2 or later, and Macs need macOS Monterey 12.1 or later.

For iPhones, go to Settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.

You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.

Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.

Google

Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.

When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.

You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.

There’s also an option to have your account deleted automatically three months after it’s deemed inactive, so your contacts will have to download any data before that deadline.

Facebook and Instagram

Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.

When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.

The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.

You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.

TikTok

The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.

Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.

X

It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.

Passwords

Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?

Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.

But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.

___

Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.


Google’s partnership with AI startup Anthropic faces a UK competition investigation


LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.

The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.

The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.

“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”

San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.

Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”

“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.

The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon, and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.
