Examining AI pioneer Geoffrey Hinton’s fears about AI
When prominent computer scientist and Turing Award winner Geoffrey Hinton retired from Google, citing concerns that AI technology is spiraling out of control and becoming a danger to humans, it triggered a frenzy in the tech world.
Hinton, who worked part-time at Google for more than a decade, is known as the “godfather of AI.” The AI pioneer has made major contributions to the development of machine learning, deep learning, and the backpropagation technique, a process for training artificial neural networks.
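For readers unfamiliar with backpropagation, the idea can be illustrated with a minimal sketch: run an input forward through a neuron, measure the error, then push the error's gradient backward to adjust each weight. The single-neuron setup and toy OR-style dataset below are invented for illustration, not Hinton's original formulation:

```python
# Minimal backpropagation sketch: train one sigmoid neuron on a toy
# OR-style dataset (all numbers here are invented for illustration).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: output 1 if either input is 1.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]
b = 0.0
lr = 1.0  # learning rate

for _ in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)   # forward pass
        grad = (y - target) * y * (1 - y)        # error gradient at the neuron
        w[0] -= lr * grad * x1                   # backward pass: propagate the
        w[1] -= lr * grad * x2                   # gradient to each weight...
        b -= lr * grad                           # ...and to the bias

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
```

After training, `predictions` matches the targets `[0, 1, 1, 1]`; real networks apply the same gradient-chaining idea across millions of weights and many layers.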
In his own words
While Hinton attributed part of his decision to retire on May 1 to his age, the 75-year-old also said he regrets some of his contributions to artificial intelligence.
During a question-and-answer session at MIT Technology Review’s EmTech Digital 2023 conference on May 3, Hinton said he has changed his mind about how AI technology works. He said he now believes that AI systems can be much more intelligent than humans and are better learners.
“Things like GPT-4 know much more than we do,” Hinton said, referring to the latest iteration of research lab OpenAI’s large language model. “They have sort of common sense knowledge about everything.”
The more technology learns about humans, the better it will get at manipulating humans, he said.
Hinton’s concerns about the risks of AI technology echo those of other AI leaders who recently called for a pause in AI development.
While the computer scientist does not think a pause is possible, he said the risks of AI technology and its misuse by criminals and other wrongdoers — particularly those who would use it for harmful political ends — can become a danger to society.
“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial for us,” Hinton said. “We need to try and do that in a world with bad actors who want to build robot soldiers that kill people.”
AI race and need for regulation
While Hinton clarified that his decision to leave Google was not because of any specific irresponsibility on the tech giant’s part, the computer scientist joins a group of notable current and former Google employees who have sounded the alarm about AI technology.
Last year, ex-Google engineer Blake Lemoine claimed the vendor’s AI chatbot LaMDA was sentient, able to hold spontaneous conversations and experience human feelings. Lemoine also said that Google acted with caution and slowed down development after he presented his findings.
Even if some consider Google to have been suitably responsible in its AI efforts, the pace at which major tech vendors, particularly Google archrival Microsoft, have introduced new AI systems, such as integrating ChatGPT into Azure and Office applications, has spurred Google to scramble in what has become a frantic AI race.
However, the frenetic pace at which both Google and Microsoft are moving may be too fast to assure enterprise and consumer users that these AI innovations are safe and ready for effective use.
“They’re putting things out at a rapid pace without enough testing,” said Chirag Shah, a professor in the information school at the University of Washington. “We have no regulations. We have no checkpoints. We have nothing that can stop them from doing this.”
But the federal government has taken note of problems with AI and generative AI technology.
On May 4, the Biden administration invited CEOs from AI vendors Microsoft, Alphabet, OpenAI and Anthropic to discuss the importance of responsible and trustworthy innovation.
The administration also said that developers from leading AI companies, including Nvidia, Stability AI and Hugging Face, will participate in public evaluations of these AI systems.
But the near total lack of checkpoints and regulation makes the technology risky, especially as generative AI is a self-learning system, Shah said.
Unregulated and unrestrained generative AI systems could lead to disaster, primarily when people with unscrupulous political intentions or criminal hackers misuse the technology.
“These things are so quickly getting out of our hands that it’s a matter of time before either it’s bad actors doing things or this technology itself, doing things on its own that we cannot stop,” Shah said. For example, bad actors could use generative AI for fraud or even to try to trigger terrorist attacks, or to try to perpetuate and instill biases.
However, as with many technologies, regulation follows when there’s mass adoption, said Usama Fayyad, professor and executive director at the Institute for Experiential AI at Northeastern University.
And while ChatGPT has attracted more than 100 million users since OpenAI released it last November, most of those users turn to it only occasionally rather than relying on it daily the way they rely on other popular AI tools such as Google Maps or Google Translate, Fayyad said.
“You can’t do regulation ahead of understanding the technology,” he continued. Because regulators still don’t fully understand the technology, they are not yet able to regulate it.
“Just like with cars, and with guns and with many other things, [regulation] lagged for a long time,” Fayyad said. “The more important the technology becomes, the more likely it is that we will have regulation in place.”
Therefore, regulation will likely come when AI technology becomes embedded into every application and helps most knowledge workers do their jobs faster, Fayyad said.
AI tech’s intelligence
Fayyad added that just because it “thinks” quickly doesn’t mean AI technology will be more intelligent than humans.
“We think that only intelligent humans can sound eloquent and can sound fluent,” Fayyad added. “We mistake fluency and eloquence with intelligence.”
Because large language models are stochastic (they follow common patterns in their training data but inject a degree of randomness), they are built to tell a story, which means they may end up telling the wrong one. In addition, their nature is to sound confident, which can make humans see them as more intelligent than they really are, Fayyad said.
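The randomness Fayyad describes can be illustrated with a toy sketch of temperature sampling, a standard way language models pick the next word with some randomization. The three-word vocabulary and its scores below are invented purely for illustration:

```python
# Toy illustration of temperature sampling: the model scores candidate
# next tokens, and a temperature knob controls how random the pick is.
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token from model scores using temperature sampling.

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches always picking the top-scoring token.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Invented scores for the word after "The capital of France is":
logits = {"Paris": 5.0, "Lyon": 2.0, "pizza": 0.5}
rng = random.Random(0)
samples = [sample_next_token(logits, temperature=1.5, rng=rng) for _ in range(20)]
# "Paris" dominates, but at high temperature the randomness can
# occasionally surface a lower-scoring (wrong) continuation.
```

This is why the same prompt can yield different answers on different runs: the model is sampling from a probability distribution, not retrieving a single fact.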
Moreover, the fact that machines are good at discrete tasks doesn’t mean they’re smarter than humans, said Sarah Kreps, John L. Wetherill Professor in the department of government and an adjunct law professor at Cornell University.
“Where humans excel is on more complex tasks that combine multiple cognitive processes that also entail empathy, adaptation and intuition,” Kreps said. “It’s hard to program a machine to do these things, and that’s what’s behind the elusive artificial general intelligence (AGI).”
AGI is software (that still does not formally exist) that possesses the general cognitive abilities of a human, which would theoretically enable it to perform any task that a human can do.
For his part, Hinton has said that he’s bringing the problem to the forefront to spur people to find effective ways to confront the risks of AI.
Meanwhile, Kreps said Hinton’s decision to speak up now, decades after he first worked on the technology, could seem hypocritical.
“He, of all people, should have seen where the technology was going and how quickly,” she said.
On the other hand, she added that Hinton’s position may make people more cautious about AI technology.
The ability to use AI for good requires that users are transparent and accountable, Shah said. “There will also need to be consequences for people who misuse it,” he said.
“We have to figure out an accountability framework,” he said. “There’s still going to be harm. But if we can control a lot of it, we can mitigate some of the problems much better than we are able to do right now.”
For Hinton, the best thing might be to help the next generation try to use AI technology responsibly.
“What people like Hinton can do is help create a set of norms around the appropriate use of these technologies,” Kreps said. “Norms won’t preclude misuse but can stigmatize it and contribute to the guardrails that can mitigate the risks of AI.”
Esther Ajao is a news writer covering artificial intelligence software and systems.
Apple's AR/VR Headset Expected to Enter Mass Production in October Ahead of Late 2023 Launch – MacRumors
Apple’s long-rumored AR/VR headset will enter mass production in October and launch by December, according to investment firm Morgan Stanley. Apple is still expected to unveil the headset at WWDC next week, and provide developers with tools to create apps for the device, which is expected to have its own App Store.
“While we expect Apple’s AR/VR headset to be unveiled next week, our supply chain checks suggest mass production won’t start until October ’23, with general availability most likely ahead of the December holidays,” said Erik Woodring, an Apple analyst at Morgan Stanley, in a research note obtained by MacRumors.
Apple’s supply chain is preparing to assemble only 300,000 to 500,000 headsets in 2023, according to Woodring. As widely rumored, he believes the headset will have a starting price of around $3,000, and he expects gross margins to be “close to breakeven at first,” suggesting that Apple will initially make minimal profits on the device.
Morgan Stanley also reiterated that Apple plans to announce a new MacBook Air at WWDC, but it’s unclear if this information is independently sourced or simply corroborating other rumors. Apple’s keynote begins on Monday, June 5 at 10 a.m. Pacific Time.
Earlier this year, Google announced that it planned to unify its Drive File Stream and Backup and Sync apps into a single Google Drive for desktop app. The company now says the new sync client will roll out “in the coming weeks” and has released additional information about what users can expect from the transition.
To recap, there are currently two desktop sync solutions for using Google…
Motorola Says iPhone Owners Are Switching to Get Foldable Phones – CNET
From Samsung to Motorola and Google, just about every major Android phone maker has released a foldable phone. But there’s one big, non-Android outlier: Apple. And according to Motorola, that’s prompting some iPhone users to make the switch.
Specifically, Motorola has seen 20% of new Razr users coming from Apple products. That data point is from 2021 following the launch of the previous-generation Razr.
“This is definitely the family that we have the most amount of iPhone users switching to us,” Allison Yi, Motorola’s head of North America product operations, said to CNET ahead of the company’s Razr Plus launch.
Foldable phones still account for a fraction of the global smartphone market, but the category is growing quickly as tech giants search for the next major evolution of the mobile phone. Market research firm International Data Corporation estimates that worldwide shipments of foldables will increase more than 50% in 2023 compared to 2022. Motorola’s Razr line faces the most competition from Samsung’s Galaxy Z Flip series, although Samsung hasn’t broken out its sales numbers to specify its percentage of iPhone converts.
This year is shaping up to be a milestone moment for foldables with the arrival of newcomers including Google and OnePlus, giving Motorola and other early entrants like Samsung more competition. Apple, however, is still noticeably absent from the foldable phone race, and that isn’t likely to change anytime soon.
‘Diablo 4’ PS5 Players Hit With ‘Unable To Find A Valid License’ Error, Blizzard Comments – Forbes
I’ll be honest, my jaw was hanging open a bit when I logged into Diablo 4 a minute after 7 PM ET, got a 4 minute queue which was…actually over in four minutes. I got in, created my Barbarian and I’m already level 4, pausing only to write this article. No wild wait times, no errors, no disconnects (yet).
But I’m on PC. PS5 players? They’re not so lucky.
At the time of this writing, there is a widespread error that says “Unable to find a valid license for Diablo IV (Code 315306).” This is happening, obviously, to people who very much have purchased the game, so something seems to be going wrong in the interaction between Blizzard and PlayStation.
However, it may be Blizzard and consoles more broadly. I have heard of at least some Xbox players getting this error message as well, though overall players seem to be having more success on Microsoft’s platform. The PS5 error appears to be more widespread, for whatever reason.
Blizzard has acknowledged the issue via a community manager forum post. The post refers to PlayStation specifically, even though a few Xbox players are hitting the error too. The message says:
“We are seeing reports regarding PlayStation users experiencing Invalid License errors. The team is looking into this right now and will update once we have more information.”
If you’ve come here for advice on a fix, I’m sorry I can’t help you yet, as there does not appear to be one. I would avoid drastic steps like reinstalling the 80 GB game or anything, as that is probably going to be unnecessary and not fix the problem anyway. But yes, there is a widespread problem, you are not alone.
Naturally, many Diablo players worried we could have another Error 37 situation on our hands, the error code that endlessly crippled Diablo 3 at launch. It doesn’t seem likely that we’re headed anywhere that bad: given that this is a console-specific issue, Blizzard’s servers are not melting down as a whole. This time around Blizzard also ran a “server slam” stress test, and this is an early access launch, both of which are mitigating factors. But that’s cold comfort to PS5 players who can’t play yet because of the “no valid license” error.
As soon as there’s a new update on the situation I will post it here. Stay tuned, and hopefully this will be resolved soon.
Update: It seems this may be a PS-wide issue, as there are reports of many games returning the license error right now, not just Diablo 4. Bad coincidence or…sparked by a flood of Diablo logons? Not clear yet.