

Examining AI pioneer Geoffrey Hinton’s fears about AI



When prominent computer scientist and Turing Award winner Geoffrey Hinton retired from Google, citing concerns that AI technology is spinning out of control and becoming a danger to humans, it triggered a frenzy in the tech world.

Hinton, who worked part-time at Google for more than a decade, is known as the “godfather of AI.” The AI pioneer has made major contributions to the development of machine learning, deep learning, and the backpropagation technique, a process for training artificial neural networks.
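In essence, backpropagation computes how much each weight contributed to the model’s error and nudges the weights in the direction that reduces it. The sketch below illustrates the idea on a made-up one-layer linear model with random toy data; the shapes, learning rate, and iteration count are arbitrary illustrations, not drawn from Hinton’s actual work.

```python
import numpy as np

# Toy data: 4 samples with 3 features each, and one regression target per sample.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))
w = np.zeros((3, 1))  # weights to learn

for _ in range(500):
    pred = x @ w                      # forward pass
    err = pred - y
    loss = float(np.mean(err ** 2))   # mean squared error
    grad = 2 * x.T @ err / len(x)     # backward pass: dLoss/dw via the chain rule
    w -= 0.1 * grad                   # gradient descent step
```

In a deep network the same chain-rule step is applied layer by layer, propagating the error signal backward from the output; this single-layer case shows only the core mechanic.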

In his own words

While Hinton attributed part of his decision to retire on May 1 to his age, the 75-year-old also said he regrets some of his contributions to artificial intelligence.

During a question-and-answer session at MIT Technology Review’s EmTech Digital 2023 conference on May 3, Hinton said he has changed his mind about how AI technology works. He said he now believes that AI systems can be much more intelligent than humans and are better learners.


“Things like GPT-4 know much more than we do,” Hinton said, referring to the latest iteration of research lab OpenAI’s large language model. “They have sort of common sense knowledge about everything.”

The more technology learns about humans, the better it will get at manipulating humans, he said.

Hinton’s concerns about the risks of AI technology echo those of other AI leaders who recently called for a pause in the development of AI.

While the computer scientist does not think a pause is possible, he said the risks of AI technology and its misuse by criminals and other wrongdoers — particularly those who would use it for harmful political ends — can become a danger to society.

“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial for us,” Hinton said. “We need to try and do that in a world with bad actors who want to build robot soldiers that kill people.”

AI race and need for regulation

While Hinton clarified that his decision to leave Google was not because of any specific irresponsibility on the part of the tech giant about AI technology, the computer scientist joins a group of notable current and former Google employees who have sounded the alarm about AI technology.

Last year, ex-Google engineer Blake Lemoine claimed the vendor’s AI chatbot LaMDA was sentient, could hold spontaneous conversations and had human feelings. Lemoine also said Google acted with caution and slowed development after he shared his data with the company.

Even if Google has been suitably responsible in its AI efforts, the pace at which major tech vendors have introduced new AI systems, particularly Google archrival Microsoft with its integration of ChatGPT into Azure and its Office applications, has spurred Google to scramble in what has become a frantic AI race.

However, the frenetic pace at which both Google and Microsoft are moving may be too fast to assure enterprise and consumer users of AI technology that the AI innovations are safe and ready to use effectively.

“They’re putting things out at a rapid pace without enough testing,” said Chirag Shah, a professor in the information school at the University of Washington. “We have no regulations. We have no checkpoints. We have nothing that can stop them from doing this.”

But the federal government has taken note of problems with AI and generative AI technology.

On May 4, the Biden administration invited CEOs from AI vendors Microsoft, Alphabet, OpenAI and Anthropic to discuss the importance of responsible and trustworthy innovation.

The administration also said that developers from leading AI companies, including Nvidia, Stability AI and Hugging Face will participate in public evaluations of the AI systems.

But the near total lack of checkpoints and regulation makes the technology risky, especially as generative AI is a self-learning system, Shah said.

Unregulated and unrestrained generative AI systems could lead to disaster, particularly when people with unscrupulous political intentions or criminal hackers misuse the technology.

“These things are so quickly getting out of our hands that it’s a matter of time before either it’s bad actors doing things or this technology itself, doing things on its own that we cannot stop,” Shah said. For example, bad actors could use generative AI for fraud or even to try to trigger terrorist attacks, or to try to perpetuate and instill biases.

However, as with many technologies, regulation follows when there’s mass adoption, said Usama Fayyad, professor and executive director at the Institute for Experiential AI at Northeastern University.

And while ChatGPT has attracted more than 100 million users since OpenAI released it last November, most of those users use it only occasionally rather than relying on it daily the way they do on other popular AI tools such as Google Maps or Google Translate, Fayyad said.

“You can’t do regulation ahead of understanding the technology,” he continued. Because regulators still don’t fully understand the technology, they are not yet able to regulate it.

“Just like with cars, and with guns and with many other things, [regulation] lagged for a long time,” Fayyad said. “The more important the technology becomes, the more likely it is that we will have regulation in place.”

Therefore, regulation will likely come when AI technology becomes embedded in every application and helps most knowledge workers do their jobs faster, Fayyad said.

AI tech’s intelligence

Fayyad added that just because it “thinks” quickly doesn’t mean AI technology will be more intelligent than humans.

“We think that only intelligent humans can sound eloquent and can sound fluent,” Fayyad added. “We mistake fluency and eloquence with intelligence.”

Because large language models are stochastic (they follow common patterns but include a degree of randomization), they are built to tell a story, which means they may end up telling the wrong story. In addition, their nature is to sound smart, which can lead humans to see them as more intelligent than they really are, Fayyad said.
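The “bit of randomization” Fayyad describes is commonly implemented as temperature sampling: the model’s scores for candidate next words are turned into probabilities, and one word is drawn at random. The sketch below is a minimal illustration with a made-up three-word vocabulary and invented scores; real models choose among tens of thousands of tokens.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Softmax the scores at a given temperature, then sample one token."""
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]   # subtract peak for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits), weights=probs, k=1)[0]

# Toy next-token scores after a prompt like "The sky is"
logits = {"blue": 3.0, "clear": 2.0, "falling": 0.5}

safe_pick = sample_next_token(logits, temperature=0.2)    # near-greedy, picks the likeliest word
wild_pick = sample_next_token(logits, temperature=2.0)    # flatter distribution, more surprising
```

A low temperature makes the model repeat the most common continuation; a high one makes unlikely words more probable, which is exactly how the same model can sometimes “end up telling the wrong story.”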

Moreover, the fact that machines are good at discrete tasks doesn’t mean they’re smarter than humans, said Sarah Kreps, John L. Wetherill Professor in the department of government and an adjunct law professor at Cornell University.

“Where humans excel is on more complex tasks that combine multiple cognitive processes that also entail empathy, adaptation and intuition,” Kreps said. “It’s hard to program a machine to do these things, and that’s what’s behind the elusive artificial general intelligence (AGI).”

AGI is software (that still does not formally exist) that possesses the general cognitive abilities of a human, which would theoretically enable it to perform any task that a human can do.

Next steps

For his part, Hinton has claimed that he’s bringing the problem to the forefront to try to spur people to find effective ways to confront the risks of AI.

Meanwhile, Kreps said Hinton’s decision to speak up now, decades after first working on the technology, could seem hypocritical.

“He, of all people, should have seen where the technology was going and how quickly,” she said.

On the other hand, she added that Hinton’s position may make people more cautious about AI technology.

The ability to use AI for good requires that users are transparent and accountable, Shah said. “There will also need to be consequences for people who misuse it,” he said.

“We have to figure out an accountability framework,” he said. “There’s still going to be harm. But if we can control a lot of it, we can mitigate some of the problems much better than we are able to do right now.”

For Hinton, the best thing might be to help the next generation try to use AI technology responsibly.

“What people like Hinton can do is help create a set of norms around the appropriate use of these technologies,” Kreps said. “Norms won’t preclude misuse but can stigmatize it and contribute to the guardrails that can mitigate the risks of AI.”

Esther Ajao is a news writer covering artificial intelligence software and systems.





Apple's AR/VR Headset Expected to Enter Mass Production in October Ahead of Late 2023 Launch – MacRumors



Apple’s long-rumored AR/VR headset will enter mass production in October and launch by December, according to investment firm Morgan Stanley. Apple is still expected to unveil the headset at WWDC next week, and provide developers with tools to create apps for the device, which is expected to have its own App Store.

“While we expect Apple’s AR/VR headset to be unveiled next week, our supply chain checks suggest mass production won’t start until October ’23, with general availability most likely ahead of the December holidays,” said Erik Woodring, an Apple analyst at Morgan Stanley, in a research note obtained by MacRumors.


Apple’s supply chain is preparing to assemble only 300,000 to 500,000 headsets in 2023, according to Woodring. As widely rumored, he believes the headset will have a starting price of around $3,000, and he expects gross margins to be “close to breakeven at first,” suggesting that Apple will initially make minimal profits on the device.

Morgan Stanley also reiterated that Apple plans to announce a new MacBook Air at WWDC, but it’s unclear if this information is independently sourced or simply corroborating other rumors. Apple’s keynote begins on Monday, June 5 at 10 a.m. Pacific Time.






Motorola Says iPhone Owners Are Switching to Get Foldable Phones – CNET



From Samsung to Motorola and Google, just about every major Android phone maker has released a foldable phone. But there’s one big, non-Android outlier: Apple. And according to Motorola, that’s prompting some iPhone users to make the switch. 

Specifically, Motorola has seen 20% of new Razr users coming from Apple products. That data point is from 2021 following the launch of the previous-generation Razr. 

“This is definitely the family that we have the most amount of iPhone users switching to us,” Allison Yi, Motorola’s head of North America product operations, said to CNET ahead of the company’s Razr Plus launch.


Foldable phones still account for a fraction of the global smartphone market, but the category is growing quickly as tech giants search for the next major evolution of the mobile phone. Market research firm International Data Corporation estimates that worldwide shipments of foldables will increase more than 50% in 2023 compared to 2022. Motorola’s Razr line faces the most competition from Samsung’s Galaxy Z Flip series, although Samsung hasn’t broken out its sales numbers to specify its percentage of iPhone converts.

This year is shaping up to be a milestone moment for foldables with the arrival of newcomers including Google and OnePlus, giving Motorola and other early entrants like Samsung more competition. Apple, however, is still noticeably absent from the foldable phone race, and that isn’t likely to change anytime soon.

Apple hasn’t launched a foldable iPhone, nor has it announced any plans to do so. But renders by YouTuber ConceptsiPhone imagine what a foldable iPhone could look like.


Ming-Chi Kuo, an analyst with TF International Securities known for his Apple product predictions, tweeted in April 2022 that he doesn’t expect Apple to release its first foldable gadget until 2025. A study from Counterpoint Research suggests there’s certainly demand for a foldable iPhone, at least in the US. Among the respondents, 39% named Apple as their preferred brand for a foldable phone, while 46% named Samsung — one of the earliest and most dominant players in foldable phones. Only 6% said Motorola.

The new Razr Plus and 2023 Razr are the company’s latest attempts to change that. The $1,000 Razr Plus, announced on Thursday and launching on June 23, has a giant cover screen that the company is betting will set it apart from rivals like the Galaxy Z Flip 4. Motorola is also launching a cheaper version of the Razr later this year for an undisclosed price that will be less expensive than the Plus model.

But whether it’s competition from Samsung or eventually the long-rumored iPhone Flip, Motorola isn’t fixated on its rivals.

“It’s not about what our competition is doing,” Yi said. “It’s more of what the consumer needs are, what consumers are wanting, rather than really focusing on competition.”

The Motorola Razr Plus 

John Kim/CNET

As first-timers like Google are just getting into foldable phones in 2023, companies like Motorola and Samsung are already brainstorming what could be next. Both companies showcased concept devices earlier this year with rollable or slidable screens that can expand as needed. Motorola’s take involves a smartphone-sized device that can unroll to extend its display with the press of a button. It’s still a concept, and Motorola hasn’t said when or if this rollable phone will graduate to becoming a real product.

But Jeff Snow, Motorola’s product manager for premium and flagship devices, said he could eventually see it becoming an “offshoot” of the Razr we know today. While both the Razr and the rollable concept aim to make phones more portable, they execute that goal through different means. The Razr’s clamshell shape enables it to fold shut, function as a regular phone when opened or serve as something in between when propped open halfway. The rollable concept changes its shape in a different way by expanding and contracting its screen.

“It’s a little bit of a different experience,” said Snow. “But we see it becoming part of the same category.” 

Motorola’s rollable phone concept

Andrew Lanxon/CNET

Motorola is also evaluating larger book-style foldables like the Galaxy Z Fold, although Yi said she couldn’t comment on future products. Snow also said there’s “merit to that form factor,” but the company would have to make sure it’s not compromising the regular phone experience while also providing improved productivity and content consumption. 

“That space is taking off,” he said. “It’s something we’ll pay attention to.”

For now, the Razr Plus is Motorola’s biggest attempt at standing out in a crowded market, especially as Samsung and Apple continue to command the global smartphone market. The combination of technology improving and broader awareness around foldables makes now the right time for a new Razr, according to Yi.

“Sometimes you see the technology is ready, but the market is not ready to accept it,” she said. “And consumers are not willing to adapt or adopt. But in this case, we really feel that this is the right time.” 



‘Diablo 4’ PS5 Players Hit With ‘Unable To Find A Valid License’ Error, Blizzard Comments – Forbes



I’ll be honest, my jaw was hanging open a bit when I logged into Diablo 4 a minute after 7 PM ET, got a 4 minute queue which was…actually over in four minutes. I got in, created my Barbarian and I’m already level 4, pausing only to write this article. No wild wait times, no errors, no disconnects (yet).

But I’m on PC. PS5 players? They’re not so lucky.

At the time of this writing, there is a widespread error that says “Unable to find a valid license for Diablo IV (Code 315306).” This is happening, obviously, to people who have indeed purchased the game, so something seems to be going wrong in the interaction between Blizzard and PlayStation.


However, it may be an issue between Blizzard and consoles more broadly. I have heard of at least some Xbox players getting this error message as well, but overall players seem to be having more success with Microsoft. The PS5 error appears to be more widespread, for whatever reason.

Blizzard has indeed acknowledged the issue via its community manager in a forum post. The post does cite PlayStation specifically, even if a few Xbox players are hitting the error as well. The message just says:

“We are seeing reports regarding PlayStation users experiencing Invalid License errors. The team is looking into this right now and will update once we have more information.”

If you’ve come here for advice on a fix, I’m sorry I can’t help you yet, as there does not appear to be one. I would avoid drastic steps like reinstalling the 80 GB game or anything, as that is probably going to be unnecessary and not fix the problem anyway. But yes, there is a widespread problem, you are not alone.

Naturally, many Diablo players were concerned we could have another Error 37 issue on our hands, the old error code that endlessly crippled Diablo 3 at launch. It…doesn’t seem likely that we’re headed into something that bad. Given that this is a console-specific issue, it means that Blizzard’s servers are not totally melting down as a whole. This time around Blizzard also ran a “server slam” stress test, and this is an early access launch, both of which are mitigating factors. But that’s cold comfort to PS players who can’t play yet because of the “no valid license” error.

As soon as there’s a new update on the situation I will post it here. Stay tuned, and hopefully this will be resolved soon.

Update: It seems this may be a PS-wide issue, as there are reports of many games returning the license error right now, not just Diablo 4. Bad coincidence or…sparked by a flood of Diablo logons? Not clear yet.

