
Tech

‘Godfather of AI’ quits Google with regrets and fears about his life’s work

Geoffrey Hinton (foreground) has left Google to speak out on the dangers of AI.

Image: Getty

Geoffrey Hinton, who alongside two other so-called “Godfathers of AI” won the 2018 Turing Award for their foundational work that led to the current boom in artificial intelligence, now says a part of him regrets his life’s work. Hinton recently quit his job at Google in order to speak freely about the risks of AI, according to an interview with the 75-year-old in The New York Times.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” said Hinton, who had been employed by Google for more than a decade. “It is hard to see how you can prevent the bad actors from using it for bad things.”

Hinton notified Google of his resignation last month, and on Thursday talked to CEO Sundar Pichai directly, according to the NYT. Details of that discussion were not disclosed.

The lifelong academic joined Google after it acquired a company started by Hinton and two of his students, one of whom went on to become chief scientist at OpenAI. Hinton and his students had developed a neural network that taught itself to identify common objects like dogs, cats, and flowers after analyzing thousands of photos. It’s this work that ultimately led to the creation of ChatGPT and Google Bard.

According to the NYT interview, Hinton was happy with Google’s stewardship of the technology until Microsoft launched the new OpenAI-infused Bing, challenging Google’s core business and sparking a “code red” response inside the search giant. Such fierce competition might be impossible to stop, Hinton says, resulting in a world with so much fake imagery and text that nobody will be able to tell “what is true anymore.”

Google’s chief scientist, Jeff Dean, worked to soften the blow with the following statement: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Hinton also took to Twitter to clarify his position on Google’s stewardship.

The spread of misinformation is only Hinton’s most immediate concern. On a longer timeline, he’s worried that AI will eliminate rote jobs, and possibly humanity itself, as AI begins to write and run its own code.

“The idea that this stuff could actually get smarter than people — a few people believed that,” said Hinton to the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Update May 1st, 8:48AM ET: Added tweet from Hinton clarifying his position on Google’s stewardship so far.

 


Tech

Slack researcher discusses the fear, loathing and excitement surrounding AI in the workplace

SAN FRANCISCO (AP) — Artificial intelligence’s recent rise to the forefront of business has left most office workers wondering how often they should use the technology and whether a computer will eventually replace them.

Those were among the highlights of a recent study conducted by the workplace communications platform Slack. After conducting in-depth interviews with 5,000 desk workers, Slack concluded there are five types of AI personalities in the workplace: “The Maximalist,” who regularly uses AI on the job; “The Underground,” who covertly uses AI; “The Rebel,” who abhors AI; “The Superfan,” who is excited about AI but hasn’t used it yet; and “The Observer,” who is taking a wait-and-see approach.

Only 50% of the respondents fell under the Maximalist or Underground categories, posing a challenge for businesses that want their workers to embrace AI technology. The Associated Press recently discussed the excitement and tension surrounding AI at work with Christina Janzer, Slack’s senior vice president of research and analytics.

Q: What do you make of the wide range of perceptions about AI at work?

A: It shows people are experiencing AI in very different ways, so they have very different emotions about it. Understanding those emotions will help us understand what is going to drive usage of AI. If people are feeling guilty or nervous about it, they are not going to use it. So we have to understand where people are, then point them toward learning to value this new technology.

Q: The Maximalist and The Underground both seem to be early adopters of AI at work, but what is different about their attitudes?

A: Maximalists are all in on AI. They are getting value out of it, they are excited about it, and they are actively sharing that they are using it, which is a really big driver for usage among others.

The Underground is the one that is really interesting to me because they are using it, but they are hiding it. There are different reasons for that. They are worried they are going to be seen as incompetent. They are worried that AI is going to be seen as cheating. And so with them, we have an opportunity to provide clear guidelines to help them know that AI usage is celebrated and encouraged. But right now they don’t have guidelines from their companies and they don’t feel particularly encouraged to use it.

Overall, there is more excitement about AI than not, so I think that’s great. We just need to figure out how to harness that.

Q: What about the 19% of workers who fell under the Rebel description in Slack’s study?

A: Rebels tend to be women, which is really interesting. Three out of five rebels are women, which I obviously don’t like to see. Also, rebels tend to be older. At a high level, men are adopting the technology at higher rates than women.

Q: Why do you think more women than men are resisting AI?

A: Women are more likely to see AI as a threat, more likely to worry that AI is going to take over their jobs. To me, that points to women not feeling as trusted in the workplace as men do. If you feel trusted by your manager, you are more likely to experiment with AI. Women are reluctant to adopt a technology that might be seen as a replacement for them whereas men may have more confidence that isn’t going to happen because they feel more trusted.

Q: What are some of the things employers should be doing if they want their workers to embrace AI on the job?

A: We are seeing three out of five desk workers don’t even have clear guidelines for using AI, because their companies just aren’t telling them anything, so that’s a huge opportunity.

Another opportunity is to encourage AI usage in the open. If we can create a culture where it’s celebrated, where people can see the way others are using it, then they can know that it’s accepted and celebrated. Then they can be inspired.

The third thing is we have to create a culture of experimentation where people feel comfortable trying it out, testing it, getting comfortable with it because a lot of people just don’t know where to start. The reality is you can start small, you don’t have to completely change your job. Having AI write an email or summarize content is a great place to start so you can start to understand what this technology can do.

Q: Do you think the fears about people losing their jobs because of AI are warranted?

A: People with AI are going to replace people without AI.



Tech

Biden administration to provide $325 million for new Michigan semiconductor factory


WASHINGTON (AP) — The Biden administration said Tuesday that it would provide up to $325 million to Hemlock Semiconductor for a new factory, a move that could help give Democrats a political edge in the swing state of Michigan ahead of election day.

The funding would support 180 manufacturing jobs in Saginaw County, where Republicans and Democrats were neck-and-neck in the past two presidential elections. There would also be construction jobs tied to the factory, which would produce hyper-pure polysilicon, a building block for electronics and solar panels, among other technologies.

Commerce Secretary Gina Raimondo said on a call with reporters that the funding came from the CHIPS and Science Act, which President Joe Biden signed into law in 2022. It’s part of a broader industrial strategy that the campaign of Vice President Kamala Harris, the Democratic nominee, supports, while Republican nominee Donald Trump, the former president, sees tariff hikes and income tax cuts as better ways to support manufacturing.

“What we’ve been able to do with the CHIPS Act is not just build a few new factories, but fundamentally revitalize the semiconductor ecosystem in our country with American workers,” Raimondo said. “All of this is because of the vision of the Biden-Harris administration.”

A senior administration official said the timing of the announcement reflected the negotiating process for reaching terms on the grant, rather than any political considerations. The official insisted on anonymity to discuss the process.

After site work, Hemlock Semiconductor plans to begin construction in 2026 and then start production in 2028, the official said.

Running in 2016, Trump narrowly won Saginaw County and Michigan as a whole. But in 2020 against Biden, both Saginaw County and Michigan flipped to the Democrats.



News

The Internet is Littered in ‘Educated Guesses’ Without the ‘Education’

Although no one likes a know-it-all, they dominate the Internet.

The Internet began as a vast repository of information. It quickly became a breeding ground for self-proclaimed experts seeking what most people desire: recognition and money.

Today, anyone with an Internet connection and some typing skills can position themselves, regardless of their education or experience, as a subject matter expert (SME). From relationship advice, career coaching, and health and nutrition tips to citizen journalists practicing pseudo-journalism, the Internet is awash with individuals—Internet talking heads—sharing their “insights,” which are, in large part, essentially educated guesses without the education or experience.

The Internet has become a 24/7/365 sitcom where armchair experts think they’re the star.

Not long ago, years, sometimes decades, of dedicated work and education in one’s field were required to be recognized as an expert. The knowledge and opinions of doctors, scientists, historians, et al. were respected due to their education and experience. Today, a social media account and a knack for hyperbole are all it takes to present oneself as an “expert” and achieve Internet fame that can be monetized.

On the Internet, nearly every piece of content is self-serving in some way.

The line between actual expertise and self-professed knowledge has become as blurry as an out-of-focus selfie. Inadvertently, social media platforms have created an informal degree program where likes and shares are equivalent to degrees. After reading a few select articles they’ve found online and watching some TikTok videos, a person can post a video claiming they’re an herbal medicine expert. Their new “knowledge,” which their followers will absorb, claims that Panda dung tea—one of the most expensive teas in the world, and not what its name implies—cures everything from hypertension to existential crisis. Meanwhile, registered dietitians are shaking their heads, wondering how to compete against all the misinformation their clients are exposed to.

More disturbing are individuals obsessed with evangelizing their beliefs or conspiracy theories. These people write in-depth blog posts, such as Elvis Is Alive and the Moon Landings Were Staged, with links to obscure YouTube videos, websites, social media accounts, and blogs. Regardless of your beliefs, someone or a group on the Internet shares them, thus confirming your beliefs.

Misinformation is the Internet’s currency used to get likes, shares, and engagement; thus, it often spreads like a cosmic joke. Consider the prevalence of clickbait headlines:

  • You Won’t Believe What Taylor Swift Says About Climate Change!
  • This Bedtime Drink Melts Belly Fat While You Sleep!
  • In One Week, I Turned $10 Into $1 Million!

Titles that make outrageous claims are how the content creator gets reads and views, which generates revenue via affiliate marketing, product placement, and pay-per-click (PPC) ads. Clickbait headlines are how you end up watching a TikTok video by a purported nutrition expert adamantly asserting you can lose belly fat while you sleep by drinking, for 14 consecutive days, a concoction of raw eggs, cinnamon, and apple cider vinegar 15 minutes before going to bed.

Our constant search for answers that’ll explain our convoluted world and our desire for shortcuts to success are how Internet talking heads achieve influencer status. Because we tend to seek low-hanging fruit, we listen to those who have little experience or knowledge of the topics they discuss yet are astute enough to know what most people want to hear.

There’s a trend, more disturbing than spreading misinformation, that needs to be called out: individuals who’ve never achieved significant wealth or traded stocks giving how-to-make-easy-money advice, the appeal of which is undeniable. Several people I know have lost substantial money by following the “advice” of Internet talking heads.

Anyone on social media claiming to have a foolproof money-making strategy is lying. They wouldn’t be peddling their money-making strategy if they could make easy money.

Successful people tend to be secretive.

Social media companies design their respective algorithms to serve the interests of their advertisers—their source of revenue; hence, content from Internet talking heads appears most prominently in your feeds. When a video of a self-professed expert goes viral, likely because it pressed an emotional button, the more people see it, the more engagement it receives in likes, shares and comments, creating a cycle akin to a tornado.

Imagine scrolling through your TikTok feed and stumbling upon a “scientist” who claims they can predict the weather using only aluminum foil, copper wire, sea salt and baking soda. You chuckle, but you notice their video got over 7,000 likes, has been shared over 600 times and received over 400 comments. You think to yourself, “Maybe this guy is onto something.” What started as a quest to achieve Internet fame evolved into an Internet-wide belief that weather forecasting can be as easy as DIY crafts.

Since anyone can call themselves “an expert,” you must cultivate critical thinking skills to distinguish genuine expertise from self-professed experts’ self-promoting nonsense. While the absurdity of the Internet can be entertaining, misinformation has serious consequences. The next time you read a headline that sounds too good to be true, it’s probably an Internet talking head making an educated guess without the education, chasing Internet fame they can monetize.

______________________________________________________________

 

Nick Kossovan, a self-described connoisseur of human psychology, writes about what’s on his mind from Toronto. You can follow Nick on Twitter and Instagram @NKossovan.

 

