
Tech

OpenAI's Sora has left AI experts either enthused or skeptical. It's left most everyone else terrified – Yahoo Canada Finance

Hello and welcome to Eye on AI.

It’s been a week since OpenAI unveiled Sora, its new text-to-video generative AI model, which the company says can turn short text prompts into strikingly realistic videos up to a minute long. The videos shared so far have been received as thoroughly impressive and a giant leap for AI video generation (said with some reservations, since OpenAI hasn’t demonstrated the model actually working or released a technical report). But today, I’m diving into a different through line in the reaction to Sora, and that’s fear.

Outside the tech and AI communities’ continued debate between accelerationism and doomerism, there are a whole lot of everyday people who are personally terrified to see this technology progress so rapidly before their eyes. This is not exactly a new response to AI, but it’s one I’ve seen growing, and it seemed to hit a new level over the past week with Sora. While scrolling through my For You Page on TikTok, for example, I saw video after video of everyday users who do not typically post about tech news expressing their fears about Sora and AI. One account I follow that typically posts about pop culture and pottery took a break from sharing techniques for making succulent planters to lament the announcement, asking, “Is anyone else concerned that AI is going to be the downfall of society or is it just me?”

“RIP reality,” replied one user.

“I give humans 5 more years tops,” commented another.

“I’m genuinely scared to death,” wrote another user, along with countless others expressing the same sentiment.

It’s easy to see why. We know this technology will be used to deceive, create harmful deepfakes, generate and spread disinformation, and sow chaos. It’s already happening—and at a critical time, with democracies on the line. The entire world has already been reeling from these issues as they’ve been perpetuated by social media platforms, and now generative AI is poised to add fuel to the fire. Perhaps even more importantly, technologies like ChatGPT, DALL-E, and now Sora are being positioned as more efficient writers, visual artists, and filmmakers, potentially swallowing up the creative arts that people enjoy, that empower us to express ourselves, and that make us feel human. And what exactly will we all gain from this? Cheaper stock footage? Movies created by AI? The creators of AI will certainly gain unfathomable amounts of money and power, but for everyone else, it’s not exactly clear what the benefits are or whether they’ll be worth the costs.

The way tech companies are going about it isn’t helping either. The shared, research-community-oriented approach to building AI that existed for decades went fully out the window when OpenAI released ChatGPT, quickly replaced with secretive development, rapid commercialization, shareholder demands, corporate lobbying, and the pursuit of market dominance and $7 trillion valuations.

“At the top AI firms, employees are now being asked to keep their accomplishments secret, and in some cases, stop publishing research papers altogether, upsetting the symbiotic relationship between the pursuit of profits and scientific discovery,” wrote Reed Albergotti in Semafor this past week.

That’s not to say there weren’t commercial incentives before, and it doesn’t account for the passionate community of AI professionals working to bring transparency and accountability to the field. But many everyday people feel burned by the impact digital technologies have had on their lives and society, by tech companies themselves, and by the state of capitalism overall. They don’t feel technology has lived up to its promises, and they’re seeing tech companies rake in record amounts of wealth (not to mention lay off thousands) while they struggle to meet their basic needs. People are already wary tech companies will prioritize profits over their best interest, and so the increasingly secretive development, lobbying, and so on isn’t exactly furthering trust in AI or the companies creating it.

It’s important to point out that many new technologies were initially met with fear only to become accepted and even critical in our lives. The introduction of electricity sparked (valid) safety concerns and an entire anti-electricity movement. Photography, while welcomed by some, received massive backlash in the art world, where people viewed it as a cheap shortcut that would supersede the true art of painting (sound familiar?). And anyone reading this likely witnessed the advent of video games—and the reactionary concerns that they’d foster mass violence and addiction—only to see that those fears haven’t exactly come true, that research actually suggests gaming has positive social and cognitive effects, and that the industry has grown past $200 billion, larger than the film and music industries combined. Even the modern mirror, now an everyday household item, was widely feared upon its introduction because it was thought to pose a moral threat to society by encouraging vanity. Many said the same about front-facing cameras and selfies.

Now that we have some historical context, we can talk about what feels different this time around, because AI does feel different. The TikToker I mentioned earlier put it well when he said he thinks there will be a day in the next few months when we wake up and cannot tell the difference between what’s fake and real online. In many ways, this is already happening, and people cannot tell the best AI-generated images from real photographs. Electricity is in fact dangerous when not properly set up and managed. And while mirrors and photography were once unfamiliar, they at least reflected reality. Generative AI, on the other hand, distorts reality and deceives us by design.

Over these past several months, I’ve seen some argue these generative AI tools are not enabling people to do anything they couldn’t already do in Photoshop or After Effects. Photoshop is probably the closest proxy for software that can distort reality, and we could debate the positive and negative impact it’s had. But I think it’s becoming clearer every day that these AI tools are far more powerful. More importantly, they manipulate reality via a black box, not a human controlling a Lasso tool. And critically, Photoshop takes serious time and skill to learn (and is of course now offering generative AI tools), whereas anyone who can write a sentence can prompt AI tools to create whatever they want almost instantaneously.

As someone who’s covered AI for almost a decade, I’ve gone from few people around me even knowing what AI is to overhearing a table of teachers out to lunch desperately trying to figure out how to handle it in their classrooms, receiving concerned messages from family members asking me to explain it, and encountering fear of AI from everyday people with every swipe on social media.

Sora may not be generally available right now, but just the knowledge of its existence is already spreading a ripple of uneasiness through society. Well, at least we can all console ourselves with the thought that, as the AI accelerationists like to say, the AI-generated video you’re watching right now is the worst AI-generated video you’ll ever watch from here on out. Wait, why don’t you look happy? 

And with that, here’s more AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

This story was originally featured on Fortune.com


Health

Here is how to prepare your online accounts for when you die


LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?

It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.

Here’s how you can prepare your digital life for your survivors:

Apple

The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system — iPhones and iPads need iOS or iPadOS 15.2, and MacBooks need macOS Monterey 12.1.

For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.

You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.

Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.

Google

Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.

When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.

You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.

There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.
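The flow Google describes above is essentially an inactivity timer with two deadlines. As a rough illustration only — this is a toy sketch of the logic described in this article, not Google's actual implementation, and the month-to-day conversion is an assumption for simplicity:

```python
from datetime import datetime, timedelta

def account_status(last_activity: datetime, now: datetime,
                   wait_months: int = 3, delete_after_months: int = 3) -> str:
    """Toy model of an inactivity timer: after `wait_months` with no
    activity, trusted contacts are notified; after a further
    `delete_after_months`, the account becomes eligible for deletion.
    Months are approximated as 30 days purely for illustration."""
    inactive_cutoff = last_activity + timedelta(days=30 * wait_months)
    delete_cutoff = inactive_cutoff + timedelta(days=30 * delete_after_months)
    if now < inactive_cutoff:
        return "active"
    if now < delete_cutoff:
        return "notify_contacts"
    return "eligible_for_deletion"
```

The practical upshot for survivors is the middle window: once contacts are notified, they have a limited period to download data before deletion kicks in.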

Facebook and Instagram

Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.

When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.

The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.

You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.

TikTok

The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.

Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.

X

It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.

Passwords

Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?

Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.

But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.
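Emergency access features like these typically work on a waiting period: a trusted contact requests access, and the vault owner has a set number of days to reject the request before access opens automatically. A toy sketch of that flow, assuming a simple request/veto model rather than any particular vendor's design:

```python
from datetime import datetime, timedelta

def emergency_access_granted(requested_at: datetime, now: datetime,
                             wait_days: int, owner_rejected: bool) -> bool:
    """Toy model of an emergency-access waiting period: the vault owner
    can reject the request at any time; if they do nothing before
    `wait_days` elapse, the trusted contact's access is granted."""
    if owner_rejected:
        return False
    return now >= requested_at + timedelta(days=wait_days)
```

The waiting period is the safeguard: a living owner simply vetoes the request, while a deceased owner's silence lets access proceed.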

___

Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.


Tech

Google’s partnership with AI startup Anthropic faces a UK competition investigation


LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.

The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.

The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.

“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”

San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.

Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”

“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.

The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.

The Canadian Press. All rights reserved.


News

Kuwait bans ‘Call of Duty: Black Ops 6’ video game, likely over it featuring Saddam Hussein in 1990s


DUBAI, United Arab Emirates (AP) — The tiny Mideast nation of Kuwait has banned the release of the video game “Call of Duty: Black Ops 6,” which features the late Iraqi dictator Saddam Hussein and is set in part in the 1990s Gulf War.

Kuwait has not publicly acknowledged banning the game, which is a tentpole product for the Microsoft-owned developer Activision and is set to be released on Friday worldwide. However, the ban comes as Kuwait still wrestles with the aftermath of Iraq’s invasion and as video game makers more broadly grapple with addressing historical and cultural issues in their work.

The video game, a first-person shooter, follows CIA operators fighting at times in the United States and also in the Middle East. Gameplay trailers show burning oilfields, a painful reminder for Kuwaitis who saw Iraqi troops set fire to the fields, causing vast ecological and economic damage; Iraqi forces damaged or set fire to over 700 wells.

There also are images of Saddam and Iraq’s old three-star flag in the footage released by developers ahead of the game’s launch. The game’s multiplayer section, a popular feature of the series, includes what appears to be a desert shootout in Kuwait called Scud, after the Soviet-made missiles Saddam fired in the war. Another is called Babylon, after the ancient city in Iraq.

Activision acknowledged in a statement that the game “has not been approved for release in Kuwait,” but did not elaborate.

“All pre-orders in Kuwait will be cancelled and refunded to the original point of purchase,” the company said. “We remain hopeful that local authorities will reconsider, and allow players in Kuwait to enjoy this all-new experience in the Black Ops series.”

Kuwait’s Media Ministry did not respond to requests for comment from The Associated Press over the decision.

“Call of Duty,” which first began in 2003 as a first-person shooter set in World War II, has expanded into an empire worth billions of dollars now owned by Microsoft. But it also has been controversial as its gameplay entered the realm of geopolitics. China and Russia both banned chapters in the franchise. In 2009, an entry in the gaming franchise allowed players to take part in a militant attack at a Russian airport, killing civilians.

But other recent games have been praised for their handling of the Mideast. Ubisoft’s “Assassin’s Creed: Mirage,” published last year, won praise for its portrayal of Baghdad during the Islamic Golden Age in the 9th century.

