Artificial intelligence (AI) is developing at breakneck speed, and OpenAI has just delivered one of its biggest announcements in some time. Sora is OpenAI’s newest AI model, capable of creating realistic and imaginative scenes from nothing more than text instructions.
With Sora, industry professionals will now be able to create realistic and complex videos all without leaving their seat.
This matters more than ever: consumers today are watching more video, and demand for short-form content has risen rapidly, with 66% of consumers finding this type of content the most engaging, according to a report by Munch, an AI-powered automation platform for social media.
According to the report, video content is no longer optional but a necessity for businesses and brands aiming for success, with 42% of businesses preferring Instagram and 26% preferring Facebook for posting such videos. TikTok does not rank among the top three platform choices for marketers.
Given how central short-form video has become to marketing efforts, here’s a breakdown of exactly what you need to know about Sora and how it can help industry professionals in the space.
What is Sora?
Sora is OpenAI’s attempt to teach AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems requiring real-world interaction, the company said in a statement.
As such, Sora is a text-to-video model that can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.
The model has a deep understanding of language, which enables it to accurately interpret prompts and generate compelling characters that express vibrant emotions. Sora can also create multiple shots within a single generated video that accurately depict characters and visual style.
“Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes,” said OpenAI.
How does it work exactly?
This part is a bit technical, but according to OpenAI, Sora takes inspiration from large language models, which acquire generalist capabilities by training on internet-scale data.
“The success of the LLM paradigm is enabled in part by the use of tokens that elegantly unify diverse modalities of text—code, math and various natural languages. In this work, we consider how generative models of visual data can inherit such benefits,” it said.
OpenAI explained in its technical report that where LLMs have text tokens, Sora has visual patches. Patches have previously been shown to be an effective representation for models of visual data.
“We find that patches are a highly scalable and effective representation for training generative models on diverse types of videos and images,” it said.
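To make the patch analogy concrete, here is a minimal sketch of how a video clip might be chopped into flattened “spacetime patches” that a transformer could treat like tokens. The tensor shapes and patch sizes are illustrative assumptions, and the sketch works on raw pixels rather than the compressed latent codes OpenAI describes, so it shows the idea rather than Sora’s actual pipeline.

```python
import numpy as np

def to_spacetime_patches(video, pt=4, ph=16, pw=16):
    """Split a video tensor into flattened spacetime patches.

    video: array of shape (T, H, W, C) -- frames, height, width, channels.
    pt, ph, pw: illustrative patch sizes along time, height and width.
    Returns (num_patches, pt * ph * pw * C): a sequence of "visual tokens"
    a transformer could attend over, analogous to text tokens in an LLM.
    """
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dimensions must divide evenly"

    # Carve the clip into (pt, ph, pw) blocks, then flatten each block into one token.
    patches = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)  # group the block axes together
    return patches.reshape(-1, pt * ph * pw * C)

# Example: a 16-frame, 128x128 RGB clip becomes a sequence of 256 patch tokens.
clip = np.random.rand(16, 128, 128, 3).astype(np.float32)
tokens = to_spacetime_patches(clip)
print(tokens.shape)  # (256, 3072)
```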
Sora, then, is essentially a diffusion model: it generates a video by starting with one that looks like static noise and gradually transforming it by removing the noise over many steps.
It is, as a result, capable of generating entire videos all at once or extending generated videos to make them longer.
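For a feel of what that denoising loop looks like in code, here is a toy sketch of generic diffusion-style sampling: start from pure noise and repeatedly subtract an estimate of that noise. The `toy_denoiser`, the step schedule and the tensor shape are placeholders of our own, not Sora’s actual sampler or architecture.

```python
import numpy as np

def sample_video(denoiser, shape=(16, 32, 32, 3), steps=50, seed=0):
    """Generic diffusion-style sampling: begin with static-like noise and
    progressively remove it over `steps` iterations.

    denoiser(x, t) is assumed to estimate the noise present in x at step t;
    shape is (frames, height, width, channels) and is purely illustrative.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)       # pure noise: looks like TV static

    for t in reversed(range(1, steps + 1)):
        noise_estimate = denoiser(x, t)  # the model's guess at the remaining noise
        x = x - noise_estimate / steps   # strip away a small fraction each step
    return x

# Placeholder "model": a real system would use a trained transformer conditioned
# on the text prompt; here we simply return a damped copy of the input.
def toy_denoiser(x, t):
    return 0.5 * x

video = sample_video(toy_denoiser)
print(video.shape, round(float(video.std()), 3))
```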
The model also builds on past research in DALL·E and GPT models. It uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user’s text instructions in the generated video more faithfully.
In addition to being able to generate a video solely from text instructions, the model is able to take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small detail.
What are some of its weaknesses?
As with all AI models, weaknesses, bias and misinformation can sometimes arise, and Sora is no exception, as OpenAI admits.
Currently, Sora may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark, said OpenAI.
The model may also confuse spatial details of a prompt, for example, mixing up left and right, and may struggle with precise descriptions of events that take place over time, such as following a specific camera trajectory.
Ahead of its public launch, OpenAI said that it will be working with domain experts in areas such as misinformation, hateful content and bias, who will be adversarially testing the model.
“We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora. We plan to include C2PA metadata in the future if we deploy the model in an OpenAI product,” it said. It added that it will also utilise the existing safety methods it has already built for products that use DALL·E 3.
For example, once in an OpenAI product, its text classifier will check and reject text input prompts that are in violation of its usage policies. These include those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.
“We’ve also developed robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user,” it said.
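As a rough illustration of how such a prompt-and-frame review pipeline could be wired together, the sketch below rejects a policy-violating prompt up front and then checks every generated frame before release. The category names, function signatures and classifiers are hypothetical stand-ins based on the description above, not OpenAI’s actual classifiers or API.

```python
from typing import Callable, List, Set

# Hypothetical policy categories drawn from the article; the classifiers passed in
# below are placeholders, not OpenAI's real moderation models.
BLOCKED_CATEGORIES = {"extreme_violence", "sexual_content", "hateful_imagery",
                      "celebrity_likeness", "third_party_ip"}

def moderate_generation(prompt: str,
                        classify_text: Callable[[str], Set[str]],
                        generate_video: Callable[[str], List[bytes]],
                        classify_frame: Callable[[bytes], Set[str]]) -> List[bytes]:
    """Check the prompt against usage policies, generate, then review every frame."""
    # 1. Text classifier rejects prompts that violate usage policies.
    if classify_text(prompt) & BLOCKED_CATEGORIES:
        raise ValueError("Prompt rejected: violates usage policies")

    # 2. Generate the video (stand-in for the actual model call).
    frames = generate_video(prompt)

    # 3. Image classifier reviews each frame before anything is shown to the user.
    for frame in frames:
        if classify_frame(frame) & BLOCKED_CATEGORIES:
            raise ValueError("Output withheld: a frame violates usage policies")
    return frames

# Illustrative usage with dummy callables that flag nothing.
safe = moderate_generation(
    "a golden retriever surfing at sunset",
    classify_text=lambda p: set(),
    generate_video=lambda p: [b"frame-1", b"frame-2"],
    classify_frame=lambda f: set(),
)
print(len(safe), "frames passed review")
```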
OpenAI will also engage policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology.
Adopting the technology in marketing
With all that said, the main question remains how marketers and industry professionals can adopt the technology in their day-to-day work, and according to industry professionals MARKETING-INTERACTIVE spoke to, the potential is “stunning”.
According to Pramodh Rai, co-founder at Cyber Sierra, Sora’s capabilities and low barrier to entry mean there is a “very high chance” of it igniting experimentation among creative teams, including marketers and advertisers.
“Content creation is now revolutionised in very exciting ways. Our ability to prototype rapidly and produce high quality videos as well as dynamic visuals significantly reduces time to market and resources required for traditional ad campaigns,” said Rai. He added that routine editing tasks and content tailored to different platforms as well as audiences can be automated, which frees creative teams to focus on strategic and innovative aspects of their campaigns.
“As the line between reality and AI is blurring thanks to advancements in AI such as with Sora, personalised advertising through custom content is set to soar. Existing workflows can be streamlined to enable more collaboration between team members as well as tighter feedback loops. It looks like we can do this cheaply too, so it’s going to spark experimentation at new levels across society,” he said.
Agreeing with him, Milind, an AI scientist from Mercedes who was expressing independent views, noted that from what has been shared so far, the capabilities of the model seem “quite amazing”.
“The consistency and quality of the videos over an extended period of time is quite a breakthrough. It would be safe to say that for use cases such as hyper-personalised video content creation it would be quite useful. I’m also sure that it will continue to improve, offering sound generation and fine-grained control in future,” he said.
Exercising caution around the technology
That said, one should not get too excited about the technology too quickly. According to Edwin Yeo, general manager at Strategic Public Relations Group, marketers need to be “wary” of adopting Sora too speedily, or they do so with a “big degree of risk”. He said:
If there’s one thing we learned from advancement in technology, it’s that technology tends to outpace regulations and safety concerns.
He added that with Sora and generative AI in general, questions over usage and copyright still remain a big challenge for marketers and content producers.
Apart from copyright and safety concerns, there’s also a question still of quality, added Yeo. “It’s not great with hands, just like AI art, and there are still questions of the computing power needed to output videos in 4K or 8K formats.”
He added that, personally, he has been using the likes of Midjourney for concept presentations. Once the concept is approved, though, he still goes back to photography and DI. “I reckon for the immediate future, Sora will be similarly useful. That’s already a big impact on the marketing workflow, but we’re very far from Sora being able to replace video production,” said Yeo.
Adding to his point, Rai noted that there are also still a number of potential brand safety concerns that marketers need to be wary of.
“For one, deepfakes and misinformation constitute a new level of risk not seen before, which could impact brand safety,” said Rai. Additionally, brands may face issues with inappropriate content generation that does not align with brand values, or that may be offensive or insensitive. Rai said:
Brand authenticity may take a hit if the world starts to rely on AI generated content and less on human oversight.
Aside from these issues, marketers should also be wary of a lack of human input, as AI models such as Sora could misinterpret creative briefs and also create data privacy and security challenges, which may lead to copyright infringement cases.
“Marketers need to use Sora for generating content that resonates with individual preferences and behaviours while placing humans in the part of the loop where the combination of creativity, strategy, analytics and inimitable personal touch shines,” said Rai.
The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.
Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.
“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.
The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.
However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”
Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.
A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.
“We will challenge this order in court,” the spokesperson said.
“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”
The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.
At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.
A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”
Even before the review, ByteDance and TikTok were lightning rods for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.
Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.
Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.
Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment with the potential to harm national security.
While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.
Wednesday’s dissolution order was made in accordance with the act.
The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.
— With files from Anja Karadeglija in Ottawa
This report by The Canadian Press was first published Nov. 6, 2024.
LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?
It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.
Here’s how you can prepare your digital life for your survivors:
Apple
The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up, you’ll need an Apple device with a fairly recent operating system: iPhones and iPads need iOS or iPadOS 15.2, and MacBooks need macOS Monterey 12.1.
For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.
You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.
Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.
Google
Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.
When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.
You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.
There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.
Facebook and Instagram
Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.
When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.
The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.
You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.
TikTok
The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.
Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.
X
It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.
Passwords
Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?
Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.
But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.
LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.
The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.
The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.
“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”
San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.
Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”
“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.
The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.