
We Got Social Media Wrong. Can We Get AI Right?


Interactions that dehumanize us.

Disinformation that misleads us.

Algorithms that manipulate us.

These are the risks posed by the explosion in generative artificial intelligence: AI systems, often built on "large language models" trained on massive amounts of pre-existing content, that generate text, images, and code, and provide answers to an ever-growing range of questions.


They’re also the risks that made many people worry about social media.

What We Missed about Social Media

I wish I had worried about social media more. In 2005, my partner and I launched what would now be called a social media agency, at a time when few had even heard the term “social media.” Like a lot of people working on the nascent social web at that time, we were a lot more attuned to its potential than to its risks.

Before the advent of YouTube, Facebook, and Twitter, social media was decentralized, not very corporate, and pretty small: It felt more like a club of people exploring the way user-created content could fuel activism, community, and creativity than the next gold rush. I was so confident that this new medium was intrinsically biased towards social engagement that I used to tell companies that they would have a hard time competing with the grassroots causes and callings that drove most online participation at that time.

But I forgot about this little thing called money. It turns out that if you’re prepared to buy attention with ads and celebrity spokespeople and an endless array of contests and prizes, you can absolutely pry attention away from social advocacy and creativity and direct it towards buying stuff and reviewing stuff and even unboxing stuff on camera.

Money and Media

Once people figured out that there was money to be made with social media—and a lot of it—the dynamics changed quickly. “With digital ad revenues as their primary source of profit,” Douglas Guilbeault writes in “Digital Marketing in the Disinformation Age,” “social-media companies have designed their platforms to influence users on behalf of marketers and politicians, both foreign and domestic.”

Advertising became more sophisticated, to recover the eyeballs and attention that TV and newspapers were losing to social networks and web browsing. In turn, “digital platforms driven by ad revenue models were designed for addiction in order to perpetuate the stream of data collected from users,” as L. M. Sacasas puts it in “The Tech Backlash We Really Need.”

And content became more sensational and more polarizing and more hateful, because sensational and polarizing is what attracted the traffic and engagement that advertisers were looking for; an explosion in hate speech was the result. As Bharath Ganesh notes in “The Ungovernability of Digital Hate Culture,” “[i]n a new media culture in which anonymous entrepreneurs can reach massive audiences with little quality control, the possibilities for those vying to become digital celebrities to spread hateful, even violent, judgements with little evidence, experience, or knowledge are nearly endless.”

Most of the terrible, destructive impacts of social media stem from this core dynamic. The bite-sized velocity of social media has made it endlessly distracting and disruptive to our families, communities, relationships, and mental health. As an ad-driven, data-rich, and sensational medium, it’s ideally suited to the dissemination of misinformation and the explosion of anti-democratic manipulation. And as a space where users create most content for free, while companies control the platforms and the algorithms that determine what gets seen, it has put creators at the mercy of corporate interests and made art subservient to profits.

Where We Went Wrong

Now we’re getting ready to do it all again, only faster and with far more wide-reaching implications. As Allen and Thadani note in “Advancing Cooperative AI Governance at the 2023 G7 Summit,” “the transition to an AI future, if managed poorly, can…displace entire industries and increase socioeconomic disparity.”

We’re embracing technologies that create content so rapidly and so cheaply that even if that content is not yet quite as good as what humans might create, it will be more and more difficult for human creators to compete with machines.

We’re accepting opaque algorithms that deliver answers and “information”—in quotes, because AIs often present wholly invented “hallucinations” as facts—without much transparency about where this information came from or how the AI decided to construct its answers.

We’re sidestepping crucial questions about bias in the ways these AIs think and respond, and we’re sidestepping crucial decisions about how we deploy these AIs in ways that mitigate rather than compound existing inequalities.

How To Do AI Better

If all this makes me sound like a terrible pessimist, it’s only because I have to fight so hard against my innate fascination with emergent tech. I’m falling hard for the magic and power of AI, just like I fell hard for social media and like I fell hard for my first experiences of the web, of the internet, of the personal computer.

Those of us who are truly inspired and enchanted by the advent of new technologies are the ones who most need to rein in our enthusiasm, to anticipate the risks, and to learn from our past mistakes.

And there’s a lot we can learn from, because we know what we were warned about last time, what we disregarded, and how we missed the opportunities to avert the worst excesses of social media.

That begins with the companies driving this transformation. Instead of fighting regulation, AI companies could advocate for effective regulation so that they’re less tempted to sideline ethical and safety issues in order to race ahead of the competition. Some AI leaders are already signaling their support for regulation, as we saw when OpenAI’s Sam Altman appeared at a recent Senate hearing.

But we’ll still be in a dangerous position if regulators depend on the technical advice of AI executives in order to set appropriate rules, because even well-intentioned execs are going to be less than objective about regulations that constrain their potential for profit. AI is also a much more complicated and much faster-moving area to regulate; legislators who were hard-pressed to comprehend and regulate social media are unlikely to do better with AI.

That’s why, as King and Shull argue in “How Can Policy Makers Predict the Unpredictable,” “policy makers must prioritize developing a multidisciplinary network of trusted experts on whom to call regularly to identify and discuss new developments in AI technologies, many of which may not be intuitive or even yet imagined.”

It’s going to take international coordination and investment to develop a source of regulatory advice that is genuinely independent and capable of offering meaningful guidance: Think of an AI equivalent of the World Health Organization, with the expertise and resources to guide AI policy and response at a global level.

Becoming a Smarter User of AI

It’s just as crucial for ordinary folks to improve their own AI literacy and comprehension. We need to be alert to both the risks and opportunities AI poses for our own lives, and we need to be informed and effective citizens when it comes to pressing for government regulation.

Here, again, the example of social media is instructive. Social networks made massive investments in understanding how to capture, sustain, and monetize our attention. We only questioned this effort once we saw the impact it had on our mental health, our kids’ wellbeing, and the integrity of our democracies. By then, these networks were so embedded in our personal and professional lives that extracting oneself from social media imposed very real social and professional costs.

This time, let’s figure out how to be the agents who use the tools, rather than the subjects who get manipulated. We won’t get there by avoiding ChatGPT, DALL-E and the like. Avoidance only makes us more vulnerable to manipulation by artificially generated content or to replacement by AI “workers.”

Instead, we human workers and tech users need to become quickly and deeply literate in the tools and technologies that are about to transform our work, our daily lives, and our societies—so that we can meaningfully shape that path. In a delightful paradox, the AIs themselves can help us achieve that rapid path to AI literacy by acting as our self-documenting guides to what’s newly possible.

How AI Helps Build Mastery

If you have yet to delve deep into the potential of generative AI, here’s one place you can start: ask an AI for some examples of how it can transform your own work.

For example, you might prompt ChatGPT with something like:

You are a productivity consultant who has been hired to support the productivity and well-being of a team of policy analysts. You have been asked to identify ten ways these policy analysts can use ChatGPT to facilitate or support their work, which includes reading news stories and academic articles, attending conferences, booking briefings, drafting briefing notes and recommendations, and writing reports. Please provide a list of ten ideas for how to use ChatGPT to support these functions.

Once ChatGPT provides you with a list of options, pick one that you’d like to try out. Then ask ChatGPT to give you step-by-step instructions on how to use it for that particular task. You can even follow up your request for step-by-step instructions with a prompt like,

You are an automation researcher. Review the previous conversation and note five risks or considerations when automating these tasks or adopting this approach.
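The two-step workflow above (a role-based prompt, then a follow-up that reviews the same conversation) is a simple form of prompt chaining: each new prompt is sent along with the full message history, so the model can build on its earlier answers. Here is a minimal, library-agnostic sketch of that pattern in Python. The `ask` function is a stand-in for whatever chat-model client you use, and the role/content message format mirrors the convention common to chat APIs; this is an illustration of the pattern, not the API of any specific product.

```python
def chain_prompts(ask, prompts):
    """Run a list of prompts as one multi-turn conversation.

    `ask` is a placeholder for a chat-model call: it takes the full message
    history and returns the assistant's reply as a string. Because each
    follow-up prompt is appended to the same history, later prompts (like
    the automation-researcher review) see the earlier answers in context.
    """
    history = []
    replies = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = ask(history)  # one model call per prompt, with full context
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies


# The chain from this article: the productivity-consultant prompt first,
# then the automation-researcher follow-up over the same conversation.
workflow = [
    "You are a productivity consultant... Please provide a list of ten "
    "ideas for how to use ChatGPT to support these functions.",
    "You are an automation researcher. Review the previous conversation "
    "and note five risks or considerations when automating these tasks "
    "or adopting this approach.",
]
```

Wiring `ask` to a real client is typically a one-line adapter in whatever SDK you use, which keeps the chaining logic itself inspectable and testable without any network access.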

Seeing how generative AI analyzes and enables the automation of your own work or personal tasks is a great way to understand how AI works, where its limits lie, and how it might transform your own corner of the world.

That understanding is what will allow you to use AI instead of getting used by it, and it’s what will allow you to participate meaningfully in the public conversation about how to shape AI, right now. And now is when we need to hear many thoughtful, informed, human voices engaging with the question of how to regulate and use AI.

Otherwise, our voices will be drowned out by the ever louder, ever more pervasive voices of our new AI companions.


 

