
Who regulates social media?

Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority who says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.

Now, it must be made clear at the outset that these companies are by no means "unregulated," in the sense that no legal business in this country is unregulated. Facebook, for instance, certainly a social media company, received a record $5 billion fine last year for failing to comply with rules set by the FTC. But that was not for violating social media regulations; there aren't any.

Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol, and automotive have additional rules, indeed entire agencies, specific to them; not so for social media companies.

I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)


Social media can be roughly defined as platforms where people sign up to communicate and share messages and media, and that's quite broad enough already without adding in things like ad marketplaces, competition quashing, and other serious issues.

Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.

1. Federal regulators


The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.

The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.

The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.

The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.

But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take Trump’s feeble executive actions along these lines seriously.

The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.

The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility towards Twitter as it does towards Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)

On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.

You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.

In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.

The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn't have rules that, when violated, result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis by the time a complaint is formalized. Equifax's historic breach and minimal consequences are an instructive case.

So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.

2. State legislators

States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).

The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.

Californian officials decided that if the feds weren't going to step up, there was no reason the state shouldn't at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws; this is so a state isn't prohibited from taking measures for its citizens' safety while the slower machinery of Congress grinds along.

The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.

BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google, and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.

Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments, they have been invaluable in forcing the companies to admit on the record exactly what they're doing, and how. Sometimes it's quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.

What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.

In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.

This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.

State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.

3. Congress


What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest, and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.

Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there's more opportunity to get their fingers in the pie before it's baked. National rules, in addition to coming far too late, are much more likely to also be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it's hardly official.)

But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.

Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don't fit neatly into any of the categories that existing specialists regulate, something made plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.

Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.

4. European regulators

Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders, and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.

The most obvious example is the General Data Protection Regulation, or GDPR: a set of rules, or rather an augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.

But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the E.U. member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.

Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.

When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.

The rich tapestry of European regulations is really too complex a topic to address here in the detail it deserves, and it also reaches beyond the question of who exactly regulates social media. Europe's strategy of, if you will, speaking slowly and carrying a big stick promises to produce results on a grand scale, but for the purposes of this article Europe cannot really be considered an effective policing body.

(TechCrunch’s E.U. regulatory maven Natasha Lomas contributed to this section.)

5. No one? Really?

As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised, or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.

As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.


What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.

Like the FCC (and somewhat like the E.U.'s DPAs), this agency should be officially nonpartisan (though, like the FCC, it will almost certainly vacillate in its allegiance) and should have specific mandates on what it can and can't do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped are well beyond the scope of this article.)

Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that's a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies grounded in, or at least addressing, a minimum of sufficiently objective data.

Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion, and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.

With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.

Without such an authority, these companies and their activities (about whose scope we have only the faintest clue) will remain in a blissful limbo, picking and choosing by which rules to abide and against which to fulminate and lobby. We must help them decide, and weigh our own priorities against theirs. They have already abused the naive trust of their users across the globe; perhaps it's time we asked them to trust us for once.


Touché/Omnicom exec says 2024 'an inflection point' for media biz – National Post


‘This year will be the first time that we’ll see a global ad spend of over a trillion’ U.S. dollars, says Charles Etienne Morier


Like their partners in the Canadian news industry, the country’s media agencies are undergoing unprecedented transformation. The National Post is holding conversations with leaders of Canada’s largest agencies on the fast-changing fundamentals. This week, Charles Etienne Morier, chief operating officer of Touché! & Omnicom Media Group Montreal, speaks to writer Rebecca Harris.


How have the fundamentals of media planning and buying changed in recent years?


It has dramatically changed with technological advancement and shifts in consumer behaviour. Now, more than 80% of digital ad spend is transacted through digital buying platforms, so it has become increasingly important for our workforce to have a good understanding of the algorithms and how to maximize them.

The process has changed also. It’s no longer about creating a 30-second spot and then selecting a media channel to distribute the message. We start with the audiences, the channels where we need to reach them, and then tailor a message that will be appealing. And so, we need to work even more closely with our creative partners.

And we think 2024 will change even more. It’s going to be an inflection point despite all the changes we have gone through over the last three years. This year will be the first time that we’ll see a global ad spend of over a trillion (U.S. dollars). It shows the responsibility that we have as advertisers and agencies to spend that money wisely and ensure we make every ad dollar count, and that we are engaging consumers in a way that speaks to them in an age where there’s a lot of uncertainty about how they share their data and private information.


What skills do today’s media professionals need?

The team now needs to be proficient in so many areas. We used to have strategy, media buying and planning, and optimization and reporting. Now, we need to be able to help our clients navigate within this complex digital ecosystem with clean rooms (environments where brands, publishers and advertisers share data), the deprecation of cookies, and dynamic creative optimization. Our agency has changed dramatically in the sense that we offer much more depth in our services now. So, our leaders need to be proficient in being able to discuss those subjects with clients. We have a strong learning system in place and it’s part of our value, to make sure that our teams stay curious because it’s changing so much by the day.

What are the brands breaking through to consumers doing right?

Brands that are breaking through are able to prioritize authenticity, relevance and creativity in their messaging and their approach to media. Consumers are bombarded with messages every day and there’s ad blocking, so we have to find new ways of capturing consumer attention… We need to make ads relevant to consumers and bring more value into their lives. And leverage the data we have at our disposal to tailor the message to specific audience segments and engage the consumer in multiple touchpoints.


Cookie deprecation is a big topic this year (Third-party cookies are coming to an end.) What conversations are you having with clients now and what’s the expectation in terms of impact?

We’ve been working for almost two years on educating our clients, making sure that they are prepared. So, we are doing assessments to make sure we have everything in place to prepare for the impact of the deprecation of cookies. It will change a lot for measurement because we will not be able to measure the same things the same way. We will not be able to target in the same way. But I see it as an opportunity somewhat, to be able to come back to (advertising) that is more creative and more around content and context… and more in relation to targeting the right people in the right moment instead of relying too much on the data.

Can you share your predictions for where the industry is going next?

Retail media (platforms that allow retailers to sell ads to brands) will be expanding. Now, the stat is one in five dollars will be spent in retail media globally and 20 per cent of the commerce ecosystem will be done online. So, it’s going to be more important to have a strong omnichannel approach and deliver a positive consumer experience.


There’s also social commerce… There are so many influencers – there are 50 million creators globally. So how, as an agency, we’re able to harness that and power that at scale is crucial, and how we can partner with creators effectively. It’s changing a lot in media planning on that front. There is a real shift from curation to generation of content.

Television as well is changing a lot, from linear to connected TV. There is a streaming war at the moment, so we need to create new standards, overcome walled gardens (where the platform provider controls the content and data) and figure out measurement.

And obviously automation will play a bigger role. The way I see it is (artificial intelligence) will bring more value to what we do to bring smarter, faster and more effective work. For me, it’s not just about AI itself. It’s more about connected intelligence with the human at the centre of it. So, it’s how we can use the tool to amplify what we are doing.


13-year-old charged for online harassment, banned from social media – CBC.ca


A 13-year-old western Quebec boy accused of harassing and threatening another child online is facing four charges and conditions restricting his internet activity.

In a news release issued Friday, police in the MRC des Collines-de-l’Outaouais said the alleged victim’s parent filed a complaint after being “subjected to the suspect’s wrath for several months.”

Police said they went to the accused’s home on Sunday to arrest him, but had to return with a warrant the following day after his parents initially refused to co-operate.


The 13-year-old was arrested Monday evening and detained. He was formally charged on Tuesday with criminal harassment, uttering threats to cause death or bodily harm, distributing child pornography and unauthorized possession of an unspecified restricted weapon.

Among his release conditions, the boy can’t access social media and can’t use the internet without adult supervision.

Police didn’t offer details about the alleged threats or where the youth lives. The municipality includes the communities of Chelsea, Quyon, Val-des-Monts and Wakefield.


Muting people on social media is fast and free and will change your life – The Guardian


I don’t generally believe in life hacks. As much as I’d love to imagine that one easy tweak could resurface my life like it’s a cracked tennis court, time and experience have shown me that positive change usually comes slowly and incrementally.

But there is one hack I fully believe in. It’s fast and free, and will instantly change your life for the better: just mute people who annoy you on social media.

The process is different for each platform – typically, you go to the offending poster’s profile page or one of their posts and tap “mute”, “snooze” or “unfollow” – but then that’s it! This digital dusting leaves your social media spick-and-span, or at least less grimy than before. They’re gone from your timeline, and so are the various minor irritations they brought. And, unlike unfollowing or blocking someone, the muted party has no idea they’ve been silenced, so you don’t risk any awkwardness or drama.


I have a handful of people muted. A couple of them are people I don’t want to unfollow. Others I have unfollowed, but I’ve also muted them because someone else might repost them and sully my pristine timeline. One is a semi-famous person who was rude to me many years ago about a work thing; another was rude to my friend. There’s also an ex and someone who constantly humble-brags in a way that makes me want to bang my head against something hard.

These individuals brought out the worst in me. When I saw their posts, I felt angry, petty and small. I wondered how much it might cost to buy billboard signs along major highways printed with bullet points detailing how, actually, they are terrible.

Fortunately, I almost never think of these individuals anymore because I’ve muted them across all platforms. Unless someone brings them up in conversation, I usually forget these people exist. They have been weeded from the lush garden of my brain.

But don’t just take my word for it.

“Muting accounts that repeatedly upset you is putting in digital boundaries to create a healthier digital environment,” says Bailey Parnell, founder and president of the Center for Digital Wellbeing. It allows you to avoid distressing content without severing connections, she says – a solution for those perplexing situations in which a relationship with someone is important to you, despite their bothersome online presence.

“This can preserve your mental wellbeing while maintaining social or professional networks,” she says.

This might seem like obvious advice. Yet it can be hard to follow. The irritation we feel when seeing someone’s bad posts can come with a satisfying rush: look at them! Being annoying!

“There can be a dopamine kick that comes on the back end of big emotions,” says Monica Amorosi, a licensed trauma therapist in New York City. We may come to crave the adrenaline spikes that accompany content that makes us feel shock, rage or disgust.

“If we have mundane lives, if we are understimulated, if we are bored or underwhelmed, then consuming this material can become a form of entertainment or distraction,” Amorosi says.

Amorosi emphasizes that it’s important not to create a “space of ignorance” on our feeds by avoiding different perspectives or troubling news about current events. But this does not mean that social media should only be a place to access upsetting information. Our feeds “can be utilized for healthy, positive education, connecting with like-minded people, seeing nuance and variety in the world, fact-checking information, learning new hobbies or ideas”, she says.

As such, muting is perhaps most effectively deployed against those who irritate you in a bland, quotidian way – a pompous co-worker, for instance. Not seeing a humble bragger pretend to be embarrassed about another professional success isn’t going to limit my worldview. Instead, I am regaining five to 10 minutes I might have wasted taking a screenshot of their post and complaining to my friends about it.

Candidly, I have done nothing with the time I’ve gained from not bad-mouthing the people I’ve muted. But how nice to have days that are at least five minutes more pleasant.

So, mute freely and often. And if you don’t agree with me? Just mute me. I’ll never know!
