A little over a year ago, social media companies were put on notice for how they protect, or fail to protect, their youngest users.
In a series of congressional hearings, executives from Facebook, TikTok, Snapchat and Instagram faced tough questions from lawmakers over how their platforms can lead younger users to harmful content, damage mental health and body image (particularly among teenage girls), and lack sufficient parental controls and safeguards to protect teens.
Those hearings, which followed disclosures in what became known as the “Facebook Papers” from whistleblower Frances Haugen about Instagram’s impact on teens, prompted the companies to vow to change. The four social networks have since introduced more tools and parental control options aimed at better protecting younger users. Some have also made changes to their algorithms, such as defaulting teens into seeing less sensitive content and increasing their moderation efforts. But some lawmakers, social media experts and psychologists say the new solutions are still limited, and more needs to be done.
“More than a year after the Facebook Papers dramatically revealed Big Tech’s abuse, social media companies have made only small, slow steps to clean up their act,” Sen. Richard Blumenthal, who chairs the Senate’s consumer protection subcommittee, told CNN Business. “Trust in Big Tech is long gone and we need real rules to ensure kids’ safety online.”
Michela Menting, a digital security director at market research firm ABI Research, agreed that social media platforms are “offering very little of substance to counter the ills their platforms incur.” Their solutions, she said, put the onus on guardians to activate various parental controls, such as those intended to filter, block and restrict access, and more passive options, such as monitoring and surveillance tools that run in the background.
Alexandra Hamlet, a New York City-based clinical psychologist, recalls being invited to a roundtable discussion roughly 18 months ago to discuss ways to improve Instagram, in particular, for younger users. “I don’t see many of our ideas being implemented,” she said. Social media platforms, she added, need to work on “continuing to improve parental controls, protect young people against targeted advertising, and remove objectively harmful content.”
The social media companies featured in this piece either declined to comment or did not respond to a request for comment on criticism that more needs to be done to protect young users.
For now, guardians must learn how to use the parental controls while also being mindful that teens can often circumvent those tools. Here’s a closer look at what parents can do to help keep their kids safe online.
After the fallout from the leaked documents, Meta-owned Instagram paused its much-criticized plan to release a version of Instagram for kids under age 13 and focused on making its main service safer for young users.
It has since introduced an educational hub for parents with resources, tips and articles from experts on user safety, and rolled out a tool that allows guardians to see how much time their kids spend on Instagram and set time limits. Parents can also receive updates on what accounts their teens follow and the accounts that follow them, and view and be notified if their child makes an update to their privacy and account settings. Parents can see which accounts their teens have blocked, as well. The company also provides video tutorials on how to use the new supervision tools.
Another feature encourages users to take a break from the app, such as suggesting they take a deep breath, write something down, check a to-do list or listen to a song, after a predetermined amount of time. Instagram also said it’s taking a “stricter approach” to the content it recommends to teens and will actively nudge them toward different topics, such as architecture and travel destinations, if they’ve been dwelling on any type of content for too long.
Facebook’s Safety Center provides supervision tools and resources, such as articles and advice from leading experts. “Our vision for Family Center is to eventually allow parents and guardians to help their teens manage experiences across Meta technologies, all from one place,” Liza Crenshaw, a Meta spokesperson, told CNN Business.
The hub also offers a guide to Meta’s VR parental supervision tools from ConnectSafely, a nonprofit aimed at helping kids stay safe online, to assist parents with discussing virtual reality with their teens. Guardians can see which accounts their teens have blocked and access supervision tools, as well as approve their teen’s download or purchase of an app that is blocked by default based on its rating, or block specific apps that may be inappropriate for their teen.
In August, Snapchat introduced a parent guide and hub aimed at giving guardians more insight into how their teens use the app, including who they’ve been talking to within the last week (without divulging the content of those conversations). To use the feature, parents must create their own Snapchat account, and teens have to opt in and give permission.
While this was Snapchat’s first formal foray into parental controls, it previously had a few safety measures for young users, such as requiring teens to be mutual friends before they can start communicating with each other and prohibiting them from having public profiles. Teens have the Snap Map location-sharing tool off by default, but can choose to share their real-time location with a friend or family member, even while the app is closed, as a safety measure. Meanwhile, a Friend Check Up tool encourages Snapchat users to review their friend lists and make sure they still want to be in touch with certain people.
Snap previously said it’s working on more features, such as the ability for parents to see which new friends their teens have added and allow them to confidentially report concerning accounts that may be interacting with their child. It’s also working on a tool to give younger users the option to notify their parents when they report an account or piece of content.
The company told CNN Business it will continue to build on its safety features and consider feedback from the community, policymakers, safety and mental health advocates, and other experts to improve the tools over time.
In July, TikTok announced new ways to filter out mature or “potentially problematic” videos. The new safeguards assign a “maturity score” to videos detected as potentially containing mature or complex themes. It also rolled out a tool that aims to help people decide how much time they want to spend on TikTok. The tool lets users set regular screen time breaks, and provides a dashboard that details the number of times they opened the app, a breakdown of daytime and nighttime usage and more.
The popular short-form video app currently offers a Family Pairing hub, which allows parents and teens to customize their safety settings. A parent can link their TikTok account to their teen’s and set parental controls, including how much time the teen can spend on the app each day; whether they can search for videos, hashtags or Live content; restrictions on certain content; and whether their account is private or public. TikTok also offers a Guardian’s Guide that highlights how parents can best protect their kids on the platform.
In addition to parental controls, the app restricts access to some features to younger users, such as Live and direct messaging. A pop-up also surfaces when teens under the age of 16 are ready to publish their first video, asking them to choose who can watch the video. Push notifications are curbed after 9 p.m. for account users ages 13 to 15, and 10 p.m. for users ages 16 to 17.
The company said it will be doing more around boosting awareness of its parental control features in the coming days and months.
Discord did not appear before the Senate last year, but the popular messaging platform has faced criticism over the difficulty of reporting problematic content and the ability of strangers to get in touch with young users.
In response, the company recently refreshed its Safety Center, where parents can find guidance on how to turn on safety settings, FAQs about how Discord works, and tips on how to talk about online safety with teens. Some existing parental control tools include an option to prohibit a minor from receiving a friend request or a direct message from someone they don’t know.
Still, it’s possible for minors to connect with strangers on public servers or in private chats if the person was invited by someone else in the room or if the channel link is dropped into a public group that the user accessed. By default, all users — including users ages 13 to 17 — can receive friend invitations from anyone in the same server, which then opens up the ability for them to send private messages.
Jan. 6 Committee failed to hold social media companies to account for their role in the Capitol attack, staffers and witnesses say – CNN
“There might be someone getting shot tomorrow.”
That was the warning from Twitter staff at an internal meeting on Jan. 5, 2021, the eve of the deadly attack on the US Capitol. It wasn’t the only stark warning Twitter management received ahead of the insurrection, according to two former Twitter employees who spoke to the House Jan. 6 Committee.
But now these witnesses, along with some committee staff, are frustrated, saying the committee failed to adequately hold major social media companies to account for the role they played in the worst attack on the Capitol in 200 years.
It was a “real missed opportunity,” Anika Collier Navaroli, a former Twitter employee turned whistleblower who gave evidence to the committee, told CNN in an interview last week. “I risked a lot to come forward and speak to the committee and to share the truth about these momentous occasions in history,” Navaroli said.
CNN spoke to half a dozen people who interacted with and were familiar with the Jan. 6 Committee’s so-called “purple team” – a group that included staff with expertise in extremism and online misinformation. Some witnesses and staff said the committee pulled its punches when it came to Big Tech, failing to include critical parts of the team’s work in its final report. The discontent has spilled into public view, with an unpublished draft of the team’s findings leaked to multiple news organizations, including CNN.
One source familiar with the probe acknowledged that the committee obtained evidence that social media companies like Twitter largely ignored concerns that were raised internally prior to Jan. 6, but while those platforms should have done something at the time, the panel was limited in its ability to hold them accountable. A lawyer who worked on the committee said the panel did its job and focused on the unique and malign role of then-President Donald Trump in an unprecedented attack on American democracy. They also said the final report outlines structural issues across social media and society that need to be studied further.
Disagreement about social media companies’ role in the Jan. 6 attack comes as 2023 looks to be a pivotal year for Silicon Valley firms in Washington, DC. Spurred in part by the release of Elon Musk’s so-called “Twitter Files,” House Republicans are set to investigate purported Big Tech censorship, particularly as it pertains to social media companies’ handling of a 2020 New York Post story about Hunter Biden and his laptop. Facebook parent company Meta’s high-stakes decision Wednesday to reinstate Trump on its platforms is also expected to stoke further scrutiny of tech companies’ influence in elections. At the Supreme Court, justices are set to rule this year on a case that could strip key protections afforded to tech companies moderating online speech.
A missed opportunity
It isn’t just Navaroli who has taken issue with the committee’s findings. Three of the committee’s own staff members, part of the so-called purple team, published an article earlier this month, sharply criticizing the decisions made by social media companies in the lead up to the attack.
The final report’s “emphasis on Trump meant important context was left on the cutting room floor,” they wrote.
“Indeed, the lack of an official Committee report chapter or appendix dedicated exclusively to these matters does not mean our investigation exonerated social media companies for their failure to confront violent rhetoric,” they wrote.
In the wake of the leak, CNN has reviewed thousands of pages of deposition transcripts and other supporting documents the committee has publicly released that provide insight into Silicon Valley’s action and inaction in the critical period between Election Day 2020 and Jan. 6, 2021.
Navaroli, who worked on Twitter’s safety policy team, told the committee she had repeatedly warned Twitter’s leadership in the lead-up to Jan. 6 about the dangers of not cracking down on what she said was violent rhetoric.
Navaroli pointed to Trump’s infamous “stand back and stand by” message to the Proud Boys at the first 2020 presidential debate as one instance that incited more violent rhetoric on Twitter.
Navaroli initially appeared before the committee as an anonymous whistleblower. Part of her testimony was played during the public committee hearings last summer, with her voice distorted to protect her identity. However, she later decided to go public, testifying before the committee for a second time, and speaking to The Washington Post.
In an interview with CNN, Navaroli said she is speaking out now because she believes it is important for the “truth to be on the record.” She warned that without a full reckoning of social media’s role in the Capitol attack, political violence could once again ignite in the United States and elsewhere around the world, pointing to recent unrest in Brazil where supporters of former President Jair Bolsonaro stormed the country’s top government offices.
The final report from the Jan. 6 Committee stated, “Social media played a prominent role in amplifying erroneous claims of election fraud.”
But a far more blistering assessment was laid out in an unpublished draft document prepared by committee staff that was obtained by several news organizations, including CNN. Its key findings included:
- “Social media platforms’ delayed response to the rise of far-right extremism—and President Trump’s incitement of his supporters—helped to facilitate the attack on January 6th.”
- “Fear of reprisal and accusations of censorship from the political right compromised policy, process, and decision-making.”
- “Twitter failed to take actions that could have prevented the spread of incitement to violence after the election.”
- “Facebook did not fail to grapple with election delegitimization after the election so much as it did not even try.”
Tech companies broadly dispute these findings and have repeatedly said they are working to keep their platforms safe.
Twitter’s previous management repeatedly outlined steps it said it was taking to crack down on hateful and violent rhetoric on its platform prior to Jan. 6, 2021, but stressed it didn’t want to unnecessarily limit free expression. Under Musk’s leadership, Twitter no longer has a responsive communications team, and the company did not respond to CNN’s request for comment.
Andy Stone, a spokesperson for Facebook parent company Meta, pointed to an earlier statement from the company where it said it was cooperating with the committee.
Jacob Glick, an investigative counsel who conducted multiple depositions for the Jan. 6 Committee, including Navaroli’s, told CNN he believes the committee did its job to show “the American public the dangers posed by President Trump’s multilayered attack on our democracy.”
He said the lack of awareness he believes tech companies have shown about their role in the attack was “stark.”
“I don’t think social media companies recognize they were dealing with a sustained threat to American democracy,” he said.
Glick, who now works at the Georgetown Institute for Constitutional Advocacy and Protection, said the purple team’s report had not been fact-checked, contains some errors, and should not have been leaked.
Another source familiar with the committee’s work told CNN, “It couldn’t be clearer that Trump was at the center of this plot to overturn the election. Not everything staff worked on could fit into this extensive report and hearings, including some who wanted their work to be the center of the investigation.”
Culture wars and content moderation
How social media platforms write and enforce their rules has become a central and ongoing debate, raising the key question of what power the companies should wield when it comes to politicians like Trump.
While some, including Navaroli, insist Trump repeatedly broke social media platforms’ rules by inciting violent rhetoric that should have resulted in his removal before Jan. 6, others including Musk and Twitter’s previous management, argue that what politicians say should be made available to as many people as possible so they can be held to account.
Meta and Twitter have both reversed their bans on Trump.
“We’re moving backwards and it’s concerning to me,” Navaroli said of the return of prominent election conspiracy theorists to major tech platforms. “History has taught us what happens when political speech on social media companies is allowed to fester unchecked.”
Bolivia media guide – Yahoo News Canada
Many media outlets are in private hands and ownership is highly concentrated.
The government controls numerous newspapers and has stepped up monitoring of critical media, especially on social networks, says the NGO Reporters Without Borders (RSF).
The political turbulence and instability that followed the forced exile of former President Evo Morales in 2019 saw an increase in attacks on journalists. The 2020 election of Luis Arce brought this to an end.
Officials use legal, political and economic means to pressure independent media, says Freedom House. It says self-censorship is commonplace, with many journalists fearing that they could lose their jobs over reporting critical of the authorities.
Media deemed to “play party politics” or “insult” the government face being denied funding from state advertising, says Reporters Without Borders. Arbitrary arrests and impunity for violence against journalists are other problems.
Newspaper readership is limited by low literacy. Radio is important, especially in rural areas. There are hundreds of stations. The government operates a TV station and community radios.
There were 8.9 million internet users by July 2022, comprising 74% of the population (Internetworldstats.com).
Trump’s Evolution on Truth Social: More QAnon, More Extremes – The New York Times
The former president, now free to post again on Facebook and Twitter, has increasingly amplified far-right accounts on Truth Social. Experts on extremism worry that he will bring this approach to a far wider audience.
In September, former President Donald J. Trump went on Truth Social, his social network, and shared an image of himself wearing a lapel pin in the form of the letter Q, along with a phrase closely associated with the QAnon conspiracy theory movement: “The storm is coming.”
In doing so, Mr. Trump ensured that the message — first posted by a QAnon-aligned account — would be hugely amplified, visible to his more than four million followers. He was also delivering what amounted to an unmistakable endorsement of the movement, which falsely and violently claims that leading Democrats are baby-eating devil worshipers.
Even as the parent company of Facebook and Instagram announced this past week that Mr. Trump would be reinstated — a move that followed the lifting of his ban from Twitter, though he has not yet returned — there is no sign that he has curtailed his behavior or stopped spreading the kinds of messages that got him exiled in the first place.
In fact, two years after he was banished from most mainstream social media sites for his role in inciting the Capitol riot, his online presence has grown only more extreme — even if it is far less visible to most Americans, who never use the relatively obscure platforms where he has been posting at a sometimes astonishing clip.
Since introducing his social media website in February 2022, Mr. Trump has shared hundreds of posts from accounts promoting QAnon ideas. He has continued to falsely insist that the 2020 election was stolen and that he is a victim of corrupt federal law enforcement agencies. And he has made personal attacks against his many perceived enemies, including private citizens whose names he has elevated.
Now, Mr. Trump’s increasingly probable return to major platforms raises the prospect that he will carry over his more radicalized behavior to a far wider audience on Facebook and Instagram, with a combined five billion active users, and Twitter, with 360 million active users.
The potential for such an outcome has alarmed extremism experts; pushed the platforms to explain that they have installed “guardrails” to deter incendiary posts; and prompted questions about how Mr. Trump’s assertions, long siloed in a right-wing arena, are likely to play with mainstream voters, particularly as a sizable share of his party signals that it is ready to move on.
“It’s not that Trump has meaningfully changed the way he behaves online. In fact, he’s grown more extreme,” said Jared Holt, a researcher at the Institute for Strategic Dialogue who studies technology and extremism in the United States. “I don’t think anybody should reasonably expect him to be any different if he’s back on Facebook and Twitter. And when it comes to spreading conspiracy theories, Trump is the big tuna.”
Last month, as Meta considered whether to reinstate Mr. Trump, he wrote on Truth Social that even the Constitution should not stand in the way of his return to power.
“A Massive Fraud of this type and magnitude allows for the termination of all rules, regulations, and articles, even those found in the Constitution,” he said.
Steven Cheung, a spokesman for Mr. Trump, said on Thursday that “Truth Social has been a success because President Trump has created a true free-speech platform, unlike the Big Tech oligarchs who censor conservatives.” He added, “President Trump should have never been banned on these social media platforms, and everybody knows their decisions were unjust and ultimately destroyed the integrity of our democracy.”
In a letter sent this month to three top Meta officials, including Mark Zuckerberg, the company’s chief executive, a lawyer for Mr. Trump argued that the ban had “dramatically distorted and inhibited the public discourse.”
The petition for reinstatement was timed to coincide with the second anniversary of the decision to bar him from Facebook and Instagram, made one day after the deadly attack on the Capitol by Trump supporters. At the time, the company said his presence on its sites posed a risk to public safety.
Democrats have said he’s still dangerous. Last month, four of the party’s members of Congress urged Meta not to reinstate Mr. Trump, writing in a letter that he was still “undermining our democracy.”
But on Wednesday, Nick Clegg, Meta’s president for global affairs, wrote in a blog post that “our determination is that the risk has sufficiently receded.” He added that the suspension was “an extraordinary decision taken in extraordinary circumstances” and that normally, “the public should be able to hear from a former president of the United States, and a declared candidate for that office again, on our platforms.”
To try to stop Mr. Trump from provoking future unrest, Meta said, it would prevent sharing of posts that, among other things, question the legitimacy of elections or promote QAnon content. Violations of the company’s policies could also result in his being blocked from the site again, Meta said. Conservatives praised the decision, and the A.C.L.U. and Senator Bernie Sanders defended the move.
No such restrictions exist for Mr. Trump on Twitter, which had barred him soon after the Capitol riot but reinstated him in November after Elon Musk, the company’s new owner, conducted a public poll about a possible return.
Mr. Trump also often handled his Twitter account directly, unlike his Facebook account. He used the platform as a cudgel during his presidency, issuing a steady flow of stream-of-consciousness thoughts, insults and policy declarations on the fly.
He has been talking to aides about when and what to post on Twitter upon his return, according to two people familiar with the discussions who asked for anonymity.
The former president delivered the first-ever post on Truth Social, in which he has a significant financial stake, in February 2022, writing: “Get Ready! Your favorite President will see you soon!”
He didn’t return for more than two months, but the floodgates then opened, with Mr. Trump Truthing and Retruthing — as posts and shares are called — dozens of times a day.
On Aug. 31, for example, he posted over 50 times, making wild claims about Hunter Biden’s laptop, Dominion voting machines, and supposed links by President Biden and Vice President Kamala Harris to Russia.
He has often repeated lies about the 2020 election. This past week, he posted that his infamous phone call seeking more votes in Georgia was “perfect” and that officials had “cheated in many ways including STUFFING Ballots.”
If Mr. Trump returns to major social media sites, Republican candidates and elected officials — who spent his presidency dodging questions about his incendiary tweets — are far likelier to be pressed for their opinions on what he says.
Mr. Trump would also have to figure out how to manage his online presences.
According to regulatory filings, he is obliged to place his posts exclusively on Truth Social and to not share them elsewhere for six hours. That contract has a significant exception, though, allowing him to post material “that specifically relates to political messaging, political fund-raising or get-out-the-vote efforts at any time” on other sites.
To date, Mr. Trump has not taken advantage of the loophole, posting exclusively to his 4.8 million followers on Truth Social and at times reposting that content to his nearly 800,000 subscribers on Telegram.
Those follower counts pale in comparison to his potential reach elsewhere. A Pew Research Center analysis in October found that only 2 percent of Americans used Truth Social or Telegram as a regular source for news, compared with 28 percent for Facebook and 14 percent for Twitter.
Mr. Trump’s own statistics underscore that difference. He has nearly 88 million Twitter followers; his Facebook account has 34 million followers. His Instagram page, which tended to focus more on family photos, has 23 million followers.
According to people close to Mr. Trump, he is aware that a return to those platforms would risk starving Truth Social of its largest draw. But it may be that his desire for more income, they said, is outweighed by the enormous attention that Facebook and Twitter can provide him as he runs again for president.
Rashad Robinson, the president of Color of Change, a civil rights group, said Mr. Trump’s outsize following could partly explain why Meta made its decision.
“Corporations like Facebook have continued to find ways to profit off Trump even as they’ve condemned him,” said Mr. Robinson, whose group has pressured Facebook to enact policy changes through advertiser boycotts. “It’s not just that they let Donald Trump back on their platform, it’s that they benefit from it.”
He and others pointed to the fact that Mr. Trump’s campaign spent $89 million to advertise on Facebook and Instagram during the 2020 election, and $56 million to advertise on Google and YouTube. (Google, which also suspended Mr. Trump from YouTube in January 2021, has not announced plans to reinstate him.)
“Facebook has more followers than Christianity,” Mr. Robinson said. “There is not really a comparison point in terms of reach and advertising power.” Meta declined to comment on Mr. Robinson’s criticism. But executives have in the past noted that political advertising represents a tiny fraction of the company’s overall revenue, and Meta has acknowledged tweaking its algorithm to downplay political content over the past two years.
The Pew social media study found that Truth Social was “heavily partisan,” with half of its most influential accounts self-identifying as pro-Trump or right-wing.
In a podcast interview in June, Kash Patel, an adviser to Mr. Trump and, at the time, a director of the company that owns Truth Social, described the proliferation of QAnon-friendly content on the site as a deliberate business decision by the platform, which has struggled financially.
“We try to incorporate it into our overall messaging scheme to capture audiences,” Mr. Patel said. “You can’t ignore that group of people that has such a strong dominant following.”
While it is possible that Mr. Trump will moderate his flow of extreme posts if he returns to mainstream platforms, it is far from clear he will do so.
On Wednesday, Mr. Trump showed no sign of slowing down, posting or reposting 19 times on Truth Social about the 2020 election, the news media and the end of what he called his “deplatforming” from Facebook.
“Such a thing should never happen again,” he wrote.