Twitter's slap-down of Liberal video reveals growing role of social media giants in election – CBC.ca


They were two words, launched into the middle of a Canadian election, that exploded online.

A week into the election campaign, Deputy Prime Minister Chrystia Freeland tweeted a video of Conservative Leader Erin O’Toole responding to a question about his views on private, for-profit health care in Canada.

Twitter suddenly slapped a tag on the posts, first in French and then in English, saying they were “manipulated media,” apparently because part of O’Toole’s answer upholding the principle of universal access had been edited out.

An online furor erupted, spawning criticism and conspiracy theories. The commotion eventually died down but not before the English video was viewed nearly 232,000 times – far more than it likely would have been seen if Twitter had not tagged it.

“If Freeland had posted this doctored video and sent it out into the Twittersphere, a small number of people would have seen it and the conversation would have moved on,” said Aengus Bridgman, director of the Canadian Election Misinformation Project, which is monitoring what is happening online during the election.

“The fact that Twitter flagged it as manipulated media meant that, suddenly, the issue and the tweet got an enormous amount of attention and sort of has driven the news cycle.”

The incident shines a light on the role of American social media giants in Canada’s election – a role that risks being a lot more active than in past campaigns.

Aengus Bridgman, director of the Canadian Election Misinformation Project, says social media companies are trying to navigate a difficult path between controlling misinformation and allowing free speech. (Louis-Marie Philidor/CBC)

Companies such as Facebook and Twitter have come under fire in recent years for not doing enough to stop their platforms from being used to spread misinformation or to manipulate elections and public opinion.

Concern about the role social media companies could play in political campaigns came to the fore in 2018, when it was revealed that British consulting firm Cambridge Analytica used the data of millions of Facebook users to help former U.S. president Donald Trump’s successful 2016 election campaign.

Now, faced with the prospect of governments in various countries moving to regulate what happens on their platforms, some of the larger players have become more proactive, removing, labelling or limiting the visibility of some posts in the name of fighting misinformation or election tampering.

While some social media companies are taking steps to combat misinformation, it is spreading rapidly on others, such as Telegram. Telegram, owned by Russian billionaire Pavel Durov, is also being used by opponents of COVID-19 vaccines, lockdowns and mask mandates to organize the loud, angry protests seen in recent days at Prime Minister Justin Trudeau’s campaign stops.

The platforms’ increased activism, however, raises the question of what role decisions made by corporations based in other countries should play in the middle of a Canadian election, particularly when those decisions limit free speech by removing posts or reducing the number of people who see them.

As part of its updated election integrity policy, Facebook is taking several steps, including beefing up its fact-checking, applying warning labels to posts with false information and blocking fake accounts. Its monitors are also on the lookout for attempts by foreign state actors to influence the course of the election campaign.

Facebook will also be continuing a pilot project it introduced in Canada in February to reduce the amount of political content in the feeds of Canadian users, although it won’t reduce the number of paid political ads that they see.

Twitter began acting on posts by politicians even before the election call. In July, it suspended MP Derek Sloan from its platform for 12 hours after he posted a link to a Reuters article about the U.K. deciding against a mass vaccination program for teenagers and urged Canada to do the same. Twitter has also slapped labels on tweets by Ontario MPP Randy Hillier, who has opposed COVID vaccines and lockdowns, and on a manipulated video of NDP Leader Jagmeet Singh posted during the election by a regular Twitter user, who has since taken the video down.

Bridgman said Twitter began increasing its enforcement actions several months ago.

“I think what Twitter did is a real shot across the bow that is going to shake up how the campaigns are being run.” – NDP MP Charlie Angus

“This is part of an initiative of Twitter that started last year during COVID-19, when they really ramped up their labelling of media content on the platform,” Bridgman explained.

“So they did it initially because there was so much misinformation about COVID-19 circulating, and they were getting a lot of flak for that. So they put this in place. Then it became applied to political content, sort of famously through the American election with Donald Trump in particular. And now it’s being applied to the Canadian election.”

Bridgman said Twitter has a small army of algorithmically assisted human fact-checkers who manually label problematic tweets.

Bridgman said social media companies find themselves trying to steer a difficult course.

“It’s hard for social media companies to win the PR role here,” Bridgman said.  “They’re in a tight place, because they want to clean up misinformation on their platforms, but they also don’t want to be playing kingmaker. That’s not in their interest and it’s not a good look.”

University of Ottawa professor Michael Geist said the incident highlights the challenges that come with trying to moderate content online.

“The government wants the platforms to be more aggressive in moderating content, including creating liability and incentives for failure to take down content within 24 hours. But this case highlights that many of these cases are very difficult.”

NDP MP Charlie Angus says Canada, thanks to lax rules, is a destination for white-collar criminals looking to hide money. (CBC)

New Democratic Party MP Charlie Angus, who has been part of Canadian and international committees that have studied the role of social media companies in society, said the fact that someone with Freeland’s status was given an edited video to tweet out is “very concerning.”

“I think what Twitter did is a real shot across the bow that is going to shake up how the campaigns are being run,” Angus said, adding the Liberal government is supposed to be fighting disinformation.

“The fact that Twitter was willing to call out someone of the stature of Chrystia Freeland for posting disinformation, I think that’s a very healthy sign.”

It’s also coming at a good time, he said.

“Things are going to heat up a lot, so Twitter stepping in at this point in the campaign, I think, is going to make everyone think they’re going to have to be a little bit more careful.”

Liberal MP Nathaniel Erskine-Smith, who was also a key figure in Canadian and international committee hearings into social media companies, said Twitter should have been more transparent about how it makes decisions — not just pointing to a multi-pronged policy the way it did with the video tweeted by Freeland.

Liberal MP Nathaniel Erskine-Smith told CBC News Network’s Power and Politics his government missed an opportunity to be a leader on treating the global drug problem as a health issue. (CBC)

Erskine-Smith also questioned the way Twitter applied its policy when it labelled the video tweeted by Freeland.

“I assume that it’s the isolated editing that they are drawing from there. But again, yes, it removed the reference ‘universal access.’ But given the nature of the comments in relation to the Saskatchewan MRI policy, I don’t think it inaccurately characterized the concern around private pay in a universal system.”

Neither Conservative MP Bob Zimmer, who served with Angus and Erskine-Smith on the committees that studied the impact of social media companies, nor the Conservative Party responded to several requests from CBC News for an interview.

When it comes to how the actions of social media companies risk affecting the election, opinions vary.

“I think there is no question that social media companies impact the election now, with their policies around moderation and misinformation,” said Bridgman. “I think that that ship has sailed, and it’s not a question of whether they will or not. It’s a question of how much and how they will do it.”

Erskine-Smith, however, is convinced that traditional campaign elements like door-knocking, policies and the debates will have more impact than what happens on social media.

“I don’t think it will have a great impact in the end, in so far as I don’t think the decisions that the private platforms make will have a great impact in the end.”

Daniel Bernhard, executive director of Friends of Canadian Broadcasting, was sharply critical of Facebook’s track record when it comes to cracking down on misuse of its platform.

“Canada is foolish to depend for the health of our democracy on the good will and the competency of a company like Facebook that has proven over and over and over again that it is both incapable and unwilling to act in an ethical and democratic way.”

Bernhard also wants social media companies to be required to divulge the algorithms that govern how their platforms operate.

“These algorithms make hugely consequential editorial choices that have major consequences for politics and democracy. And so their operation, but also their transparency, should be a matter of regulation — not of good will and voluntary compliance.”

Erskine-Smith would like to see new rules requiring more transparency from social media companies, pointing out that broadcasters answer to the Canadian Broadcast Standards Council but social media companies have no equivalent watchdog.

“When we see the power and influence that private platforms do wield in our public discourse, bringing a level of transparency to the way decisions are made by those platforms is incredibly important … not only in relation to specific, discrete policy decisions, like Twitter’s decision to apply its own standards, but how the algorithms themselves are promoting or downgrading certain content,” he said.

“As algorithms replace editors, and increasingly so, we do need greater algorithmic transparency.”

Elizabeth Thompson can be reached at elizabeth.thompson@cbc.ca

Social Media Has the Same Downsides As Alcohol – The Atlantic


Last year, researchers at Instagram published disturbing findings from an internal study on the app’s effect on young women. “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the authors wrote in a presentation obtained by The Wall Street Journal. “They often feel ‘addicted’ and know that what they’re seeing is bad for their mental health but feel unable to stop themselves.”

This was not a new revelation. For years, Facebook, which owns Instagram, has investigated the app’s effects on its users, and it kept getting the same result. “We make body image issues worse for one in three teen girls,” said one slide from a 2019 presentation. “Teens who struggle with mental health say Instagram makes it worse.”

The findings weren’t all negative. Although many teenagers reported that Instagram was compulsive but depressing, most teenagers who acknowledged this dark side said they still thought the app was enjoyable and useful.

So a fair summary of Instagram according to Instagram might go like this: Here is a fun product that millions of people seem to love; that is unwholesome in large doses; that makes a sizable minority feel more anxious, more depressed, and worse about their bodies; and that many people struggle to use in moderation.

What does that sound like to you? To me, it sounds like alcohol—a social lubricant that can be delightful but also depressing, a popular experience that blends short-term euphoria with long-term regret, a product that leads to painful and even addictive behavior among a significant minority. Like booze, social media seems to offer an intoxicating cocktail of dopamine, disorientation, and, for some, dependency. Call it “attention alcohol.”

I personally don’t spend much time on Instagram, but on reflection, I love Twitter quite like the way I love wine and whiskey. Other analogies fall short; some people liken social media to junk food, but ultra-processed snacks have few redeeming health qualities compared with just about every natural alternative. I have a more complicated relationship with Twitter. It makes my life better and more interesting. It connects me with writers and thinkers whom I would never otherwise reach. But some days, my attention will get caught in the slipstream of gotchas, dunks, and nonsense controversies, and I’ll feel deeply regretful about the way I spent my time … only to open the app again, several minutes later, when the pinch of regret has relaxed and my thumb reaches, without thought, toward a familiar blue icon on my phone.

For the past decade, writers have been trying to jam Facebook into various analogical boxes. Facebook is like a global railroad; or, no, it’s like a town square; or, perhaps, it’s like a transnational government; or, rather, it’s an electric grid, or a newspaper, or cable TV.

Each of these gets at something real. Facebook’s ability to connect previously unconnected groups of people to information and commerce really does make it like a 21st-century railroad. The fact that hundreds of millions of people get their news from Facebook makes it very much like a global newspaper. But none of these metaphors completely captures the full berserk mosaic of Facebook or other social-media platforms. In particular, none of them touches on what social media does to the minds of the young people who use it the most.

“People compare social media to nicotine,” Andrew Bosworth, a longtime Facebook executive, wrote in an extensive 2019 memo on the company’s internal network. “I find that wildly offensive, not to me but to addicts.” He went on:

I have seen family members struggle with alcoholism and classmates struggle with opioids. I know there is a battle for the terminology of addiction but I side firmly with the neuroscientists. Still, while Facebook may not be nicotine I think it is probably like sugar. Sugar is delicious and for most of us there is a special place for it in our lives. But like all things it benefits from moderation.

But in 2020, Facebook critics weren’t the ones comparing its offerings to addiction-forming chemicals. The company’s own users told its research team that its products were akin to a mildly addictive depressant.

If you disbelieve these self-reports, perhaps you’ll be persuaded by the prodigious amounts of outside research suggesting the same conclusion. In June, researchers from NYU, Stanford, and Microsoft published a paper with a title that made their position on the matter unambiguous: “Digital Addiction.” In closing, they reported that “self-control problems cause 31 percent of social media use.” Think about that: About one in three minutes spent on social media is time we neither hoped to use beforehand nor feel good about in retrospect.

Facebook acknowledges these problems. In a response to the Wall Street Journal exposé published on Tuesday, Karina Newton, the head of public policy at Instagram, stood by the company’s research. “Many find it helpful one day, and problematic the next,” she wrote. “Many said Instagram makes things better or has no effect, but some, particularly those who were already feeling down, said Instagram may make things worse.” But this self-knowledge hasn’t translated into sufficient reform.

Thinking of social media as attention alcohol can guide reform efforts. We have a kind of social infrastructure around alcohol, which we don’t have yet for social media. The need to limit consumption is evident in our marketing: Beer ads encourage people to drink responsibly. It’s in our institutions: Established organizations such as Alcoholics Anonymous are devoted to fighting addiction and abuse. It’s in our regulatory and economic policy: Alcohol is taxed at higher rates than other food and drink, and its interstate distribution has separate rules. There is also a legal age limit. (Instagram requires its users to be 13 years old, although, as it goes with buying alcohol, many users of the photo-sharing app are surely lying about their age.)

Perhaps most important, people have developed a common vocabulary around alcohol use: “Who’s driving tonight?”; “He needs to be cut off”; “She needs some water”; “I went too hard this weekend”; “I might need help.” These phrases are so familiar that it can take a second to recognize that they communicate actual knowledge about what alcohol is and what it does to our bodies. We’ve been consuming booze for several thousand years and have studied the compound’s specific chemical effects on the liver and bloodstream. Social media, by contrast, has been around for less than two decades, and we’re still trying to understand exactly what it’s doing, to whom, and by what mechanism.

We might be getting closer to an answer. A 124-page literature review compiled by Jonathan Haidt, an NYU professor, and Jean Twenge, a San Diego State University professor, finds that the negative effects of social media are highly concentrated among young people, and teen girls in particular. Development research tells us that teenagers are exquisitely sensitive to social influence, or to the opinions of other teens. One thing that social media might do is hijack this keen peer sensitivity and drive obsessive thinking about body image, status, and popularity. Instagram seems to create, for some teenage girls, a suffocating prestige economy that pays people in kudos for their appearance and presentation. The negative externality is dangerously high rates of anxiety.

How do we fix it? We should learn from alcohol, which is studied, labeled, taxed, and restricted. Similar strictures would discourage social-media abuse among teenagers. We should continue to study exactly how and for whom these apps are psychologically ruinous and respond directly to the consensus reached by that research. Governments should urge or require companies to build more in-app tools to discourage overuse. Instagram and other app makers should strongly consider raising their minimum age for getting an account and preventing young users from presenting fake birthdates. Finally, and most broadly, parents, teens, and the press should continue to build a common vocabulary and set of rules around the dangers of excess social media for its most vulnerable users.

Digital sabbaths are currently the subject of columns and confessionals. That’s a good start, but this stuff should be sewn into our everyday language: “No apps this weekend”; “I need to be cut off”; “I love you, but I think you need to take a break”; “Can you help me stay offline?” These reforms should begin with Facebook. But with social media, as with every other legal, compulsive product, the responsibility of moderation ends with the users.

Media Availability: Minister Haggie Available to Media to Discuss Emergency Services – News Releases – Government of Newfoundland and Labrador


The Honourable John Haggie, Minister of Health and Community Services, will hold a media availability today (Thursday, September 16) to discuss emergency services following a meeting with NAPE.

The availability will take place in the Media Centre, East Block, Confederation Building, at 2:15 p.m. Media covering the availability are asked to attend in-person.

The availability will be live-streamed on the Government of Newfoundland and Labrador’s Facebook and Twitter accounts and on YouTube.

-30-

Media contacts
Nancy Hollett
Health and Community Services
709-729-6554/327-7878
nancyhollett@gov.nl.ca

2021 09 16
12:45 pm

The Growing Tensions Between Digital Media Platforms and Copyright Enforcement – AAF – American Action Forum


Executive Summary

  • Copyright infringement tensions between digital “new media” platforms and traditional media are at an all-time high.
  • Pressure from copyright holders combined with aggressive infringement-flagging algorithms and significant penalties under current regulations push platforms to take down content—often before infringement has been proven.
  • While there are legitimate concerns regarding copyright infringement online, current regulation incentivizes over-blocking content in order to avoid fines; this tactic is alienating content creators and limiting free speech and innovation.
  • Moreover, recent reform proposals aim to increase platform liability; this will make platforms even more cautious, exacerbating current problems and seriously limiting the content that has made these platforms a novel means of entertainment.

Introduction

Digital media or “new media” platforms that host user-generated videos, such as YouTube or Vimeo, and livestreams, such as Twitch, YouTube Gaming, and Facebook Gaming, are gaining a bigger role in the entertainment industry. This trend accelerated during the coronavirus pandemic, with viewership climbing to an all-time high of 27.9 billion hours in 2020. While most of the livestreaming platforms initially focused on gaming content, their offerings have expanded to include podcasters, DJs, musicians, and traditional sports. For example, Twitch is now the official streaming partner of USA Basketball and hosted the Spain broadcast of the biggest South American soccer tournament.

As these platforms grow, the attention and level of scrutiny grows as well. One of the most prominent criticisms is that the platforms are failing to properly address copyright infringement on their websites. Record labels and movie studios complain that these platforms are not doing a good enough job protecting their intellectual property rights. Yet on the other side, content creators and their fans complain that overly restrictive application of copyright regulations severely limits content that should constitute “fair use” of copyrighted material.

The “fair use” doctrine and the Digital Millennium Copyright Act (DMCA) are at the center of this debate. The DMCA, the most important law regarding copyrighted work on the internet, aims to prevent the unauthorized access and copying of copyrighted works, which usually requires authorization by the copyright holder. The exception is “fair use”: the reproduction of copyrighted materials for “criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research.” Fair use is key to the development of an online entertainment industry, as it allows content creators on these platforms to reproduce materials to create original content such as parodies, commentary, reviews, or live reactions.

Copyright Enforcement Is Increasingly Burdensome for Platforms and Creators

Platforms bear the responsibility of enforcing the fair use doctrine. Under the DMCA, they can face statutory damages of up to $150,000 per instance of willful copyright infringement. According to a public statement by Twitch, the number of copyright claims on its platform increased from fewer than 50 per year to more than 1,000 a week. This can translate into multi-million-dollar liability if a platform’s moderation is deemed inadequate.
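To put those numbers in perspective, here is a back-of-the-envelope calculation. The claim volume comes from Twitch’s statement above; the share of claims that would ripen into maximum willful-infringement damages is a purely illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope liability estimate. The 1,000 claims/week figure is
# Twitch's own; the 1% "upheld as willful infringement" rate is a purely
# hypothetical assumption used for illustration.
claims_per_week = 1_000
weeks_per_year = 52
max_statutory_damages = 150_000   # USD, per instance of willful infringement
assumed_upheld_rate = 0.01        # hypothetical: 1 claim in 100 leads to max damages

annual_exposure = (claims_per_week * weeks_per_year
                   * assumed_upheld_rate * max_statutory_damages)
print(f"${annual_exposure:,.0f}")  # $78,000,000 - well into multi-million territory
```

Even under a far smaller assumed rate, the exposure dwarfs the revenue any individual stream generates, which is what tilts platforms toward taking content down first and asking questions later.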

This has pushed platforms toward preemptively taking down content or sanctioning streamers as soon as they receive a DMCA claim, letting content creators appeal after the sanction. Reviewing appeals carefully over a longer period is more cost effective: platforms are not bound to respond to appeals within any specific timeline, as they are with DMCA claims, the number of appeals will be far lower, and a mistaken takedown costs the platform far less in lost revenue than a DMCA violation could cost in fines.

Platforms have also moved toward automation as a mechanism to respond to DMCA claims in a timely and cost-effective manner. By using automated systems and algorithms, platforms forgo costly and slower human review. While on-demand platforms such as YouTube have used algorithmic systems for around 14 years, livestreaming platforms have increasingly implemented similar systems to quickly remove or mute a potentially copyright-infringing livestream.

While automation has improved response times, its increased application has presented multiple issues. Chief among them is accuracy: fair use content or original material can be incorrectly flagged. This is a common problem, as automated systems lack comprehension of context and can be triggered by as little as three seconds of reproduced audio or video. This lack of context has also led to the sanctioning of content where copyrighted music was played unintentionally, such as a video capturing loud music from a passing car or store speaker.
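To make the mechanics concrete, here is a minimal sketch of duration-threshold flagging. It is not any platform’s actual system; every name in it is hypothetical, and it exists only to show why a bare duration check cannot distinguish incidental background music from infringement.

```python
# Hypothetical sketch of duration-threshold flagging; not a real platform API.
MATCH_THRESHOLD_SECONDS = 3.0  # flag on as little as three seconds of matched audio

def should_flag(fingerprint_matches: list[dict]) -> bool:
    """Flag content if any fingerprint match meets the duration threshold.

    Each match is a dict like {"track_id": ..., "duration_seconds": ...}.
    Note what is NOT checked: whether the music was incidental (a passing
    car, a store speaker) or fair use (commentary, parody, review).
    """
    return any(m["duration_seconds"] >= MATCH_THRESHOLD_SECONDS
               for m in fingerprint_matches)

# Two seconds of music from a passing car slips under the threshold...
print(should_flag([{"track_id": "song-123", "duration_seconds": 2.0}]))  # False
# ...but three seconds is flagged, regardless of context.
print(should_flag([{"track_id": "song-123", "duration_seconds": 3.0}]))  # True
```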

Another common problem with automated systems is that they are vulnerable to exploitation. For example, there are cases of law enforcement officers playing copyrighted music to prevent civilians’ recordings from being uploaded to these platforms. Another example is the weaponization of DMCA claims, where a user flags content as a violation of copyright with the intention of censoring or negatively impacting a content creator, rather than as a legitimate claim over copyright infringement. In fact, it has become common for content creators to be extorted by ill-intentioned individuals who threaten a copyright claim unless they are paid a certain amount of money.

The combination of caution, automation, and preemptive takedowns reflects the rising burden of moderating copyright infringement. One example is the three-strike system, under which content creators are banned from posting after receiving three copyright claims. Beyond threatening content creators, this practice threatens the platforms themselves, which risk alienating the very creators whose content attracts the viewers and advertisers that generate their revenue.
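A rough sketch of how such a strike ledger might work appears below. It is a simplification under assumed rules (no strike expiry, no appeal path, a flat limit of three), not a description of any platform’s real implementation.

```python
# Hypothetical three-strike ledger; real systems add expiry, appeals, and tiers.
from collections import defaultdict

STRIKE_LIMIT = 3
strikes: defaultdict[str, int] = defaultdict(int)

def record_copyright_claim(creator_id: str) -> str:
    """Record one accepted claim against a creator; ban at the limit."""
    strikes[creator_id] += 1
    if strikes[creator_id] >= STRIKE_LIMIT:
        return "banned"  # creator may no longer post content
    return f"strike {strikes[creator_id]} of {STRIKE_LIMIT}"

for _ in range(3):
    print(record_copyright_claim("creator-42"))
# strike 1 of 3
# strike 2 of 3
# banned
```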

Proposed Changes to the DMCA Will Make the Issue Worse

Current proposals to update the DMCA and copyright enforcement regulations seek to increase platforms’ legal liability, which could make this situation worse. Senator Thom Tillis has led efforts to pass legislation for more stringent copyright enforcement, reforming the “notice and takedown” system in the DMCA and increasing the legal consequences of copyright infringement. The Protecting Lawful Streaming Act and the Copyright Alternative Small-Claims Enforcement (CASE) Act, both included in last year’s appropriations bill, introduced major tweaks to copyright enforcement. The CASE Act created a small-claims copyright tribunal with the objective of speeding up the dispute process for copyright cases under $30,000. The Protecting Lawful Streaming Act, meanwhile, targets commercial websites dedicated exclusively to illegally streaming copyrighted content, making that act a felony instead of a civil infraction.

Sen. Tillis has also said he hopes to introduce legislation that would increase platforms’ liability as moderators by requiring them to establish a system that prevents the re-upload of copyrighted content previously taken down. This change would replace the current “notice and takedown” system, under which platforms must remove content after it has been flagged as a copyright violation, with a “notice and stay-down” system. Such a system would compel platforms to take a more proactive and strict approach, reviewing and approving content before it is posted rather than after the fact. Advocates claim this is the best mechanism to prevent the reposting of infringing content, as platforms would be forced to moderate at an earlier stage, preventing rather than reacting.
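The difference between the two regimes can be sketched in a few lines. The exact-hash check below is an illustrative assumption, the simplest possible “stay-down” mechanism; a production system would need perceptual fingerprinting, since changing a single byte defeats an exact hash.

```python
# Hypothetical contrast between notice-and-takedown and notice-and-stay-down.
import hashlib

removed_fingerprints: set[str] = set()

def handle_takedown_notice(content: bytes) -> None:
    """Notice-and-takedown: remove flagged content after publication,
    remembering its fingerprint for any future stay-down obligation."""
    removed_fingerprints.add(hashlib.sha256(content).hexdigest())

def allow_upload(content: bytes) -> bool:
    """Notice-and-stay-down: screen every upload BEFORE publication
    against content that was previously taken down."""
    return hashlib.sha256(content).hexdigest() not in removed_fingerprints

clip = b"...video bytes..."
handle_takedown_notice(clip)   # flagged once and taken down
print(allow_upload(clip))      # False: re-upload blocked pre-publication
```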

Yet this approach could further stifle creativity and innovation on these platforms. Increasing platforms’ potential liability would push them toward an even more precautionary approach, in which they over-block content to reduce legal exposure. A higher burden would mean reviewing and approving all content before it is published; to do so in a timely manner, platforms would need to lean even harder on automation, so that creators can still post content while platforms comply with the regulation. While this could prevent some cases of copyright infringement, it would do so at a cost to consumers, content creators, and platforms: consumers would be deprived of more content, and creators would face higher barriers to entering a booming market, potentially pushing them out of it altogether. That, in turn, would erode the platforms’ value proposition and content diversity, stunting their growth.

Better Principles for Potential DMCA Reform

To maintain the growth of the new-media platform industry, policymakers should focus on updating and expanding the definition of fair use so that its application on these platforms is clearer. Clearer fair use guidelines would let creators and platforms more easily moderate potentially infringing content. More important, the definition of fair use must be broadened to include newer uses, such as video game streaming or movie and music reviews. Adopting a broad, technology-neutral definition of fair use is vital to promoting an open internet that can host these novel forms of entertainment, and it would give platforms a clearer roadmap for focusing on piracy and meaningful copyright violations.

While some platforms—such as the Facebook Gaming streaming platform—have been able to strike licensing deals with major record labels to use their music in streams, such agreements usually require hefty fees that only a few platforms can afford. Under the DMCA, copyright holders hold the greater leverage in such negotiations, since any licensing fee would have to offset their projected earnings from pursuing compensation under the DMCA.

Policymakers and regulators also ought to understand the nuances of content moderation. When formulating moderation strategies, platforms face continuous tradeoffs: relying on human review tends to increase accuracy but sacrifices timeliness and raises costs, while relying on automated systems increases timeliness and reduces costs at the expense of over-blocking, misreporting, and vulnerability to exploitation. Although adding a human backstop could help remedy this, the pressure of fines and time-to-takedown constraints pushes platforms to prioritize timeliness over accuracy.

These challenges are magnified on livestreaming platforms, where responses to copyright infringement should ideally happen in real time. Such immediate responses require significant additional resources to detect and analyze the infringement, take down the stream, and notify the streamer. This is an extremely difficult task, considering that livestreams can last for multiple hours and the threshold for what counts as infringement can be as low as three seconds.

Conclusion

New media platforms, or platforms that host livestreaming and video content, have shown tremendous growth as a new form of entertainment, evolving from a niche audience to mainstream use. Nonetheless, this growth might be severely hindered by the platforms’ growing conflicts with current copyright regulation. Increasing pressure from copyright holders and the threat of onerous fines under the DMCA have pushed platforms to implement automated systems that take down materials flagged as infringing. The technical limitations of these algorithmic systems have created a problem of over-blocking, in which creativity and innovation are stifled and content creators’ right to fair use can be trampled, pushing creators off the platforms. Reform must be fair to copyright holders, content creators, and new media platforms alike. Rather than simply piling on more regulation, policymakers and regulators should strive to make fair use policies clearer and more workable, and should shift the burden of proof to copyright holders claiming harm instead of forcing content creators to prove themselves innocent.
