News Media Canada Announces Local Journalism Initiative Host News Organizations – Financial Post

TORONTO — News Media Canada today announced that it will fund 105 journalists at 94 host news organizations across Canada under the Government of Canada’s Local Journalism Initiative.

News Media Canada’s Local Journalism Initiative (LJI) program will provide funding to grant recipients to hire reporters to create new journalistic content covering civic issues and institutions of importance to Canadians.

A total of 155 applications were received in response to News Media Canada’s initial call for entries in November and adjudicated by an independent panel of industry experts in early December. Eligible applicants included press agencies, private news organizations, and non-profit news organizations. A full list of grant recipients is available from News Media Canada.

Thirty-four journalists will be working at daily newspapers, 59 at community newspapers and 12 at digital news media.

The LJI judging panel approved funding for:

  • 4 journalists for Indigenous media
  • 15 journalists in British Columbia
  • 29 journalists in the Prairies
  • 36 journalists in Ontario
  • 3 journalists in Quebec
  • 16 journalists in the Atlantic provinces
  • 2 journalists in the Territories

Stories produced by LJI reporters will be made available to media organizations and the public via an online portal managed by The Canadian Press.

“We were pleased that our first call for applications for the Local Journalism Initiative drew an excellent response,” said John Hinds, president and CEO of News Media Canada. “We believe this will make a significant contribution to the industry and help strengthen civic journalism across Canada, and we look forward to our next round of applications.”

In January, News Media Canada will announce a second call for applications, particularly for French-language media in Quebec, as well as additional journalists in Ontario and in Indigenous media organizations across the country.

Created and funded by the Government of Canada, the Local Journalism Initiative is a five-year program that supports the creation of original civic journalism relevant to the diverse needs of people living in news deserts and areas of news poverty across Canada. News Media Canada’s Local Journalism Initiative program is open to English, French, and Indigenous print and online media across Canada.

The Initiative will provide funding for host newsrooms to hire reporters, supporting accurate and reliable civic journalism in underserved communities. Local Journalism Initiative coverage will help ensure the vitality of democracy, better inform citizens, engage communities and foster civic debate, connecting Canadians with their local governments, councils, courts and other civic institutions.

About News Media Canada

News Media Canada is the voice of the print and digital news media industry in Canada and represents hundreds of trusted titles in every province and territory. News Media Canada is an advocate in public policy for daily and community media outlets and contributes to the ongoing evolution of the news media industry by raising awareness and promoting the benefits of news media across all platforms. For more information, visit our website at www.newsmediacanada.ca or follow us on Facebook, Twitter, Instagram and YouTube.

Contacts

Tina Ongkeko
Director, Local Journalism Initiative
News Media Canada
lji@newsmediacanada.ca
www.localjournalisminitiative.ca


Social media companies should face new legal duty to 'act responsibly,' expert panel finds – North Shore News

Social media companies can’t be trusted to moderate themselves, so it falls to the government to enforce new restrictions to protect Canadians from harmful content online, according to a report currently under review by the federal heritage minister.

The Canadian Commission on Democratic Expression, an expert panel of seven members, including former chief justice Beverley McLachlin, said it had become difficult to ignore the fact that too many real-world manifestations of online interactions are turning violent, destructive or hateful, despite social media’s parallel role in empowering positive social movements.

The panellists were particularly struck by the role they saw social media play last fall in “sowing distrust” in the aftermath of the U.S. presidential election, culminating in the lethal invasion of the U.S. Capitol. And they found, with the Quebec mosque shooting, the Toronto van attack and the armed invasion of Rideau Hall, that “Canada is not immune.”

“We recognize the charter, we recognize the ability of people to express themselves freely,” said Jean La Rose, former chief executive officer of the Aboriginal Peoples Television Network (APTN) and one of the seven commissioners, in an interview.

“But there must be limits at one point. There has to be limits as to where free speech becomes a racist discourse, or a hurtful discourse, or a hateful discourse.”

‘We have been at the receiving end of racist threats’

These limits would come in the form of a new law passed by Parliament, the commission recommended, that would force social media platforms like Twitter and Facebook, search engines like Google and its video-sharing site YouTube and others to adhere to a new “duty to act responsibly.”

The panel purposefully did not spell out what responsible behaviour should look like. Instead, it said this determination should be left to the government — as well as a new regulator that would oversee a code of conduct for the industry and a new “social media council” that would bring together the platforms with civil society and other groups.

La Rose said his experience in the journalism world demonstrated how there needed to be reasonable limits on what people can freely express so they are not permitted to call for the killings of Muslims, for example, or encourage violence against an individual by posting their home address or other personal details online.

“Having worked in media, having worked at APTN, for example, we have been at the receiving end of racist threats, of severe injury to our people, our reporters and others because of the view we present of the situation of the Indigenous community in Canada,” he said.

“Literally, we’ve had some reporters run off the road when they were covering a story because people were trying to block the telling of that story. So as a news entity, we have seen how far sometimes misinformation, hate and hurtful comments can go.”

Rules must reflect issue’s ‘inherent complexity’: Google

Canadian Heritage Minister Steven Guilbeault has himself recently indicated that legislation to address “online hate” will be introduced “very soon.”

The minister has pointed to the popularity of such a move: a recent survey by the Canadian Race Relations Foundation (CRRF), for example, found that fully four-fifths of Canadians are on board with forcing social media companies to rapidly take down hateful content.

“Canadians are now asking their government to hold social media companies accountable for the content that appears on their platforms,” Guilbeault said after the CRRF survey was published.

“This is exactly what we intend to do, by introducing new regulations that will require online platforms to remove illegal and hateful content before they cause more harm and damage.”

Guilbeault has met with the commission to discuss its recommendations and is currently reviewing its report, press secretary Camille Gagné-Raynauld confirmed.

Representatives from Facebook Canada and Twitter Canada were among several people who provided witness testimony and participated in commission deliberations, the report said. Twitter declined comment to Canada’s National Observer.

“We haven’t reviewed the full report yet, so we can’t comment on the specific recommendations,” said Kevin Chan, global director and head of public policy for Facebook Canada. “We have community standards that govern what is and isn’t allowed on our platform, and in most cases those standards go well beyond what’s required by law.”

Chan also said Facebook agreed regulators should make “clear rules for the internet” so private companies aren’t left to make decisions themselves.

Google spokesperson Lauren Skelly said the company shares Canadians’ concerns about harmful content online and said YouTube takes its responsibility to remove content that violates its policies “extremely seriously.” She said the company has significantly ramped up daily removals of hate speech and removed millions of videos last quarter for violations.

“Any regulation needs to reflect the inherent complexity of the issue and the scale at which online platforms operate,” said Skelly. “We look forward to continuing our work with the government and local partners on addressing the spread of online hate to ensure a safer and open internet that works for all Canadians.”

Incentives ‘not aligned with the public interest’: Jaffer

The nine-month study by the commission, an initiative led by the Public Policy Forum, found that with everything from disinformation campaigns to conspiracy theories, hate speech and people targeted for harm, toxic content was being “amplified” by the actions of social media companies.

The study rejected the notion that social media platforms are “neutral disseminators of information,” finding instead that they curate content to serve their own commercial interests.

“The business model of some of the major social media companies involves keeping people engaged with their platforms as much as possible. And it turns out that keeping people engaged means feeding them sensational content because that’s what keeps people clicking,” said Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University and another commissioner.

“The incentives for social media companies are not aligned with the public interest. These are private companies whose obligation is to make money for their shareholders.”

The commission also proposed a tribunal to resolve disputes quickly, as well as a “transparency regime” that would require social media companies to make certain information available to the regulator, including the “algorithmic architecture used to identify problematic content.”

Jaffer wrote a “concurring statement” in the report, where he confessed it was difficult to endorse the commission’s proposed “duty to act responsibly” without going further to define how that duty will work in reality. He said defining it will require “difficult tradeoffs” between free speech, privacy and other issues.

Carl Meyer / Local Journalism Initiative / Canada’s National Observer


What the Capitol Riot Data Download Shows about Social Media Vulnerabilities – Scientific American

Pro-Trump protesters at the Capitol used their phones to record and post photos and videos on social media. Credit: Lev Radin/Getty Images


During the January 6 assault on the Capitol Building in Washington, D.C., rioters posted photographs and videos of their rampage on social media. The platforms they used ranged from mainstream sites such as Facebook to niche ones such as Parler—a social networking service popular with right-wing groups. Once they realized this documentation could get them in trouble, many started deleting their posts. But Internet sleuths had already begun downloading the potentially incriminating material. One researcher, who publicly identifies herself only by the Twitter handle @donk_enby, led an effort that she claims downloaded and archived more than 99 percent of all data posted to Parler before Amazon Web Services stopped hosting the platform. Scientific American repeatedly e-mailed Parler’s media team for comment but had not received a response at the time of publication.

Amateur and federal investigators can extract a lot of information from this massive trove, including the locations and identities of Parler users. Although many of those studying the Parler data are law enforcement officials looking into the Capitol insurrection, the situation provides a vivid example of the way social media posts—whether extreme or innocuous—can inadvertently reveal much more information than intended. And vulnerabilities that are legitimately used by investigators can be just as easily exploited by bad actors.

To learn more about this issue, Scientific American spoke with Rachel Tobac, an ethical hacker and CEO of SocialProof Security, an organization that helps companies spot potential vulnerabilities to cyberattacks. “The people that most people are talking about when they think of a hacker, those are criminals,” she says. “In the hacker community, we’re trying to help people understand that hackers are helpers. We’re the people who are trying to keep you safe.” To that end, Tobac also explained how even tame posts on mainstream social media sites could reveal more personal information than many users expect—and how they can protect themselves.

[An edited transcript of the interview follows.]

How was it possible to download so much data from Parler?

Folks were able to download and archive the majority of Parler’s content … through automated site scraping. [Parler] ordered their posts by number in the URL itself, so anyone with any programming knowledge could just download all of the public content. This is a fundamental security vulnerability. We call this an insecure direct object reference, or IDOR: the Parler posts were listed one after another, so if you just add “1” to the [number in the] URL, you could then scrape the next post, and so on. This specific type of vulnerability would not be found in mainstream social media sites such as Facebook or Twitter. For instance, Twitter randomizes the URLs of posts and requires authentication to even work with those randomized URLs. This [IDOR vulnerability]—coupled with a lack of authentication required to look at each post and a lack of rate limiting (rate limiting caps the number of requests that you can make to pull data)—means that even an easy program could allow a person to scrape every post, every photo, every video, all the metadata on the Web site.
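To make that concrete, here is a minimal Python sketch of the kind of sequential-ID enumeration Tobac describes. The site name, URL pattern and function are hypothetical stand-ins for illustration, not Parler’s actual endpoints; the point is only that predictable IDs, no authentication and no server-side rate limiting make bulk archiving trivial.

```python
# Minimal sketch of the enumeration an IDOR flaw permits.
# The URL pattern is a hypothetical placeholder, not any real site's endpoint.
import time
import requests

BASE_URL = "https://example-social-site.test/posts/{post_id}"  # hypothetical

def scrape_sequential(start_id, end_id, delay=1.0):
    """Fetch public posts whose IDs simply increment by one.

    With no authentication and no server-enforced rate limit,
    nothing stops a script from walking every ID in order.
    """
    archived = []
    for post_id in range(start_id, end_id + 1):
        resp = requests.get(BASE_URL.format(post_id=post_id), timeout=10)
        if resp.status_code == 200:
            archived.append(resp.text)  # raw HTML/JSON of the post
        time.sleep(delay)  # a real platform would enforce this server-side
    return archived

# Example: archive posts 1 through 100
# posts = scrape_sequential(1, 100)
```

Randomized identifiers, required authentication and server-side rate limits each independently break this loop, which is why Tobac contrasts Parler with sites like Twitter.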

What makes the archived data so revealing?

The images and videos still contained GPS metadata when they went online, which means that anyone can now map the detailed GPS locations of all the users who posted. This is because our smartphone logs the GPS coordinates and other data, such as the lens and the timing of the photo and video. We call this EXIF data—we can turn this off on our phones, but many people just don’t know to turn that off. And so they leave it embedded within the files that they upload, such as a video or a photo, and they unknowingly disclose information about their location. Folks on the Internet, law enforcement, the FBI can use this information to determine where those specific users live, work, spend time—or where they were when they posted that content.
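As an illustration of how easily that embedded location data can be read back out, here is a minimal sketch using the Pillow imaging library; the filename is a placeholder and the snippet assumes a reasonably recent Pillow release.

```python
# Minimal sketch: read the GPS tags embedded in a photo's EXIF metadata.
# "example_photo.jpg" is a placeholder; assumes a recent Pillow release.
from PIL import Image, ExifTags

def read_gps_tags(path):
    """Return any GPS EXIF tags the image carries, keyed by tag name."""
    with Image.open(path) as img:
        exif = img.getexif()
        gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
    return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# Example: print(read_gps_tags("example_photo.jpg"))
```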

Can investigators extract similar information from posts on more mainstream platforms?

This EXIF data is scrubbed on platforms such as Facebook and Twitter, but we still have a lot of people who don’t realize how much they’re compromising their location and information about themselves when they’re posting. Even if Parler did scrub the EXIF data, we saw on a lot of posts during this event that people were geolocation tagging their Instagram Stories to the Capitol Building that day or broadcasting their actions on Facebook Live publicly and tagging where they were located. I think it’s a general lack of understanding or maybe not realizing just how much data they’re leaking. And I think plenty of folks also didn’t realize that maybe they wouldn’t want to geolocation tag during that event.

Under more normal circumstances, is there a problem with geolocation tagging?

Many people think, “Well, I’m not doing anything wrong, so why would I care if I post a photo?” But let’s just take a really innocuous example, such as going on vacation. [If] you geolocation tag the hotel, what could I do as an attacker? Well, the obvious thing is: you’re not home. But I feel like most people get that. What they don’t probably get is that I can social engineer: I can gain access to information about you through human systems at that hotel. I could call up your hotel pretending to be you and gain information about your travel plans. I could steal your hotel points. I could change your room. I could do all this nefarious stuff. We can do so much and really manipulate because our service providers don’t authenticate the way that I would recommend that they authenticate over the phone. Can you imagine if you could log into your Gmail account, your calendar or something like that by just using your current address, your last name and your phone number? But that’s how it works with a lot of these different companies. They don’t use the same authentication protocols that they would use, say, on a Web site.

How can people protect themselves?

I don’t think it would be fair to tell people that they couldn’t post. I post on Twitter multiple times a day! Instead of saying, “You can’t do this,” I would recommend being what I call “politely paranoid” about what we post online. For instance, we can post about the vacation, but we don’t want location- or service-provider-identifying markers within the post. So how about you post a picture of the sunset and the margarita but don’t geolocation tag the hotel? These very small changes can help folks protect their privacy and safety in the long run while still getting everything that they want out of social media. If you really want a geolocation tag, you can save the city that you’re in rather than the hotel: [then] I can’t call up the city and try and get access to your hotel points or change your plans.

Should social media sites just prevent geolocation tagging? What responsibilities do platforms have to protect their users?

I think it’s really important that all platforms, including social media platforms, follow best practices regarding security and privacy to keep their users safe. It’s also a best practice to scrub metadata for your users before they post their photos or videos so they don’t unknowingly compromise themselves. All of that is the platform’s responsibility; we have to hold them to that [and] make sure that they do those things. After that, I would say individuals get to choose how much risk they would like to take. I work hard to ensure nonsecurity folks understand risks: things such as geolocation tagging, [mentioning] service providers [and] taking pictures of their license, credit cards, gift cards, passports, airplane tickets—now we’re seeing COVID-19 vaccination cards with sensitive data on them. I don’t think it’s the social media company’s responsibility, for instance, to dictate what somebody can or cannot post when it comes to their travel photos. I think that’s up to the user to decide how they would like to use that platform. And I think it’s up to us as [information security] professionals to clearly communicate what those risks are so people can make an informed decision.
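The metadata scrubbing Tobac says platforms should perform can be as simple as re-encoding the pixel data without the original tags. Below is a minimal sketch of that idea, again using Pillow with placeholder file paths; production pipelines re-encode and resize images in more elaborate ways, so treat this as an illustration of the principle rather than how any particular platform does it.

```python
# Sketch: drop EXIF/GPS metadata by copying only pixel data into a fresh image.
# File paths are placeholders; assumes Pillow is installed.
from PIL import Image

def strip_metadata(src, dst):
    """Save a copy of the image that carries no EXIF tags."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # pixels only, no metadata
        clean.save(dst)

# strip_metadata("vacation_original.jpg", "vacation_upload.jpg")
```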



GameStop, BlackBerry, AMC stocks see trading halts as social media hype drives volatility – Global News

Stocks of GameStop, BlackBerry and AMC Entertainment Holdings all saw trading halts on Wednesday morning amid continued volatility widely attributed to social media chatter.

The New York Stock Exchange briefly paused trading on GameStop and AMC stocks shortly before 10:15 a.m. ET, while the Investment Industry Regulatory Organization of Canada (IIROC) announced at 9:54 a.m. ET a temporary suspension of BlackBerry shares.


The moves come as all three stocks have been soaring for a fourth day running, sparking calls for scrutiny of a social media-driven trading frenzy.


Video: You’ve Got Mail: A history of the BlackBerry – May 31, 2017 (1:00)

The rally has also forced some hedge funds to retreat with heavy losses. Short-seller Citron, a target for some of the individual traders who have helped drive huge gains for a number of niche Wall Street stocks in the past week, said in a video post it had abandoned its bet on GameStop shares falling.


With commentators and lawyers calling for scrutiny of the moves, Nasdaq chief Adena Friedman said exchanges and regulators needed to pay attention to the potential for “pump and dump” schemes driven by chatter on social media.

The Securities and Exchange Commission (SEC) declined to comment.


Mainstream commentators have questioned the justification for the moves in a number of heavily hyped stocks in recent days, at a time when some on Wall Street are wondering if months of stellar overall gains have driven shares into bubble territory.

GameStop’s stock has surged nearly 700 per cent in the past two weeks, upping the struggling video retailer’s market value from $1.24 billion to more than $10 billion. BlackBerry is up 185 per cent and on course for its best month ever.

Along with AMC and Nokia Oyj, the two were again among the most heavily traded in pre-market deals, with Reddit discussion threads again humming with chatter about the stocks.

“These are not normal times and while the (Reddit) … thing is fascinating to watch, I can’t help but think that this is unlikely to end well for someone,” Deutsche Bank strategist Jim Reid said.


Video: ‘Tenet’ movie release seen as litmus test for industry – Aug 26, 2020 (2:36)

The advent of easily accessible apps like Robinhood, which allow ordinary Americans to make stock market trades at almost no initial cost, has spurred a boom in direct investment over the past year as trillions of dollars in official stimulus drove markets higher.


On GameStop, the retail army has pitched itself against some of the institutional short-sellers, traditionally the preserve of hedge funds, who promote and bet on falls in companies they judge to be weak.

Overall, short-sellers in GameStop were down $5 billion on a mark-to-market, net-of-financing basis in 2021, which included $876 million of losses early Tuesday, according to analytics firm S3 Partners.

Barron’s reported late on Tuesday that the top securities regulator in Massachusetts believes trading in GameStop stock suggests there is something “systemically wrong” with the options trading around the stock.

Others say that the trades are, at the end of the day, up to the investors who make them.

“The SEC has investigated Robinhood before, but when you have a structure in place that allows the zero-cost trading platforms to operate – how do you stop that flow?” said Neil Campling, head of tech media and telecom research at Mirabaud Securities.

Trading in GameStop stock was halted for volatility nine times on Monday and five times on Tuesday.

— With files from Global News money reporter Erica Alini

© 2021 Reuters
