The CBSA launches investigations into grinding media from India – Canada NewsWire

OTTAWA, ON, Dec. 17, 2020 /CNW/ – The Canada Border Services Agency (CBSA) announced today that it is launching investigations to determine whether certain grinding media originating in or exported from India is being sold at unfair prices in Canada and whether it is being subsidized.

The investigations are the result of a complaint filed by Magotteaux Limitée, located in Magog, Quebec. The complainant alleges that it is facing an increase in the volume of allegedly dumped and subsidized imports, price depression and suppression, lost sales, price undercutting, loss of market share, weakened financial results, underutilization of capacity, reduced employment, and a threat to continued investment.

The CBSA and the Canadian International Trade Tribunal (CITT) each play a role in the investigation. The CITT will begin a preliminary inquiry to determine whether the imports are harming the Canadian producers and will issue a decision by February 15, 2021. Concurrently, the CBSA will investigate whether the imports are being sold in Canada at unfair prices, and will make a preliminary decision by March 17, 2021.

Currently, there are 125 special import measures in force, covering a wide variety of industrial and consumer products, from steel products to refined sugar. These measures have directly helped to protect the Canadian economy and jobs.

Quick Facts

  • The subject goods are grinding media. For more product information, please refer to Canada Border Services Agency: Anti-dumping and Countervailing.
  • Grinding media is commonly used in the mining and cement industries to grind minerals, particularly ore, into minute particles or fragments.
  • A copy of the Statement of Reasons, which provides more details about these investigations, will be available on the CBSA’s website within 15 days.
  • As of December 31, 2019, special import measures have directly helped to protect 34,810 Canadian jobs and $9.56 billion in Canadian production.

Follow us on Twitter (@CanBorder), join us on Facebook or visit our YouTube channel.

SOURCE Canada Border Services Agency

For further information: Media Relations, Canada Border Services Agency, [email protected], 613-957-6500 or 1-877-761-5945, http://www.cbsa-asfc.gc.ca/media/media-eng.html

Related Links

http://www.cbsa-asfc.gc.ca/

Social Media Companies Should Self-Regulate. Now. – Harvard Business Review

The world witnessed the worst example of the impact digital platforms can have on society with the debacle at the U.S. Capitol on January 6, 2021. Not only did supporters of Donald Trump try to disrupt the certification of the Electoral College votes, but this deplorable incident was, in large part, fomented over social media.

In the past, Twitter and Facebook have been reluctant to censor posts about conspiracy theories and fake news. Digital platforms have also benefited from a 1996 law, Section 230 of the Communications Decency Act, that grants them immunity from liability for third-party hosted content. Nevertheless, prompted by false accusations of rigged elections and other fake news, the leading social media platforms recently began tagging some posts as unreliable or untrue and removing some videos. Following the January 6th insurrection attempt, Twitter and Facebook also banned Trump from using their platforms because promotion of violence and criminal acts violates their terms of service. For similar reasons, Apple and Google removed the alternative Parler social media platform from their app stores, and Amazon stopped hosting the service.

How did we get into this mess?

Digital platforms can be highly profitable businesses that connect users and other market actors in ways not possible before the internet. When they are successful, they generate powerful feedback loops called network effects and then monetize them by selling advertisements. But what happened at the U.S. Capitol illustrates how digital platforms can be double-edged swords. Yes, they have generated trillions of dollars in wealth. But they have also enabled the distribution of fake news and fake products, manipulation of digital content for political purposes, and promotion of dangerous misinformation on elections, vaccines, and other public health matters.

The social dilemma is clear: Digital platforms can be used for evil as well as good.

What’s the solution? Should platform companies wait for governments to impose potentially intrusive controls and respond defensively? Or should they act pre-emptively?

Governments will inevitably get more engaged in oversight. However, we believe that platforms should become more aggressive at self-regulation now. To explore the feasibility of self-regulation, we researched its history before and after the widespread adoption of the internet. We found that companies have often risked creating a “tragedy of the commons” when they put their short-term, individual self-interests ahead of the good of the consuming public or the industry overall and, in the long run, destroy the environment that made them successful in the first place.

Before the internet era, several industries, such as movies, video games, broadcasting content, television advertising, and computerized airline reservation systems, faced similar issues and managed to self-regulate with some success. At the same time, these historical examples suggest that self-regulation worked best when there were credible threats of government regulation. The bottom line: Self-regulation may be the key to avoiding a potential tragedy of the commons scenario for digital platforms.

What is “self-regulation”? This refers to the steps companies or industry associations take to preempt or supplement governmental rules and guidelines. For an individual company, self-regulation ranges from self-monitoring for regulatory violations to proactive “corporate social responsibility” (CSR) initiatives. Leaving it up to companies to monitor and restrain themselves can sometimes devolve into a self-regulatory or regulatory “charade.” But that doesn’t need to be the case.

For many decades, companies in the business of producing movies, video games, and television shows and commercials have all faced issues around the appropriateness of “content” in ways that resemble today’s social media platforms. To keep regulators at bay, the movie and video game industries adopted self-imposed and self-monitored rating systems, still in operation today. The broadcasting and advertising sectors in the 1950s and 1960s faced pushback on the appropriateness of advertisements, with issues resembling what we see today in online advertising. The computerized airline reservation business, led by American Airlines’ Sabre system (launched in 1960), introduced self-preferencing in search results, similar to complaints now made against Google and Amazon. Self-regulation in these cases often delivered effective and inexpensive guidelines for company operations and forestalled more intrusive government intervention.

History provides several lessons for today’s digital platforms.

First, our leading technology companies need to anticipate when government regulation is likely to become a key factor in their businesses. In movies, radio and television broadcasting, computerized airline reservations, and other new industries, a regulatory vacuum often prevailed in the early years. Then, after a kind of “wild west” period, governments stepped in to regulate or pressure firms to curb abuses. To avoid problematic government regulation, platform companies need to introduce their own controls on behavior and usage before the government revokes all Section 230 protections, a step currently under debate in Congress. Technology that exploits big data, artificial intelligence, and machine learning, with some human editing, will increasingly give digital platforms the ability to curate what happens on their platforms. The real issue is the extent to which the big platforms have the will to self-regulate. The decisions by Facebook, Twitter, Amazon, Apple, and Google during the first week of January 2021 were steps in the right direction.

Second, we find that firms in new industries tend to eschew self-regulation when the perceived costs imply a significant reduction in revenues or profits. Managers rarely like industry regulations that appear “bad for business.” However, this strategy can be self-defeating. If bad behavior undermines consumer trust, then digital platforms will not continue to thrive. Look closely at Section 230. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This act gave online intermediaries broad immunity from liability for user-generated content posted on their sites. Company lawyers generally interpreted this legislation as providing protection as long as they did not engage in curation. However, Section 230 also included a “good Samaritan” exception. This allowed platforms to remove or moderate content deemed obscene or offensive, as long as it was done in good faith. There have been growing calls from both Democrats and Republicans to repeal Section 230 because of accusations of bias (i.e., not acting in good faith) and very little curation over the prior decade by Twitter, Facebook/Instagram, and other platforms. More explicit and transparent self-regulation, like we observed after the U.S. Capitol debacle, might well produce a better outcome for social media platforms, at least compared to leaving their fate up to Congress.

Third, proactive self-regulation was often more successful when coalitions of firms in the same sector worked together. We saw this type of coalition activity in movie and video-game rating systems limiting violent, profane, or sexual content; television advertising rules curbing unhealthy products like alcohol and tobacco; and computerized airline reservation systems giving equal treatment to airlines, without favoring the system owners. Similarly, social media companies have implemented codes of conduct on terrorist activity. Since individual firms may hesitate to self-regulate if they incur added costs that their competitors do not, industry coalitions have the benefit of reducing free-riding. Now is the ideal time for more “coopetition,” where platforms cooperate as well as compete with rivals.

Fourth, we found that firms or industry coalitions get serious about self-regulation primarily when they see a credible threat of government regulation, even if it may hurt short-term sales and profits. This pattern occurred with tobacco and cigarette ads, airline reservations, social media ads for terrorist group recruitment, and pornographic material. That threat should be clear and obvious to digital platforms in 2021.

In sum, history suggests that modern digital platforms should not wait for governments to impose controls; they should act decisively and proactively now. While the costs of government action in the internet era have been modest so far, the regulatory environment is changing fast. Given the increasing likelihood of government action, the goal of self-regulation should be to avoid a tragedy of the commons, where a lack of trust destroys the environment that has allowed digital platforms to thrive. Going forward, governments and digital platforms will also need to work together more closely. Since more government oversight of Twitter, Facebook, Google, Amazon, and other platforms seems inevitable, new institutional mechanisms for more participative forms of regulation may be critical to their long-term survival and success.


Investors push for social media controls ahead of U.S. inauguration – Cape Breton Post

By Ross Kerber

BOSTON (Reuters) – Pension fund managers and religious investors on Friday asked top social media companies to step up their content control efforts to reduce the threat of violence ahead of the inauguration of U.S. President-elect Joe Biden next week.

The effort is the latest pressure on Facebook Inc, Twitter Inc and Alphabet Inc over extreme rhetoric after the storming of the U.S. Capitol last week by supporters of President Donald Trump.

In letters sent on Thursday, the investors – including New York State Comptroller Thomas DiNapoli, the Service Employees International Union and the Unitarian Universalist Association – asked for steps including disabling code that they said tends to elevate conspiracy theories and radicalizing content, and continuing to flag content with hashtags like #Stopthesteal.

In the longer run, boards and executives must review their “business model and reliance on algorithmic decision making, which has been linked to the spread of hate and disinformation online,” the letters said.

Alphabet representatives did not respond to questions. A Facebook spokesman said the company has banned over 250 white supremacist groups and enforced rules such as those barring militias from organizing on its platform. A Twitter representative cited actions the company has taken, such as suspending accounts that mainly shared QAnon content.

Violent rhetoric on social media platforms has ramped up in recent weeks as groups planned openly for the gathering in Washington, according to researchers and public postings, prompting criticism of the companies for failing to take action in advance.

Twitter and Facebook banned Trump’s accounts last week as the tech giants scrambled to crack down on Trump’s baseless claims of fraud in the U.S. presidential election.

The activist investors together manage about $390 billion in assets but own relatively small stakes in the social media companies. The companies’ top shareholders, including BlackRock Inc, Vanguard Group Inc and Morgan Stanley, have so far declined to comment on their responses.

The bans on Trump have prompted concern among other investors that users and advertisers would leave for different platforms. Twitter CEO Jack Dorsey said the decision was correct but set a dangerous precedent. Facebook operations chief Sheryl Sandberg has said the company has no plans to lift its ban.

(Reporting by Ross Kerber; Editing by Cynthia Osterman and Raju Gopalakrishnan)
