Canada's C-Suite flocks to emerging audio app Clubhouse, but long-term appeal unclear – Thompson Citizen



TORONTO — When earnings season rolls around, Duncan Fulton spends days preparing for calls with media, analysts and investors, but hardly ever gets a chance to deliver his messages directly to the people who frequent his Tim Hortons coffee shops or Popeyes drive-thrus.

That changed in February when the chief corporate officer of Restaurant Brands International joined chief executive Jose Cil on Clubhouse — an emerging audio platform that gives anyone with an iPhone and an app the ability to host and access discussions on every topic imaginable.

“It’s like reimagined talk radio with calls, but we are the producer,” said Fulton, who hosted an “open kitchen” talk the day after RBI released its latest quarterly earnings.

“Our guests don’t care about our adjusted EBITDA. They care about real stuff, about our food, our brands, and so we said, ‘Why don’t we use Clubhouse?'”

Fulton and Cil are the latest Canadian executives to turn to the app started by San Francisco serial entrepreneurs Paul Davidson and Rohan Seth last spring as a new way to host public conversations.

As COVID-19 spread throughout the globe and lockdowns kept millions of people at home, executives from top venture capital and tech firms began to jockey for access to the invite-only audio platform.

By the start of 2021, hundreds of business leaders and other Canadians had joined Clubhouse, which has offered increasing numbers of invites since late last year.

Members have been able to hear SpaceX CEO Elon Musk discuss whether he believes in aliens, Shopify executives Tobi Lutke and Harley Finkelstein wax poetic about entrepreneurship and Wattpad founder Allen Lau talk about his recent decision to sell the company.

“It’s really democratizing corporate Canada and corporate America in a way,” says Fulton, “because normally consumers wouldn’t get this access to senior business leaders.”

He pitched a Clubhouse talk to Cil after being introduced to the platform by Ottawa restaurateur Stephen Beckta, who got his invite from Finkelstein.

After dipping into music conversations, Fulton found he liked the exploratory nature of the platform and that moderators have control over who can speak and when.

“If you’re a business leader that wants the safety of not taking questions, you can still go on there, share your views, and there’s lots of people that are happy to not participate, not ask questions and just listen,” he said.

Richard Lachman, a digital media professor at Ryerson University, agreed the platform can help executives manage their image, but said users will quickly drop out of a conversation if a speaker bores them, and will recognize when someone is too scripted.

Though executives go through media training, he said a few “embarrassments” will likely arise on the app if people don’t know how to respond to “aggressive” questions or can’t kick someone out of a discussion fast enough.

While the app doesn’t overtly market itself as private, its invite-only nature has built a casual atmosphere, even as its userbase grows.

Clubhouse did not respond to a request for comment, but has a “rule” banning transcribing, recording or sharing personal information heard on the app. The company recently removed a bot it found sneaking into discussions to restream them to people without the app.

Still, a quick search on social media reveals dozens of recordings and quotes from the app available online.

Prominent venture capitalists faced criticism last year when audio leaked of them ridiculing New York Times journalist Taylor Lorenz and complaining that so-called cancel culture — sometimes described as withdrawing support for someone caught misbehaving or using outmoded language and expressions — had gone too far.

There have also been privacy complaints from users who opted not to give the app access to their contact lists, but say it is detecting their sign-ups and alerting friends whose numbers they have stored.

Once on the app, some users reported they stumbled upon misogyny and racism in discussions, despite rules against abuse and bullying and a feature to report problematic users.

“Some of the challenges (Clubhouse) is facing is that this content is very unmoderated and we are not in 2003 in (Facebook founder) Mark Zuckerberg’s dorm room, pretending that anything we make we know where it’ll go and we’ll just let the market figure it out,” said Lachman.

“We know what might happen. Online spaces can be incredibly toxic, they can be harsh and we know that things can be taken out of context very quickly and easily duplicated on other platforms.”

Despite the issues, Deepak Anand, chief executive of medical cannabis company Materia Ventures, joined the app. He hosts several pot discussions on it every week, but is careful in his approach.

He doesn’t share anything on Clubhouse he wouldn’t be comfortable with if it were leaked, but has seen several instances of people not realizing how public the app is.

“People generally like to share more than they normally would on the platform because it’s easy to get carried away and it almost seems like you’re having a conversation with friends,” he said.

Among the positives, Anand says Clubhouse has helped him discover new ways to network while stuck at home during the pandemic and has increased his social media following.

He’s unsure the app will continue to be his go-to because a competitor, Twitter Spaces, has caught his eye.

TechCrunch reported that users who mined Twitter’s code have found Spaces, which is still in pilot mode, experimenting with ways to embed tweets into discussions, offer transcription for users with disabilities and enhance blocking capabilities.

Facebook is said to be developing a similar platform, but hasn’t formally released any details.

The number of emerging audio apps and the flood of new Clubhouse users will make it even tougher for executives to stand out, Lachman predicted.

“This might have value right now, but in a year or two from now, that might get lost.”

This report by The Canadian Press was first published March 1, 2021.

Companies in this story: (TSX:QSR, TSX:SHOP)

Note to readers: This is a corrected story. An earlier version included an incorrect title for Duncan Fulton.


Apple says it will begin scanning iCloud Photos for child abuse images – TechCrunch



Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Most cloud services — Dropbox, Google, and Microsoft to name a few — already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM. But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.

Apple said its new CSAM detection technology — NeuralHash — instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content are cleared.

News of Apple’s effort leaked Wednesday when Matthew Green, a cryptography professor at Johns Hopkins University, revealed the existence of the new technology in a series of tweets. The news was met with resistance from some security experts and privacy advocates, but also from users accustomed to Apple’s approach to security and privacy, one most other companies don’t share.

Apple is trying to calm fears by baking in privacy through multiple layers of encryption, fashioned so that multiple steps are required before any flagged content reaches Apple’s final manual review.

NeuralHash will land in iOS 15 and macOS Monterey, slated to be released in the next month or two, and works by converting the photos on a user’s iPhone or Mac into a unique string of letters and numbers, known as a hash. With a typical cryptographic hash, even a slight modification to an image produces a completely different hash and would prevent matching; Apple says NeuralHash instead tries to ensure that identical and visually similar images — such as cropped or edited images — result in the same hash.
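Apple has not published NeuralHash’s internals beyond its summary, but the general idea behind perceptual hashing can be sketched with a far simpler scheme, an “average hash” over downscaled grayscale pixels. This is an illustrative stand-in, not Apple’s algorithm:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    at least as bright as the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = ''.join('1' if p >= mean else '0' for p in flat)
    return int(bits, 2)

def hamming_distance(a, b):
    """Number of differing bits between two hashes; small distances
    suggest visually similar images."""
    return bin(a ^ b).count('1')

# A tiny "image" and a slightly brightened copy hash identically,
# whereas a cryptographic hash would change completely.
original = [[10, 200], [10, 200]]
brightened = [[15, 205], [15, 205]]
assert average_hash(original) == average_hash(brightened)
```

Real perceptual hashes work on full-size photos and are far more robust to crops and edits, but the property is the same: near-duplicate inputs produce matching or near-matching hashes.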

Before an image is uploaded to iCloud Photos, those hashes are matched on the device against a database of known hashes of child abuse imagery, provided by child protection organizations like the National Center for Missing & Exploited Children (NCMEC) and others. NeuralHash uses a cryptographic technique called private set intersection to detect a hash match without revealing what the image is or alerting the user.

The results are uploaded to Apple but cannot be read on their own. Apple uses another cryptographic principle called threshold secret sharing that allows it to decrypt the contents only if a user crosses a threshold of known child abuse imagery in their iCloud Photos. Apple would not say what that threshold was, but said, for example, that if a secret is split into a thousand pieces and the threshold is ten matched images, the secret can be reconstructed from any ten of those pieces.
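Apple hasn’t disclosed its exact parameters, but threshold secret sharing itself is a standard construction (Shamir’s scheme): the secret becomes the constant term of a random polynomial over a prime field, each share is one evaluation of that polynomial, and any k shares recover the secret by interpolation while fewer reveal nothing. A minimal sketch with illustrative numbers:

```python
import random

PRIME = 2**61 - 1  # arithmetic is done in a prime field

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse via Fermat's little theorem (PRIME is prime)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# A thousand shares with a threshold of ten, as in Apple's example
shares = split_secret(123456789, n=1000, k=10)
assert reconstruct(shares[:10]) == 123456789
assert reconstruct(shares[500:510]) == 123456789
```

Any ten shares suffice, and nine shares leave the secret information-theoretically hidden, which is what lets Apple hold encrypted match results it cannot read until the threshold is crossed.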

It’s at that point that Apple can decrypt the matching images, manually verify the contents, disable the user’s account and report the imagery to NCMEC, which then passes it to law enforcement. Apple says this process is more privacy-minded than scanning files in the cloud, as NeuralHash searches only for known, not new, child abuse imagery. Apple said there is a one-in-one-trillion chance of a false positive, but an appeals process is in place in the event an account is mistakenly flagged.

Apple has published technical details of how NeuralHash works on its website, and the documentation has been reviewed by cryptography experts.

But despite the wide support of efforts to combat child sexual abuse, there is still a component of surveillance that many would feel uncomfortable handing over to an algorithm, and some security experts are calling for more public discussion before Apple rolls the technology out to users.

A big question is why now and not sooner. Apple said its privacy-preserving CSAM detection did not exist until now. But companies like Apple have also faced considerable pressure from the U.S. government and its allies to weaken or backdoor the encryption used to protect their users’ data to allow law enforcement to investigate serious crime.

Tech giants have refused efforts to backdoor their systems, but have faced resistance when trying to shut out government access further. Although some data stored in iCloud is encrypted in a way that even Apple cannot access, Reuters reported last year that Apple dropped a plan to encrypt users’ full phone backups to iCloud after the FBI complained it would harm investigations.

The news of Apple’s new CSAM detection tool, announced without public discussion, also sparked concerns that the technology could be abused to flood victims with child abuse imagery that could get their accounts flagged and shuttered, but Apple downplayed the concerns, saying a manual review would examine the evidence for possible misuse.

Apple said NeuralHash will roll out in the U.S. first, but would not say if, or when, it would be rolled out internationally. Until recently, companies like Facebook were forced to switch off their child abuse detection tools in the European Union after the practice was inadvertently banned. Apple said the feature is technically optional in that you don’t have to use iCloud Photos, but it will be mandatory if you do. After all, your device belongs to you, but Apple’s cloud does not.


Fullbright Co-Founder Steps Down Following Toxic Workplace Allegations – TechRaptor



Steve Gaynor, Fullbright’s co-founder, has stepped down as the studio’s manager and creative lead following a series of allegations of a toxic culture within the studio. Gaynor transitioned from creative lead to a writing role as of March of this year, according to a Fullbright representative who spoke to Polygon.

Open Roads‘ official Twitter account raised alarm bells when it posted a statement regarding the workplace culture and how the company would move forward, citing the importance of a “healthy and collaborative environment.” According to the statement, the decision was made for the health of the entire company, with Gaynor relinquishing daily responsibilities to the remaining staff.

Open Roads, the studio’s current in-development project, has sustained major setbacks: 15 former employees have left the company since development on the game began in 2019, leaving only six staff members. Of the 15 who left, 12 did so directly because of Gaynor’s behavior toward women. At least 10 of those were women, which lines up with multiple anonymous reports about what it’s like to work under Gaynor.

The anonymous reports haven’t cited issues such as sexual misconduct or outright sexism. Rather, the work environment is reportedly “controlling,” with female employees bearing the brunt of Gaynor’s dismissive and condescending attitude. Gaynor was reportedly very difficult to work with, making jokes at the expense of his employees in front of others. He would repeatedly laugh at and embarrass women in front of coworkers while micromanaging women in leadership roles to the point that they felt their creativity, as well as their ability to work, was stifled.

The studio had brought in a mediator between Gaynor and his team as a means of de-escalating the situation, but this only served as a temporary solution. The team didn’t feel respected under Gaynor’s leadership, leading him to step down to a remote writing role and relinquish his prior duties to others in leadership.

Under the current state of affairs, Gaynor is working on his writing role separately from the core staff. Instead of continuing to work within the same offices, Open Roads’ publisher, Annapurna Interactive, is communicating between the two parties to avoid further friction. Under this set of circumstances, Gaynor no longer has daily collaboration with Fullbright. 

After Polygon broke the story, Gaynor released his own statement in a Twitter thread on his account. According to him, stepping back has given him the “space and perspective” to reconsider how he approaches leadership.

“Hi all. I have a statement to share about my role at Fullbright.

Earlier this year, I stepped back from my role as creative lead on Open Roads. My leadership style was hurtful to people that worked at Fullbright, and for that I truly apologize.

Stepping back has given me space and perspective to see how my role needs to change and how I need to learn and improve as part of a team, including working with an expert management consultant, and rethinking my relationship to the work at Fullbright.

I care deeply about Open Roads and the Fullbright team. I’m sad to have stepped back from day-to-day development of Open Roads, but it’s been the right thing to do. The Open Roads team has my full faith and support as they bring the game to completion.”

Given how many people have left because of Gaynor, some might wonder why he hasn’t been fired. As the studio’s co-founder, he isn’t so simple to fire: he wasn’t someone who stepped into a leadership role divorced from the studio’s creation, and his personal Twitter account served as the official Fullbright handle for over a decade. The team created @FullbrightGames, a new studio handle, in May 2021, around when Gaynor stepped down.


Google Pixel 6 and Pixel 6 Pro official wallpapers revealed; display resolutions deciphered –



9to5Google has also deduced the resolutions of the Pixel 6 and Pixel 6 Pro, based on the size of the punch holes in their respective backgrounds. Supposedly, the Pixel 6 has a 2,400 x 1,080-pixel display, 60 pixels taller than the 1080p panel of the Pixel 5. If this is the case, then the Pixel 6’s 6.4-inch display has a 20:9 aspect ratio and a pixel density of roughly 411 PPI.

Meanwhile, 9to5Google claims that the Pixel 6 Pro will operate natively at 3,120 x 1,440 pixels, making it 80 pixels taller than the Pixel 4 XL, Google’s last 1440p smartphone. The Pixel 4 XL may have a marginally more pixel-dense display, however. Based on 9to5Google’s findings, the Pixel 6 Pro has a 513 PPI display, compared to the 537 PPI that the Pixel 4 XL offers. Nonetheless, the Pixel 6 Pro supports 120 Hz, which is beyond the Pixel 4 XL’s capabilities.
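These density figures are easy to check: pixels per inch is simply the diagonal resolution in pixels divided by the screen diagonal in inches. A quick sketch (the 6.7-inch diagonal for the Pixel 6 Pro is the rumored figure, not confirmed here):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # Pixel count along the diagonal divided by the diagonal length
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(2400, 1080, 6.4)))  # Pixel 6 -> 411
print(round(ppi(3120, 1440, 6.7)))  # Pixel 6 Pro (assumed 6.7-inch) -> 513
```

Both results match the reported 411 and 513 PPI figures, which suggests 9to5Google used the same arithmetic.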
