
Business

Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse


OpenAI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI’s operations and bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness.

As Altman toured the world in 2023, warning the media and governments about the existential dangers of the technology that he himself was building, he portrayed OpenAI’s unusual for-profit-within-a-nonprofit structure as a firebreak against the irresponsible development of powerful AI. Whatever Altman did with Microsoft’s billions, the board could keep him and other company leaders in check. If he started acting dangerously or against the interests of humanity, in the board’s view, the group could eject him. “The board can fire me, I think that’s important,” Altman told Bloomberg in June.

“It turns out that they couldn’t fire him, and that was bad,” says Toby Ord, senior research fellow in philosophy at Oxford University, and a prominent voice among people who warn AI could pose an existential risk to humanity.

The chaotic leadership reset at OpenAI ended with a reshuffled board made up of tech establishment figures and former US secretary of the treasury Larry Summers. The two directors associated with the “effective altruism” movement, who were also the board’s only women, were removed. The episode has crystallized existing divides over how the future of AI should be governed. The outcome is seen very differently by doomers who worry that AI is going to destroy humanity; transhumanists who think the tech will hasten a utopian future; those who believe in freewheeling market capitalism; and advocates of tight regulation to contain tech giants that cannot be trusted to balance the potential harms of powerfully disruptive technology with a desire to make money.

“To some extent, this was a collision course that had been set for a long time,” says Ord, who is also credited with cofounding the effective altruism movement, parts of which have become obsessed with the doomier end of the AI risk spectrum. “If it’s the case that the nonprofit governance board of OpenAI was fundamentally powerless to actually affect its behavior, then I think that exposing that it was powerless was probably a good thing.”

Governance Gap

The reason that OpenAI’s board decided to move against Altman remains a mystery. Its announcement that Altman was out of the CEO seat said he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” An internal OpenAI memo later clarified that Altman’s ejection “was not made in response to malfeasance.” Emmett Shear, the second of two interim CEOs to run the company between Friday night and Wednesday morning, wrote after accepting the role that he’d asked why Altman was removed. “The board did not remove Sam over any specific disagreement on safety,” he wrote. “Their reasoning was completely different from that.” He pledged to launch an investigation into the reasons for Altman’s dismissal.

The vacuum has left space for rumors, including that Altman was devoting too much time to side projects or was too deferential to Microsoft. It has also nurtured conspiracy theories, like the idea that OpenAI had created artificial general intelligence (AGI), and the board had flipped the kill switch on the advice of chief scientist, cofounder, and board member Ilya Sutskever.

“What I know with certainty is we don’t have AGI,” says David Shrier, professor of practice in AI and innovation at Imperial College Business School in London. “I know with certainty there was a colossal failure of governance.”

Shrier, who has sat on several tech company boards, says that failure isn’t just because of the obvious tension between the board’s nonprofit mission and the commercial desires of the executives and investors involved in the for-profit unit of OpenAI. It’s also a function of the company’s rapid growth in size and influence, reflective of the AI industry’s growing clout. “ChatGPT took six weeks to go from zero to 100 million users. The world wide web took seven years to get to that kind of scale,” he says. “Seven years is enough time for the human brain to catch up with technology. Six weeks, that’s barely enough time to schedule a board meeting.”

Despite the board’s supreme power on paper, the complexity and scale of OpenAI’s operations “clearly outstripped” the directors’ ability to oversee the company, Shrier says. He considers that alarming, given the real and immediate need to get a handle on the risks of AI technology. Ventures like OpenAI “are certainly not science projects. They’re no longer even just software companies,” he says. “These are global enterprises that have a significant impact on how we think, how we vote, how we run our companies, how we interact with each other. And as such, you need a mature and robust governance mechanism in place.”

Regulators around the world will be watching what happens next at OpenAI carefully. As Altman negotiated to return to OpenAI on Tuesday, the US Federal Trade Commission voted to give staff at the regulator powers to investigate companies selling AI-powered services, allowing them to legally compel documents, testimony, and other evidence.

The company’s boardroom drama also unfolded at a pivotal point in negotiations over the European Union’s landmark AI Act—a piece of legislation that could set the tone for regulations around the world. Bruised by previous failures to mitigate the social impacts of technology platforms, the EU has increasingly taken a more muscular approach to regulating Big Tech. However, EU officials and member states have disagreed over whether to come down hard on AI companies or to allow a degree of self-regulation.

One of the main sticking points in the EU negotiations is whether makers of so-called foundation models, like OpenAI’s GPT-4, should be regulated directly or whether legislation should focus on the applications those models are used to create. The argument for singling out foundation models is that, as AI systems with many different capabilities, they will come to underpin many different applications built on top of them, in the way that GPT-4 powers OpenAI’s chatbot ChatGPT.

This week, France, Germany, and Italy said they supported “mandatory self-regulation through codes of conduct” for foundation models, according to a joint paper first reported by Reuters—effectively suggesting that OpenAI and others can be trusted to keep their own technology in check. France and Germany are home to two of Europe’s leading foundation model makers, Mistral and Aleph Alpha. On X, Mistral CEO Arthur Mensch came out in favor of the idea that he could grade his own homework. “We don’t regulate the C language [a type of programming language] because one can use it to develop malware,” he said.

But for supporters of a more robust regulatory regime for AI, the past few days’ events show that self-regulation is insufficient to protect society. “What happened with this drama around Sam Altman shows us we cannot rely on visionary CEOs or ambassadors of these companies, but instead, we need to have regulation,” says Brando Benifei, one of two European Parliament lawmakers leading negotiations on the new rules. “These events show us there is unreliability and unpredictability in the governance of these enterprises.”

The high-profile failure of OpenAI’s governance structure is likely to amplify calls for stronger public oversight. “Governments are the only ones who can say no to investors,” says Nicolas Moës, director of European AI Governance at the Future Society, a Brussels-based think tank.

Rumman Chowdhury, founder of the nonprofit Humane Intelligence and former head of Twitter’s ethical AI team, says OpenAI’s crisis and reset should be a wake-up call. The events demonstrate that the notion of ethical capitalism—corporate structures that bind nonprofit and for-profit entities together—won’t work; government action is needed. “In a way, I’m glad it happened,” Chowdhury said of Altman’s departure and reinstatement.

Doom Loops

Among those more pessimistic about the risks of AI, the Altman drama prompted mixed reactions. By bringing existential risk to the forefront of international conversations, from the podium of a multibillion-dollar tech company, OpenAI’s CEO had propelled relatively fringe ideas popular among a certain slice of effective altruists into the mainstream. But people within the community that first incubated those notions weren’t blind to the inconsistency of Altman’s position, even as he boosted their fortunes.

Altman’s strategy of raising billions of dollars and partnering with a tech giant to pursue ever more advanced AI while also admitting that he didn’t fully understand where it might lead was hard to align with his professed fears of extinction-level events. The three independent board members who reportedly led the decision to remove Altman all had connections to effective altruism (EA), and their vilification by some of Altman’s supporters—including major power brokers in Silicon Valley—sits uneasily even with members of the EA community who previously professed support for Altman.

Altman’s emergence as the public face of AI doomerism also annoyed many who are more concerned with the immediate risks posed by accessible, powerful AI than by science fiction scenarios. Altman repeatedly asked governments to regulate him and his company for the good of humankind: “My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman told a Congressional hearing in May, saying he wanted to work with governments to prevent that.

“I think the whole idea of talking about, ‘Please regulate us, because if you don’t regulate that we will destroy the world and humanity’ is total BS,” says Rayid Ghani, a distinguished career professor at Carnegie Mellon University who researches AI and public policy. “I think it’s totally distracting from the real risks that are happening now around job displacement, around discrimination, around transparency and accountability.”

While Altman was ultimately restored, OpenAI and other leading AI startups look a little different as the dust settles after the five-day drama. ChatGPT’s maker and rivals working on chatbots or image generators feel less like utopian projects striving for a better future and more like conventional ventures primarily motivated to generate returns on the capital of their investors. AI turns out to be much like other areas of business and technology, a field where everything happens in the gravitational field of Big Tech, which has the compute power, capital, and market share to dominate.

OpenAI described the new board makeup announced yesterday as temporary and is expected to add more names to the currently all-male roster. The final shape of the board overseeing Altman is likely to be heavier on tech and lighter on doom, and analysts predict the board and company alike will cleave closer to Microsoft, which has pledged $13 billion to OpenAI. Microsoft CEO Satya Nadella expressed frustration in media interviews on Monday that it was possible for the board to spring surprises on him. “I’ll be very, very clear: We’re never going to get back into a situation where we get surprised like this, ever again,” he said on a joint episode of the Pivot and On with Kara Swisher podcasts. “That’s done.”

Although Altman has portrayed his restoration as a return to business as before, OpenAI is now expected to act more directly as Microsoft’s avatar in its battle with Google and other giants. Meta and Amazon have also increased their investments in AI, with Amazon committing $1.25 billion to Anthropic, a startup founded by former OpenAI staff in 2021.

“And so now, it’s not just a race between these AI labs, where the people who founded them, I think, genuinely care about the historic significance of what they could be doing,” Ord says. “It’s also now a race between some of the biggest companies in the world, and that’s changed the character of it. I think that that aspect is quite dangerous.”

Additional reporting by Khari Johnson.

 



Roots sees room for expansion in activewear, reports $5.2M Q2 loss and sales drop


TORONTO – Roots Corp. may have built its brand on all things comfy and cosy, but its CEO says activewear is now “really becoming a core part” of the brand.

The category, which at Roots spans leggings, tracksuits, sports bras and bike shorts, has seen such sustained double-digit growth that Meghan Roach plans to make it a key part of the business’ future.

“It’s an area … you will see us continue to expand upon,” she told analysts on a Friday call.

The Toronto-based retailer’s push into activewear has taken shape over many years and included several turns as the official designer and supplier of Team Canada’s Olympic uniform.

But consumers have had plenty of choice when it comes to workout gear and other apparel suited to their sporting needs. On top of the slew of athletic brands like Nike and Adidas, shoppers have also gravitated toward Lululemon Athletica Inc., Alo and Vuori, ramping up competition in the activewear category.

Roach feels Roots’ toehold in the category stems from the fit, feel and following its merchandise has cultivated.

“Our product really resonates with (shoppers) because you can wear it through multiple different use cases and occasions,” she said.

“We’ve been seeing customers come back again and again for some of these core products in our activewear collection.”

Her remarks came the same day as Roots revealed it lost $5.2 million in its latest quarter compared with a loss of $5.3 million in the same quarter last year.

The company said the second-quarter loss amounted to 13 cents per diluted share for the quarter ended Aug. 3, the same as a year earlier.

In presenting the results, Roach reminded analysts that the first half of the year is usually “seasonally small,” representing just 30 per cent of the company’s annual sales.

Sales for the second quarter totalled $47.7 million, down from $49.4 million in the same quarter last year.

The move lower came as direct-to-consumer sales amounted to $36.4 million, down from $37.1 million a year earlier, as comparable sales edged down 0.2 per cent.

The numbers reflect the fact that Roots continued to grapple with inventory challenges in the company’s Cooper fleece line that first cropped up in its previous quarter.

Roots recently began to use artificial intelligence to assist with daily inventory replenishments and said more tools helping with allocation will go live in the next quarter.

Beyond that time period, the company intends to keep exploring AI and renovate more of its stores.

It will also re-evaluate its design ranks.

Roots announced Friday that chief product officer Karuna Scheinfeld has stepped down.

Rather than fill the role, the company plans to hire senior-level design talent with international experience in the outdoor and activewear sectors to take on the tasks previously handled by the chief product officer.

This report by The Canadian Press was first published Sept. 13, 2024.

Companies in this story: (TSX:ROOT)

The Canadian Press. All rights reserved.


Talks on today over HandyDART strike affecting vulnerable people in Metro Vancouver


VANCOUVER – Mediated talks between the union representing HandyDART workers in Metro Vancouver and its employer, Transdev, are set to resume today as a strike that has stopped most services drags into a second week.

No timeline has been set for the length of the negotiations, but Joe McCann, president of the Amalgamated Transit Union Local 1724, says they are willing to stay there as long as it takes, even if talks drag on all night.

About 600 employees of the door-to-door transit service for people unable to navigate the conventional transit system have been on strike since last Tuesday, pausing service for all but essential medical trips.

Hundreds of drivers rallied outside TransLink’s head office earlier this week, calling for the transportation provider to intervene in the dispute with Transdev, which was contracted to oversee HandyDART service.

Transdev said earlier this week that it will provide a reply to the union’s latest proposal on Thursday.

A statement from the company said it “strongly believes” that their employees deserve fair wages, and that a fair contract “must balance the needs of their employees, clients and taxpayers.”

This report by The Canadian Press was first published Sept. 12, 2024.



Transat AT reports $39.9M Q3 loss compared with $57.3M profit a year earlier


MONTREAL – Travel company Transat AT Inc. reported a loss in its latest quarter compared with a profit a year earlier as its revenue edged lower.

The parent company of Air Transat says it lost $39.9 million or $1.03 per diluted share in its quarter ended July 31.

The result compared with a profit of $57.3 million or $1.49 per diluted share a year earlier.

Revenue in what was the company’s third quarter totalled $736.2 million, down from $746.3 million in the same quarter last year.

On an adjusted basis, Transat says it lost $1.10 per share in its latest quarter compared with an adjusted profit of $1.10 per share a year earlier.

Transat chief executive Annick Guérard says demand for leisure travel remains healthy, as evidenced by higher traffic, but consumers are increasingly price conscious given the current economic uncertainty.

This report by The Canadian Press was first published Sept. 12, 2024.

Companies in this story: (TSX:TRZ)

