The federal government says it hopes a new artificial intelligence safety institute will encourage Canadian businesses to make more use of the technology.
Industry Minister François-Philippe Champagne said the Canadian Artificial Intelligence Safety Institute will create “guardrails and frameworks” to address issues such as data security and the use of AI to spread misinformation, which he believes are slowing the adoption of AI in Canada.
“If you want to move from fear to opportunity, you need to build trust, and if you don’t have trust, you won’t have adoption; if you don’t have adoption, you won’t have innovation,” Champagne told reporters in Montreal.
Champagne said he believes that if Canadian businesses don’t adopt AI, “we will squander the incredible potential of many new technologies that we can see, the potential for revolutionary discoveries in the field of science and health and to fight climate change.” He added: “I fully believe that AI is the Holy Grail of productivity.”
The institute will bring together researchers from the Canadian Institute for Advanced Research (CIFAR) — which will lead the organization’s research efforts — as well as the National Research Council and three AI research organizations: Montreal-based Mila, the Vector Institute in Toronto and the Alberta Machine Intelligence Institute in Edmonton.
The 2024 federal budget set aside $50 million over five years to fund the institute.
Stephen Toope, the president and CEO of CIFAR, said Canadian businesses have been more hesitant to adopt AI technologies than their peers in other countries and want assurances about safety and the regulatory environment they will face.
“For years, we’ve heard from our research community that while they’re excited about AI’s immense opportunities, they’re also concerned about the potential unintended consequences. These range from present-day risks like bias and misinformation, to longer-term concerns around human control over powerful AI models and agents,” he said.
The institute comes amid growing concern about the cost, power consumption and actual capabilities of artificial intelligence software.
A report published this summer by investment bank Goldman Sachs, drawing on research by the bank’s analysts and previously published work by researchers at MIT, suggested that the productivity gains from AI adoption will be relatively small, that in many cases the costs of AI are too high to justify automating tasks and that power grids will not be able to keep up with the technology’s high electricity demands.
But Yoshua Bengio, who helped develop the technology that opened the door to many modern AI applications, said he believes in its potential — and its risks.
The capabilities of AI systems have been increasing over the past decade, and in the last few years that trend has accelerated, said Bengio, the founder and scientific director of Mila, an AI chair at CIFAR and the man who led the creation of a report on AI risks for the United Kingdom’s AI safety institute.
“There’s no reason to think that, eventually, we will not reach human level capabilities across many skills, so that is going to be, clearly, hugely valuable economically and also clearly very dangerous if misused, or if we lose control of these systems that could be eventually smarter than us,” he said. “Thus it is important to prepare in terms of regulation that is going to be adaptive to this progress.”
‘Safe for whom?’
Renée Sieber, a professor at McGill University who studies issues of civic empowerment and computational technologies, said that while it’s important for governments to find ways to regulate companies using the wide variety of technologies that can be described as AI, she’s not sure that “safety” is the right metaphor.
“One always has to ask: safe for whom?” she said.
Putting responsibility for AI safety within Innovation, Science and Economic Development Canada, a government department that is actively working to promote the technology, creates the risk that it will focus on making AI “safe for companies, potentially at the expense of individuals and communities,” she said.
“The public doesn’t want to be experimented on; the public wants the government to be responsive to its well-founded skepticism of AI systems,” she said.
Sieber said she thinks regulation needs to focus on accountability. There are already risks associated with facial-recognition systems that are more likely to misidentify Black people, for example, and questions of who’s liable if a chatbot makes a mistake — like an Air Canada bot that told a customer they could receive a refund that wasn’t available.
But regulation takes time, which is difficult in the midst of an AI arms race. It can also be complicated because governments may not know that AI is included in a software update, for example, or because multinational tech companies have proven difficult for the Canadian government to regulate effectively.