
Hello World

Why Governments Need To Take a More Active Role in Regulating AI

Christelle Tessono talks with The Markup about how AI systems are still mostly monitored by the companies that build them

In October 2023, US President Joe Biden signed America’s first executive order on AI. Photo: Biden chats with guests during an event about the order. (Credit: Demetrius Freeman/The Washington Post via Getty Images)

Hi everyone,

Ross here—I’m an Investigative Data Journalist at The Markup. We publish a lot of words telling you what AI can (and often shouldn’t) do, but how can public policy keep AI in check?

Governments are ramping up their role in holding AI algorithms accountable and limiting their harms. We covered President Joe Biden’s AI executive order back in October; last month, new White House guidance required all federal agencies to designate a Chief AI Officer, and by December agencies must implement AI safeguards and publish transparency reports on federal AI deployments. Many individual states, including New Jersey, Colorado, Massachusetts, and California (which The Markup will be looking at closely as we join forces with CalMatters), are proposing and passing legislation that regulates AI to combat discrimination.

The EU recently passed the world’s first comprehensive regulation of AI systems, and Canada’s AI safety bill has been working its way through Parliament since 2022. Canada also announced $2.4 billion CAD earlier this month toward public computing resources for AI models, while the U.S. launched its own pilot in January (projected to cost $2.6 billion once fully operational).

To get more insight on upcoming AI policy, I reached out to my good friend Christelle Tessono, a graduate student at the University of Toronto’s Faculty of Information and a policy and research assistant at the Dais, a public policy and leadership think tank at Toronto Metropolitan University. Her work focuses on the relationship between racial inequality and digital technology, with special attention to AI deployments.

Headshot of Christelle Tessono (Credit: Christian Diotte, House of Commons Photo Services HOC-CDC 2020)

Ross: What are the main things you’re working on?

Christelle: For the past couple of years, I’ve been focusing on AI governance in Canada as it’s discussed in public policy processes. I’ve also developed expertise on facial recognition technology and gig work in the Canadian context, as well as social media platform governance.

Ross: So I know there’s a bunch of new legislation being proposed in both Canada and the US to regulate AI. And one of the big issues, something you mentioned, is how do you even define AI? So, to you, how should we define AI?

Christelle: I define AI as the set of computational tools used to process large amounts of data to identify patterns, make inferences, and generate assessments and recommendations.

Ross: Tell me what’s happening in AI legislation in the policy space in Canada and the U.S., or anywhere else in the world that you’re tracking now.

Christelle: I’ve been tracking what’s happening in the US, Canada, and Europe. But let’s talk about Canada, because I feel like people at the international level don’t really know what’s happening. Canada was the first country to develop a national AI strategy back in 2017, the Pan-Canadian Artificial Intelligence Strategy, which in recent years has received over $125 million in funding to conduct research and drive AI adoption in the country. However, we have been very slow about developing enforceable regulation to address the harms and risks caused by AI systems.

Canada introduced the AI and Data Act (AIDA) back in June 2022 to regulate the use of AI systems in the private sector. But it never received public consultation, has an overly narrow definition of the systems in scope, and doesn’t prohibit certain types of AI systems the way the EU AI Act does.

Ross: Can you talk more about the differences between these pieces of AI legislation?

Christelle: The EU has a “risk-based” framework, meaning that they’ve taken the time to outline, in the law itself, the different types of systems that would fall under different levels of risk, such as “high risk,” “limited risk,” “minimal risk,” and “no risk.” Here in Canada, by contrast, the legislation states that it will apply to “high impact” systems, but it remains unclear whether the government will determine if a given product is considered “high impact” or whether that is at the discretion of the developer. So in short, the proposed Canadian framework is an empty shell.

Ross: And do you know which framework the U.S. uses in its proposed law?

Christelle: The U.S. approach is very decentralized, with multiple initiatives across different agencies. Multiple bills have been introduced in the past, but none of them have gotten enough traction to be considered the singular approach that the U.S. will take at the federal level. At the state level there are a lot of initiatives. Some have become law, such as the Artificial Intelligence Video Interview Act in Illinois, which regulates AI in employment contexts, and several others are slowly making their way through legislatures, such as algorithmic discrimination bills in Oklahoma (HB 3835) and Washington (HB 1951). Then there is the AI Bill of Rights, which is a guiding document, so not enforceable.

Ross: What are some good properties of an AI system that you think systems that are deployed out in the world should have?

Christelle: When I think about a system, about an AI system, I don’t only think about the physical machinery, the data, the computing. I think about the context in which it is designed, developed and deployed.

First, an AI system should have a clear accountability framework. That is, do we know who’s responsible for what? And how can people complain or alert authorities that there is a problem? To me, if there’s no accountability, then the AI system is simply doomed to fail.

Then there’s transparency. As a researcher, I’m curious not only about how these products are developed, but also about the procurement process. Who decided to make the call for offers? How many people were provided with mock-ups of the product? What was the decision-making that led to this choice? Why are we using this specific product?

I [also] think about functionality: does the system work? Can it even achieve its intended goal? If there’s no match between the task and the capabilities of the system, then the system shouldn’t be operating at all. That is the case with many facial recognition systems used for categorizing people or even identifying their emotions. Facial recognition works in verification contexts… but… when you’re using it to try to categorize people and make predictions based on them… the functionality piece is not there.

I think a lot of people talk about fairness, like ensuring that the system is robust and not perpetuating bias. That’s a good property, but it’s not the first one I think about when it comes to the robustness of a system. As a human person, I cannot be 100% fair. So how can I impose that on a system? I think it’s better to figure out whether [a system] is able to complete the tasks we want it to do.

Ross: You’ve talked about premature AI deployments. How can/should an agency decide whether a technology is ready to be deployed in the real world?

Christelle: First, there should be public consultations as to whether this is the right approach to dealing with a problem that the agency has identified. A lot of the issue right now is that we’re seeing technology being deployed without consultation, without regard for prior consultations on a variety of matters. Is this a real need, or are we using technology to [solve] a problem that doesn’t really need a technological intervention?

The second thing is functionality and ensuring that the system is robust. What are the metrics that the company is using in order to prove that their technology is up to task? What standardization bodies are they following? What types of regulations are they respecting? Like, has the company proven that they’re following standards that are followed everywhere else in the world?

The third piece is, again, accountability. How are we gonna responsibly use [this technology]? Are we making sure that we’re not firing people and using technology as a replacement for labor? Who’s gonna be supervising the technology? 

Ross: On the topic of accountability, what does good accountability look like? How can the public actually raise concerns or fight back against tech that they think is being improperly deployed?

Christelle: The Canadian framework for AI doesn’t have a complaint mechanism. And that to me is the first step with regards to accountability. For example, I’m a student, and [let’s say] there’s a problem with one of my assignments and I cannot upload it to the website. I can send an email to the professor and say, “Hey, I couldn’t submit the assignment because the online platform doesn’t work,” and that works. And if the professor doesn’t respond, then I can go to the dean or other student representative organizations. There are mechanisms for me to flag an issue.

For AI systems, it’s hard because you can tell the company, “Hey, your product is faulty,” but what if the product already removed all the money from your bank account because it assumed you were making fraudulent transactions? Who do you actually complain to? And who’s gonna listen to you and make sure that this is dealt with in a timely fashion and isn’t burdensome on the person complaining? So in simpler terms, a good accountability framework makes it easy for people to complain once something goes wrong and ensures that there are [complaint] options beyond the technology itself.

Ross: How do you design law to make sure that companies will actually abide by it?

Christelle: That’s something we’ve been struggling with a lot conceptually in Canada. Some companies say that criminal penalties for contraventions of the act are too heavy a punishment. Others say that a small financial penalty just incentivizes companies to factor it into the operating costs of the product: they’re just paying a small bill compared to what they can make if they continue producing and deploying that product.

I think that a way to answer those two challenges while also respecting human rights and building trust is having a flexible framework that has a regulator [who is] empowered to conduct proactive audits, impose fines, and draft regulations.

Ross: How should AI systems be audited?

Christelle: The proposed AI and Data Act in Canada says that if the minister suspects a violation, they can require the company to conduct an audit and deliver them the results. And the company has the choice of conducting the audit itself or procuring the auditing service from a third party that it chooses and pays. As Deb Raji [a fellow accountability researcher] argues… when you let a company audit itself, then you’re not getting… an impartial assessment of the problem.

I believe a way forward is to build specialized auditing teams within government that include [a variety of] experts who understand the socio-technical implications of AI: lawyers, technologists, sociologists, philosophers, [and others].

A lot of industry actors are rapidly developing [infrastructure for auditing AI]. While this is a positive thing for companies who want to use those services to assess their products, the government shouldn’t rely on them for audits. We shouldn’t be outsourcing expertise that we can develop in-house.

Ross: Given that Canada and the US are about to spend billions of dollars on AI, can you talk a bit about what that money will be used for, and what you see as any gaps in the funding?

Christelle: We need more money for regulatory infrastructure, and I really emphasize regulatory infrastructure as a term, because how can we audit systems or even develop guidelines on how to use them if we don’t have public servants thinking about these things? We shouldn’t let industry dictate how technologies are used, when they’re used, and whether they should be used. I think this is a responsibility that the government needs to take on.

[There is] a meager $5.1 million [CAD] for the office of the AI and data commissioner of the country.

The office of the Privacy Commissioner of Canada has five times that budget. So $5 million [CAD] is nothing. If you have $2 billion [CAD] for computing infrastructure, who’s gonna regulate it? We need money for that.

There’s [also] $50 million [CAD] for upskilling and “training” people who are impacted by AI. The government didn’t give much detail, but they alluded that this would be for, for example, content creators and creative artists who might be impacted by AI. They specifically used the word “training,” which is very interesting because creative workers and artists don’t need more skills to use AI; they just want their intellectual property and copyright to be respected, and to not see their work stolen.

Ross: There was a new bill just introduced in the US by a senator that would require all companies that use AI to disclose any copyrighted works included in their training data and to maintain a database of those works. Do you have thoughts on this? Is it feasible?

Christelle: It’s an interesting idea, but I don’t know much about copyright and whether… just disclosing that you use copyrighted work is enough to prevent harms for workers who rely on copyright for their income. When it comes to AI policy, we need to think about these types of interventions as part of a broader puzzle.

Ross: Do you have advice for readers to [make their voices heard about AI policy]?

Christelle: I highly encourage people to learn more about how government operates and how laws are made, even at the municipal or state level in the US, because a lot of people benefit from the majority not knowing how bills are passed.

I want people to be excited about finding new ways to deal with issues and also building community. Talk to your neighbors, talk to your friends, talk to your parents.


Want to learn more? Check out the OECD tracker of over 1,000 AI initiatives from 69 countries and territories. Want to get involved? Learn more about AI harms and contact your elected officials in the U.S., Canada, or wherever you might be.

Thanks for reading!

Ross Teixeira
Investigative Data Journalist
The Markup
