Call to Action: Regulation of AI

Responding to the Office of the Privacy Commissioner of Canada

Governments around the world are contemplating how to regulate artificial intelligence (AI) and the related data practices and applications that make up this emerging industrial and political ecosystem.

The Office of the Privacy Commissioner of Canada (OPC) has been at the forefront of global privacy discussions, while also doing its best to hold the digital monopolies accountable. The problem, however, is that the best the OPC can do is really not much at all.

Privacy legislation in Canada is getting old, and does not provide the OPC with the means to genuinely enforce existing privacy laws (fines are tiny, and the overall consequences for violating the privacy of Canadians are negligible).

This is one of the reasons why the OPC is asking Parliament for an upgrade, with a clear view on regulating AI, not just from a privacy perspective, but in the context of human rights and democracy.

Here is how the OPC describes the exercise, in its own words:

The Office of the Privacy Commissioner of Canada (OPC) is currently engaged in legislative reform policy analysis of both federal privacy laws. We are examining artificial intelligence (AI) as a subset of this work as it relates specifically to the Personal Information Protection and Electronic Documents Act (PIPEDA). We are of the view that PIPEDA falls short in its application to AI systems and we have identified several areas where PIPEDA could be enhanced. We are seeking to consult with experts in the field to validate our understanding of how privacy principles should apply and whether our proposals would be consistent with the responsible development and deployment of these systems.

As a means of building support and momentum for this upgrade, the OPC has initiated a consultation and is seeking input from people who have something to say about privacy, AI, and the future.

The OPC's document continues:

To this end, we have developed what we believe to be key proposals for how PIPEDA could be reformed in order to bolster privacy protection and achieve responsible innovation in a digital era involving AI systems. In our view, responsible innovation involving AI systems must take place in a regulatory environment that respects fundamental rights and creates the conditions for trust in the digital economy to flourish.

We view our proposals as being interconnected and meant to be adopted as a suite within the law. To facilitate a robust discussion with experts on these matters, we pose a number of questions to elicit feedback on our suggested enhancements to PIPEDA. We welcome any additional feedback experts would like to share to help shape our work in this regard.

As a reader of this newsletter, you should consider yourself an interested party, and a growing expert on the role of AI in society, and therefore should let the OPC know what you think of their ideas and regulatory concepts.

Unfortunately, the discussion paper the OPC released to facilitate the consultation is intimidating, academic, and does not, in my view, encourage your participation.

Let’s counter that. Let’s break down the 11 proposals for consideration that the OPC is seeking comment on, both as a reminder that you’re the expert the OPC needs to hear from, and as my way of encouraging you to send them an email! #metaviews


Proposal 1: Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI

Should AI be defined, and given special treatment, or should it be treated like any other kind of data application?

On the one hand, the danger of defining AI is that you'll get the definition wrong, and in treating it as special, create the opportunity to give it exemptions, or a pass, when it comes to our privacy.

On the other hand, AI may potentially do things with data that were not previously possible, and were not anticipated by existing privacy laws.

The key here would be exactly how AI is defined.

For example, the OECD definition of AI, which Canada adopted, partially describes AI as making predictions. Yet as we discussed in a past issue, we have reason to be skeptical as to whether those predictions can be accurate, and perhaps we should not describe AI as being able to predict (since a growing number of people argue that it cannot).

Here are the questions the OPC is asking for this proposal:

Discussion questions:

  1. Should AI be governed by the same rules as other forms of processing, potentially enhanced as recommended in this paper (which means there would be no need for a definition and the principles of technological neutrality would be preserved) or should certain rules be limited to AI due to its specific risks to privacy and, consequently, to other human rights?

  2. If certain rules should apply to AI only, how should AI be defined in the law to help clarify the application of such rules?

Given that this newsletter's current definition of AI is “a digital application of might is right,” I'm inclined to suggest that there should not be certain rules applied to AI, but rather that the rules should apply to data practices in general.

What do you think?



Proposal 2: Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights

Part of the purpose of human rights is to prevent a government (or company) from creating a policy or product that violates those rights. The role of such a rights-based framework is to create clear boundaries of what is acceptable, and what is not acceptable. It acts as a kind of test, or measurement, to determine whether something is ok or not.

A rights-based approach to AI would create a verifiable line in the sand as to whether an application of AI is acceptable or not. This includes privacy, but other human rights as well.

The Human Rights, Big Data and Technology Project in England made a video that helps explain this approach.

The communications and human rights organization Article 19 also published a report on the subject that echoes our own writing on the failure of AI ethics.

In short, should we be making it harder for people to create AI by ensuring that they comply with privacy and human rights?

On the one hand, this goes against what many perceive as the need for speed, and the drive for innovation. People who believe this also argue that we're in a global race for AI, and that any delay means another country, or a different company, will win and we will lose.

On the other hand, this makes assumptions about what AI will be able to do, such as make accurate predictions, and it's not entirely clear that this is or will be the case.

Here’s the discussion question from the OPC:

Discussion question:

  1. What challenges, if any, would be created for organizations if the law were amended to more clearly require that any development of AI systems must first be checked against privacy, human rights and the basic tenets of constitutional democracy?

I believe that one of the benefits of this approach is that it encourages, if not compels, companies to hire people who understand social impacts, privacy, and human rights. While that means more work for people in the social sciences, it also makes it harder for small companies to get into the industry. Is it possible to achieve a rights-based approach without creating unnecessary bureaucracy?

Your thoughts?



Proposal 3: Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions

Yes b’y! This is where this consultation gets interesting.

Should we have the right to object to automated data processing? This includes automated decision making, but it also includes automated direct marketing.

Here’s a relevant paragraph from the OPC’s discussion paper:

Currently, Principle 4.3.8 of PIPEDA provides that an individual may withdraw consent at any time, subject to legal or contractual restrictions and reasonable notice. We view integrating a right to object and to be free from automated decisions as analogous to the right to withhold consent.

In any digital relationship or service, we always have the right to opt out. To change our minds. To withdraw our consent.

Should that not apply to automated decision making as well?

Anytime you are judged by a machine, should you have the right to withdraw from that process? To demand that a human make the decision, or at least have a human review the machine’s decision? Kind of like a machine justice court of appeal?
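To make that concrete, here's a toy sketch of what honouring such an objection might look like inside a service. Everything here (the names, the threshold, the scoring logic) is my own illustration, not anything PIPEDA or the GDPR actually prescribes:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str      # e.g. "approved", "denied", or "pending"
    decided_by: str   # "automated" or "human"
    rationale: str

def decide(application: dict, subject_objects: bool) -> Decision:
    """Automated by default; escalated to a human reviewer whenever the
    individual has objected to purely automated processing."""
    if subject_objects:
        # Stand-in for a real human-review queue.
        return Decision("pending", "human",
                        "escalated to a human reviewer at the applicant's request")
    # Stand-in for the automated model's judgment.
    score = 0.8 if application.get("income", 0) > 50_000 else 0.3
    return Decision("approved" if score > 0.5 else "denied",
                    "automated", f"model score {score:.2f}")

print(decide({"income": 62_000}, subject_objects=True))
```

The hard part, of course, is not the routing but everything around it: who the human reviewers are, what they see, and whether they simply rubber-stamp what the machine suggested.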

This was a big feature of the GDPR, Article 22, though it is arguably untested and still theoretical.

However, what's interesting in this particular framing is the idea that there should be certain exceptions where you cannot withhold consent, or cannot object to automated decision making.

Here are the discussion questions that are part of this proposal:

Discussion questions:

  1. Should PIPEDA include a right to object as framed in this proposal?

  2. If so, what should be the relevant parameters and conditions for its application?

While I do think the right to object to our machine overlords is essential, it is also rather complicated. There are already a growing number of automated decisions being made about us each day, and the ability to object will be hard to implement retroactively. Of course that doesn’t mean we shouldn’t try.

We've discussed this in the context of the workplace, as well as with children. Traditionally, neither employees nor students have had the right to object. Rather, they're expected to be obedient, which is why this right to object may not end up applying where it ought to.

What do you think?



Proposal 4: Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing

This is a natural follow-up to proposal 3. When a machine makes a decision about you, you deserve to know why, and how.

Like the last proposal, this seeks to bring a key element of the EU GDPR to Canada. However, it is also relatively untested, and there are some who argue that it is technically impossible. That does not mean it has to remain impossible, though: any technology that does not allow for explanation could be deemed illegal under a human rights framework that included this right to explanation.
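For simple, transparent models, at least, an explanation is computable. Here's a toy sketch (entirely my own illustration, not drawn from the GDPR or the OPC paper) of a per-decision explanation for a linear scoring model, listing how much each input contributed:

```python
def explain_linear_decision(weights: dict, inputs: dict, threshold: float) -> dict:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {name: weights[name] * inputs[name] for name in weights}
    score = sum(contributions.values())
    return {
        "score": round(score, 2),
        "decision": "approved" if score >= threshold else "denied",
        # Most influential factors first.
        "contributions": dict(sorted(contributions.items(),
                                     key=lambda kv: abs(kv[1]), reverse=True)),
    }

weights = {"income": 0.00002, "late_payments": -0.5, "years_at_address": 0.1}
inputs = {"income": 48_000, "late_payments": 2, "years_at_address": 3}
print(explain_linear_decision(weights, inputs, threshold=0.5))
```

The contested question is whether anything comparable can be produced for deep, non-linear models, and whether it would mean anything to the person affected.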

An issue with this proposal, though, is the way it touches upon yet also dodges the larger issue of algorithmic transparency. It's almost as if the right to explanation is offered instead of the kind of transparency needed to mitigate the black box society. The OPC discussion paper argues that transparency is tied to trust, although we've written about how institutions are not trusted, regardless of transparency.

Similarly others have argued that a right to explanation is not the appropriate solution to the issues raised by automated decision making.

The general concern is that a right to explanation will only reinforce the authority of automated decision making, rather than limit it or ensure its accountability.

Here’s what the OPC wants to know:

Discussion questions:

  1. What should the right to an explanation entail?

  2. Would enhanced transparency measures significantly improve privacy protection, or would more traditional measures suffice, such as audits and other enforcement actions of regulators?

I think that the relationship between the right to object and the right to an explanation is crucial. I can imagine a scenario where the right to an explanation is exercised before the right to object, or rather, enabling the right to object. Can you object before you know the explanation?

Does it matter if you know how a decision was made if you fundamentally disagree with the decision or believe it to be unfair? Do I really want to know why Facebook sends advertisers my way, or would I rather those advertisers fuck off and leave me alone?

Thoughts on the right to explanation and its relationship with transparency?



Proposal 5: Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection

This is similar to proposal 2 (a rights-based approach), although with less effect and legal weight. The difference is that rather than complying with a law, you're being asked to comply with a standard. The former has penalties, the latter has expectations.

I’m not a fan of privacy by design, and regard it as useless jargon that makes it easy for people to feel as if they’re doing the right thing when they’re not. It allows people to believe they can do what they originally wanted, but with a process that protects them from future liability.

As a process, the by design method allows a government or corporation to say that they spent time and effort thinking about privacy and integrating privacy into their product or service. However that does not mean that their product or service actually preserves privacy.

Part of the issue is in the testing process. I know of no technology that has been so thoroughly tested as to be permanently secure or bug-free. It doesn't work that way. In many cases there are dynamics and usage that cannot be anticipated without deployment and active users.

Our society currently fetishizes design, and mistakenly believes that every and any possibility or problem can be anticipated by design. I find this to be arrogant, and misleading. Design is not the cure for all that ails us, in spite of its power and valuable role.

Discussion questions:

  1. Should Privacy by Design be a legal requirement under PIPEDA?

  2. Would it be feasible or desirable to create an obligation for manufacturers to test AI products and procedures for privacy and human rights impacts as a precondition of access to the market?

My concern here is that we're needlessly creating bureaucracy and preventing small companies from being able to participate. I'm all for testing, rigorous testing even, but worry that design principles or impact assessments are just a means by which large companies can game the system and fabricate evidence that their intentions or applications are benign.

I’d love to hear a rebuttal or dissenting perspective from you, dear subscriber.



Proposal 6: Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective

This is an interesting and probably contentious proposal. Can we put AI on a diet? Are there limits to the kinds of data necessary to create effective machine learning models?

The current view is the more the better. The greater the volume of data, the greater the accuracy of the AI.

Yet what if the expectation were changed so that, instead of focusing on volume, the focus shifted to accuracy and necessity? What is the minimal amount of data you need to achieve an accurate and effective system?

Should AI (whether commercial or government) be limited to only gathering and using the data it needs, and not the data it wants?
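As a toy sketch of what that could look like in practice (the field names and the allow-list approach are my own illustration, not anything the OPC prescribes), a pipeline could simply refuse to ingest anything outside the fields declared necessary for a stated purpose:

```python
# Fields declared as necessary for each specific, stated purpose.
PURPOSE_FIELDS = {
    "credit_scoring": {"income", "outstanding_debt", "payment_history"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields declared necessary for the stated purpose;
    everything else (location, browsing history, etc.) is dropped."""
    allowed = PURPOSE_FIELDS[purpose]
    return {key: value for key, value in record.items() if key in allowed}

raw = {"income": 48_000, "payment_history": "2 late", "outstanding_debt": 12_000,
       "location": "Toronto", "browsing_history": ["..."]}
print(minimize(raw, "credit_scoring"))
# {'income': 48000, 'payment_history': '2 late', 'outstanding_debt': 12000}
```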

This is what the EU GDPR argues, and the OPC is thinking along the same lines:

Discussion questions:

  1. Can the legal principles of purpose specification and data minimization work in an AI context and be designed for at the outset?

  2. If yes, would doing so limit potential societal benefits to be gained from use of AI?

  3. If no, what are the alternatives or safeguards to consider?

The second question seems rather loaded, and anticipates objections from industry.

I think the third question is the most relevant, although the answer seems to be context or subject specific. (It also evokes our issue about the mental health of AI, i.e. whether an algorithm can exhibit insanity.)

For example, do we want to make connections across departments and across aspects of our lives? Or do we want to maintain the silos and sectors that divide our society?

Perhaps this issue acknowledges that true, general, all-knowing AI is impossible and undesirable. Instead, we can achieve a wide and diverse range of AIs built upon specific and purpose-built data sets.

This is a foundational question that I think requires significant public participation, participation I doubt we will see in spite of the OPC's desire to consult.

Thoughts on this one?



Proposal 7: Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable

This is another complicated one. I’m going to quote from their discussion paper:

The concept of consent is a central pillar in several data protection laws, including the current PIPEDA. However, there is evidence that the current consent model may not be viable in all situations, including for certain uses of AI. This is in part due to the inability to obtain meaningful consent when organizations are unable to inform individuals of the purposes for which their information is being collected, used or disclosed in sufficient detail so as to ensure they understand what they are being invited to consent to.

If anything, this speaks to how complicated the use and development of AI is. It involves the use and reuse of data, often obtained from a myriad of sources. The origins of that data are not always understood, and the people who supplied that data are often not informed as to how it will be used, or by whom.

Rather than ask how that consent can be properly obtained, the question becomes how we can legitimately proceed without it. Perhaps that consent is unnecessary.

In this proposal, the OPC asks a lot of substantive questions:

Discussion questions:

  1. If a new law were to add grounds for processing beyond consent, with privacy protective conditions, should it require organizations to seek to obtain consent in the first place, including through innovative models, before turning to other grounds?

  2. Is it fair to consumers to create a system where, through the consent model, they would share the burden of authorizing AI versus one where the law would accept that consent is often not practical and other forms of protection must be found?

  3. Requiring consent implies organizations are able to define purposes for which they intend to use data with sufficient precision for the consent to be meaningful. Are the various purposes inherent in AI processing sufficiently knowable so that they can be clearly explained to an individual at the time of collection in order for meaningful consent to be obtained?

  4. Should consent be reserved for situations where purposes are clear and directly relevant to a service, leaving certain situations to be governed by other grounds? In your view, what are the situations that should be governed by other grounds?

  5. How should any new grounds for processing in PIPEDA be framed: as socially beneficial purposes (where the public interest clearly outweighs privacy incursions) or more broadly, such as the GDPR’s legitimate interests (which includes legitimate commercial interests)?

  6. What are your views on adopting incentives that would encourage meaningful consent models for use of personal information for business innovation?

The key question, of course, is the last one. Yet it not only questions how AI is developed and used, but also how our society is governed in the first place. In my view, the relatively easy answer to number 6 is the proliferation of participatory democracy in communities and jurisdictions in which AI is used. Although that's a newsletter issue for another day.

Thoughts on this proposal?



Proposal 8: Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification

This builds off of proposal 7, yet also begins to address the 800-pound gorilla in the room: that de-identification techniques are ageing, and re-identification techniques are rapidly improving.

What may be a bit disturbing in this proposal is the endorsement that consent does not apply to de-identified information.

This is based on the confidence or belief that re-identification is not easy, or can be prevented.

Hence the importance of question 3 in the OPC discussion points:

Discussion questions:

  1. What could be the role of de-identification or other comparable state of the art techniques (synthetic data, differential privacy, etc.) in achieving both legitimate commercial interests and protection of privacy?

  2. Which PIPEDA principles would be subject to exceptions or relaxation?

  3. What could be enhanced measures under a reformed Act to prevent re-identification?

I feel the danger of this proposal is that it assumes question 3 is easy to answer. I know I don't have an answer to this one. My current position is that de-identification as a method is no longer valid.
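For what it's worth, one of the directions question 1 gestures at, differential privacy, sidesteps de-identification entirely: instead of scrubbing records, it adds calibrated noise to aggregate answers. A minimal sketch of the idea (a toy Laplace mechanism, my own illustration, nowhere near production-ready):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """A differentially private count: the true count plus Laplace(1/epsilon) noise.
    A count has sensitivity 1, so the noise scale is 1/epsilon; a smaller epsilon
    means more noise and stronger privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

people = [{"age": 34}, {"age": 71}, {"age": 29}, {"age": 55}]
print(dp_count(people, lambda p: p["age"] >= 50, epsilon=0.5))
```

The trade-off is that you only ever get noisy aggregates, and the privacy guarantee erodes as more questions are asked, so I'm not convinced it answers question 3 either.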

What do you think? Have you got any solutions to prevent re-identification?



Proposal 9: Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle

This is an important proposal, both for its potential impact on society and for the potential upgrade of the accounting industry.

A requirement for algorithmic traceability would facilitate the application of several principles, including accountability, accuracy, transparency, data minimization as well as access and correction.

Traceability is an essential element in ensuring the accuracy and effectiveness of AI, but it also adds fuel to the fire of the surveillance society. The danger is that in order to achieve true traceability, you'd have to collect, mark, and maintain a ridiculous amount of information.

France’s data protection authority (the Commission nationale de l'informatique et des libertés—CNIL), has recommended the development of a “national platform” for algorithmic auditing.

I’ve also argued that traceability would involve a dramatic upgrade and expansion of the role of accounting firms. They are after all the professionals who perform audits, and that is in essence what algorithmic traceability involves.
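As a rough sketch of what a per-decision trace record might contain (the fields are my own guess at what an auditor would want; nothing here is prescribed by the OPC, PIPEDA, or the CNIL):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionTrace:
    """One auditable record per automated decision."""
    model_id: str                  # which model (and version) made the decision
    training_data_refs: list       # identifiers of the datasets it was trained on
    inputs: dict                   # the personal information actually used
    output: str                    # the decision produced
    explanation: str               # whatever rationale the system can offer
    human_reviewer: Optional[str]  # who reviewed it, if anyone
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

trace = DecisionTrace(
    model_id="credit-model-v4.2",
    training_data_refs=["loans-2015-2019", "bureau-extract-2019Q4"],
    inputs={"income": 48_000, "late_payments": 2},
    output="denied",
    explanation="score 0.26 below threshold 0.5",
    human_reviewer=None,
)
print(trace)
```

Multiply that by every decision a large platform makes in a day and the volume problem, and the surveillance worry, becomes obvious.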

Here’s what the OPC wants to know:

Discussion question:

  1. Is data traceability necessary, in an AI context, to ensure compliance with principles of data accuracy, transparency, access and correction and accountability, or are there other effective ways to achieve meaningful compliance with these principles?

This is an interesting question, and I'd be curious to hear about other ways of achieving meaningful compliance. One example might be algorithms auditing other algorithms, especially if the system in question is a black box.

Is this an example of a cobra effect? In wanting to trace how AI operates, do we dramatically expand the surveillance society? Or is such a society already pervasive, and traceability is the necessary response to achieve accountability and compliance?

Any accountants reading this who want to respond? And the rest of us, how likely are we to want to know where the data comes from? I think I’d want to know…



Proposal 10: Mandate demonstrable accountability for the development and implementation of AI processing

This proposal builds on the last, and articulates the need for third-party audits of AI processing to ensure compliance and accountability.

It also articulates the OPC's desire to have greater powers, expanding its ability to conduct investigations and inspect the practices organizations use to build and use AI.

The OPC is asking what it will take to ensure that the proposals described above can be effective, and actually complied with.

Discussion questions:

  1. Would enhanced measures such as those we propose (record-keeping, third party audits, proactive inspections by the OPC) be effective means to ensure demonstrable accountability on the part of organizations?

  2. What are the implementation considerations for the various measures identified?

  3. What additional measures should be put in place to ensure that humans remain accountable for AI decisions?

This proposal encourages me to contemplate how companies or governments might game the system in spite of some of these accountability measures. It also assumes that regulators will be responsive enough to keep up with novel or unique applications and machine learning models.

The second question, however, raises an issue that emerged after the GDPR was passed. While the framework has had an overall positive effect, it did not actually change the business model of the digital monopolies it targeted. Rather, what it did was create a new bureaucracy that cost large enterprises little but made compliance almost impossible for small organizations. Jeanette and I argued as much in an article for CIGI last year.

This is, after all, the point of the consultation. Do you want to see an expanded and empowered OPC and related bureaucratic apparatus?

(Full disclosure: I'd probably be part of such an apparatus, so I'm kinda in favour.)



Proposal 11: Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law

Should we give the OPC teeth? Shall we give them the ability to levy fines, and big ones at that, so that there are consequences for violating our privacy?

Similarly, shall we also give them the power to compel companies to co-operate? To ensure that their orders and investigations have the power necessary to make a difference?

This is, after all, what the consultation is about: expanding the authority and powers of the OPC.

Discussion questions:

  1. Do you agree that in order for AI to be implemented in respect of privacy and human rights, organizations need to be subject to enforceable penalties for non-compliance with the law?

  2. Are there additional or alternative measures that could achieve the same objectives?

I very much agree with question 1, and that’s partly why I’ve taken the time to write this newsletter issue.

I am, of course, interested in your thoughts on number 2. Given the issues discussed in this newsletter and consultation, is there a solution other than the empowerment and expansion of the OPC? A free market model? A technical model? A stronger state-based model?



While I am curious what you think about everything discussed today, I'm genuinely hoping you'll send the OPC your thoughts on these issues, now that I've helped stimulate your thinking on them.

To be clear, you do not need to submit a formal report or document. An email will suffice. Like you’re writing a friend. Informal is fine. Write what’s on your mind.

You can send your thoughts via email to OPC-CPVPconsult1@priv.gc.ca. You have until March 13th 2020 to do so.


I value your perspective, and I know our friends at the OPC will as well. Of course I’d love to know what you may say, especially if you disagree with me, so do let me know, and post a comment or two. #metaviews


I will mention that I sent this email out to non-subscribers as well, because the more people who participate in the OPC consultation the better. Please share this post with anyone you think might be willing or interested in sending the OPC what they think of the proposals detailed above.


Hopefully this email (that I spent my Saturday working on) is enough to earn your subscription to this newsletter. If you can’t afford it, let me know and we can work something out.