FEATURES
Artificial Intelligence: Risk, Reward and Reality
By Steve Holder & Tara Holland
Perceptions of the impact artificial intelligence will have on society vary widely. Most people focus on the science-fiction vision of sentient robots helping with our daily lives, or on self-driving cars. These are the sexiest applications of AI, the ones that capture the public’s imagination. But the majority of AI applications are more pragmatic, addressing mundane tasks like finding fraud, assessing risk and predicting or prescribing cause-and-effect relationships in business.
Ironically, there’s much more at stake in the latter. The danger of machines malfunctioning and running rampant à la Terminator or Maximum Overdrive is so remote that it is effectively zero. Meanwhile, real-world consequences such as denied credit, lost immigration standing or lack of access to health care are genuine risks today, and their impact is far more direct.
It’s the so-called “black box” problem: how can we trust and validate decisions made by a machine if we don’t understand the algorithms and models that produce them?
The Canadian government is taking the lead in setting governance standards for the application of AI, prescribing a risk-based framework that can serve as a model for building an AI-powered organization. The Directive on Automated Decision-Making classifies AI decisions based on the potential impact of their outcomes, including their impact on the sustainability of ecosystems. The directive makes it clear that AI is not a one-size-fits-all problem. If an automated decision will directly affect the rights, health or economic interests of individuals, communities or entities, the AI application needs to be governed by rules that match the potential harm it could cause. In many cases these rules call for human intervention and review of the decision to ensure appropriate oversight.
These governance standards ensure the Canadian government is doing the right thing for citizens. Level I decisions have minor, easily reversible and brief impacts; Level IV decisions, the most serious, have major, irreversible and perpetual effects. Each level has correspondingly rigorous requirements for notification, explanation, peer review and human intervention. Level I decisions can be made without human intervention and explained by an FAQ page. Level IV systems must be approved by the head of the Treasury Board and require extensive peer review and human intervention at every step of the decision-making process. For more specific information on these classifications and requirements, you can consult Appendix B and Appendix C, respectively, of the directive. The directive is the first of its kind at a national level, and it reflects a commitment by the Canadian public service to ensure a data-driven policy with appropriate human intervention.
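To make the tiering concrete, here is a minimal, hypothetical sketch of how an organization might encode such impact levels in software. The attribute names, thresholds and oversight descriptions below are illustrative assumptions for this sketch, not the directive’s official criteria (Appendix B and Appendix C define those):

```python
# Illustrative sketch only: the fields, rules and requirement strings are
# assumptions, not the directive's official impact-assessment criteria.

from dataclasses import dataclass
from enum import IntEnum


class ImpactLevel(IntEnum):
    I = 1    # minor, easily reversible, brief impacts
    II = 2
    III = 3
    IV = 4   # major, irreversible, perpetual effects


@dataclass
class DecisionAssessment:
    reversible: bool      # can the outcome be undone?
    duration_days: int    # how long the impact is expected to last
    affects_rights: bool  # touches legal rights, health or economic interests


def classify(a: DecisionAssessment) -> ImpactLevel:
    """Map an assessed decision to an impact level (illustrative rules only)."""
    if not a.reversible and a.affects_rights:
        return ImpactLevel.IV
    if a.affects_rights:
        return ImpactLevel.III
    if a.duration_days > 30:
        return ImpactLevel.II
    return ImpactLevel.I


# Oversight escalates with the level, in the spirit of the directive:
OVERSIGHT = {
    ImpactLevel.I: "FAQ-style notice; no human intervention required",
    ImpactLevel.II: "plain-language explanation on request",
    ImpactLevel.III: "documented peer review; human review of contested outcomes",
    ImpactLevel.IV: "extensive peer review; human intervention at every step",
}

if __name__ == "__main__":
    benefit_top_up = DecisionAssessment(reversible=True, duration_days=7,
                                        affects_rights=False)
    level = classify(benefit_top_up)
    print(level.name, "->", OVERSIGHT[level])  # I -> FAQ-style notice; ...
```

The point of such a rubric is not the particular thresholds, which any real deployment would draw from the directive itself, but that the oversight applied to a decision is determined systematically by its assessed impact rather than ad hoc.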
Need for Transparency
Transparency is a cornerstone of any customer-facing AI implementation. Users of a service are entitled to understand the process that affects them, whether it’s denial of a service, selection for re-assessment or a potentially disruptive land-use decision. Processes must not only be fair, they must be seen to be fair. Not all decisions are equal in impact, and the directive’s escalating notification scale provides more visibility into the decision-making process according to its impact. Greater visibility into the algorithms and models behind decisions reveals another paradox of artificial intelligence: algorithmic decisions are more transparent than human ones. Intuitive decisions are inherently influenced by acquired biases, procedural experience and a fickleness born of convenience or complacency. At a recent conference on AI in health care, one researcher noted that the human brain is, in fact, the black box.
Algorithms can be secure, transparent, free of bias and designed to respect human rights, democratic values and diversity. But they depend on humans to provide data sets that are comprehensive, accurate and clean. Users need tools that give them access across multiple data sets without complex, time-consuming search routines, while maintaining the integrity of that data. At the same time, that data must be subject to the rigorous privacy standards for which the Canadian government enjoys a well-deserved reputation. Those data access tools and procedures must have the principles of Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) embedded within them.
Fuelling Innovation
The framework established by the directive does more than shield Canadians from the arbitrary impact of automated decision-making. It provides a platform for innovation, refinement and bold policy initiatives.
At the lower end of the scale, Level I and Level II decisions can be made with little or no human intervention. This is not to say they are unimportant decisions; the framework ensures that the mechanism in place is appropriate to the task at hand. But once those parameters are honed, the more mundane decisions that make up much of our current workload no longer demand the attention of a person who can and should be doing more complex work. This frees program managers and data scientists to ask more keenly crafted questions, prioritize evidence-based policy decisions, and explore possible courses of action in a predictive and prescriptive fashion. In a sense, the human then acts on the decision rather than crafting it in the first place.
This offloading of mundane tasks is mirrored in the tools required for data-intensive innovation. By many estimates, data scientists spend as much as three-quarters of their time cleaning, scrubbing and preparing data for use. Tools that reduce the effort and time spent making data ready to use free the human, with their capacity for curiosity, ingenuity and adaptation, for more valuable tasks.
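A brief sketch of the kind of routine preparation that consumes that time; the file name and column names here are hypothetical, but deduplication, type coercion and handling of missing values are exactly the chores that better tooling should absorb:

```python
# Routine data preparation, sketched with pandas. The input file and columns
# ("applicants.csv", "income", "province") are hypothetical examples.

import pandas as pd

df = pd.read_csv("applicants.csv")                            # hypothetical input

df = df.drop_duplicates()                                     # remove duplicate rows
df["income"] = pd.to_numeric(df["income"], errors="coerce")   # coerce typed-in text
df["province"] = df["province"].str.strip().str.upper()       # normalize labels
df = df.dropna(subset=["income", "province"])                 # drop unusable rows

df.to_csv("applicants_clean.csv", index=False)
```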
Leveraging AI in Government
Adoption of AI by public service organizations in Canada is uneven, according to research by SAS, Accenture and Intel. Pockets of government have robust, well-governed AI capabilities, while other organizations have yet to enter the AI discussion.
As of April 1, 2020, new policy requirements come into effect that require peer review not just of AI outputs, but of AI projects themselves. Yet there are no guidelines for conducting those reviews. IOG, in partnership with GARI, has been approached by several departments to facilitate and convene peer reviews, and to develop guidance for the Treasury Board Secretariat on the peer review mechanism.
To make these policies more comprehensive, the TBS must explicitly define these processes and provide ready-made tools to support the directive, both internally and in citizen-facing engagements. There must be dialogue with industry experts to resolve the “black box” issue and to recognize that AI inherently supports the goal of efficient, accurate, consistent and interpretable decision-making and transparent governance.
Private Sector Applications
While the directive is designed for government decision-making, it can also serve as guidance for the private sector. Firms can benefit from appropriate guidelines for applications of AI, machine learning, natural language processing, neural networks and other automated decision-making technologies. As in the public sector, business applications of AI have a range of consequences: an inappropriate purchase suggestion from an online storefront does not have the same impact on a user’s life as the denial of a mortgage. The directive’s tiered system provides ample room for application regardless of the use case.
It’s a useful thought experiment to envision a near-future AI-enabled application, say, real-time insurance rate adjustment, and categorize it according to the directive’s tiers. What counts as adequate notification: a referral to an FAQ page, or a real-time e-mail alert? What is the duration of the impact? How difficult would it be to remediate the impact of a faulty outcome? The tiered system accommodates interpretation and a variety of appetites for risk, and is open to evolution as new applications are conceived and real-world environments change.
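Working that thought experiment through in code makes the exercise tangible. The questions, weights and level cut-offs below are assumptions invented for illustration, not the directive’s own scoring method:

```python
# Scoring a hypothetical real-time insurance rate adjustment against three of
# the directive's questions. All weights and cut-offs here are assumptions.

QUESTIONS = {
    "impact_duration": {"hours": 0, "months": 1, "years": 2, "perpetual": 3},
    "remediation_difficulty": {"trivial": 0, "moderate": 1,
                               "hard": 2, "impossible": 3},
    "notification_needed": {"faq_page": 0, "on_request": 1,
                            "real_time_email": 2, "human_contact": 3},
}


def tier(answers: dict) -> str:
    """Sum the answers and bucket the total into Levels I-IV (illustrative)."""
    score = sum(QUESTIONS[q][a] for q, a in answers.items())
    return ["Level I", "Level II", "Level III", "Level IV"][min(score // 3, 3)]


rate_adjustment = {
    "impact_duration": "months",           # premiums reset at next renewal
    "remediation_difficulty": "moderate",  # refunds are possible but slow
    "notification_needed": "real_time_email",
}

print(tier(rate_adjustment))  # Level II under these assumed answers
```

Change the assumed answers, say, to an impact that is perpetual and impossible to remediate, and the same rubric escalates the application to Level IV, with the heavier oversight that entails.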
As the public sector embraces evidence-based policy and businesses deliver new models for serving their customers and shareholders, AI-augmented decision-making will play an ever-growing role. The Directive on Automated Decision-Making provides an opportunity for Canadian organizations, public and private, to take bold steps forward on this emerging frontier.