
AI AGAINST AI

The specific strategy to protect AI products and systems against cyber-attacks is in its infancy: data scientists and the IT Security team must put on their creative Seven League Boots to deal with the fast-moving threat.

By Hubert Laferrière


Cybersecurity is of the utmost importance for the AI community. AI is now considered a vector of crime by many: with the democratization of AI, where members of the public have gained access to the key resources needed to use and develop their own AI tools (data, software and hardware), comes the empowerment of malicious actors to use AI for nefarious purposes.[1]

From the average computer user to programmers and coders, the key challenge everyone faced, and still faces, is cyber-attacks on computers and systems, government systems included. In the summer of 2019, an IBM study showed that data breaches were on the rise and could cost the average business up to $3.92 million.[2] At Immigration, Refugees and Citizenship Canada (IRCC), the specific strategy to protect AI products and systems against cyber-attacks is in its infancy: data scientists and the IT Security team must put on their creative Seven League Boots to deal with the fast-moving threat.

Defending the Integrity

When it comes to AI and cybersecurity, new challenges and vulnerabilities are emerging. The AI community is now witnessing a confrontation of the machines: AI against AI. From the perspective of the IRCC AI team, this means attacks on the integrity of algorithms.

Mr. Jose Fernandez, Associate Professor at Polytechnique Montréal, outlined the potential consequences at the Symposium on Algorithmic Government, organized by IRCC in April 2019: “The lack of explanation provided by many Machine Learning-based AI solutions can lead to unconscious bias and hidden manipulation.”[3] Any attempt to “game” an algorithm we had developed, for instance, could therefore generate dire consequences.

The IRCC project started over two years ago (it is described in the first edition of the AGQ Magazine, November 2019). At that time, the AI team concentrated its efforts on building a machine learning model that was a solid proof of concept, delivered the best predictive performance and, at the deployment phase, performed adequately and error-free. The team faced many challenges, namely the ethics, privacy and legal aspects: it ensured the algorithmic models were free of harms, such as discrimination generated by bias, that they respected procedural fairness principles, and that the bottom-line outcomes focused on improving productivity, enabling more agile, flexible and fast operational processes.

Gaming the Model 

Cybersecurity was not at the forefront of the team’s efforts, although a systematic monitoring process (algorithm robustness) was in place to detect odd patterns, the kind of patterns that let us determine whether “gaming” activities were happening. When such an event occurred, the team immediately modified parameters, maintained a higher level of monitoring and conducted tests.
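
In practice, monitoring of this kind often comes down to watching the model’s score distribution for shifts. Below is a minimal sketch of what such an automated check could look like; the data, windows and threshold are illustrative assumptions, not IRCC’s actual pipeline.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """PSI between two score distributions; values above ~0.25 are
    commonly read as a shift worth investigating."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    rec_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    base_pct = np.clip(base_pct, 1e-6, None)   # guard against empty bins
    rec_pct = np.clip(rec_pct, 1e-6, None)
    return float(np.sum((rec_pct - base_pct) * np.log(rec_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)  # stand-in for historical scores
recent_scores = rng.beta(2, 3, size=1000)    # stand-in for the latest window
psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"PSI={psi:.3f}: score distribution shifted, review for gaming")
```

A sudden shift flagged this way does not prove gaming, but it tells the team where to look first.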

François Nadon, Director of IT Security and Production Services at IRCC, proposed hiring a student to game the models, more precisely, to determine the degree to which one of the models was at risk. The student used different ML algorithms and was able to identify rules with the potential to misclassify applicants.
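
One common way to run such an exercise is to fit a simple surrogate model on the deployed model’s own decisions and read off the rules it learns; any crisp branch is a candidate lever an applicant could deliberately pull. The sketch below shows the general technique only; the features and the deployed_model stub are hypothetical, not the IRCC models.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((2000, 3))                    # stand-in application features
feature_names = ["feat_a", "feat_b", "feat_c"]

def deployed_model(X):
    # Placeholder for the production model under test.
    return (0.6 * X[:, 0] + 0.4 * X[:, 1] > 0.5).astype(int)

y = deployed_model(X)                        # labels come from the model itself
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(surrogate, feature_names=feature_names))
# Each printed branch is a human-readable rule; high-confidence branches
# are the ones an adversary could try to satisfy on purpose.
```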

Although the impact was limited, the team immediately reviewed the parameters and rules. We are now working on a method for better assessing gaming risks for every machine learning model, a method that would be systematically integrated into our AI processes and procedures. This is a must for ensuring model integrity.
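
A systematic check of this kind could, for example, measure how often small, plausible perturbations of a single feature flip the model’s decision: a high flip rate on a feature an applicant controls signals gaming risk. This is one possible approach, sketched under illustrative assumptions, not the method the team has settled on.

```python
import numpy as np

def flip_rate(model_fn, X, feature_idx, delta=0.05, trials=20, seed=0):
    """Fraction of rows whose prediction flips under random +/-delta
    perturbations of a single feature."""
    rng = np.random.default_rng(seed)
    base = model_fn(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(trials):
        X_pert = X.copy()
        X_pert[:, feature_idx] += rng.uniform(-delta, delta, size=len(X))
        flipped |= model_fn(X_pert) != base
    return float(flipped.mean())

# Toy model and data purely for illustration.
toy_model = lambda X: (X[:, 0] > 0.5).astype(int)
X = np.random.default_rng(1).random((1000, 2))
print(flip_rate(toy_model, X, feature_idx=0))  # high: decision hinges on feat 0
```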

AI & Increased Vigilance 

In our quest for solutions to counteract gaming threats, we looked for existing ones in the marketplace, such as automated products for monitoring attacks on models, or consultant expertise to help us learn how artificial intelligence can be used to increase cybersecurity. The ideal solution would have AI support the work of both cybersecurity analysts and AI teams by detecting anomalies. To our astonishment, few if any such products exist, and we had difficulty finding qualified experts. According to Mr. Fernandez, only a few research groups in the world are working specifically on securing AI systems.[4]
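
The general idea of “AI watching the AI” can be sketched with an off-the-shelf unsupervised detector: score incoming activity against a learned baseline and route the outliers to analysts. The per-request features and contamination rate below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
normal = rng.normal(0, 1, size=(5000, 4))        # typical request features
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

batch = np.vstack([rng.normal(0, 1, size=(98, 4)),
                   rng.normal(4, 1, size=(2, 4))])  # two odd requests mixed in
flags = detector.predict(batch)                  # -1 marks anomalies
print(f"{(flags == -1).sum()} of {len(batch)} requests flagged for analysts")
```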

Dave Masson, director of the Canadian division of a British firm specializing in this field, points out that AI can equip analysts and, as a result, enable them to react more quickly in the event of a cyber-attack.[5] He is convinced that the contribution of AI will continue to grow because of the ever-increasing number and complexity of cyber threats. It is becoming almost impossible to keep up with threats, and increasingly obvious that we will need AI to stay in the race.

In the meantime, monitoring and testing activities appear to be the best means at our disposal to increase our vigilance. To do so, the IRCC AI team must mobilize resources for constant monitoring and development, as needed, and identify more sophisticated approaches to improve detection. The team will continue to hire students to game the models. We are resuming work with our IT Security colleagues and our search for a solution in the marketplace. At the end of the 2019 Symposium on Algorithmic Government, some participants suggested creating a dedicated working group of civil servants and academics on AI and cybersecurity. That could be a path to tackling the challenge, or at least to opening a dialogue among members of the AI community. The IRCC AI team may take the lead, assuming there is enough interest in the AI community in moving forward.

  1. Dupont, B., Stevens, Y., Westermann, H. & Joyce, M. (2018). Artificial Intelligence in the Context of Crime and Criminal Justice. A report for the Korean Institute of Criminology.
  2. IBM. (2019-07-23). IBM Study Shows Data Breach Costs on the Rise.
  3. Fernandez, J. (2019, April). Artificial Intelligence and Cybersecurity: Challenges.
  4. Corriveau, É. (2019-05-25). Cybersécuritaire, l’industrie 4.0?
  5. Rettino-Parazelli, K. (2018-04-06). L’intelligence artificielle pour accroître la cybersécurité.

About The Author 

Hubert Laferrière 

Hubert established the Advanced Analytics Laboratory at IRCC. The Lab was recently transformed into a centre of excellence for AI under the name Advanced Analytics Solution Centre (A2SC). He is currently leading a major transformative project in which advanced analytics and machine learning are used to augment and automate decision-making for key business processes.
