DATA SCIENCE

Use Data Science Wisely by Incorporating Human Due Diligence

By Kevin Kells

Your organization is poring over a pile of résumés from candidates for an open position, hoping to choose a new hire who will be a good fit, will excel at the work, and will stay with your organization for the long term. It is hard to answer these questions from résumé content alone. Is there a way to use data science to prioritize the best candidates and to screen out poor matches? 

The obvious answer is “yes.” Supply résumés from prior years’ candidates to a Machine Learning algorithm. Tie each résumé to pertinent facts about that candidate’s subsequent career trajectory at your organization, scoring longevity and achievement higher. Once the model is trained, feed it the new candidates’ résumés and let it score each one. 
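
As a minimal sketch of that training-and-scoring loop, assume each historical résumé has been reduced to plain text and paired with a 1-to-10 outcome score. The records, scores, and model choice below are hypothetical, for illustration only; a real system would need far more data and careful feature engineering.

    # Minimal sketch: train on past résumés with outcome scores, then
    # score new candidates. All data here is invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    past_resumes = [
        "10 years software engineering, led three product launches",
        "junior analyst, two short contracts, no team experience",
    ]
    outcome_scores = [9.0, 3.0]  # hypothetical 1-10 career-outcome scores

    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(past_resumes, outcome_scores)

    new_resumes = ["senior engineer, shipped two platforms, mentored interns"]
    print(model.predict(new_resumes))  # predicted fit score per candidate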

This article is not about how to do that. It is about the caveats you need to be aware of when applying data science: Validity, Impartiality, and Transparency. 

In her book “Weapons of Math Destruction,” author Cathy O’Neil reveals important aspects of data science that, when not properly understood and corrected for, can lead, and indeed have led, to unintended negative consequences. Inexpert application of data science has ruined some individuals’ lives and inadvertently discriminated against entire communities. 

When setting up a system that depends on data science, build in a process of human due diligence through the following five steps: 

First is statistical validity. Let’s say we’ve trained our résumé Machine Learning model and our algorithm has scored the new candidate résumés with a value from 1 to 10. If the set of résumés and career data we used for training is small, the scores may be so imprecise that there is no statistically meaningful difference between a score of 7 and a score of 10. If so, adopt a coarser scoring, say just three piles of résumés: likely, possible, and unlikely job matches, so that undeserved preference is not ascribed within each pile. 
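
One way to make that coarsening concrete is to bin scores only where the model’s error margin allows a confident distinction. The ±2-point margin and thresholds below are hypothetical; in practice you would estimate the margin from your own data, for example by cross-validation.

    # Collapse an imprecise 1-10 score into three piles. The margin and
    # band boundaries are hypothetical placeholders.
    def to_pile(score: float, margin: float = 2.0) -> str:
        if score - margin >= 6.0:   # confidently above the middle band
            return "likely"
        if score + margin <= 4.0:   # confidently below it
            return "unlikely"
        return "possible"           # uncertainty spans the boundary

    for s in [9.2, 7.0, 5.5, 1.8]:
        print(s, "->", to_pile(s))

Within each pile, candidates are then treated as equals rather than ranked on differences the model cannot actually resolve.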

Second is to continually measure the model against reality. If it predicted “likely job match” for an individual last year, how well is that employee doing in their career now? With the benefit of hindsight, what score would your organization give that résumé today? What score did the model actually give it? The model should be updated and retrained based on continual human audits and reality checks. 
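
A sketch of such a periodic audit follows, assuming you retained last year’s predicted piles and can now assign hindsight piles from actual career data. The names, labels, and retrain threshold are illustrative, not from any real system.

    # Compare last year's predictions against hindsight labels and flag
    # the model for review when agreement drops. All values are invented.
    predicted = {"ana": "likely", "ben": "likely", "cho": "unlikely"}
    hindsight = {"ana": "likely", "ben": "unlikely", "cho": "likely"}

    agreement = sum(predicted[p] == hindsight[p] for p in predicted) / len(predicted)
    print(f"model/reality agreement: {agreement:.0%}")

    RETRAIN_THRESHOLD = 0.80  # hypothetical policy set by the audit team
    if agreement < RETRAIN_THRESHOLD:
        print("flag for human review; retrain on the corrected labels")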

Third, question whether what you are measuring is what you want to measure. If the career data that trained the model includes length of employment and number of promotions, then that is what the model will tend to score higher in the new-hire résumés. What about the genius employees who made a lasting contribution well beyond expectations, but left sooner than expected to do bigger and better things? A model trained only on promotion count and employment length may place such candidates in the “unlikely match” pile. Continually review what you are telling the model to score. 
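
To see how the choice of training label drives this, compare a label built only from tenure and promotions with one that also credits lasting impact. The weights and records below are hypothetical; the point is only that the model can learn nothing the label does not reward.

    # Two candidate training labels for the same employees. All weights
    # and records are invented for illustration.
    employees = [
        {"name": "steady performer", "years": 8, "promotions": 3, "impact": 2},
        {"name": "genius, left early", "years": 2, "promotions": 0, "impact": 9},
    ]

    def label_tenure_only(e):
        return e["years"] + 2 * e["promotions"]

    def label_with_impact(e):
        return label_tenure_only(e) + 3 * e["impact"]

    for e in employees:
        print(e["name"], label_tenure_only(e), label_with_impact(e))

Under the first label the high-impact, short-tenure employee scores lowest; under the second, highest.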

Fourth, avoid bias by having a disinterested party review what you are trying to achieve with the model; this will help you assess blind spots and unintended prejudice. Imagine a crime prediction model that maps where serious crimes and petty crimes are committed in order to direct where police patrols should spend more time. Perhaps serious crimes in the city occur more often in certain hotspots unrelated to the residents of the area, while petty crimes, which may go unreported without a police presence to witness them, occur more frequently in poorer areas. Patrols sent into an area by the model witness petty crimes there and report them back to the model. The increased reports, in turn, increase the patrols sent there. This feedback loop may bias the model to send more patrols to poorer areas instead of to serious crime hotspots. Such a propensity to bias should be assessed, and the model adjusted to best serve the whole city fairly. 
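
A toy simulation makes the loop visible. All rates and numbers are invented: serious crimes are always reported, petty crimes enter the data only in proportion to the patrols present to witness them, and patrols are allocated by share of total reports.

    # Toy feedback-loop simulation; every number is illustrative.
    serious = {"hotspot": 8.0, "poorer area": 2.0}  # reported regardless
    petty = {"hotspot": 1.0, "poorer area": 6.0}    # reported only if witnessed

    witnessed = {a: 0.0 for a in serious}
    for step in range(6):
        reports = {a: serious[a] + witnessed[a] for a in serious}
        total = sum(reports.values())
        patrols = {a: reports[a] / total for a in serious}  # patrol share
        for a in serious:
            witnessed[a] += petty[a] * patrols[a]  # patrols see petty crimes
        print(step, {a: f"{patrols[a]:.0%}" for a in patrols})

Run this and the poorer area’s patrol share climbs step after step, even though the serious-crime hotspot has not changed at all.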

Finally, if your model provides a score, make that score transparent. Your users should be able to click through to see a breakdown of the factors affecting the score, to read the formulas or methods used to calculate the score and its components, and to learn which data was used to train the model and how that data was prepared. Could such disclosure reveal proprietary information or permit individuals to “game” the model? Perhaps, but weigh those downsides against the damage the model may inflict on an individual’s life and the number of individuals affected by the model. 
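
For a linear scoring model, such a breakdown can be as simple as listing each factor’s contribution next to the total. The weights and candidate features below are hypothetical.

    # Transparent score breakdown for a simple linear model; the weights
    # and features are invented placeholders.
    weights = {"years_experience": 0.4, "relevant_skills": 0.9, "referral": 1.5}
    candidate = {"years_experience": 6, "relevant_skills": 4, "referral": 1}

    contributions = {f: weights[f] * candidate[f] for f in weights}
    score = sum(contributions.values())

    print(f"total score: {score:.1f}")
    for factor, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {factor}: {value:+.1f}")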

Data science should be used wisely, and incorporating human audit and review processes is essential to avoiding mistakes, especially when they can affect people’s lives. Human due diligence can ensure that a model’s validity is understood and that the model is used within its limitations. The model should have an update process that continually compares predicted values with actual values so that adjustments can be made to improve it. It is important to be aware that unintended bias and hidden feedback loops may exist, and enlisting a third-party review of the model’s aims and possible hidden biases helps ensure its impartiality. Finally, providing transparency in how a score is calculated, and in what methods and data are used, plays a vital role in mitigating the unintended harm to the lives and livelihoods of real individuals that a single, unexplained score might otherwise cause. 

About The Author 

Kevin Kells, Ph.D. 

Kevin has worked as an R&D engineer in software systems in the financial and semiconductor industries in Switzerland, Silicon Valley, and Ottawa, and currently works with real-time data and news feed systems at a major market news and data company in New York City. He also has extensive experience in non-profit management, in the areas of both human systems and IT systems. He received his Ph.D. from the Swiss Federal Institute of Technology (ETH) Zurich in computer simulation of semiconductor devices and holds an MBA with areas of focus in entrepreneurship and business analytics from the University of Ottawa’s Telfer School of Management. 
