Mitigating implicit bias in machine learning

Headlines abound on the opportunities and dangers of machine learning. Now armed with plenty of examples, the data science community is at least aware that, when used improperly, our data lakes and algorithms can make life harder for real people. But the root problem can be summed up in pretty simple terms:

An algorithm contains the biases of its builder.

Examples of bias include:

  • Reporting bias (at the source) — Maybe the folks who filled out your online survey were mostly young, male, and computer-literate.
  • Implicit bias (the call is coming from inside the building) — Perhaps your image of an ideal job candidate fits into a mental archetype of which you're only semi-aware, and that image excludes those who don't match the archetype.

At Faraday, we have a handful of approaches we use to minimize these effects.

Data sourcing

It's possible to get your hands on a pretty broad range of data points these days: everything from the basics of home value, age, and income to the more esoteric, like pet ownership and likelihood to buy high-end men's apparel. Faraday actively excludes data points relating to protected classes. Race, ethnicity, primary language, and religion are excised from our database entirely and therefore excluded from our predictive models and audience definitions.
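As an illustration only (this is not Faraday's actual pipeline code, and the column names are hypothetical), a denylist-style exclusion of protected-class attributes might look like this:

```python
# A minimal sketch of excluding protected-class columns before modeling.
# Column names are hypothetical; the real schema isn't shown in this post.
import pandas as pd

PROTECTED_ATTRIBUTES = {"race", "ethnicity", "primary_language", "religion"}

def drop_protected(features: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the feature table with protected-class columns removed."""
    present = [col for col in features.columns if col in PROTECTED_ATTRIBUTES]
    return features.drop(columns=present)

# Toy usage: the protected column never reaches the model
df = pd.DataFrame({
    "home_value": [250_000, 410_000],
    "age": [34, 57],
    "ethnicity": ["A", "B"],
})
print(drop_protected(df).columns.tolist())  # ['home_value', 'age']
```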

Of course, seemingly harmless demographic variables can act as proxies for these features, but our stance starts with the most effective levers we have available to pull. Further along the pipeline, we do check results against features like race, but we do it at an aggregated level using standardized census data (more on that below).

Model priors

We investigate the examples in our training data before they become machine-actionable profiles. In particular, we look at how well they're balanced by age, gender, and geography against the population at large. If there's a significant skew versus the baseline, we know there's a chance the resulting predictive model will show bias, so we rework the inputs.
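A simplified sketch of that kind of skew check follows. The tolerance threshold and the baseline shares are illustrative assumptions, not Faraday's actual parameters:

```python
# Compare the demographic mix of training examples against a population
# baseline and flag categories that deviate by more than a tolerance.
import pandas as pd

def skew_report(training: pd.Series, baseline: dict, tolerance: float = 0.10) -> pd.DataFrame:
    """Compare category shares in `training` to `baseline` shares; flag gaps > tolerance."""
    observed = training.value_counts(normalize=True)
    rows = []
    for category, expected in baseline.items():
        got = observed.get(category, 0.0)
        rows.append({
            "category": category,
            "training_share": round(got, 3),
            "baseline_share": expected,
            "skewed": abs(got - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Toy example: gender mix of training examples vs. an assumed 50/50 baseline
genders = pd.Series(["F"] * 30 + ["M"] * 70)
print(skew_report(genders, {"F": 0.5, "M": 0.5}))
```

If a category comes back flagged, that's the cue to rework the inputs before training, for example by resampling or sourcing additional examples.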

Model predictions

When a predictive model is fully built and ready to be put into action, we typically assign a predictive score to everyone in the country as a matter of efficiency. But this also allows us to check whole populations against statistics that reflect the inclusion or exclusion of protected groups at a census block group level.

If we see, for example, that the lowest scores are concentrated in the block groups with the highest proportions of particular racial or ethnic groups, we'll know that some bias has made it into the model, and we'll revise it.
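An illustrative version of that aggregate check: join per-person scores to their census block group, then see whether low scores track the proportion of a protected group. The data, column names, and correlation threshold below are made up for the example:

```python
# Aggregate scores to the block group level, join to census shares, and
# look for a strong negative relationship between score and group share.
import pandas as pd

scores = pd.DataFrame({
    "block_group": ["A", "A", "B", "B", "C", "C"],
    "score": [0.82, 0.75, 0.40, 0.35, 0.60, 0.58],
})
census = pd.DataFrame({
    "block_group": ["A", "B", "C"],
    "pct_protected_group": [0.10, 0.65, 0.30],  # from standardized census data
})

by_bg = scores.groupby("block_group", as_index=False)["score"].mean()
joined = by_bg.merge(census, on="block_group")
corr = joined["score"].corr(joined["pct_protected_group"])

# A strongly negative correlation suggests bias crept into the model
if corr < -0.3:
    print(f"Potential bias: correlation = {corr:.2f}; revisit inputs and retrain")
```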

The bottom line is that these strategies are only a start; the best defense against unfair algorithms is multi-pronged and in it for the long haul. If you have questions about how your own applications might be veering away from fairness, I suggest checking out:

