Making Monitoring AI Bias a Little Easier
June 17, 2019 | Written by: Susannah Shattuck
When we launched Watson OpenScale late last year, we turned a lot of heads. With this one solution, we introduced the idea of giving business users and non-data scientists the ability to monitor their AI and machine learning models to better understand performance, detect and mitigate algorithmic bias, and get explanations for AI outputs. But we're just getting started.
Since then, we have continued to advance OpenScale to help organizations ensure fair outcomes from their AI models in production. Starting today, we are making it easier to detect and mitigate bias against protected attributes like sex and ethnicity in Watson OpenScale through recommended bias monitors.
Until now, users have manually selected which features or attributes of a model to monitor for bias in production, based on their own knowledge. With the new recommended bias monitors, Watson OpenScale automatically identifies whether known protected attributes, including sex, ethnicity, marital status, and age, are present in a model and recommends that they be monitored. This functionality helps users avoid overlooking these attributes and ensures that bias against them is tracked in production.
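Conceptually, the recommendation step amounts to matching a model's input features against a curated list of protected attributes. The sketch below illustrates that idea in Python; the attribute list and matching logic here are our simplifying assumptions for illustration, not OpenScale's internal implementation.

```python
# Illustrative sketch only: the attribute list and name matching below are
# assumptions for this example, not Watson OpenScale's internal logic.
KNOWN_PROTECTED_ATTRIBUTES = {"sex", "gender", "ethnicity", "marital status", "age"}

def recommend_bias_monitors(feature_names):
    """Return the model features that match known protected attributes."""
    recommended = []
    for name in feature_names:
        normalized = name.strip().lower().replace("_", " ")
        if normalized in KNOWN_PROTECTED_ATTRIBUTES:
            recommended.append(name)
    return recommended

# Example: a credit-risk model with these input features
features = ["Age", "Sex", "Marital_Status", "LoanAmount", "Income"]
print(recommend_bias_monitors(features))  # ['Age', 'Sex', 'Marital_Status']
```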
In addition, we are working with the regulatory compliance experts at Promontory to continue expanding this list of attributes to cover the sensitive demographic attributes most commonly referenced in data regulation.
In addition to detecting protected attributes, Watson OpenScale recommends which values within each attribute should be set as the monitored values and which as the reference value. For example, within the “Sex” attribute, it might recommend that the bias monitor be configured so that “Female” and “Non-Binary” are the monitored values and “Male” is the reference value. If you want to change any of Watson OpenScale’s recommendations, you can easily edit them via the bias configuration panel.
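To make concrete what a configured monitor measures, here is a minimal sketch of a disparate impact calculation under those monitored and reference values: the ratio of favorable-outcome rates between the two groups. The function, record format, and the 0.80 threshold (the common four-fifths rule) are illustrative assumptions for this example, not Watson OpenScale's internal code.

```python
# A minimal sketch of the disparate impact ratio a fairness monitor tracks
# for a monitored group versus a reference group. Names and the threshold
# are illustrative assumptions, not OpenScale internals.
def disparate_impact(records, attribute, monitored, reference, favorable):
    """Ratio of favorable-outcome rates: monitored group / reference group."""
    def favorable_rate(group_values):
        group = [r for r in records if r[attribute] in group_values]
        if not group:
            return 0.0
        return sum(1 for r in group if r["prediction"] == favorable) / len(group)

    ref_rate = favorable_rate(reference)
    return favorable_rate(monitored) / ref_rate if ref_rate else 0.0

# Tiny hypothetical payload of scored transactions
records = [
    {"Sex": "Female", "prediction": "No Risk"},
    {"Sex": "Female", "prediction": "Risk"},
    {"Sex": "Male", "prediction": "No Risk"},
    {"Sex": "Male", "prediction": "No Risk"},
]
ratio = disparate_impact(records, "Sex", {"Female", "Non-Binary"}, {"Male"}, "No Risk")
print(f"Disparate impact: {ratio:.2f}")  # 0.50; a value below 0.80 would typically flag bias
```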
Recommended bias monitors help to speed up configuration and ensure that you are checking your AI models for fairness against sensitive attributes. As regulators begin to turn a sharper eye on algorithmic bias, it is becoming more critical that organizations have a clear understanding of how their models are performing, and whether they are producing unfair outcomes for certain groups. Learn more about how Watson OpenScale can help by trying our Lite Plan on IBM Cloud for free.
If you are interested in becoming a sponsor user of Watson OpenScale, to provide feedback and help us determine the future direction of this product, please let us know.
Offering Manager, IBM Watson OpenScale