From Computer Weekly:
Adrian Bridgwater, 20 September 2018 10:50
Built with open source DNA, IBM’s new Watson ‘trust and transparency’ software service claims to be able to detect bias automatically.
No more ‘racist robots’ then, as one creative subheadline writer suggested?
Well, it’s still early days, but (arguably) the effort here is at least focused in the right direction. …
This automated software service is designed to detect bias (insofar as it can) in AI models at runtime, as decisions are being made. More interestingly, it also automatically recommends data to add to the model to help mitigate any bias it has detected… so we should (logically) get better at this as we go forward.
Unfortunately, no examples are given of instances in which more data would reduce disparate impact.
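To make the “disparate impact” idea concrete, here is a minimal sketch of one common way a runtime check like this could work: computing the disparate impact ratio over a batch of decisions and flagging it against the four-fifths rule. This is purely illustrative and is not IBM’s actual implementation; the function and data are invented for the example.

```python
# Illustrative sketch (not IBM's method): flag possible bias at decision
# time using the disparate impact ratio and the "four-fifths rule".

def disparate_impact(decisions, groups, privileged):
    """Ratio of favourable-outcome rates: worst unprivileged group / privileged group."""
    def fav_rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / max(1, len(members))

    priv_rate = fav_rate(privileged)
    unpriv_rates = [fav_rate(g) for g in set(groups) if g != privileged]
    return min(unpriv_rates) / priv_rate if priv_rate else float("inf")

# Hypothetical batch of decisions: 1 = favourable outcome (e.g. loan approved)
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact(decisions, groups, privileged="a")
print(f"disparate impact ratio: {ratio:.2f}")   # 0.25 / 0.75 -> 0.33
if ratio < 0.8:  # four-fifths rule threshold
    print("possible bias detected")
```

In this toy batch, group “b” receives favourable outcomes a third as often as group “a”, so the check fires. A tool that also recommends mitigating data would then need to identify which additional training examples would move this ratio back toward 1.0, which is the part the article notes goes unillustrated.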