Like many companies, we’ve been watching the latest advances in artificial intelligence closely, figuring out how best to use AI in our work and how to help clients make their products better and more efficient with it.
At the same time, we’ve been wondering what we can do to ensure our values are applied to the AI solutions that we build.
To answer this question, we set out to better understand current best practices for AI ethics, with the goal of putting together a guide for ourselves and others. What we found is that, because of the rapid pace of change in AI, the ethics of AI today is more about asking the right questions than about defining the right answers.
We also found that these questions apply to all of the products we build, and to the external systems and algorithms we use, not just those that rely on AI. Rather than making the guide more generic, which risks watering down the main concerns, we opted to add an acknowledgement to that effect.
We’ve gathered the questions we can ask ourselves in the AI Ethics Guide in our Playbook, to help guide us toward products that live up to our values.
We hope this guide is useful to you too.