Advances in AI promise tremendous benefits to society but also pose significant challenges. Google is at the forefront of AI research, applications, and the operationalization of Responsible AI principles. As the field advances, responsibility is increasingly important for meeting the expectations of users, regulators, and society at large. Learn about challenges such as bias, adversarial attacks, and unintended harms to users and society. Hear how AI differs from other emerging technologies, and how leaders like Google are organizing to improve responsibility through principles, governance, and design practices.
We look at Google's techniques, research, lessons learned, and open challenges in areas such as:
- Unintended consequences
- Model understanding
- Secondary metrics
- Design practices
- Engineering objectives (loss functions)

We explore a case study of how these considerations come together. More broadly, we look at emerging approaches for aligning machine learning with human values so that it can be a force for good.
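To make the "engineering objectives" and "secondary metrics" items above concrete, here is a minimal illustrative sketch (not Google's actual method): a training objective that augments a primary loss with a weighted secondary fairness penalty, so a secondary metric is traded off against accuracy. The metric, weight `lam`, and helper names are all assumptions chosen for illustration.

```python
# Illustrative sketch, not a production technique: combine a primary
# loss with a secondary (fairness) penalty via a trade-off weight.

def primary_loss(preds, labels):
    # Mean squared error as a stand-in primary objective.
    return sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds)

def group_gap(preds, groups):
    # Secondary metric: absolute difference in mean prediction between
    # group 0 and group 1 (a simple demographic-parity-style gap).
    g0 = [p for p, g in zip(preds, groups) if g == 0]
    g1 = [p for p, g in zip(preds, groups) if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

def combined_objective(preds, labels, groups, lam=0.5):
    # Total loss = primary objective + lam * secondary penalty.
    # Larger lam pushes the model toward parity at some cost in accuracy.
    return primary_loss(preds, labels) + lam * group_gap(preds, groups)

preds = [0.9, 0.8, 0.2, 0.4]
labels = [1.0, 1.0, 0.0, 0.0]
groups = [0, 1, 0, 1]
print(round(combined_objective(preds, labels, groups), 4))  # → 0.0875
```

In practice the penalty would be a differentiable surrogate optimized jointly with the model's parameters; the point of the sketch is only the shape of the trade-off, one scalar knob balancing the primary and secondary objectives.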