Ensure that AI is trustworthy and trusted,
for the benefit of humanity

Trustworthy AI — Should we trust AI?

Transformative technology offers tremendous opportunities but also raises ethical concerns and the potential for harm. This pillar supports research that mitigates the risks of AI by promoting fairness, accountability, transparency, ethics, and safety. Governance is understood broadly to include laws, markets, networks, standards, and other tools.

Trust in AI — Will we trust AI?

Utilisation of AI will be enhanced by confidence that the end-to-end process is robust and accountable. This pillar supports interdisciplinary research into the factors that shape perceptions of human-machine interaction and influence the adoption of beneficial AI.

Research Areas (non-exhaustive)

Fairness — Ensure that the benefits of AI are properly shared among the community (Bias detection & prevention)

Accountability — Ensure that AI systems are properly regulated, minimising risk and properly allocating losses (Liability and compensation framework)

Transparency/explainability — Ensure that systems appropriately enable those affected to understand the process in general and a specific outcome in particular (Transparency tools; explainability standards & measures)

Ethics & human-centricity — Ensure that AI serves the interests of the broader community (Values and trust in AI)

Safety & Security — Ensure that AI systems are safe to use and appropriately protected against hacking and data breaches (Safety certification; Data protection & cybersecurity)
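To make the fairness area above concrete, one common bias-detection check is demographic parity: comparing the rate of positive model outcomes across demographic groups. The sketch below is a minimal, illustrative implementation, not a method prescribed by this programme; the function name and inputs are assumptions for the example.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-outcome rates between any two groups.

    predictions: iterable of 0/1 model outputs (hypothetical example data)
    groups: iterable of group labels, same length as predictions
    Returns 0.0 when every group receives positive outcomes at the same rate.
    """
    totals = defaultdict(int)     # number of individuals per group
    positives = defaultdict(int)  # positive outcomes per group
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())
```

A result near 0 suggests the benefits of the system are shared evenly across groups; a large gap flags a potential bias for further investigation (metrics like this are one input among many, since parity alone does not establish fairness).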