Shadow AI is the use of unapproved consumer AI tools in the workplace: tools adopted by employees without organisational oversight, governance or security controls.
According to Microsoft's latest research, 71% of UK employees have already used unapproved AI tools at work, and more than half continue to do so weekly. This isn’t rogue behaviour. It’s a response to pressure, pace and expectation.
People turn to Shadow AI because they need answers quickly, official systems can't keep up, clear guidance doesn't exist, and approved alternatives aren't available.
Shadow AI fills the gap: quietly, informally and at scale. In the public sector, that matters more. Shadow AI can expose:
Most of this risk is unintentional, most of it is unseen, and most organisations don't know how widespread it already is. Ignoring Shadow AI doesn't stop it; over-controlling it pushes it underground. The challenge is not whether AI is being used, but how safely, how visibly and how strategically.

This programme brings together:
It includes:
All of this is designed to help public sector leaders understand Shadow AI, reduce risk, and adopt AI responsibly at scale.