UnAlgo builds frameworks, labs, and measurement tools that help people understand how AI systems work, and develop the judgment to navigate, shape, and build with them.
Most AI education teaches people how to use tools. The Investigator Framework teaches something harder and more durable: how AI systems actually work, how they shape thinking and identity, and how to engage them with clarity and sovereignty.
Investigators examine the system. They know how it persuades, where it fails, and what it cannot see. They use AI as a superpower, and they know exactly when to put it down.
Social media gave us influencers. AI demands something different: curious, discerning, sovereign investigators.
Fluency means knowing the technology itself, the systems beneath the surface, and how to engage as a builder, not just a user.
UnAlgo builds the frameworks, labs, and measurement systems that live in that space, teaching the full picture of AI and algorithmic ecosystems: how they were designed, where they are headed, and what must remain human. The goal is to accelerate the shifts we need most in education, mental health, community resilience, and AI for society.
Build a clear mental model of how AI actually works.
Understand the systems beneath the surface.
Leave with language, judgment, and tools you can use immediately, whether you are supporting students, writing AI policy, advocating for responsible use, or building solutions for society.
Outputs are measured. The Human Agency Index (Judgment Capacity) tracks before-and-after shifts in judgment, discernment, and agency.
Sue Gangwani spent decades as the architect behind enterprise AI and predictive systems, designing infrastructure, leading cross-functional organizations, and advising on go-to-market strategy across youth mental health and education. These are sectors where algorithmic decisions shape the most vulnerable populations, and that inside view led her to a different problem: how humans stay agents in systems designed to shape them.
She founded UnAlgo to work on that problem directly, developing the Investigator Framework, experiential labs, and the Human Agency Index. These are measurement tools for what matters most and is hardest to see: judgment, discernment, and the capacity to think for yourself in AI-mediated environments. Through the AI Studio, she also guides organizations and practitioners in AI strategy, solution design, and building with intention.
Researcher, speaker, and advisor to educators and institutions navigating responsible AI. Studied applied AI and ethics at MIT and the University of Helsinki.
Measurement frameworks, field work, and curriculum research currently in development.
Whether you are navigating AI in your organization, exploring a research partnership, or want to bring UnAlgo to your community, reach out.