Drumsticks is an independent advisory firm shaping the frameworks, policies, and institutions that ensure artificial intelligence serves humanity.
Artificial intelligence is transforming every domain of human activity — from healthcare and finance to national security and democratic institutions. Yet the frameworks needed to govern these systems remain underdeveloped, fragmented, and often invisible to the people most affected.
Drumsticks bridges the gap between technical expertise and policy leadership. We advise governments, international organizations, and leading enterprises on building AI governance structures that are robust, transparent, and genuinely accountable.
Designing legislative and regulatory frameworks that govern AI deployment while preserving innovation capacity across sectors.
Rigorous evaluation of AI systems for capability risks, distributional harms, and alignment failures before and after deployment.
Building the oversight bodies, audit mechanisms, and accountability structures that make governance durable and enforceable.
Facilitating multilateral dialogue and treaty frameworks to ensure AI governance does not fragment along geopolitical lines.
Advanced AI systems concentrate economic and political power in ways that can destabilize democratic institutions. Proactive governance is the only check on this dynamic.
When consequential decisions are made by systems no one fully understands, accountability evaporates. Governance frameworks restore the human chain of responsibility.
The decisions made in the next five years will shape AI's trajectory for decades. Organizations that wait for consensus before acting cede the field to those who don't.
Former senior policy advisor at the OECD AI Policy Observatory. Led the design of national AI strategies for three G20 governments. Published widely on algorithmic accountability and democratic AI governance.
Constitutional law scholar turned AI governance researcher. Developed the first comparative framework for AI regulatory impact assessment across 18 jurisdictions. Advises the EU AI Office on fundamental rights implications.
Machine learning engineer with a decade at DeepMind and Anthropic. Specializes in interpretability methods and red-teaming frontier systems. Bridges the gap between technical safety research and actionable policy recommendations.
Whether you represent a government, an international organization, or an enterprise — we're ready to work with you.