Building an AI-Powered Election Tool: Speed, Safety, and Transparency in Practice 

Feb 1, 2026 | Microsoft 365 Copilot

This session explored how AI can be used responsibly to support democratic processes. Led by Daniel Etten, Cloud Solution Architect at Microsoft, the session walked through the creation of a website for the Dutch elections that helps voters determine their own priorities, rather than forcing them into predefined frameworks. 

The Problem: When Choice Becomes Overwhelming 

Daniel began by outlining the core problem with traditional voting tools. Many force voters into a predetermined set of priorities instead of allowing them to explore what truly matters to them. 

The challenge was amplified by the complexity of the Dutch elections: 

  • Voters could choose from 27 different political parties 
  • Some election programs were over 100 pages long 

This raised a key question: how can voters meaningfully engage with this information without being overwhelmed or guided toward artificial conclusions? 

The Idea: Let Voters Define Their Own Priorities 

The idea was to build a website that supports voters in determining their own priorities, rather than ranking or recommending parties for them. The entire project moved at remarkable speed, with the full process taking place between September 9 and September 27. 

Early validation came from conversations with co-workers, family, and friends, helping shape the direction of the solution before development began. 

From Concept to Live Website 

Development started with GitHub Spark to generate an initial draft of the website. From there, Daniel continued refining the solution using VS Code with Copilot. 

The process included: 

  • Soft launching the website to support testing efforts 
  • Iterating based on findings 
  • Going live 

GitHub Spark in Action 

Daniel shared how he used GitHub Spark, feeding it a 460-word prompt with detailed instructions on how the website should be built. This covered both technical and functional requirements, such as: 

  • A React and Tailwind frontend 
  • A chatbot interface 
  • A blog section 
  • Layout, navigation, and additional components 

GitHub Spark provided: 

  • Rapid productization with AI 
  • Full access to source code 
  • Support for backend integration 

The session also covered important considerations when using this approach: 

  • SEO requires additional setup 
  • Client-side rendering is the default 
  • Secrets must be managed via the backend 
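The last point can be sketched as a thin backend layer: the model API key lives in a server-side environment variable and never reaches the client-rendered frontend. The endpoint URL and variable name below are placeholders, not details from the session.

```python
import os
import json
import urllib.request

API_URL = "https://example.invalid/v1/chat"  # placeholder upstream endpoint

def build_upstream_request(user_message: str) -> urllib.request.Request:
    """Build the server-side model call; the browser only ever talks to our backend."""
    # The secret is read server-side and is never shipped to the client bundle.
    key = os.environ["MODEL_API_KEY"]  # hypothetical variable name
    body = json.dumps({"message": user_message}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Authorization": f"Bearer {key}", "Content-Type": "application/json"},
    )
```

Because GitHub Spark defaults to client-side rendering, any value bundled into the frontend is visible to users, which is why the key has to stay behind a backend like this.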

Daniel highlighted several limitations as well: 

  • Outdated training data for political parties 
  • Generic, pattern-based answers 
  • No built-in source grounding 

To address these gaps, Retrieval Augmented Generation (RAG) was used to support more specialized and accurate knowledge. 
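The retrieval step of a RAG setup like this can be sketched in a few lines: split the party programs into passages, pull the most relevant ones for a question, and feed only those excerpts to the model with an explicit grounding rule. The passages and the word-overlap scoring below are illustrative, not the site's actual pipeline.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase word set, with simple punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the most word overlap with the question."""
    q = tokenize(question)
    scored = sorted(passages, key=lambda p: len(q & tokenize(p)), reverse=True)
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the model in retrieved excerpts instead of its training data."""
    context = "\n".join(f"- {p}" for p in retrieve(question, passages))
    return (
        "Answer only from the excerpts below; say so if they are insufficient.\n"
        f"Excerpts:\n{context}\nQuestion: {question}"
    )
```

Grounding answers in the current election programs this way sidesteps both the outdated training data and the lack of built-in sourcing.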

Managing Risk: Safety Over Speed 

A major theme of the session was the importance of boundaries in AI systems. Daniel emphasized the need to carefully manage AI creativity to avoid unintended bias. 

One example showed how the AI framed one political party as the “Belle” and another as the “Beast,” introducing narrative bias. This led to the implementation of transparent refusals via system prompts. 
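A guardrail of the kind described might look like the sketch below: the system prompt forbids loaded metaphors and instructs the model to refuse openly rather than comply silently. The exact wording used on the site was not shared; this phrasing is hypothetical.

```python
# Hypothetical system prompt implementing "transparent refusals":
# the model is told to state its refusal and the reason, not to dodge quietly.
SYSTEM_PROMPT = (
    "You are a neutral voting-information assistant. "
    "Never characterize a political party with metaphors, nicknames, or "
    "value-laden labels (for example hero/villain framings). "
    "If a request requires such framing, refuse and explain why, e.g.: "
    "'I can't compare parties in those terms, because it would introduce bias.'"
)

def wrap_messages(user_message: str) -> list[dict]:
    """Attach the guardrail system prompt to every chat turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```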

Prompt injection attacks were another key risk discussed. One example prompt forced the chatbot to produce ranked outputs, bypassing the educational context and creating false certainty for users. Importantly, this issue was identified and fixed within six hours. 

Through testing other AI-powered election websites, Daniel observed a clear tradeoff between speed and safety. His approach deliberately favored safety to support thoughtful and informed engagement. 

Radical Transparency as a Design Choice 

Daniel took a stance of radical transparency, openly sharing information about security incidents, bugs, ethical dilemmas, and costs. 

This approach delivered clear benefits. 

For users: 

  • Built trust through honesty 
  • Removed black box skepticism 
  • Empowered users to report bugs 

For the community: 

  • Helped prevent repeating the same mistakes 
  • Encouraged open collaboration 
  • Raised the overall bar 

For democracy: 

  • Avoided hidden manipulation 
  • Enabled accountable AI development 
  • Supported AI education 
  • Helped define standards for AI usage in political tools

 

Our Takeaways 

This session reinforced that building AI responsibly requires intention, discipline, and openness. 

Key takeaways included: 

  • Boundaries matter for secure and ethical AI usage 
  • Guidance, context, and nuance matter more than raw speed 
  • Radical transparency builds trust 
  • The data used is more important than the model itself 
  • Testing on real devices and sharing findings openly benefits the entire community 

A big thank you to Daniel Etten for sharing a thoughtful, real-world example of how AI can be applied responsibly in high-stakes environments like democratic elections. 
