Securing the Copilot Era – Governance in the Age of AI
This session delivered a clear and cautionary message about the risks of adopting AI without proper governance. Led by Ramkumar Vellingiri, Digital Workplace Architect at Maleon and Microsoft MVP, the session walked through a scenario that felt all too familiar and highlighted why AI governance must be treated as a core responsibility.
A Scenario That Hits Close to Home
The session opened with a scenario many organizations can relate to. A group of developers eagerly tries out a new AI tool, and one of them uses it to help debug code. A week later, the company discovers that its crown-jewel source code has been leaked externally because of how the tool was used.
This is not a hypothetical example: a similar incident reportedly occurred at Samsung, where employees pasted confidential source code into ChatGPT.
The message was clear: AI governance matters and cannot be overlooked.
The Pillars of AI Governance
Ramkumar structured the session around five core pillars of AI governance, each addressing a critical risk area.
Pillar 1: Data Quality and Integrity
The first pillar focused on data. AI is only as good as the data behind it, following the principle of “garbage in, garbage out.”
The session highlighted a telling statistic: 69% of organizations say poor data quality undermines their decision-making.
Maintaining strong data quality and integrity is essential to ensure AI outputs are reliable and trustworthy.
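To make the "garbage in, garbage out" point concrete, here is a minimal Python sketch of the kind of validation an organization might run before letting an AI assistant index its content. The record fields, staleness threshold, and filtering logic are hypothetical illustrations, not a prescribed implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical record shape: dicts with "id", "title", "body", and a
# timezone-aware "last_modified" datetime.
REQUIRED_FIELDS = ("id", "title", "body", "last_modified")
MAX_AGE = timedelta(days=365)  # flag content untouched for over a year as stale

def quality_issues(record: dict) -> list[str]:
    """Return the data-quality problems found in a single record."""
    issues = [f"missing or empty field: {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    modified = record.get("last_modified")
    if isinstance(modified, datetime) and datetime.now(timezone.utc) - modified > MAX_AGE:
        issues.append("stale: not modified in over a year")
    return issues

def filter_for_indexing(records: list[dict]) -> list[dict]:
    """Keep records that pass the checks; report the rest for cleanup."""
    clean, seen_ids = [], set()
    for record in records:
        issues = quality_issues(record)
        if record.get("id") in seen_ids:
            issues.append("duplicate id")
        if issues:
            print(f"skipping {record.get('id')!r}: {'; '.join(issues)}")
        else:
            seen_ids.add(record["id"])
            clean.append(record)
    return clean
```

Checks like these are deliberately simple, but they show where a data-quality gate can sit in an AI pipeline: before content ever reaches the model.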
Pillar 2: Security and Privacy
The second pillar emphasized that security is foundational to safe AI usage.
Key points included:
- Clear ownership of security and privacy capabilities
- Correct file permissions and addressing oversharing
- Implementing DLP policies to block sensitive data leakage
- Restricting high-sensitivity SharePoint sites from Copilot processing
- Using encryption controls to protect high-sensitivity files
- Monitoring suspicious activity through audit logs
As Ramkumar stressed, security is not optional. Without it, AI can quickly become an organization’s biggest vulnerability.
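To illustrate the DLP point in the list above, here is a toy Python sketch of a pre-send check that refuses to forward a prompt to an external AI service when it matches a sensitive pattern. The patterns and function names are illustrative assumptions; a real deployment would rely on a purpose-built DLP engine such as Microsoft Purview rather than hand-rolled regexes:

```python
import re

# Toy patterns for illustration only; a real DLP engine uses far
# richer classifiers than these regexes.
SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API-key-like token": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def dlp_findings(text: str) -> list[str]:
    """Return the names of the sensitive-data patterns detected in `text`."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def guarded_prompt(text: str) -> str:
    """Refuse to forward a prompt to an external AI service if it trips a pattern."""
    findings = dlp_findings(text)
    if findings:
        raise ValueError(f"prompt blocked by DLP check: {', '.join(findings)}")
    return text  # considered safe to send on
```

With this in place, an ordinary debugging question passes through untouched, while a prompt that happens to contain something shaped like a credit card number or an API key is stopped before it ever leaves the organization.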
Pillar 3: Regulatory Compliance
The session then moved into regulatory risk. One example involved doctors and staff uploading protected health information (PHI) to AI tools for convenience and efficiency. The result was regulated data stored on external servers, creating serious compliance and trust issues.
To address this, organizations should:
- Take a risk-based approach to approving and adopting AI tools
- Conduct legal reviews before deploying AI solutions
- Keep strict records of AI-driven decisions to support future audits (a minimal logging sketch follows this list)
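As a minimal sketch of that record-keeping point, the snippet below appends each AI-assisted decision to a JSONL audit trail. The file name, fields, and hashing choice are assumptions for illustration; a production system would use a tamper-evident store with proper access controls:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_audit.jsonl"  # hypothetical append-only audit file

def record_ai_decision(tool: str, user: str, prompt: str, decision: str) -> None:
    """Append one AI-assisted decision to a JSONL audit trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        # Hash rather than store the raw prompt, in case it contains regulated data.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "decision": decision,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

# Example (hypothetical names): log that an AI-drafted summary was human-reviewed.
record_ai_decision("copilot-assistant", "jdoe",
                   "Draft a discharge summary", "approved after human review")
```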
Pillar 4: Ethical Use
Ethical considerations formed the fourth pillar of AI governance. Ramkumar emphasized the importance of:
- Bias testing
- Implementing guardrails aligned with responsible AI policies
- Keeping humans in the loop for sensitive scenarios and decisions
These steps help ensure AI is used responsibly and does not create unintended harm.
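On the bias-testing point, one common check, demographic parity, can be sketched in a few lines: compare positive-outcome rates across groups and flag large gaps. The sample data below are hypothetical, and real bias testing involves multiple metrics and domain judgment, but the shape of the check is this simple:

```python
from collections import defaultdict

def positive_rates_by_group(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of positive outcomes per group, from (group, positive?) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += positive
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes from an AI screening model: group A is favored 2/3
# of the time, group B only 1/3, giving a gap of ~0.33 to investigate.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = positive_rates_by_group(sample)
print(rates, demographic_parity_gap(rates))
```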
Pillar 5: Transparency and Explainability
The final pillar focused on trust. People are more likely to adopt AI securely when they understand how it works.
Transparency and explainability help build that trust and support more secure and thoughtful adoption across the organization.
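One simple way to operationalize this, sketched below, is to return every AI answer together with the sources it was grounded in and the model that produced it, so users can judge the output for themselves. All names and values here are illustrative assumptions:

```python
def explainable_answer(answer: str, sources: list[str], model: str) -> dict:
    """Package an AI answer with the context users need to judge it."""
    return {
        "answer": answer,
        "sources": sources,  # documents the answer was grounded in
        "model": model,      # which model or assistant produced it
        "note": "AI-generated; verify against the cited sources.",
    }

# All values below are illustrative assumptions.
print(explainable_answer(
    "The travel policy caps hotel rates at local per-diem levels.",
    ["https://intranet.example.com/policies/travel"],
    "Copilot (illustrative label)",
))
```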
Our Takeaways
This session reinforced that AI governance is not a single control, but a framework built on multiple, interconnected pillars.
Key takeaways included:
- AI governance must be addressed early and continuously
- Data quality, security, compliance, ethics, and transparency are all critical
- Strong governance prevents AI from becoming a major organizational risk
A big thank you to Ramkumar Vellingiri for delivering a practical and timely reminder of why AI governance must be foundational, not optional.