By: Matt Kalmick
As artificial intelligence continues to reshape the workplace, organizations across industries face a new frontier of risk management. Unlike compliance areas that evolved over decades, AI regulation is developing at breakneck speed. The EU AI Act has already set a global precedent, while states like California, New York, and Texas continue to advance their own AI frameworks. Organizations that wait for greater clarity before planning their compliance response will find themselves scrambling to catch up, as many companies did with GDPR implementation.
For many businesses, the question is where to start. The steps below are high-impact measures you can take immediately to prepare your business. They do not require heavy investment, and many build on the good governance practices most responsible businesses already use. Even basic steps like inventorying your current AI tools, establishing approval processes for AI implementations, and designating a point person for AI oversight can provide substantial protection. The key is to start somewhere and build systematically.
Step 1: Establish AI Inventory and Risk Assessment
Many organizations are already using AI without formal governance structures, or even without leadership's knowledge. Your first priority should be conducting a comprehensive AI inventory.
Identify existing AI systems: Include everything from chatbots and recommendation engines to automated decision-making tools.
Map data flows: Document what data feeds into AI systems and where outputs are used.
Assess risk levels: Categorize systems based on potential impact on individuals, business operations, and regulatory compliance.
Document decision points: Identify where AI influences or makes decisions affecting customers, employees, or business outcomes.
This inventory should be a living document, updated as new AI tools are deployed. In my experience managing regulatory risk, having comprehensive documentation throughout the business lifecycle is one of your strongest defenses against downstream compliance challenges.
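To make this concrete, below is a minimal sketch of what an inventory record might look like if tracked in code. The field names, risk tiers, and example system are all hypothetical assumptions; a shared spreadsheet can serve the same purpose when you are starting out.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk tiers; align these with your own framework
# (for example, the EU AI Act's risk categories).
RISK_TIERS = ("high", "medium", "low")

@dataclass
class AISystemRecord:
    """One entry in a living AI inventory (all field names are hypothetical)."""
    name: str                   # e.g., "Support chatbot"
    owner: str                  # accountable business contact
    data_inputs: list[str]      # what data feeds the system
    output_usage: str           # where the system's outputs are used
    affects_individuals: bool   # does it influence decisions about people?
    risk_tier: str = "medium"
    last_reviewed: date = field(default_factory=date.today)

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"risk_tier must be one of {RISK_TIERS}")

# Example entry for a high-impact, people-affecting system
inventory = [
    AISystemRecord(
        name="Resume screening tool",
        owner="HR Operations",
        data_inputs=["applicant resumes", "job descriptions"],
        output_usage="shortlisting candidates for recruiter review",
        affects_individuals=True,
        risk_tier="high",
    ),
]
```

Even a structure this simple keeps the inventory auditable: you can sort by risk tier and flag any record whose last_reviewed date has gone stale.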
Step 2: Develop AI Use Policies and Standards
Effective AI governance requires establishing policies and guardrails that balance innovation with risk management. Key policy areas to consider at this stage include:
Acceptable use standards: Define appropriate and prohibited AI applications.
Data governance: Establish requirements for data quality, bias testing, and privacy protection.
Human oversight requirements: Specify at which steps human review is needed.
Vendor management: Create due diligence standards for third-party AI tools.
Incident response: Define procedures for AI system failures or unintended outcomes.
Step 3: Build Cross-Functional AI Governance Structure
The best compliance programs incorporate the perspectives of diverse stakeholders, and AI governance benefits from this approach as well. Begin discussing the key roles and responsibilities of the individuals who will make up this center of excellence for your organization.
AI Governance Committee: Senior leadership provides strategic oversight
Technical AI Review Board: IT and data professionals evaluate system performance
Legal and Compliance: Provide regulatory interpretation and risk assessment
Business Unit Representatives: Operational impact and user experience insights
Privacy and Security: Data protection and cybersecurity considerations
Step 4: Implement Privacy-by-Design for AI Systems
AI amplifies privacy risks that may already exist in your organization, so privacy considerations should be embedded in AI governance from the start. Essential privacy considerations at this stage could include the following (a brief sketch after the list shows how the first two can work in practice):
Data minimization: Collect and process only data necessary for AI objectives
Purpose limitation: Ensure AI uses align with original data collection purposes
Transparency: Provide clear explanations of AI decision-making processes
Individual rights: Establish procedures for data subject requests related to AI processing
Cross-border considerations: Address international data transfer requirements for AI training and deployment
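As a rough illustration of how data minimization and purpose limitation can move from policy language into engineering practice, here is a short Python sketch. The purpose registry, purposes, and field names are hypothetical assumptions, not requirements drawn from any particular regulation.

```python
# Hypothetical purpose registry: the fields each AI use case may consume.
# Making this explicit turns purpose limitation into something testable.
ALLOWED_FIELDS_BY_PURPOSE = {
    "fraud_detection": {"transaction_amount", "merchant_id", "timestamp"},
    "support_chatbot": {"ticket_text", "product_id"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields approved for the stated purpose (data minimization)."""
    if purpose not in ALLOWED_FIELDS_BY_PURPOSE:
        raise ValueError(f"No approved data use registered for purpose: {purpose}")
    allowed = ALLOWED_FIELDS_BY_PURPOSE[purpose]
    return {k: v for k, v in record.items() if k in allowed}

# Fields like 'email' are dropped before the record ever reaches the model.
raw = {"transaction_amount": 42.0, "merchant_id": "M123",
       "timestamp": "2025-01-01T00:00:00Z", "email": "user@example.com"}
print(minimize(raw, "fraud_detection"))
# {'transaction_amount': 42.0, 'merchant_id': 'M123', 'timestamp': '2025-01-01T00:00:00Z'}
```

The design point is that the approved fields live in one explicit registry, so purpose limitation can be reviewed and tested rather than living only in a policy document.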
Step 5: Establish AI Vendor Management Framework
Third-party AI tools require enhanced due diligence. Many organizations overlook vendor management as an AI governance priority, but external AI services often present the highest risk. When engaging these partners, consider the following (illustrated in the sketch after this list):
Algorithmic transparency: Understanding of AI model functionality and limitations
Data handling practices: Compliance with privacy and security requirements
Audit rights: Ability to review AI system performance and compliance
Liability allocation: Clear contractual terms for AI-related incidents
Regulatory compliance: Vendor adherence to applicable AI regulations
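As one way to operationalize these criteria, here is a minimal checklist sketch. The criterion names and the vendor are hypothetical, and most organizations will track this in procurement or GRC tooling rather than in code.

```python
# Hypothetical due diligence checklist mirroring the criteria above.
VENDOR_CRITERIA = [
    "algorithmic_transparency",
    "data_handling_practices",
    "audit_rights",
    "liability_allocation",
    "regulatory_compliance",
]

def assess_vendor(name: str, answers: dict) -> str:
    """Flag any vendor that fails a criterion or leaves one unanswered."""
    gaps = [c for c in VENDOR_CRITERIA if not answers.get(c, False)]
    if gaps:
        return f"{name}: escalate for review (gaps: {', '.join(gaps)})"
    return f"{name}: meets baseline due diligence"

print(assess_vendor("ExampleAI Corp", {
    "algorithmic_transparency": True,
    "data_handling_practices": True,
    "audit_rights": False,  # no contractual audit right negotiated yet
    "liability_allocation": True,
    "regulatory_compliance": True,
}))
# ExampleAI Corp: escalate for review (gaps: audit_rights)
```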
Conclusion
Compliance professionals understand that effective governance requires balancing innovation with risk management. By following these steps and setting simple goals, such as scheduling your AI inventory today, you can begin to build a robust governance framework that adapts to evolving regulations and makes the most of AI advancement.
About the Author
Matt Kalmick, J.D.
I'm a strategic and collaborative leader passionate about building compliance programs that reduce risk and remove regulatory barriers.
From financial services to FinTech and SaaS to cannabis, I have been managing risk and compliance in highly regulated environments for the last 15 years.
I received my Juris Doctor from Boston College Law School, my Bachelor's Degree from Drew University, and my Certified Information Privacy Professional (CIPP) certification from the International Association of Privacy Professionals (IAPP).