Welcome to the ISO Survival Kit — an expert-led blog series created to help you conquer your ISO audits with confidence.
In each edition, we break down the most common non-conformities that lead auditors uncover, so you can fix them before they become roadblocks.
In this installment, we’re diving deep into ISO 42001, the new gold standard for Artificial Intelligence Management Systems (AIMS). As organizations rapidly implement AI tools, many face hidden risks that emerge during an ISO 42001 audit — from governance gaps to ethical oversights.
To help businesses navigate these challenges, we interviewed over 200 certified ISO auditors, including experts in ISO 42001 lead auditor training, and analyzed countless real-world audit reports across sectors.
The result? A curated ISO 42001 audit checklist featuring the 100 most common non-conformities, complete with practical fixes, case-based scenarios, and insights you won’t find in a standard ISO internal audit template.
Let’s get into it.
📌 Clause: 6.1 – Actions to Address Risks and Opportunities
What’s going wrong?
Many organizations develop or deploy AI models without a formalized process to assess associated risks—technical, legal, societal, or ethical. Some perform informal brainstorming, others rely on ad-hoc documentation, but few implement a structured and repeatable framework.
Why this is a problem:
ISO 42001 requires proactive identification and mitigation of AI risks before deployment. Without a structured risk process, organizations are essentially gambling with bias, discrimination, hallucinations, or non-compliance—all of which pose reputational, financial, and legal threats.
How to fix it:
✔ Build a dedicated AI risk register that logs all potential risks, likelihood, and impact.
✔ Use frameworks like ALTAI (Assessment List for Trustworthy AI) or integrate with ISO 31000.
✔ Include risks across data integrity, algorithm transparency, explainability, accountability, and fairness.
✔ Assign cross-functional stakeholders—IT, legal, ethics, and business—to conduct reviews.
✔ Reassess risks periodically, especially after model updates or retraining.
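To make the checklist above concrete, here is a minimal sketch of what a structured, repeatable risk-register entry could look like in Python. The field names, scoring scale, and 90-day review window are illustrative assumptions, not values prescribed by ISO 42001:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRisk:
    """One entry in an AI risk register (illustrative structure)."""
    risk_id: str
    description: str
    category: str          # e.g. "bias", "transparency", "legal"
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    owner: str             # accountable cross-functional stakeholder
    last_reviewed: date

    @property
    def score(self) -> int:
        # Simple likelihood x impact rating used to prioritize mitigation
        return self.likelihood * self.impact

    def needs_review(self, today: date, max_age_days: int = 90) -> bool:
        # Flag entries not reassessed since the last review cycle
        return (today - self.last_reviewed).days > max_age_days

register = [
    AIRisk("R-001", "Training data under-represents minority groups",
           "bias", likelihood=4, impact=5, owner="AI Ethics Lead",
           last_reviewed=date(2024, 1, 10)),
]
high_priority = [r for r in register if r.score >= 15]
```

In practice the register would live in a governance tool or shared tracker; the point is that likelihood, impact, ownership, and review dates are captured in a consistent, auditable structure rather than ad-hoc notes.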
📌 Clause: 5.3 – Organizational Roles, Responsibilities, and Authorities
What’s going wrong?
AI systems are often built in relative isolation, with unclear responsibility boundaries across departments. While technical teams focus on building and deployment, no one is directly responsible for ongoing oversight, compliance, or ethical integrity.
Why this is a problem:
ISO 42001 requires clear assignment of responsibilities within the management system. If auditors cannot trace accountability from training data through to model decisions, they will flag it as a serious governance failure. Unclear ownership also hampers incident response and the ability to adapt to regulatory change.
How to fix it:
✔ Define and document roles such as AI Compliance Officer, Model Owner, AI Ethics Lead, etc.
✔ Integrate these roles into your existing RACI matrix (Responsible, Accountable, Consulted, Informed).
✔ Provide specific training and authority to these roles so they can enforce standards.
✔ Ensure every AI use case is mapped to its accountable owner.
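As a sketch of the last two points, the use-case-to-owner mapping can be kept as simple structured data and checked automatically. The role names below mirror the examples above and are illustrative, not titles mandated by the standard:

```python
# Illustrative RACI mapping for AI use cases (role names are assumptions)
RACI = {
    "credit-scoring-model": {
        "Responsible": ["Model Owner"],
        "Accountable": ["AI Compliance Officer"],
        "Consulted":   ["AI Ethics Lead", "Legal"],
        "Informed":    ["Business Unit Head"],
    },
}

def unowned_use_cases(raci: dict) -> list[str]:
    """Return use cases lacking exactly one accountable owner --
    the kind of gap an auditor would flag under clause 5.3."""
    return [uc for uc, roles in raci.items()
            if len(roles.get("Accountable", [])) != 1]

missing = unowned_use_cases(RACI)  # empty when every use case has one accountable owner
```

A check like this can run in CI or as part of a periodic governance review, so that any new AI use case without a named accountable owner is caught early.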
📌 Clause: 8.3 – Data Management
What’s going wrong?
Many AI projects ingest data from third-party sources, legacy systems, or web-scraped content without assessing its validity, bias, or regulatory compliance. Worse, there’s often no clear documentation of where data came from, how it was cleaned, or how it's stored.
Why this is a problem:
AI is only as good as its training data. With poor data come poor results—biased predictions, invalid recommendations, and unsafe automation. From an auditor's point of view, such a lack of transparency may signal systemic weaknesses in your AI governance.
How to fix it:
✔ Establish a Data Provenance Policy that documents the origin, processing, and destination of all training and operational datasets.
✔ Validate datasets using predefined data quality metrics (completeness, consistency, accuracy).
✔ Monitor for data drift post-deployment and retrain models accordingly.
✔ Ensure alignment with data privacy laws (e.g., GDPR, HIPAA) during collection and storage.
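A predefined data quality metric can be as simple as a completeness check that gates ingestion. This is a stdlib-only sketch; the field names and the suggested threshold are assumptions, not requirements from the standard:

```python
def completeness(records: list[dict], required: list[str]) -> float:
    """Fraction of records in which every required field is present and non-null."""
    if not records:
        return 0.0
    ok = sum(all(r.get(f) is not None for f in required) for r in records)
    return ok / len(records)

batch = [
    {"age": 34, "income": 52000, "consent": True},
    {"age": None, "income": 48000, "consent": True},  # incomplete record
]
score = completeness(batch, ["age", "income", "consent"])  # 0.5
# Ingestion could be gated on a predefined threshold, e.g. score >= 0.95
```

Consistency and accuracy checks follow the same pattern: define the metric up front, compute it on every batch, and record the result so an auditor can see that data quality was actually measured, not assumed.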
📌 Clause: 4.3 – Determining the Scope of the AIMS
What’s going wrong?
Ethics is treated as a “soft” or optional consideration. Most organizations focus on technical feasibility and performance metrics—completely skipping ethical evaluation of potential harms or unintended consequences.
Why this is a problem:
ISO 42001 places ethics at the heart of AI governance. Skipping ethical evaluation means falling short not only of the standard itself but also of growing regulatory expectations such as the EU AI Act. Auditors will look for genuine ethical oversight, which is much more than just paperwork.
How to fix it:
✔ Set up a formal AI Ethics Review Board composed of internal and external stakeholders.
✔ Create an AI Impact Assessment (AIIA) template that must be completed before development begins.
✔ Review projects for risks related to bias, discrimination, human rights, autonomy, and environmental impact.
✔ Document decisions transparently and make trade-offs traceable.
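One way to make the AIIA a hard gate rather than paperwork is to check its completion programmatically before a project kicks off. The section names below are illustrative assumptions, not fields prescribed by ISO 42001:

```python
# Illustrative AI Impact Assessment (AIIA) template sections (assumed names)
AIIA_SECTIONS = [
    "intended_use", "affected_groups", "bias_risks",
    "human_rights_impact", "environmental_impact",
    "mitigations", "reviewer_signoff",
]

def aiia_complete(assessment: dict) -> bool:
    """Gate: development should not begin until every section is filled in."""
    return all(assessment.get(s) for s in AIIA_SECTIONS)

draft = {
    "intended_use": "CV screening for recruitment",
    "bias_risks": "proxy features correlated with gender",
}
ready = aiia_complete(draft)  # False: missing sections block project kickoff
```

The same completed assessment then doubles as the transparent, traceable record of trade-off decisions that auditors expect to see.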
📌 Clause: 9.1 – Monitoring, Measurement, Analysis, and Evaluation
What’s going wrong?
Most teams, however, move on once the system is deployed. Few have monitoring processes in place to gauge whether the model's predictions remain accurate, fair, and safe, especially in changing data environments.
Why this is a problem:
AI systems do not stay good forever; they degrade over time. Without proactive monitoring, organizations face performance drops, decision errors, or systemic harm, all of which carry real financial cost. Auditors want to see formal KPIs, drift detection, and retraining logs, not just verbal assurances.
How to fix it:
✔ Define AI-specific KPIs such as prediction accuracy, fairness score, false positive rate, and drift metrics.
✔ Use model monitoring tools to detect changes in behavior or input distribution.
✔ Set thresholds for automatic alerts and retraining triggers.
✔ Include post-deployment evaluation as a mandatory checkpoint in your development cycle.
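For drift detection specifically, one widely used metric is the Population Stability Index (PSI), which compares the distribution of a feature at training time against live traffic. Here is a stdlib-only sketch; the 0.2 alert threshold is a common rule of thumb, not an ISO requirement:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a training-time and a live
    feature distribution (sketch; higher values mean more drift)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fractions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small epsilon for empty bins avoids log(0)
        return [(c or 0.5) / len(xs) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [float(i % 10) for i in range(1000)]
live_ok = [float(i % 10) for i in range(1000)]
drift = psi(train, live_ok)  # identical distributions -> PSI of 0.0 (no drift)
# Rule of thumb: PSI > 0.2 often triggers an alert and a retraining review
```

Wiring a metric like this into scheduled monitoring, with logged results and threshold-based alerts, gives the auditor exactly the evidence trail this clause asks for.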
The five issues above are just the tip of the iceberg.
To help you pass your ISO 42001 audit and build trustworthy AI systems, we’ve compiled a complete list of 100 common non-conformities with real-world context and actionable solutions for each.
🎯 Don’t wait until an auditor points them out—fix them now.
ISO 42001 isn’t just another compliance box to tick—it's a foundational framework for building AI systems that are trustworthy, ethical, and resilient.
As AI technologies evolve, so do the risks. What separates audit-ready organizations from the rest isn’t flashy tech or complex models—it’s governance, accountability, and readiness to adapt.
The non-conformities we’ve outlined are the most common—but also the most fixable. With the right controls, clear ownership, and a structured AI Management System, your team can not only pass the audit but lead in responsible AI deployment.
Remember: It’s not a matter of if you’ll be audited—it’s when. Don’t wait until the findings are in.
Stay tuned for the next edition of the ISO Survival Kit, where we uncover the most overlooked documentation gaps in ISO 42001—and how to patch them before they become critical.
Stay up to date with the latest news, trends, and resources from GSDC.
If you liked this read, make sure to check out our previous blog: Cracking Onboarding Challenges: Fresher Success Unveiled.
Not sure which certification to pursue? Our advisors will help you decide!