SCCS CLIENT GUIDANCE

Posted on: 27/04/2026

Using AI Safely in ISO Management Systems

(Aligned to ISO 9001 / 14001 / 45001 and UKAS expectations)

1. PURPOSE

This guidance explains how organisations can use AI tools (e.g. ChatGPT, Copilot, document generators) within their management systems without compromising compliance, effectiveness, or certification outcomes.

It reflects current expectations that: 

  • AI can be used but must be controlled, validated, and understood 
  • Certification focuses on effectiveness and competence, not just documentation 
  • Existing ISO requirements still apply (AI does not change the standard)

2. KEY PRINCIPLE

AI can support your management system but cannot replace understanding, ownership, or accountability.

3. ACCEPTABLE USE OF AI IN ISO SYSTEMS

You may use AI for:

  • Drafting policies and procedures
  • Structuring documentation
  • Generating ideas for risks, objectives, or controls
  • Improving clarity and formatting

ISO itself recognises AI as a tool to support drafting and research, provided outputs are checked and validated.

4. UNACCEPTABLE / HIGH-RISK USE

You must NOT: 

  • Use AI outputs without review or validation
  • Rely on AI instead of:
    • competence
    • decision-making
    • management responsibility
  • Implement generic AI-generated systems without tailoring

Doing any of the above creates a high risk of non-conformity due to a lack of effectiveness and competence.

5. REQUIRED CONTROLS WHEN USING AI

To remain conforming, your organisation must demonstrate:

5.1 Validation of AI Outputs

You must:
  • Review all AI-generated content
  • Confirm:
    • accuracy
    • relevance
    • alignment with ISO requirements 
  • Ensure subject-matter expert approval

AI outputs must not be accepted “as-is”.

5.2 Contextualisation

Your management system must reflect:

  • Your actual processes
  • Your risks and hazards
  • Your environmental impacts
  • Your organisational context

Generic templates carry a high risk of non-conformity.

5.3 Ownership

You must define:
  • Process owners
  • Document owners
  • Responsibility for maintaining content

AI cannot be the “owner” of any process.

5.4 Competence

Personnel must be able to:

  • Explain their processes
  • Describe risks and controls
  • Demonstrate understanding of ISO requirements

Competence is a core ISO requirement and must be demonstrated in practice, not just on paper.

5.5 Evidence of Implementation

You must show:

  • Records
  • Monitoring results
  • Internal audit findings
  • Corrective actions

A well-written document alone is NOT sufficient evidence of conformity.

5.6 Control of AI Use (Recommended Best Practice)

You should:

  • Define where AI is used
  • Assess risks of AI use 
  • Control:
    • data inputs (confidentiality)
    • outputs (accuracy)
  • Periodically review AI-generated content

This aligns with emerging AI governance expectations such as ISO/IEC 42001, which emphasises risk management, transparency, and accountability in AI use.

6. WHAT AUDITORS WILL LOOK FOR

During audits, you should expect questions such as:

About documentation 

  • “Was AI used to create this?” 
  • “How was it reviewed and approved?”

About understanding 

  • “Explain this process in your own words” 
  • “What risk does this control address?”

About implementation

  • “Show me where this procedure is used” 
  • “What evidence do you have it is effective?”

7. COMMON PITFALLS (and how to avoid them)

Pitfall                        | Risk                           | How to avoid
-------------------------------|--------------------------------|------------------------------
Copying AI-generated policies  | Generic, non-applicable system | Tailor to your organisation
No validation process          | Incorrect / misleading content | Implement a formal review
Staff don't understand system  | Major non-conformity           | Train and test competence
Over-documentation             | Ineffective system             | Keep it simple and relevant
AI-generated risk registers    | Unrealistic risks/controls     | Base on real operations

8. PRACTICAL EXAMPLES

Poor practice

  • AI generates a risk assessment
  • No review
  • Staff cannot explain risks

Likely audit findings:

  • Lack of competence
  • Ineffective planning

Good practice

  • AI used to draft risk list 
  • Reviewed by process owner
  • Risks tailored to site operations
  • Controls implemented and monitored

This demonstrates:

  • competence 
  • effectiveness 
  • control

9. ALIGNMENT WITH FUTURE EXPECTATIONS

AI use is increasingly expected to be managed systematically.

Standards such as ISO/IEC 42001 require organisations to:

  • Identify and manage AI risks
  • Ensure transparency and accountability
  • Maintain competence and governance over AI systems

Even if you are not certified to ISO 42001, these principles are becoming best practice.

10. SUMMARY

To safely use AI in your ISO system: 

  • Use AI as a support tool, not a replacement
  • Always validate and tailor outputs
  • Ensure people understand the system
  • Demonstrate real implementation and effectiveness
  • Maintain ownership and control

FINAL MESSAGE

If your system looks perfect but your people don’t understand it, it will not pass audit.