Course overview

The ISTQB Certified Tester AI Testing course provides a comprehensive understanding of the challenges, techniques, and ethics involved in testing AI systems. As AI continues to shape the future of technology, this certification equips you with the skills to validate intelligent systems effectively, ethically, and reliably.
Designed for software testers, QA engineers, and AI practitioners, the course blends theoretical knowledge with practical application, making it the ideal next step for anyone wanting to become a certified AI tester.
Course highlights
- Duration: 4-day intensive training (9:00am – 5:00pm)
- Format: live virtual training
- Exam preparation: includes structured revision and support to help you pass the ISTQB AI Testing exam
- Content: aligned with the official ISTQB CT-AI syllabus; covers AI fundamentals, ML models, risks, ethical considerations, and test strategies
- Accreditation: fully accredited by UKITB on behalf of ISTQB
- SFIAplus level: rated at level 3 by BCS
Why you should attend
- Learn how to effectively test complex AI systems and machine learning models
- Understand the unique quality risks associated with AI testing, including bias, transparency, and unpredictability
- Equip yourself with internationally recognised ISTQB AI Testing certification
- Boost your career as a certified AI tester in a high-demand, future-focused discipline
- Train with expert tutors who bring real-world experience in testing AI
Who should attend
- Software testers and QA professionals working with or transitioning into AI projects
- Test analysts, engineers, and technical leads needing AI-specific testing knowledge
- Professionals preparing for the ISTQB Certified Tester AI Testing exam (CT-AI)
- Developers, data scientists, and ML practitioners involved in testing and quality assurance
- Anyone seeking to become a certified AI tester and gain a deeper understanding of AI testing practices
Watch our latest webinar
Artificial Intelligence is transforming how we build and use software. While traditional testing techniques still apply, testing AI introduces new complexities around bias, unpredictability, and explainability.
To help you navigate this shift, we hosted a special webinar with AI testing expert Rosie Sheldon. In the session, Rosie explores the ISTQB Certified Tester AI Testing course—what it covers, why it matters, and how it benefits both individual testers and organisations adopting AI-driven solutions. Watch the webinar recording here.
About ISTQB and the certified tester scheme
Founded in 2002, ISTQB (International Software Testing Qualifications Board) is a not-for-profit association comprising 66 national boards (including the UKITB) that together provide worldwide coverage.
ISTQB has defined the CT-AI certification as part of its ‘Certified Tester’ scheme, which, with over 240,000 certifications issued, has become the de facto worldwide standard for software testing qualifications.
The ‘Certified Tester’ scheme:
- Provides a set of professional qualifications widely recognised by employers, customers and peers;
- Enables software suppliers to hire and train certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment and development policies; and
- Enables comparison of testing skills across different countries, testers to move across country borders more easily, and multi-national/international projects to have a common understanding of testing issues.
Entry requirements
To take the CT-AI qualification, candidates must hold the ISTQB Foundation certificate; a minimum of 18 months' testing experience is also recommended.
Course objectives
Individuals who hold the ISTQB® Certified Tester – AI Testing certification should be able to accomplish the following business outcomes:
- Understand the current state and expected trends of AI
- Experience the implementation and testing of an ML model and recognize where testers can best influence its quality
- Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency and explainability
- Contribute to the test strategy for an AI-based system
- Design and execute test cases for AI-based systems
- Recognize the special requirements for the test infrastructure to support the testing of AI-based systems
- Understand how AI can be used to support software testing
In addition, Certified AI Testers should be able to demonstrate their skills in the following areas once they have completed the course and passed the exam:
- Describe the AI effect and show how it influences the definition of AI
- Distinguish between narrow AI, general AI, and super AI
- Differentiate between AI-based systems and conventional systems
- Recognize the different technologies used to implement AI
- Identify popular AI development frameworks
- Compare the choices available for hardware to implement AI-based systems
- Explain the concept of AI as a Service (AIaaS)
- Explain the use of pre-trained AI models and the risks associated with them
- Describe how standards apply to AI-based systems
- Explain the importance of flexibility and adaptability as characteristics of AI-based systems
- Explain the relationship between autonomy and AI-based systems
- Explain the importance of managing evolution for AI-based systems
- Describe the different causes and types of bias for AI-based systems
- Discuss the ethical principles that should be respected in the development, deployment and use of AI-based systems
- Explain the occurrence of side effects and reward hacking in AI-based systems
- Explain how transparency, interpretability and explainability apply to AI-based systems
- Recall the characteristics that make it difficult to use AI-based systems in safety-related applications
- Describe classification and regression as part of supervised learning
- Describe clustering and association as part of unsupervised learning
- Describe reinforcement learning
- Summarize the workflow used to create an ML system
- Given a project scenario, identify an appropriate ML approach (from classification, regression, clustering, association, or reinforcement learning)
- Explain the factors involved in the selection of ML algorithms
- Summarize the concepts of underfitting and overfitting
- Demonstrate underfitting and overfitting
- Describe the activities and challenges related to data preparation
- Perform data preparation in support of the creation of an ML model
- Contrast the use of training, validation and test datasets in the development of an ML model
- Identify training and test datasets and create an ML model
- Describe typical dataset quality issues
- Recognize how poor data quality can cause problems with the resultant ML model
- Recall the different approaches to the labelling of data in datasets for supervised learning
- Recall reasons for the data in datasets being mislabelled
- Calculate the ML functional performance metrics from a given set of confusion matrix data (a worked sketch follows this list)
- Contrast and compare the concepts behind the ML functional performance metrics for classification, regression and clustering methods
- Summarize the limitations of using ML functional performance metrics to determine the quality of the ML system
- Select appropriate ML functional performance metrics and/or their values for a given ML model and scenario
- Evaluate the created ML model using selected ML functional performance metrics
- Explain the use of benchmark suites in the context of ML
- Explain the structure and working of a neural network including a DNN
- Experience the implementation of a perceptron (a minimal sketch follows this list)
- Describe the different coverage measures for neural networks
- Explain how system specifications for AI-based systems can create challenges in testing
- Describe how AI-based systems are tested at each test level
- Recall those factors associated with test data that can make testing AI-based systems difficult
- Explain automation bias and how this affects testing
- Describe the documentation of an AI component and understand how documentation supports the testing of AI-based systems
- Explain the need to frequently test the trained model to handle concept drift
- For a given scenario, determine the test approach to be followed when developing an ML system
- Explain the challenges in testing created by the self-learning of AI-based systems
- Explain how autonomous AI-based systems are tested
- Explain how to test for bias in an AI-based system
- Explain the challenges in testing created by the probabilistic and non-deterministic nature of AI-based systems
- Explain the challenges in testing created by the complexity of AI-based systems
- Describe how the transparency, interpretability and explainability of AI-based systems can be tested
- Use a tool to show how explainability can be used by testers
- Explain the challenges in creating test oracles resulting from the specific characteristics of AI-based systems
- Select appropriate test objectives and acceptance criteria for the AI-specific quality characteristics of a given AI-based system
- Explain how the testing of ML systems can help prevent adversarial attacks and data poisoning
- Explain how pairwise testing is used for AI-based systems
- Apply pairwise testing to derive and execute test cases for an AI-based system (see the sketch after this list)
- Explain how back-to-back testing is used for AI-based systems
- Explain how A/B testing is applied to the testing of AI-based systems
- Apply metamorphic testing for the testing of AI-based systems
- Apply metamorphic testing to derive test cases for a given scenario and execute them (see the sketch after this list)
- Explain how experience-based testing can be applied to the testing of AI-based systems
- Apply exploratory testing to an AI-based system
- For a given scenario, select appropriate test techniques when testing an AI-based system
- Describe the main factors that differentiate the test environments for AI-based systems from those required for conventional systems
- Describe the benefits provided by virtual test environments in the testing of AI-based systems
- Categorize the AI technologies used in software testing
- Discuss, using examples, those activities in testing where AI is less likely to be used
- Explain how AI can assist in supporting the analysis of new defects
- Explain how AI can assist in test case generation
- Explain how AI can assist in optimization of regression test suites
- Explain how AI can assist in defect prediction
- Implement a simple AI-based defect prediction system (see the sketch after this list)
- Explain the use of AI in testing user interfaces
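Illustrative examples

The short sketches below illustrate a few of the hands-on objectives above. They are editorial illustrations rather than course material: all are in Python, and every name, dataset, and value in them is invented for the purpose of the example.

First, the ML functional performance metrics. Given the four counts of a binary classifier's confusion matrix, accuracy, precision, recall and F1-score are simple arithmetic:

```python
# Hypothetical confusion matrix counts for a binary classifier
# (values invented for illustration).
tp, fp, fn, tn = 85, 10, 15, 90

accuracy  = (tp + tn) / (tp + fp + fn + tn)  # proportion of all predictions that are correct
precision = tp / (tp + fp)                   # of the predicted positives, how many were right
recall    = tp / (tp + fn)                   # of the actual positives, how many were found
f1_score  = 2 * precision * recall / (precision + recall)

print(f"Accuracy:  {accuracy:.3f}")   # 0.875
print(f"Precision: {precision:.3f}")  # 0.895
print(f"Recall:    {recall:.3f}")     # 0.850
print(f"F1-score:  {f1_score:.3f}")   # 0.872
```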
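The syllabus also asks you to experience the implementation of a perceptron. A minimal sketch, trained here on the logical AND function (the learning rate and epoch count are illustrative choices):

```python
# Minimal perceptron with a step activation, trained on logical AND.
def predict(weights, bias, x):
    # Fire if the weighted sum of the inputs crosses the threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, lr=0.1, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Perceptron learning rule: nudge weights towards the target.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
for x, target in samples:
    print(x, "->", predict(weights, bias, x), "expected", target)
```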
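Pairwise (all-pairs) testing reduces a combinatorial parameter space to a small suite in which every pair of parameter values appears in at least one test case. A sketch using a simple greedy covering loop, with hypothetical configuration parameters for an AI-based classifier:

```python
from itertools import combinations, product

# Hypothetical configuration parameters (names and values invented).
params = {
    "model":      ["cnn", "random_forest"],
    "image_size": ["128px", "256px", "512px"],
    "device":     ["cpu", "gpu"],
}
names = list(params)

def pairs_of(test):
    # All parameter-value pairs exercised by one test case.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

# Every value pair that must be covered at least once.
all_tests = [dict(zip(names, combo)) for combo in product(*params.values())]
uncovered = set().union(*(pairs_of(t) for t in all_tests))

# Greedy all-pairs selection: repeatedly pick the candidate test case
# covering the most still-uncovered pairs.
suite = []
while uncovered:
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

for test in suite:
    print(test)  # 6 cases here, instead of all 12 combinations
```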
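Metamorphic testing addresses the test oracle problem: when the expected output for a single input is hard to state, you can still assert a relation between the outputs of related inputs. In the sketch below, classify is a trivial keyword stub standing in for a real sentiment model so the example runs end-to-end; the metamorphic relation is that appending sentiment-free text must not flip the predicted label:

```python
# `classify` is a hypothetical stand-in for a trained sentiment model.
def classify(review: str) -> str:
    positive = {"great", "excellent", "love"}
    words = set(review.lower().replace(".", " ").replace(",", " ").split())
    return "positive" if words & positive else "negative"

def add_neutral_text(review: str) -> str:
    # Metamorphic transformation: append content that carries no sentiment.
    return review + " The film was released on a Friday."

source_tests = [
    "A great film, I love it.",
    "Dull plot and weak acting.",
]

for source in source_tests:
    follow_up = add_neutral_text(source)
    # Metamorphic relation: a sentiment-free addition must not flip the label.
    assert classify(source) == classify(follow_up), (source, follow_up)
print("All metamorphic relations held.")
```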
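Finally, a simple AI-based defect prediction sketch, assuming scikit-learn is available. The per-module metrics and defect labels are fabricated; a real system would mine them from version control and the defect tracker:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fabricated training data: one row of code metrics per module.
# Features: [lines of code, cyclomatic complexity, changes in last release]
X = [
    [120, 4, 1], [850, 22, 9], [300, 8, 2], [1500, 35, 14],
    [90, 3, 0], [620, 18, 7], [410, 12, 3], [980, 28, 11],
    [200, 6, 1], [730, 20, 8], [150, 5, 2], [1100, 30, 12],
]
y = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # 1 = defect found after release

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))

# Rank unseen modules so that testing effort goes to the riskiest first.
new_modules = [[500, 15, 6], [100, 4, 1]]
print("Defect probabilities:", model.predict_proba(new_modules)[:, 1])
```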
What our delegates say

Anton Olivier | Principal Consultant | Agenor Technology –
Course was very intense, but Rosie made it really enjoyable and fun. I learnt a great deal.
Gursharan Kaur | Senior Test Manager | KPMG UK –
Interesting course, good delivery of the syllabus from Rosie, with plenty of examples and relevant references to experiences I could mostly relate to.
Samson Wong | Test Manager | Deloitte –
Great course. A few too many exercises and activities.
Søren K. Lynggaard | Technical Test Manager | National Danish Police –
Suggestion: start every morning with a recap of the previous day's training.
A few suggestions for the PDF files:
– Exercise guide section 18.1, step 10: the parameter in Weka that you need to set to 300 is called numIterations in version 3.8; in version 3.7 and earlier it was called numTrees.
– The .arff files are essentially CSV files, but cannot be opened in all non-English versions of Excel; e.g. the Danish version uses “;”, not “,”, as the separator.
Jamie Harwood | Senior Developer in Test | VML –
The course was engaging and interesting, with a strong focus on understanding AI fundamentals that built up to integrating testing into AI development and AI into testing. It was a good mix of lectures, group exercises and solo work. If I had one suggestion for improvement, it would be to include more practical work around testing AI-based systems specifically (e.g., examples of how to apply some of the more theoretical techniques we learned).
Lizzie England | Operational Systems Manager | DofE –
I thought the content of the course was outdated. The slides used in the sessions were unhelpful, and we weren't following the content in the order presented to us at the start of the course, with no explanation of why that was the case. Homework wasn't corrected together with the rest of the attendees, which would have been good for reinforcing learning. At no point did the instructor reinforce learning or recap what we had done that day. Poor content and poor delivery.
Stuart Wilson | IT Director | WaterAid –
Excellent trainer, very engaging – made the course content (which can be dry at times) very easy to consume and retain interest. Thoroughly recommend Rosie to run training courses.
Thanapol Wisuttikul | Senior Research Assistant | NECTEC –
The instructor explained and gave examples, helping learners understand the material better. The learning material is a summarized syllabus, making the content easier to understand.