Part one
Here we set out our detailed comments on the overall structure of AIME, the order of the questions, and related topics.
2. Does the overall structure of the tool make sense? Why/why not?
Things that were good about the overall structure.
Good starting point: The participants in our consultation group generally agreed that the tool provides a useful starting point, particularly for businesses that have not previously considered AI governance. As one participant noted, it “brings together important topics into one place,” which can be beneficial for SMEs with limited exposure to AI policies. We also note that the structure follows a familiar format, similar to Cyber Essentials, making it accessible to organisations with basic governance frameworks in place. The fact that it highlights key areas such as data protection, bias, and governance is a positive step toward fostering responsible AI adoption. In that sense, the overall structure of the tool makes sense.
Things that require improvement regarding the overall structure.
Separate the user types: The participants in our consultation group almost unanimously agreed that the structure and flow of AIME would be far improved if early questions established the nature and type of respondents and the AI systems they use. With this information, the questions could be tailored to ensure that the right questions are being asked of the right respondents. This would greatly enhance the quality and accuracy of the answers and the respondent’s experience of the tool. For example, an end-user organisation which only accesses publicly available LLMs will approach the questions with a very different set of assumptions and premises than a sophisticated AI provider, for whom more of the questions will be relevant (particularly those pertaining to fairness and bias).
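By way of illustration only, the sketch below shows how a small number of up-front profiling questions could be used to serve tailored question sets. The respondent categories and the mapping to AIME sections are our own assumptions, not part of the current draft of the tool.

```python
# Hypothetical sketch of routing AIME questions by respondent profile.
# The profile categories and the mapping below are illustrative only;
# the section names are taken from the current AIME draft.

PROFILE_QUESTION = "Is your organisation a developer, deployer or end user of AI?"

# Illustrative mapping from respondent type to the AIME sections served up.
SECTIONS_BY_PROFILE = {
    "end_user": ["2 AI policy", "5 Risk assessment", "8 Data protection"],
    "deployer": ["1 AI system record", "2 AI policy", "4 Impact assessment",
                 "5 Risk assessment", "6 Data management", "8 Data protection"],
    "developer": ["1 AI system record", "2 AI policy", "3 Fairness",
                  "4 Impact assessment", "5 Risk assessment",
                  "6 Data management", "7 Bias mitigation", "8 Data protection"],
}

def sections_for(profile: str) -> list[str]:
    """Return only the sections relevant to the respondent's profile."""
    return SECTIONS_BY_PROFILE.get(profile, SECTIONS_BY_PROFILE["developer"])

# An end-user organisation accessing only public LLMs would see a shorter,
# more relevant question set than a sophisticated AI provider.
print(sections_for("end_user"))
```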
Guidance: One area which is missing from the current version of the tool reviewed by this group (but which we understand DSIT will incorporate into the final version) is additional “base guidance” to help users understand what good looks like and what minimum requirements should be in place. This would help ensure a more consistent and meaningful application of the tool across different organisations.
Binary questions: Finally, the group considered the nature of the binary responses posed in each question and whether the use of a binary mechanism benefitted or detracted from the overall aim. The group’s view of this was mixed. The majority of participants felt that binary questions oversimplified the issues and could also be subject to abuse, especially where users simply pick answers to try to score highly in a ratings process. This would be particularly noticeable if AIME were to become a mandatory part of public procurement processes. The binary questions also fail to address quality or utility. It may be the case that two respondents both answer yes to AIME Q2.1 (“Do you have an AI policy for your organisation?”) but in reality they are incomparable, for example because one has an excellent, comprehensive and well-structured AI policy and the other has a poorly drafted, incoherent AI policy. In this scenario it could not reasonably be said that both parties are following good AI governance processes (the first one is, the second one is not), but AIME would wrongly record both as passing this element of the test. This issue is present in other areas of AIME, particularly sections 1 (AI system record), 4 (impact assessment), 7 (bias) and 8 (data protection). However, in contrast, a sizeable minority of participants were of the view that a binary process was good and should be pursued even at the risk of over-simplification, particularly where the questions were being answered by organisations unfamiliar with AI governance. In their view it was worth the risk of potentially recording a wrong answer, because what mattered more was ensuring that AIME was widely used and capable of being accessed by the vast number of UK organisations venturing into AI for the first time.
During the analysis, one participant said:
“The overall structure makes sense: of course it does. As a visual person running an SME a format like this really appeals to me. I’m glad it’s not wordy. I don’t have time to read loads of guidance nor do I want to. I just want it set out in a really easy-to-read way and this ticks that box, it’s really good. But the thing I struggle with is not the overall structure, but what is the overall point? With Cyber Essentials I know I’m going to get a certificate and some ability to promote to other businesses that I passed Cyber Essentials. If that’s not the case with AIME then why would I bother with it?”
3. Would you change the order of any of the sections/questions? If yes, which questions and why?
Positive observations about AIME structure.
All participants agreed that the general structure was sensible and logical, although we have some recommendations for adjusting the order of the questions to help user engagement.
Missing sections.
The Vital Need For Distinction Between Users and Developers: We have not focussed here on sections that we consider to be missing from AIME, because the purpose of this question is to focus on the existing order of questions (see our response to question 8, which addresses this in greater detail). However, as with other topics, this question gave rise to debate amongst all participant groups about the lack of questions addressing the distinction between users and developers.
Here is what one participant said when debating question 3:
“The tool could be greatly improved by commencing with some initial baseline questions to establish the manner in which the responding organisation will be utilising AI within its business – for example: Is the organisation a developer, deployer or user of AI? What industry is the business operating in (healthcare, finance, transportation etc.)? How many individuals are likely to be impacted by the use of the AI? etc. These initial baseline questions could then be used to customise AIME and serve up questions which are tailored to the organisation’s particular circumstances. They would go at the start of the AIME before all the other questions.”
Another participant said:
“I think it’s useful to open / set the scene with a question that prompts businesses (perhaps even before they embark on using AI) to pause and think about what exactly they are looking to get out of using the technology in their businesses. We should encourage this kind of critical thinking around intent, purpose and objectives, as well as any principles that could govern what type of AI would be used and in what ways according to the needs of individual businesses.”
Order of existing questions.
- Almost all participants felt that sections 3 (fairness) and 7 (bias mitigation) were very similar in nature and should be placed close to each other. We also think that AIME should do a better job of explaining the difference between them (we expand on how that can be done in our response to question 7).
- Most participants wanted to swap sections 6 (data management) and 8 (data protection) because this would improve the overall flow.
- Many participants considered that AIME sections 4 (Impact Assessment) and 5 (Risk Assessment) could be combined. They liked the content of both but thought separating them was unnecessary and potentially confusing.
- Several participants were of the opinion that sections 1 (AI system record) and 2 (AI policy) should be swapped, because they felt that most organisations would easily understand what an AI policy is, at least in theory if not in practice. This is less likely to be the case with AI system records.
Additional points regarding changes/typos.
We would recommend:
- Moving the question at section 6.6 (‘Do you sign and retain written contracts with third parties that process personal data on your behalf?’) into section 8, as this appears to be a data protection question rather than a data management question.
- Within section 8, we would move AIME Q8.5 (‘Have you ensured that all your AI systems and the data they use or generate is protected from interference by third parties?’) and AIME Q8.1 (‘Do you implement appropriate security measures to protect the data used and/or generated by your AI systems?’) next to each other, as protecting data from interference by third parties and data security are intrinsically linked, and these questions should therefore follow one another.
Lastly, the numbering and lettering used throughout the AIME tool need to be consistent. For example, the AIME questions in sections 5.2.1 and 5.2.2 both have ‘a’ and ‘b’ options, but AIME Q5.3 has ‘d’ and ‘e’ options. This should be corrected throughout the AIME tool to ensure it is well presented and user friendly.
4. We are planning to format the final version of the tool as an interactive decision tree (loosely based on the Cyber Essentials readiness tool). Do you agree that this format is intuitive/easy to use? Why/why not?
Decision tree is uniformly endorsed.
All participants agreed that a decision tree was the preferred way of setting out the AIME tool.
A decision tree allows users to follow a step-by-step process, guiding them based on their specific answers. It reduces cognitive load by presenting only the relevant next questions, avoiding unnecessary complexity. It is intuitive because users are not overwhelmed by a long list of questions at once. Instead, they are prompted to answer one question at a time, making the process feel manageable. It uses conditional logic so that users can skip irrelevant sections based on their responses (e.g., if an organisation doesn’t develop AI systems, it can bypass questions about training data).
Decision trees also offer a visual structure, showing progression and decision paths. This helps users see where they are in the process and what remains to be completed. This was seen as a positive thing by participants because visualisation improves understanding, especially for non-technical users who may feel daunted by text-heavy formats.
A decision tree can also provide immediate feedback based on responses, such as action points or recommendations. This will help users to understand their current state and receive tailored advice without needing to cross-reference additional documents.
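To make the conditional logic and immediate feedback described above concrete, here is a minimal sketch. The questions, branching rules and action points are illustrative assumptions on our part, not DSIT’s planned implementation.

```python
# Minimal decision-tree sketch: one question at a time, where answers either
# surface an action point immediately or route the user past irrelevant
# sections. Questions, branches and action points are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    branches: dict = field(default_factory=dict)       # answer -> next Node (or None to stop)
    action_points: dict = field(default_factory=dict)  # answer -> feedback shown immediately

training_data = Node("Do you document the training data used by your AI systems?")
develops_ai = Node(
    "Does your organisation develop its own AI systems?",
    branches={"yes": training_data, "no": None},  # "no" skips developer-only questions
)
policy = Node(
    "Do you have an AI policy for your organisation?",
    branches={"yes": develops_ai, "no": develops_ai},
    action_points={"no": "Action point: draft and adopt an AI policy."},
)

def walk(node, answers):
    """Ask one question at a time, collecting any action points along the way."""
    feedback = []
    while node is not None:
        answer = answers[node.question]
        if answer in node.action_points:
            feedback.append(node.action_points[answer])
        node = node.branches.get(answer)
    return feedback

# Example: an organisation with no AI policy that does not develop its own AI
# answers only two questions and receives one action point.
print(walk(policy, {
    "Do you have an AI policy for your organisation?": "no",
    "Does your organisation develop its own AI systems?": "no",
}))
```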
Comparison of AIME versus Cyber Essentials.
Almost all participants agreed that the AIME tool needs significant improvement to match Cyber Essentials' user-friendly approach. Some participants felt that the higher-level questioning in the AIME tool is not comparable to the detail in Cyber Essentials. For example:
- The current guidance is insufficient - "What is this?" help text and links (similar to those used by Cyber Essentials) should be added to provide clarity to the questions.
- Clear examples of good practice should be added to help users understand the requirements which the questions are seeking to address perhaps with the provision of templates.
- The pass / fail criteria should be made clearer - Cyber Essentials clearly marks "Action Points" that are immediately visible on answering a question. This allows a user to read further on any point in the moment and understand how relevant it is to their IT usage. In the current draft of AIME, the responses often lack clear indicators of what is acceptable (although we understand that DSIT intends to add a rating system in the next draft of the tool).
- While the Cyber Essentials readiness tool does include a few hyperlinks to external guidance notes, these are usually accompanied by executive summaries so that the most critical information is in one place for the user. This makes the Cyber Essentials readiness tool less intimidating for users to complete. This format would also allow for more information boxes, as we feel the current version of the AIME tool has sections which could benefit from more guidance (for example, what is meant by “interference from third parties” in section 8.5?).
As with any self-assessment tool, there is still a risk of users manipulating their answers to achieve the highest score and lowest number of action points, but the Cyber Essentials readiness tool seems to be more intuitive and easier to use than the current format of the AIME tool.