As artificial intelligence transforms aviation training, industry leaders demand safeguards - and regulators are listening.
The increasing use of artificial intelligence (AI) in aviation training has left many training leaders both cautious and curious. How will AI be regulated? What remains non-negotiable for safety? These questions dominated the conversation at our recent EATS Heads of Training (HoT) meeting, where industry leaders shared their recommendations and concerns about AI in training. Now that EASA is developing an AI framework of its own, covering the whole aviation spectrum, it’s time to examine which concerns have been addressed and where we still need to push for change.
Humans in the Loop: A Shared Priority
A fundamental concern for our subject matter experts was the imperative to keep humans in the loop. The HoTs were clear: AI should support, not replace, instructors and evaluators. Human authority must be maintained over critical assessments. The fear? Over-reliance on AI could lead to “cognitive erosion” - a scenario where humans lose the ability to exercise critical thinking independently when needed. Use it or lose it, as the saying goes.
EASA’s proposal aligns well with this concern, placing humans at the centre of decision-making. While different AI classification levels will exist based on the degree of human authority required, responsibility remains with the end user at all times - a reassuring stance that matches our HoTs’ recommendations.
Privacy and Data: Progress with Gaps
When it comes to privacy and data storage, our HoTs emphasised protecting students’ personal information. They called for explicit trainee consent before training data is shared with airlines or other schools, validation of data accuracy before AI processing, clear boundaries for data sharing, secure storage protocols, transparent data‑use policies, and defined parameters for experimental projects.
EASA’s framework, however, focuses primarily on AI performance quality rather than trainee privacy. While Data Quality Requirements (DQRs) ensure accuracy, and rules govern how data is distributed across training datasets and managed over time, little is said about data sharing or privacy safeguards. Regulators must recognise the need for more explicit consent policies.
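For readers who like to see ideas made concrete, here is a minimal sketch of what an explicit-consent gate could look like in a training records system. Every name in it - TraineeRecord, share_with, the recipient labels - is a hypothetical illustration of the HoTs’ recommendation, not anything specified by EASA.

```python
from dataclasses import dataclass, field


@dataclass
class TraineeRecord:
    """Hypothetical record of one trainee's results."""
    trainee_id: str
    scores: dict
    # Parties the trainee has explicitly agreed may receive this data.
    consented_recipients: set = field(default_factory=set)


class ConsentError(Exception):
    """Raised when data would leave the school without explicit consent."""


def share_with(record: TraineeRecord, recipient: str) -> dict:
    """Release training data only to parties the trainee has approved."""
    if recipient not in record.consented_recipients:
        raise ConsentError(
            f"Trainee {record.trainee_id} has not consented to sharing "
            f"data with {recipient!r}; obtain explicit consent first."
        )
    return {"trainee_id": record.trainee_id, "scores": record.scores}


# Usage: sharing fails closed unless consent was recorded beforehand.
record = TraineeRecord("T-1042", {"sim_session_3": 0.87})
record.consented_recipients.add("partner_airline")
print(share_with(record, "partner_airline"))   # permitted: consent on file
# share_with(record, "another_school")         # would raise ConsentError
```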
Transparency: Close Alignment
Transparency emerged as critically important to our HoTs. When AI is used, it must explain why something was marked as incorrect so that genuine learning can occur. A simple pass/fail “black box” approach would fail both instructors and students. Students deserve to understand if and how AI evaluates them: What criteria are being applied? Is AI suggesting or making final decisions? How much weight does AI carry versus an instructor’s analysis?
Our HoTs also recommended maintaining an AI decision logbook, creating an accessible record of what the AI did and when, so users can review its decisions later. For content, instructors should be required to validate AI-generated material before presenting it to students. Are the questions appropriate? Is the difficulty level correct for the learners?
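To illustrate how lightweight such a logbook could be, the sketch below appends one timestamped entry per AI decision to an append-only file. The field names and the grading scenario are invented for the example; the point is simply that every automated action leaves a trace that can be reviewed later.

```python
import json
import time
from pathlib import Path

LOGBOOK = Path("ai_decision_logbook.jsonl")  # hypothetical storage location


def log_decision(model: str, item: str, decision: str, rationale: str) -> None:
    """Append one timestamped, immutable record of what the AI did and why,
    so instructors and auditors can review the decision later."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,          # which AI system and version acted
        "item": item,            # what was assessed or generated
        "decision": decision,    # what the AI concluded
        "rationale": rationale,  # the explanation given to the user
    }
    with LOGBOOK.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Usage: every automated grading action leaves a reviewable trace.
log_decision(
    model="grader-v0.3",
    item="crosswind landing, question 7",
    decision="marked incorrect",
    rationale="selected flap setting exceeds the limit for the stated wind",
)
```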
EASA’s proposal includes “operational explainability” requirements that align closely with these recommendations. EASA agrees that AI should provide understandable, reliable, and relevant information about how it produces results. Explanations must consider relevance, validity, appropriate detail levels, and timing - some decisions require explanation before they occur (allowing instructor intervention), while others can be explained afterward. EASA also mandates that end users be informed when they’re interacting with AI. These procedures directly address HoT leaders’ concerns.
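The timing distinction is worth making concrete. In the hypothetical sketch below, an AI assessment surfaces its criteria and reasoning before the result takes effect, and an instructor callback keeps final authority - the Explanation structure and apply_assessment function are ours, invented purely to show the idea.

```python
from dataclasses import dataclass


@dataclass
class Explanation:
    """Hypothetical explanation payload attached to an AI assessment."""
    criteria: list        # which assessment criteria were applied
    detail: str           # human-readable reasoning at an appropriate level
    pre_decision: bool    # must this be shown before the result takes effect?


def apply_assessment(proposed: str, expl: Explanation, instructor_confirms) -> str:
    """Surface the explanation first; pre-decision cases allow intervention."""
    print(f"Proposed result: {proposed}")
    print(f"Criteria applied: {', '.join(expl.criteria)}")
    print(f"Why: {expl.detail}")
    if expl.pre_decision and not instructor_confirms(proposed, expl):
        return "deferred to instructor"  # human intervenes before it takes effect
    return proposed


# Usage: the instructor callback keeps final authority over the grade.
result = apply_assessment(
    "unsatisfactory",
    Explanation(
        criteria=["stabilised approach criteria"],
        detail="glidepath deviation exceeded limits below 500 ft",
        pre_decision=True,
    ),
    instructor_confirms=lambda proposed, expl: False,  # instructor overrides here
)
print(result)  # -> deferred to instructor
```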
Consistency Across Borders
HoT leaders called for a clear, consistent process that works across different countries and regulatory systems - essentially, a universal set of rules that prevents contradictions across borders.
EASA has proposed a comprehensive risk assessment framework offering a step-by-step process for evaluating AI system risk levels. Its hazard classification system describes potential consequences if AI fails. Once a hazard level is identified, EASA defines development and testing requirements. By referencing the EU AI Act and EUROCAE/SAE standards, EASA proposes that aviation AI comply with broader regulations, ensuring consistency across countries.
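Conceptually, that step-by-step process boils down to a mapping from hazard level to assurance obligations. The sketch below shows the shape of such a mapping; the levels and requirements listed are invented placeholders, not EASA’s actual classification.

```python
from enum import Enum


class HazardLevel(Enum):
    """Invented levels for illustration - not EASA's actual classification."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


# Hypothetical mapping: the worse the consequence of an AI failure,
# the heavier the development and testing obligations become.
ASSURANCE_STEPS = {
    HazardLevel.LOW: ["basic verification", "user documentation"],
    HazardLevel.MEDIUM: ["bias assessment", "independent test campaign"],
    HazardLevel.HIGH: ["full ethics-based assessment",
                       "operational explainability evidence",
                       "continuous post-deployment monitoring"],
}


def requirements_for(level: HazardLevel) -> list:
    """Higher hazard levels inherit every obligation of the lower ones."""
    steps = []
    for lvl in HazardLevel:
        if lvl.value <= level.value:
            steps.extend(ASSURANCE_STEPS[lvl])
    return steps


print(requirements_for(HazardLevel.MEDIUM))
# -> ['basic verification', 'user documentation',
#     'bias assessment', 'independent test campaign']
```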
Fair Evaluation: Bias Addressed
The HoT meeting emphasised the importance of bias-free data to ensure fair student evaluations. The concern is that AI systems could allow factors unrelated to student abilities to influence assessments.
EASA’s framework directly addresses this through ethics-based assessment requirements, providing a tangible checklist for identifying potential bias during development and requiring human-centred design considerations that keep humans at the centre of critical decisions.
Have Your Say
While the framework represents significant progress, much work remains. What are your concerns about AI in aviation training? What still needs to be addressed?
EASA has officially opened its framework for public comment, and it wants to hear from you. Don’t miss this opportunity to shape the future of AI regulation in our industry - the deadline is approaching fast.
View the full framework and submit your comments on EASA's website.