EASA’s notice of proposed amendment (NPA) 2025-07 on artificial intelligence (AI) represents the first comprehensive attempt to establish a regulatory framework for AI integration across all aviation sectors. Rather than adapting existing regulations, the proposal introduces technology-independent requirements that address the specific challenges AI systems present, including transparency, human oversight, and lifecycle safety assurance. This article examines the regulatory changes proposed, the safety mechanisms required for AI validation, and the data governance requirements that will shape how AI systems are developed and deployed in aviation training and operations.
EASA’s NPA on AI introduces a completely new regulatory framework designed to clarify how AI can be integrated into aviation, according to the CAE regulatory affairs team. “Rather than amending existing sector-specific regulations, the proposal establishes high-level, technology-independent requirements aimed at supporting the adoption of AI across all areas of aviation, including training, operations, and air traffic management”, the team says. “The framework focusses on key considerations such as the expected level of automation, the degree of human oversight required, and the need to identify and assess risks associated with the use of AI. Its purpose is not to define the technology itself, but to ensure that any AI-based system entering the aviation environment meets appropriate safety expectations by design”.
According to the European Cockpit Association (ECA), from the pilot community’s perspective, AI in civil aviation must remain strictly integrative. “It can support analysis and improve efficiency, but human operators must remain the sole decision-making authority, with full oversight and ultimate responsibility always. At the current level of technological maturity, we would caution against regulatory frameworks that could be interpreted as granting operational authority, even limited, to AI systems in safety-critical contexts”, ECA says. “AI systems, particularly those onboard aircraft, in safety systems, air traffic control, or maintenance, must be regulated and certified to the same standards as any other system in civil aviation”.
For this to happen, the way AI systems operate or generate outputs must be transparent, open to scrutiny, and accessible in the event of a failure or to allow for improvements, according to ECA. “‘Black box’ systems, whose inputs, processing, and decision-making cannot be understood or tracked, are therefore considered unsuitable for such applications. Finally, simply because a system is labelled ‘AI’ does not mean that a lower level of control or understanding is possible or necessary”, ECA affirms.
Andrew Mitchell, head of training at FTE Jerez, points out that the focus of the NPA is on AI scope and techniques, AI-based classification, operational domain, AI-based risk assessment, and human-centric design considerations for AI-based systems. “The EASA AI roadmap, which preceded the NPA, focuses on a human-centric approach to AI in aviation. The scope of this document includes flight crew training and contains a brief mention of how AI, specifically machine learning (ML), could impact flight crew training”, he says. “Domains covered by these guidelines include ATM/ANS, flight operations, flight crew training, environmental protection, and airports. To ensure safe operations, crew training is another essential aspect. The use of AI gives rise to adaptive training solutions, where machine learning could improve the effectiveness of training activities by leveraging the large amount of data collected during training and operations”.
The most important requirement for ATOs intending to implement AI systems in training will be to understand the three general levels of AI defined in the EASA AI roadmap, according to Mitchell. “Level 1, human assistance; Level 2, human-AI collaboration; and Level 3, advanced AI automation. These levels are further subdivided into sublevels. Level 1A is assistance to humans; Level 1B is cognitive assistance to humans in decision-making; Level 2A is cooperation between humans and AI; Level 2B is collaboration between humans and AI; Level 3A is where decisions/actions are made by AI but can be overridden by humans; and Level 3B is where decisions/actions are made by AI and cannot be overridden by humans”, he says.
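The level taxonomy Mitchell describes could be sketched as a simple enumeration. This is purely illustrative: the class and helper names are invented for this example and are not part of any EASA artefact; the key distinction encoded is that only Level 3B removes the human override.

```python
from enum import Enum

class EasaAiLevel(Enum):
    """Hypothetical encoding of the AI levels described in the EASA AI roadmap."""
    LEVEL_1A = "Assistance to humans"
    LEVEL_1B = "Cognitive assistance to humans in decision-making"
    LEVEL_2A = "Cooperation between humans and AI"
    LEVEL_2B = "Collaboration between humans and AI"
    LEVEL_3A = "Decisions/actions made by AI, overridable by humans"
    LEVEL_3B = "Decisions/actions made by AI, not overridable by humans"

def human_can_override(level: EasaAiLevel) -> bool:
    """Only Level 3B removes the human override entirely."""
    return level is not EasaAiLevel.LEVEL_3B
```

A check like `human_can_override(EasaAiLevel.LEVEL_3A)` returns `True`, reflecting why Level 3B carries the longest expected approval timeline.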
As EASA moves towards AI trustworthiness regulation, ATOs will need to be cautious about how higher levels of AI could impact compliance with EASA regulations, the AI Act, and the EU General Data Protection Regulation (GDPR), which provides for significant penalties for non-compliance, Mitchell observes. “It is worth noting that, given the current rapid pace of AI innovation, EASA does not expect any Level 2 or 3A systems to receive regulatory approval before 2035, while Level 3B systems are expected to receive regulatory approval in 2050”, he says. “ATOs should therefore perhaps focus their efforts on Level 1 AI systems in the short term to build trust and competence in AI while waiting for EASA regulations to provide approval for higher AI levels, and to avoid any issues with the AI Act, the EASA Regulation, and EU GDPR compliance. Furthermore, ATO management teams should be aware that Article 4 of the EU AI Act requires personnel to have a sufficient level of AI literacy; a training programme for AI staff could be a good starting point”.
EASA observes that the most significant change proposed by the current NPA is the introduction of a new horizontal regulatory framework for ‘trustworthy AI’ applicable to all aviation sectors. “The next stage of the regulatory process, through a second NPA, will link the generic regulatory framework to existing aviation regulations, in accordance with the requirements of the EU AI Act”, affirms EASA. “Rather than treating AI as a simple software update, EASA proposes a structured approach that classifies AI systems based on their level of authority. The level of safety impact is, as with classical systems, an additional criterion for adjusting assurance requirements accordingly”.
The regulatory framework adopted by EASA addresses key elements including technical robustness, explainability, safety risk mitigation, human oversight, and the development of necessary expertise within both the industry and competent authorities. “The framework balances innovation and safety by enabling low-risk AI applications to advance more rapidly, while providing stronger safety and oversight measures for the most critical use cases. It builds on existing aviation safety principles such as proportionality, human oversight, and development assurance”, states EASA. “At the same time, it deliberately sets boundaries, for example by initially limiting the criticality level of AI systems and preventing adaptive or continuous learning in safety-related AI systems, allowing innovation to develop within clearly defined and manageable safety boundaries”.
As a central objective of the NPA is to enable the introduction of AI while maintaining the current high level of aviation safety, the NPA requires a comprehensive risk assessment before requesting authorisation for the use of AI in any aviation context, the CAE regulatory affairs team affirms. “This assessment must consider both traditional system failure modes and new types of hazards specific to AI-based systems”, the team says. “As with all aviation safety lifecycles, the process is ongoing. As new hazards are identified, they must be assessed, mitigated, and monitored to verify their effectiveness. This iterative approach ensures continuous oversight throughout the AI system’s operational life and supports adaptation as the technology evolves”.
According to ECA, safety assurance must go beyond traditional failure analysis. “AI systems must be transparent, explainable, and certifiable to enable calibrated reliability. Aviation safety cannot rely on opaque ‘black box’ logic; interpretability and traceability are essential throughout the system’s lifecycle. All AI elements integrated into aviation must be regulated according to the same standards as other critical components”, says ECA. “AI systems must meet equivalent risk and safety requirements and be fully integrated into safety management systems (SMS). Continuous performance monitoring, risk assessment, and iterative improvement of AI components must be maintained on par with all other safety-critical systems in aviation. While humans can predict future situations, AI-based systems may stop providing answers when the limits of the operational design domain (ODD) are reached, or even suggest ‘made-up solutions’, so-called hallucinations, without disclosing this to the end user”.
The proposed framework under the NPA introduces a safety approach that covers the entire AI lifecycle, from development and testing to in-service monitoring, according to EASA. “This includes structured processes for developing and ensuring learning, verifying performance and robustness, and continuously monitoring once the system is operational”, says EASA. “Organisations will be required to define measurable performance indicators, detect deviations, and maintain traceable records to support oversight, investigations, and continuous safety improvement”.
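The in-service monitoring EASA describes, defining measurable performance indicators and detecting deviations, could be sketched as a rolling check against a baseline. This is a hypothetical illustration only: the class name, baseline, tolerance, and window size are all invented for the example, not drawn from the NPA.

```python
from collections import deque

class PerformanceMonitor:
    """Illustrative rolling check of one performance indicator against a baseline."""

    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline = baseline      # expected indicator value from validation
        self.tolerance = tolerance    # maximum acceptable drift
        self.samples = deque(maxlen=window)  # most recent in-service observations

    def record(self, value: float) -> None:
        """Log one in-service observation of the indicator."""
        self.samples.append(value)

    def deviation_detected(self) -> bool:
        """Flag when the rolling mean drifts beyond the tolerance band."""
        if not self.samples:
            return False
        rolling_mean = sum(self.samples) / len(self.samples)
        return abs(rolling_mean - self.baseline) > self.tolerance
```

In practice a deviation flag would feed the traceable records and oversight processes the framework requires, rather than acting on its own.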
The regulatory proposal is based on an analysis of the technological challenges posed by AI, which require expanding the current guidelines applicable to traditional systems, affirms EASA. “The proposals specifically address challenges such as unintended behaviour, transparency, uncertainty in outcomes, performance variations over time, and behaviour when exposed to data outside the intended operational domain”, says EASA. “The explainability of operational AI and human-centred design are key elements to ensure that operators understand the system's limitations and can safely intervene when necessary”.
The proposed regulation pays particular attention to explaining the different AI learning approaches, such as machine learning, deep learning, supervised and unsupervised models, and outlines the assurance expectations associated with each, the CAE regulatory affairs team points out. “Authorities will require proof that learning models have been trained with appropriate, high-quality data and that the sources and processes used are sufficiently robust. The NPA also establishes expectations for risk assessment of learning models and defines the assurance measures that must be demonstrated before an AI-based product can be integrated into an aviation system”, the team says.
Data governance will be crucial, according to ECA. “Operational data generated by pilots must be protected from commercial exploitation or punitive use, and any use of emerging technologies must respect the established principles of safety culture. The integration of AI must strengthen the human-centred safety model that has made aviation the safest form of transportation”, says ECA. “Respect and observance of the data rights of the professionals generating such data are essential to the acceptance of AI in the aviation industry. Such data should be used only with their specific collective consent, subject to clear regulatory safeguards, and with provisions for periodic oversight. The ongoing value or commercial benefit of any system enabled by this data should be reflected in a legal agreement that collectively compensates the professionals involved”.
Periodic reassessments and oversight must be conducted not only to account for changes but also to ensure that AI systems remain aligned with technological advances and operational needs, according to ECA. “Pilot-derived data must not be used for commercial purposes by manufacturers through data aggregation. Safeguards must be in place to protect data ownership rights and prevent the exploitation of such data for financial or competitive advantage, ensuring that its use remains strictly within the scope of the originally certified system”, says ECA.
EASA proposes clear expectations regarding data governance and quality, recognising that data is critical to AI safety. “Organisations will be required to define the operational domain for each AI system and ensure that training, validation, and test data accurately represent the conditions under which the system will operate”, says EASA. “This includes requirements for data traceability, independence between data sets, quality controls, labelling where necessary, and ongoing verification. There is also a strong emphasis on managing bias, fairness, and ethical considerations, particularly when personal or sensitive data may be involved”.
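Two of the governance expectations EASA lists, independence between data sets and representativeness of the operational domain, lend themselves to simple automated checks. The sketch below is an assumption-laden illustration: record identifiers and operating-condition labels are invented examples, and real compliance evidence would be far richer.

```python
def check_dataset_independence(train_ids: set, val_ids: set, test_ids: set) -> bool:
    """Independence check: no record may appear in more than one data set."""
    return not (train_ids & val_ids or train_ids & test_ids or val_ids & test_ids)

def check_domain_coverage(dataset_conditions: set, operational_domain: set) -> set:
    """Return operating conditions declared in the domain but absent from the data.

    An empty result suggests the data represent the conditions under which
    the system will operate; a non-empty result names the gaps.
    """
    return operational_domain - dataset_conditions
```

For example, a data set covering only day and night VMC flights would fail coverage against an operational domain that also declares icing conditions.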
Regarding adaptability, the current proposal takes a cautious approach, affirms EASA. “At this stage, continuous or online learning during operations is not permitted in certified environments. Instead, learning is expected to occur offline, with model updates introduced through controlled and documented change processes. Continuous in-service monitoring will be used to detect performance deviations or emerging risks, ensuring that adaptability is managed within a structured and auditable safety framework”, states EASA.
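EASA's rule that learning occurs offline, with updates introduced only through controlled and documented change processes, could be caricatured as a deployment gate. The field and function names below are hypothetical, chosen only to make the two conditions, offline validation and a traceable change record, explicit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelUpdate:
    """Hypothetical record of one offline model update awaiting deployment."""
    version: str
    validated_offline: bool          # offline learning and verification completed
    change_record_id: Optional[str]  # documented change-process reference

def deployable(update: ModelUpdate) -> bool:
    """No continuous/online learning: only validated, documented updates ship."""
    return update.validated_offline and update.change_record_id is not None
```

Anything learned in service would, under this scheme, have to be folded back into an offline retraining cycle before it could ever pass the gate.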
EASA’s proposed AI framework establishes a structured pathway for integrating AI into aviation while maintaining existing safety standards. The classification system distinguishes between levels of automation and their associated assurance requirements, with near-term focus on Level 1 human assistance applications.
Key requirements include comprehensive risk assessment throughout the AI lifecycle, transparency in system operation, and rigorous data governance to ensure training datasets accurately represent operational conditions. For ATOs and operators, practical implementation will require developing AI literacy among personnel, establishing robust change management processes, and ensuring compliance with both aviation-specific regulations and broader EU AI Act requirements.