Your training organization has just one month left to influence how AI will be regulated in aviation—and the decisions made now will directly impact your operations.
Here's what you need to know.
EASA’s Public Consultation on the AI Trustworthiness Framework, which opened in November, closes in February. The Notice of Proposed Amendment (NPA) aims to provide the aviation industry with technical guidance on AI trustworthiness in line with the requirements for high-risk AI systems in the EU AI Act. The rulemaking group that helped develop the NPA comprises stakeholders including the Federal Aviation Administration, Amazon, Thales, CAE, and Boeing.
The publication is the first step of rulemaking task RMT.0742 and will be followed by a second NPA in 2026 to deploy the framework to regulators. The guidance will help the aviation community prepare for future requirements for AI-based assistance (Level 1 AI) and human-AI teaming (Level 2 AI). It addresses AI assurance, human factors, and ethics, and covers data-driven AI-based systems, including supervised and unsupervised machine learning.
Training organisations (approved training organisations, declared training organisations, organisations operating FSTDs, ATCO training organisations, and maintenance training organisations), as well as unmanned aircraft manufacturers, aircraft operators, and others, are explicitly listed among the affected stakeholders.
EASA strongly encourages industry comments at this stage to help develop the framework.