When the hype about AI-generated text began to amp up recently, I was naturally curious. After all, predictions are that AI is going to replace humans in numerous jobs, journalists included.

I had to experiment to find out for myself. So I asked ChatGPT, the most-hyped platform, some questions related to flight training and virtual reality. The response was okay, if you’re looking for a high-level generic essay that reads like it was composed by a middle school student plagiarising Wikipedia. Where the chatbot completely failed was in identifying the first regulatory-approved VR-based simulator, a story I broke about two years ago.

A colleague had a similar experience, comparing ChatGPT’s response with what we had written about the recent FAA Aviation Safety Summit, which was both widely reported on by aviation writers and posted on YouTube. My colleague received an ‘apology’ from the chatbot, saying it couldn’t comment on ‘future events.’

Apparently, the content on which ChatGPT is ‘trained’ was limited to pre-September 2021… more than 18 months ago. The Summit wasn’t on its radar, but the approval of VRM Switzerland’s (now Loft Dynamics) VR helicopter flight sim by EASA in April 2021 should have been.

But outdated data is not the most serious flaw in AI-generated text.

While we’re at it, let’s dispense with the term ‘artificial intelligence.’ What these chatbots run on is technically an LLM – a Large Language Model – essentially a mathematical algorithm trained on huge amounts of data to recognise word patterns and then choose the ‘most likely’ words to fill in the blanks. One commenter snarked that it is “autocorrect on steroids.”
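To make that ‘fill in the blanks’ idea concrete, here is a deliberately tiny Python sketch – a toy word-frequency (bigram) model with made-up sample text, nothing like the scale or architecture of ChatGPT – that simply counts which word most often follows which and then predicts the most likely next word.

```python
from collections import Counter, defaultdict

# Made-up sample text; real models ingest billions of words.
training_text = (
    "the simulator was approved by the regulator and "
    "the simulator was used for pilot training"
).split()

# Count, for each word, which words follow it and how often (a bigram table).
followers = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    followers[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    """Fill in the blank with the most frequently observed follower."""
    return followers[word].most_common(1)[0][0]

print(most_likely_next("the"))        # prints 'simulator', the commonest pattern here
print(most_likely_next("simulator"))  # prints 'was'
```

The real systems work with probability distributions over tens of thousands of word fragments, learned by neural networks rather than simple counts, but the underlying job is the same: guess the next word.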

Benj Edwards, a machine learning reporter for Ars Technica and other publications, warns that AI chatbots “can present convincing false information easily, making them unreliable sources of factual information.”

Academics like to refer to AI mistakes as “hallucinations” – as in blame the machine, not the programmer, as if the bot were anthropomorphically tripping on psychedelic drugs.

Edwards notes that AI bots have “invented books and studies that don’t exist, publications that professors didn’t write, fake academic papers, false legal citations, non-existent Linux system features… and technical details that don’t make sense.”

When researchers use the ‘high creativity’ setting, Edwards notes, “the model will guess wildly.”
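For what it’s worth, the ‘high creativity’ setting Edwards describes roughly corresponds to what engineers call sampling ‘temperature.’ The sketch below is a simplification with invented scores, not OpenAI’s actual interface: at low temperature the top-scoring word is chosen almost every time, while at high temperature the distribution flattens and improbable words start slipping through – the wild guesses.

```python
import math
import random

# Invented scores a model might assign to candidate next words.
scores = {"approved": 4.0, "certified": 3.5, "cancelled": 1.0, "haunted": 0.2}

def sample_next_word(scores: dict, temperature: float) -> str:
    """Softmax sampling: higher temperature flattens the distribution,
    so low-scoring words get picked more often."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

random.seed(1)
print([sample_next_word(scores, 0.2) for _ in range(5)])  # low temperature: top word dominates
print([sample_next_word(scores, 2.0) for _ in range(5)])  # high temperature: unlikely words creep in
```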

After the launch of ChatGPT, OpenAI CEO Sam Altman tweeted, “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now (emphasis added).” Later he wrote, “The danger is that it is confident and wrong a significant fraction of the time.”

“Numerous credible scientific studies have shown that recent LLMs cannot complete language and commonsense thinking tests, even when presented with comparatively easy ones,” writes MarkTechPost blogger Dhanshree Shripad Shenwai.

She warns about “AI models producing wholly false facts… which can jeopardise the applications’ accuracy, dependability, and trustworthiness” – for example, autonomous car programmes such as Ford’s BlueCruise and Tesla’s Autopilot. “Self-driving automobiles, where hallucinations may result in fatalities… is a calamity just waiting to happen.”

Or, in the aviation world, autonomous aircraft such as eVTOLs.

I am reminded of an interview several years ago with an aviation software engineer who described aircraft software as ‘spaghetti’: bugs were fixed with patches, and when a new variant of the aircraft was developed, they simply layered on additional software – more bugs, new patches. A clean-sheet rewrite would apparently be too expensive.

Is so-called AI software like that? Instead of getting ‘smarter,’ does it compound its mistakes through misinformation and WAGs (wild guesses)?

More than a thousand engineers and some business celebrities have signed an open letter calling for a six-month ‘pause’ on training AI systems more powerful than GPT-4, warning of “risks to society and humanity.” (Some suspect they want the stall to help their companies catch up.)

They declared, “Decisions about AI should not be delegated to unelected tech leaders.” I’m not sure governance should be delegated to elected leaders either, given that most politicians attempt to manipulate the masses by ‘creatively’ filling in the blanks with misinformation and falsehoods.

I don’t share the apocalyptic vision that the chatbots are going to take over the world and wipe out humanity. (Humanity seems to be doing a rather good job of that itself.)

I will concede it’s early days for AI apps, and frankly I love the text-to-image programmes that enable a non-artist like me to transform the visions in my head into pseudo-art. I also appreciate the text-to-speech tools, which now offer much more realistic, human-like professional voices. Perhaps the chatbots – with further development and an abundance of accurate information – will prove more useful in the future, provided their output can be checked and verified.

If you’re interested in some of the current realities of AI and how it can apply to immersive aviation training, join us at the World Aviation Training Summit (WATS), 18-20 April in Orlando, where dozens of subject experts will present their views on A/M/V/XR technologies, the Cloud, Big Data, eVTOL, and a host of other real-world challenges for training pilots, technicians, and cabin crew.