Getting smart: artificial intelligence and aviation

image: NASA | Stu Broce | Public domain

It is on your smartphone right now, and is likely to be in your aircraft cockpit soon. Artificial intelligence (AI) could be the next step in improving aviation safety, or perhaps the technology that banishes the human pilot from the cockpit.

Artificial intelligence can loosely be defined as computers doing things that used to be done by people. But definitions of artificial intelligence are fluid, and the subject has instilled a sense of unease since the premiere of the play Rossum’s Universal Robots, nearly a century ago. Capabilities once associated with artificial intelligence, such as calculation, optical character recognition, or indeed, the ability of an autopilot to maintain straight and level flight, have fallen outside the definition as they have become commonplace. The current definition from the Oxford Dictionary is, ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision making and translation between languages’.

‘In straightforward terms, AI is an extension of human intelligence,’ says Griffith University lecturer in sociotechnical studies, David Tuffley. ‘It’s important to frame AI as being an extension of humans rather than an adversary,’ he says.

‘There are a lot of media reports and no shortage of dystopian Hollywood movies that portray it as a threat, and something we need to be worried about. The truth is, for as long as humans have been around we’ve been creating tools to solve problems, and AI is no more than that. It is simply a programmed set of algorithms that react in a certain way to a given stimulus.’

Artificial intelligence has gone in and out of fashion in computer science. The failure of early AI research to match its lofty promises was followed by two ‘AI winters’ in the 1970s and ’80s, when research and interest dried up. But modern AI is here to stay, technology forecasters say, because it is based on three recent computing developments:

  • High-powered parallel processing, using chipsets derived from the graphical processing units originally developed for computer gaming.
  • Big data, the vast amount of information collected by search engines and sensors. AI systems that use ‘the cloud’ have access to this data, and to the computing power of a network rather than a single computer.
  • Deep learning algorithms, which improve the performance of artificial neural networks. These networks, which mimic the functioning of the human brain, are the means by which AI computers can learn and improve by trial and error.
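
To make the last point concrete, the sketch below (a toy example in Python using the open-source numpy library, not drawn from any aviation system) shows a tiny artificial neural network adjusting its internal weights by trial and error until its outputs match a set of training examples. Deep learning applies the same principle at vastly larger scale.

```python
# Toy neural network: learns the XOR function by trial and error (gradient descent).
# Illustrative sketch only; real deep-learning systems scale this to millions of weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output-layer weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: the network's current guess
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the error (the 'trial and error')
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # converges towards [0, 1, 1, 0]
```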

AI is already in the cockpit. Garmin introduced ‘Telligence’ voice recognition as a feature of its GMA 350 and GMA 35 audio panels in 2015, and offered it on GTN touchscreen navigation systems last year. The system performs some of the functions of an attentive co-pilot, such as changing radio channels, reading wind forecasts and providing position information on request.

Future AI systems could easily extend to digital cockpit assistants that interpret weather radar images, or continuously monitor published forecasts and compare them with the human pilot’s flight plan. Such assistants could calculate optimal descent profiles based on aircraft weight and speed, or provide a range of alternative destinations if winds or weather change. Like the perfect human servant, they would anticipate needs and requests, rather than waiting to be asked.
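
One of those calculations is simple enough to sketch. The fragment below estimates a top-of-descent point using the rule-of-thumb three-to-one descent gradient; the function name and figures are illustrative only, not any manufacturer’s method.

```python
# Illustrative top-of-descent estimate (rule-of-thumb figures; not a certified calculation).
def top_of_descent_nm(cruise_alt_ft, target_alt_ft, groundspeed_kt, descent_rate_fpm=None):
    """Distance before the waypoint at which to begin descent, in nautical miles."""
    alt_to_lose_ft = cruise_alt_ft - target_alt_ft
    if descent_rate_fpm is None:
        # Classic 3:1 rule of thumb: three track miles per thousand feet of descent.
        return 3.0 * alt_to_lose_ft / 1000.0
    time_min = alt_to_lose_ft / descent_rate_fpm
    return groundspeed_kt * time_min / 60.0

print(top_of_descent_nm(34000, 4000, groundspeed_kt=420))                         # 90.0 nm
print(top_of_descent_nm(34000, 4000, groundspeed_kt=420, descent_rate_fpm=1800))  # ~116.7 nm
```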

An early example of such technology is already available as an option on Airbus flight decks in the form of the runway overrun protection system (ROPS), a software system that reconciles aircraft approach speed and weight with the published length, condition and local weather of the runway it is approaching. Should there be a serious mismatch between these, it pipes up with the message ‘Runway too short!’ in an appropriately hectoring tone of voice. (ROPS also has an after-touchdown mode, with a range of stentorian commands to encourage pilots to use maximum braking and reverse thrust, if needed.)
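
The logic behind such a warning can be pictured with a rough sketch like the one below. It is not Airbus’s ROPS algorithm; the distance model, thresholds and names are invented for illustration. It simply shows the kind of comparison such a system makes between the landing distance an aircraft is likely to need and the runway actually available.

```python
# Illustrative runway-overrun check (not Airbus's ROPS logic; all figures and names are invented).
def landing_distance_required_m(weight_kg, approach_speed_kt, runway_wet):
    """Very rough landing-distance model, for illustration only."""
    base_m = 0.02 * weight_kg                                   # heavier aircraft, longer roll-out
    speed_penalty_m = 8.0 * max(0.0, approach_speed_kt - 130.0) ** 1.5
    wet_factor = 1.4 if runway_wet else 1.0
    return (base_m + speed_penalty_m) * wet_factor

def overrun_warning(weight_kg, approach_speed_kt, runway_wet, runway_length_m, margin=1.15):
    required_m = landing_distance_required_m(weight_kg, approach_speed_kt, runway_wet)
    if required_m * margin > runway_length_m:
        return "RUNWAY TOO SHORT!"                              # the hectoring call-out
    return None

# A heavy, fast approach to a short wet runway triggers the warning.
print(overrun_warning(weight_kg=65000, approach_speed_kt=150, runway_wet=True, runway_length_m=2200))
```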

Aviation has for decades been an early adopter of AI, or what passed for it, with technologies such as coupled autopilots, full authority digital engine control and, most recently, adaptive autopilots. NASA began flight testing its intelligent flight control system (IFCS) in 2003, using a fly-by-wire aircraft with an artificial neural network that analysed aircraft flight characteristics and could create alternative flight control laws to deal with occurrences such as a disabled control surface.

Researchers Haitham Baomar and Peter Bentley, of University College London, are working on an autopilot that uses machine learning to address one of the most dangerous drawbacks of present-day autoflight systems: the way they ‘give up’ under difficult conditions and cede control to the human crew, often without warning. The AI system works by observing the reactions of skilled human pilots and mimicking them.
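
In broad terms this is supervised ‘learning from demonstration’: the system records the aircraft state alongside what a skilled pilot did about it, then fits a model that reproduces those responses. The fragment below is a generic sketch of the idea using the open-source scikit-learn library; it is not the researchers’ code, and the state variables and stand-in data are assumed for illustration.

```python
# Generic 'learning from demonstration' sketch, not the UCL Intelligent Autopilot System itself.
# The state variables, units and demonstration data are assumed; in the real project they
# would come from simulator logs recorded while a human pilot flew.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in demonstration data: aircraft state (airspeed kt, pitch deg, altitude error ft)
# paired with the elevator input a human pilot applied in that state.
states = rng.normal(loc=[130.0, 2.0, 0.0], scale=[15.0, 3.0, 200.0], size=(500, 3))
pilot_elevator = -0.004 * states[:, 2] - 0.05 * states[:, 1] + rng.normal(0, 0.02, 500)

# Train a small neural network to imitate the recorded pilot behaviour.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(states, pilot_elevator)

# The trained model proposes an elevator input for a state it has not seen before.
new_state = np.array([[125.0, 4.0, -150.0]])
print(model.predict(new_state))
```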

‘In the flight simulator we are using (X-Plane 10 Pro), the system has been trained to autonomously fly a Boeing 777 airliner, and perform piloting tasks that ensure the execution of complete flights starting from take-off from airport A, navigating, and landing at airport B, while being able to handle uncertainties such as sudden or continuous severe weather conditions, emergencies such as engine failure or fire, or even performing an emergency landing or a turnaround when necessary,’ Baomar told Flight Safety Australia.

Baomar says he is keen to test the intelligent autopilot system in real-life conditions. He sees two ways of doing this. One approach would be ‘integrating the system with an industrial flight simulator to gather training data while professional pilots perform the piloting tasks, and then observe the performance of the system in the given industrial simulator.’

‘The other approach, which is of more interest to us, is to integrate the intelligent autopilot system (IAS) with a fixed-wing remotely piloted aircraft system (RPAS). This would ensure the gathering of real-life flight and control data while the remote human pilots perform the piloting tasks. After successfully training the system on the gathered data, we could give it full control of the same RPAS or drone, and watch it imitate the human pilots.’

Baomar says the introduction of AI into aviation is highly likely. ‘AI is advancing fast; it is being applied in a wide spectrum of fields ranging from services to industry. Therefore, it is normal and anticipated to see it being applied in aviation,’ he says.

He foresees AI as a technology that could enable safe introduction of single-pilot flight decks and as a counter to human error that he concedes is related to ‘stress, information overload, and sometimes lack of sufficient and up-to-date training.’

Baomar says it is ironic that the challenges to introducing AI into aviation do not come from the technology but from regulatory authorities. ‘This has to do with the extensive, costly, and exhausting validation and certification process which must be applied to every new technology introduced to aviation.’

He agrees that certification is ‘vital and crucial given the safety requirements’, but says the process can come to be viewed as simply ‘too much unnecessary effort’, and a disincentive to apply proven innovations.

Another difficulty, known as the black-box problem, is inherent in AI. Here the black box is not a component (in aviation the phrase usually refers to a flight data recorder, which is in fact orange) but a mystery. ‘This problem revolves around the difficulty of understanding what is going on when looking “under the hood” of an AI system,’ Baomar says.

‘As the system learns automatically, complex calculations are performed to generate a learning model, and when the system depends on a single, large artificial neural network (ANN) for example, those calculations become even more complex. Therefore, this system would be viewed as a large black box,’ Baomar says. For aviation regulators wanting to examine and certify such a box, it would be very difficult to verify its components or software.

Australian National University researcher in artificial intelligence regulation, Gary Lea, says the black-box problem has been a long-standing certification issue with neural net technology. ‘You can’t get a readout of what’s going on inside,’ he says.

‘But efforts are being made to get around that’, Lea says. ‘There’s some interesting research going on at Google DeepMind, which has released the first iteration of a state readout technology for neural networks. If they can get to the end goal, there will be neural networks that are both trainable and programmable—a hybrid. That would be a game changer.’

Baomar’s answer is to use many small, independent and shallow ANNs, rather than one large and deep ANN. ‘We introduced multiple ANNs, each carefully and specifically designed to handle a very specific task, controlling the elevators, for example. By doing this, we ensure that there is no black box. Rather, there is a small dedicated ANN that uses supervised learning rather than unsupervised methods to learn from a small and specific training dataset which has patterns that can be understood clearly. By following how this small ANN arrived at its learning model, it becomes clear to regulators or testers how this specific part of the system was designed, learned, and how it operates. After all, there is no magic here; it is all about well-known mathematical formulas that represent the core of an AI system. This approach also eliminates the surprise factor of undesired behaviour that can be observed with single, large AI systems that must learn from a large number of complex patterns which are usually not labelled (unsupervised learning).’
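
That design choice can be sketched roughly as below, again using scikit-learn for illustration rather than the IAS itself: several very small networks, each trained on its own labelled dataset for one narrow task, so that the handful of learned weights in each can be printed and inspected.

```python
# Sketch of the 'many small, shallow ANNs' idea: one tiny network per narrow task,
# each trained with supervised learning on its own small labelled dataset.
# Illustrative only; this is not the IAS code, and the synthetic data stands in for real logs.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_task_network(inputs, labels):
    """A dedicated, shallow network (three hidden neurons) for one specific task."""
    net = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
    net.fit(inputs, labels)
    return net

rng = np.random.default_rng(1)

# Small labelled dataset for the elevator task: (pitch error, pitch rate) -> elevator command.
pitch_state = rng.normal(size=(200, 2))
elevator_cmd = -0.8 * pitch_state[:, 0] - 0.3 * pitch_state[:, 1]
elevator_net = train_task_network(pitch_state, elevator_cmd)

# A separate, equally small network would handle each other task (ailerons, throttle, ...).

# Because each network is tiny, its entire learned model can be printed and inspected,
# which is the point about avoiding a black box.
for layer_weights in elevator_net.coefs_:
    print(layer_weights)
```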

University of NSW professor of artificial intelligence, Toby Walsh, says complexity is inherent in machine learning AI systems. ‘How can we verify that machine learning has trained on the right data and now behaves with the performance guarantees we want?’ he asks. ‘We’re only beginning to ask those questions and we don’t have answers, certainly nothing you could put in the cockpit of a Boeing.’

In the field of driverless cars, Walsh notes there is a split between the machine-learning systems adopted by Google and Tesla, and the rules-based software systems used by others. ‘In the trial of top-level autonomous taxis in Singapore, the top-level rules are hand-coded. This is so the system can be guaranteed. It’s a more expensive, time-consuming process. They’ve written rules at the top level so they can guarantee performance,’ he says.

ANU researcher Lea is wary of complex computer systems carrying the unconscious assumptions of their designers as ‘baggage’.

‘One of the things that will have to be looked at quite carefully, irrespective of whether connectionist machine learning or other approaches to AI are used, is the overriding fact that designers and builders of systems have baggage: the implicit assumptions and values they bring with them to their designs.’

‘An example in the social sphere was Microsoft’s Tay, the experimental chatbot (automated conversation generator) that had to be shut down shortly after going online for posting sexist and racist responses. The designers’ benign assumptions simply didn’t match the reality of the online environment,’ Lea says.

Maintenance, ATC

AI methods such as evolutionary algorithms (EA), which can tease out optimal structures and flows, are an obvious application for air traffic control. Optimised traffic paths, take-off and landing processes and schedules have the potential to create more efficient and safer operations.

Engine condition monitoring is another application: whether piston or turbine, engines have a signature of temperature, fuel flow and pressure which can be compared against the millions of hours of engine data that are now routinely collected. This is already being done in general aviation with services such as Savvy Analysis’s failed exhaust valve analytics program, which warns of impending valve failure based on digital engine monitor data: a slow, subtle oscillation in exhaust gas temperature that usually announces incipient exhaust valve failure. Airbus and IBM’s Smarter Fleet maintenance management program uses AI analysis of flight data to optimise scheduled maintenance and fuel consumption.
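
The kind of pattern involved can be illustrated with a short sketch. The fragment below is not Savvy Analysis’s method; the sampling rate, frequency band and synthetic data are assumed. It simply shows how a slow, subtle oscillation in exhaust gas temperature can be pulled out of engine-monitor data with a standard frequency analysis.

```python
# Illustrative detection of a slow EGT oscillation (not Savvy Analysis's algorithm;
# the sampling rate, frequency band and synthetic data are assumed for this sketch).
import numpy as np

def slow_egt_oscillation_strength(egt_deg_c, sample_period_s=1.0, band_hz=(0.002, 0.02)):
    """Return the fraction of signal energy in a slow-oscillation frequency band."""
    detrended = egt_deg_c - egt_deg_c.mean()
    spectrum = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(len(detrended), d=sample_period_s)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    return spectrum[in_band].sum() / spectrum[1:].sum()   # ignore the DC term

# Synthetic example: steady EGT with a subtle two-degree swing every five minutes.
t = np.arange(0, 3600)                        # one hour of 1 Hz engine-monitor data
egt = 720 + 2.0 * np.sin(2 * np.pi * t / 300) + np.random.default_rng(0).normal(0, 1.5, t.size)
print(slow_egt_oscillation_strength(egt))     # a high value flags a suspect cylinder
```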

Drop the pilot? AI and the human role

Baomar sees AI as a ‘digital assistant’, working beside human pilots with the ability to intervene when things go wrong. ‘However, much sooner than that, we can see the IAS piloting UAS or drones autonomously, which would bring a lot of advantages to multiple sectors,’ he says.

Tuffley predicts the growth of AI in many areas of aviation, including autoflight, engine management, air traffic control and operations management. ‘They won’t replace humans but will extend the capability of the humans already engaged in these pursuits,’ he says. ‘In medicine, accounting and other fields where AI comes into play, people are not being made unemployed in droves.’

Tuffley notes that even under automatic train control technology, most of the world’s passenger trains retain a driver, whose role is to deal with more complex problems while overseeing the everyday operation of the system. He ponders whether the driver’s role is also one of symbolic reassurance. ‘Generally, the public are going to want to deal with another human being,’ he says.

Lea concurs. ‘On the civilian side I think it will take a lot longer, particularly where there are passengers involved. Driverless car testing and demonstration is showing that people can be quite reluctant to entrust their safety to a machine, no matter how reliable.’

Walsh predicts AI will become commonplace in drones before manned aircraft. ‘I’m sure in 20 years’ time a lot of last-mile delivery will be by drone. That will require some systems so that the drones are not all colliding with each other. There will be a lot of AI used in these autonomous drones.’

Walsh expects cargo aircraft to be an early candidate for full autonomy, probably using AI systems. But Lea, looking sideways at the nuclear energy industry’s experience of automation, argues there will be a need for a human in the loop in numerous other areas for the foreseeable future.

‘More and more sub-elements of safety-critical decision making will be handed over to machines,’ Lea says. ‘And this can be said to bring a risk of rubber stamping—where any final human decision will be a fait accompli because machines provide all the information and the interpretation. To avoid this situation, the implication is that humans will still have to be intensely engaged in machine oversight, so that their final decision can be based on appropriate understanding of what’s happening. It becomes a change in the role of the human, rather than an elimination of the human element.’

‘This is becoming an issue in the nuclear industry, where, historically, the basic design principle was that humans had to be “front of house” making the big decisions, such as reactor start-up and shut down. It came back to the cardinal principle of humans being responsible. But now we can see a significant push to extend machine control.’

Distant, maybe never: artificial general intelligence

Walsh says, ‘The human brain is still the most complex system in the known universe, by an order of magnitude. We’re a long way away from matching it.’

An average human brain has about 100 billion neurons. Each of these is connected to up to 10,000 other neurons, which means that the number of connections, or synapses, is between around 100 trillion and 1000 trillion (10¹⁵, or 1,000,000,000,000,000). That is far more complex than any artificial network yet created, although Google Brain has created artificial neural networks comparable to the brains of mice.
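
The arithmetic behind that upper figure is straightforward:

```latex
10^{11}\ \text{neurons} \times 10^{4}\ \text{connections per neuron} \approx 10^{15}\ \text{synapses}
```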

So will we ever create artificial general intelligence, a machine comparable to Stanley Kubrick’s HAL 9000, or the unflappable electronic servant of Isaac Asimov’s Bicentennial Man? Writing in Wired, Kevin Kelly takes a radical position. Human-like qualities are the last thing we want in complex systems, and there will be a strong incentive to engineer them out, he says.

About AI he says, ‘this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability—especially if by “intelligence” we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness. We want our self-driving car to be inhumanly focused on the road, not obsessing over an argument it had with the garage. The synthetic Dr Watson at our hospital should be maniacal in its work, never wondering whether it should have majored in English instead. As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.’
