November 4, 2016
Critical error: when machines go off-script
Research project uses machine learning to predict safety-critical hazards in software systems.
From automatic coffee machines that turn on when the alarm clock buzzes, to new ways of monitoring and diagnosing health, just about every tool that can be connected to the internet is being connected.
The concept, known as the Internet of Things (IoT), is being driven largely by faster and more widely available internet connections coupled with smaller, more advanced sensors.
Yet, ¾«¶«´«Ã½ of ¾«¶«´«Ã½ (UOW) researchers say the proliferation of smartphone applications and other integrated software systems increases the possibility of miscommunication, and if devices used every day are linked to or interact with safety-critical systems, that miscommunication can have serious consequences.
The researchers, led by Dr Hoa Dam from UOW’s School of Computing and Information Technology, have received a grant from technology giant Samsung, as part of its Global Research Outreach (GRO) program, to develop ways to predict where miscommunication is likely to occur, leading to improvements in safety and reliability.
Dr Hoa Dam said that any safety-critical piece of software, such as that used in health care, the military and aerospace, was subject to strict performance and safety guidelines.
Consumer applications, on the other hand, typically go through varying degrees of user testing to iron out any bugs but are not subject to the same regulatory hurdles.
“This mismatch in testing poses serious safety threats when those two kinds of software systems are coupled together, since unsafe operations often result from an unstudied interaction between different areas of a complex system,” Dr Dam said.
“You could have a failure if, for example, an app needs information from another system and takes the wrong information. If one of those systems is safety-critical, it presents a major problem.
“Take Uber, for example. In this case the Uber mobile app interacts with the car’s software, which is an interaction of consumer-oriented software coupled with safety-critical systems.”
Dr Dam and his colleagues at UOW’s Decision Systems Lab and Deakin ¾«¶«´«Ã½ have developed a software model called DeepSoft that has the ability to predict problems in software before they occur, effectively proof-reading millions of pieces of code to look for errors, duplication and vulnerabilities.
The software uses deep learning, a sophisticated form of artificial intelligence that is transforming technology applications from web search engines to self-driving cars.
It borrows from the idea that, just as the human brain stores information to help make future decisions, sophisticated software based on complex mathematics and powerful hardware can learn by detecting and recognising patterns.
"DeepSoft can not only read the code and store the information, it can predict what should appear next, just as the human brain knows a full stop should come at the end of a sentence,” Dr Dam said.
“Through our approach the program is able to remember very long text and the relevant context in software code, even the whole source code file, instead of just a few code tokens as in existing methods.
“This significantly improves its accuracy in predicting the next piece of code when completing it.”
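To make the idea concrete, the sketch below (in Python, using the PyTorch library) shows roughly how a recurrent model can be asked to predict the next code token from the tokens seen so far. It is an illustrative toy under assumed settings, not the DeepSoft implementation: the vocabulary size, dimensions and placeholder input are assumptions made purely for illustration.

    # Illustrative sketch only -- not the DeepSoft model. It shows the general
    # shape of next-token prediction over source code with a recurrent network
    # whose hidden state carries context across a long token sequence.
    import torch
    import torch.nn as nn

    class CodeTokenPredictor(nn.Module):
        def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)              # token IDs -> vectors
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # long-range memory
            self.out = nn.Linear(hidden_dim, vocab_size)                  # scores for the next token

        def forward(self, token_ids):
            vectors = self.embed(token_ids)     # (batch, seq_len, embed_dim)
            hidden, _ = self.lstm(vectors)      # hidden state summarises everything seen so far
            return self.out(hidden)             # (batch, seq_len, vocab_size) next-token scores

    # Given 50 tokens of (here random, placeholder) context, the most likely
    # next token is the highest-scoring entry at the final position.
    model = CodeTokenPredictor()
    context = torch.randint(0, 10_000, (1, 50))
    next_token = model(context)[0, -1].argmax().item()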
Recent research by Dr Dam and his colleagues at the Decision Systems Lab quantified the sheer volume of work involved in checking code for potential flaws and risks.
They investigated a sample of open-source software applications based on the programming language Java.
“Our research found 6 million individual code tokens, or words, across the sample of 10 apps. This illustrates the sheer volume of code and the scale of work required to check it all for potential flaws and risks,” he said.
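As a rough illustration of how such a count might be produced, the Python sketch below walks a project directory and tallies code tokens with a crude regular-expression lexer. The tokenisation rule is an assumption made for illustration only; it is not the lexer used in the study.

    # Rough illustration only: approximate a token count for a Java project.
    # The regular expression is a crude stand-in for a real lexer.
    import re
    from pathlib import Path

    TOKEN_RE = re.compile(
        r'"(?:\\.|[^"\\])*"'                 # string literals
        r"|[A-Za-z_]\w*"                     # identifiers and keywords
        r"|\d+(?:\.\d+)?"                    # numeric literals
        r"|[{}()\[\];,.<>=+\-*/%&|!?:@^~]"   # operators and punctuation
    )

    def count_tokens(project_dir):
        total = 0
        for path in Path(project_dir).rglob("*.java"):
            source = path.read_text(encoding="utf-8", errors="ignore")
            total += len(TOKEN_RE.findall(source))
        return total

    # Summing count_tokens() over each sampled app gives the project-wide
    # figure (of the order of millions of tokens for a handful of apps).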
“Most small-scale developers don’t have the time or money for exhaustive testing. Generic testing tools will miss deeper problems, and a customised tool that works well in one software project may not perform well in others.”
The pay-off is that the model can identify critical areas where developers should focus their testing, reducing costs and enabling them to release their software sooner.
“The automated machinery explores software and raises an alert on the parts that are likely to introduce hazards,” he said.
“The availability of such predictive models can help prioritise the effort and optimise the cost in inspection and testing for safety.
“A predictive capability that identifies hazardous components early in the software lifecycle is a significant achievement since the cost of finding and fixing errors increases dramatically as the software lifecycle progresses.”
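A hedged sketch of how such predictions could direct effort is shown below: score each component with a risk model and send only the highest-risk ones for manual inspection. The scoring function, file names and inspection budget are illustrative assumptions, not the project's actual model.

    # Illustrative only: rank components by a predicted hazard score and
    # flag the riskiest ones for inspection within a fixed effort budget.
    def prioritise_inspection(components, predict_risk, budget=10):
        """Return the `budget` components most likely to introduce hazards."""
        return sorted(components, key=predict_risk, reverse=True)[:budget]

    # Example with placeholder file names and a placeholder risk function
    # (a real model would be trained on historical defect and hazard data).
    files = ["Payment.java", "Logger.java", "DoseCalculator.java"]
    risky_first = prioritise_inspection(files, predict_risk=len, budget=2)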
Dr Dam said the Samsung project would help UOW accelerate research into software engineering analytics and the use of machine learning and data mining in software engineering.
Past recipients of the Samsung grant have included prestigious institutions such as Stanford, Harvard, MIT, UC Berkeley, Carnegie Mellon, Oxford and Cambridge.
“Software engineering analytics will help us build better software and build software better, addressing quality and productivity needs,” he said.
“The opportunity to work with Samsung, a leading IoT provider and tech giant that makes mobile devices, computing components, TVs and household appliances, will help us translate our research findings into practice and make a real impact.”