Software construction and analysis
A major current technological and societal challenge is to produce software systems that behave in a reliable and predictable manner, i.e., deliver a trustworthy service in a timely fashion without causing injury or significant malfunction. The complexity of current software stacks, hardware, and networking platforms makes this a significant challenge, as does time-to-market pressure, which may cause products to be launched commercially before they have been thoroughly tested.
We perform research into different ways of mastering software complexity, e.g.,
- by using programming languages that are particularly well suited to produce reliable, efficient, or reusable code,
- by using techniques based on virtualization to allow applications with different criticality or real-time demands to coexist, without interference, with less demanding and more complex application software, and
- by using verification, testing, and machine learning techniques to extend the capabilities of current modeling and verification tools.
Highlights from some current research activities in this field
Driven by multicore processors and heterogeneous hardware platforms as well as large-scale distributed systems, research is being conducted into programming languages, models, and systems aiming to simplify concurrency and distribution, providing scalability and safety properties. This research ranges from theoretical foundations to programming systems proven in wide commercial use.
The expanding use of software, the increasing value of information, and the rising cost of malfunctions drive the need to develop correct software. A key strategy for making this task tractable is to partition the system into small, trustworthy components with limited functionality, running alongside large commodity software components that provide little or no assurance. Our long-term vision is the development of secure, formally verified, and certified virtualization as a core building block that guarantees component isolation and can be used to bootstrap the desired system properties even if the commodity software is completely compromised.
With the advent of autonomous systems controlled by software, the requirements on dependability are rapidly increasing. With this, the willingness to pay the increased price of applying formal techniques is slowly making its way into industrial practice, and is being reflected in the safety standards of the avionics and the automotive industries. Research is ongoing into techniques and tools that support practical formal verification, but also on the theoretical foundations of multi-agent and autonomous systems.
Software Testing & Reliability
Testing has long been the traditional technique for software quality assurance, widely accepted and used in industry. However, the ongoing digitalization of our society requires new software testing techniques that scale to geographically distributed and complex systems-of-systems, such as autonomous vehicles. Testing must take the whole software stack into account, including networked communication and even hardware. In the future, test automation will play a vital role in effective and rigorous test processes. One solution under consideration is to apply machine learning to testing.
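As a concrete illustration of data-driven test automation, the sketch below prioritizes test cases by their historical failure rate, a simple precursor to the machine-learning-based test selection mentioned above. All names here (`TestCase`, `prioritize`) are hypothetical and not tied to any specific tool.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Hypothetical record of a test and its execution history."""
    name: str
    runs: int      # how many times the test has been executed
    failures: int  # how many of those runs failed

    @property
    def failure_rate(self) -> float:
        return self.failures / self.runs if self.runs else 0.0

def prioritize(tests: list[TestCase]) -> list[TestCase]:
    """Order the suite so historically failure-prone tests run first."""
    return sorted(tests, key=lambda t: t.failure_rate, reverse=True)

# Illustrative suite with made-up execution statistics.
suite = [
    TestCase("test_login", runs=100, failures=2),
    TestCase("test_network_timeout", runs=100, failures=15),
    TestCase("test_render", runs=100, failures=0),
]

ordered = prioritize(suite)
print([t.name for t in ordered])
# → ['test_network_timeout', 'test_login', 'test_render']
```

In practice, a learned model would replace the raw failure rate with a prediction based on richer features (code churn, coverage, recent changes), but the prioritization loop stays the same.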
Automatic Software Repair
Automatic software repair is the process of fixing software bugs automatically. When repair happens at the level of the program code, it is called program repair or patch generation (for instance, by changing the conditional expression of an if statement). Repair may also happen at the level of the program state, at runtime, in response to field failures (for instance, by changing the value of a variable); this is sometimes called runtime repair or state repair. The group conducts research on program repair, self-healing software, and chaos engineering.
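The idea of patching a faulty conditional expression can be sketched with a minimal generate-and-validate repair loop: mutate the comparison operator of a buggy if condition and keep the first candidate that passes the test suite. The function names and test cases below are hypothetical illustrations, not the group's actual tooling.

```python
import operator

# A parameterized "program": intended to return the maximum of a and b.
# The buggy version uses the wrong comparison operator in its if condition.
def make_max(cmp):
    def candidate(a, b):
        if cmp(a, b):   # the predicate under repair
            return a
        return b
    return candidate

# The test suite serves as the repair oracle.
TESTS = [((3, 5), 5), ((5, 3), 5), ((2, 2), 2)]

def passes(f):
    return all(f(a, b) == expected for (a, b), expected in TESTS)

def repair():
    """Try replacement operators until a candidate passes all tests."""
    for name, cmp in [("lt", operator.lt), ("le", operator.le),
                      ("gt", operator.gt), ("ge", operator.ge)]:
        if passes(make_max(cmp)):
            return name
    return None

buggy = make_max(operator.lt)   # the original, faulty predicate
print(passes(buggy))            # False: the tests expose the bug
print(repair())                 # gt: first operator that fixes the program
```

Real program-repair systems search far larger patch spaces over abstract syntax trees, but the generate-and-validate structure, with a test suite acting as the correctness oracle, is the same.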