Software construction and analysis
A major current technological and societal challenge is to produce software systems that behave in a reliable and predictable manner, i.e., deliver a trustworthy service in a timely manner without causing injury or significant malfunction. The complexity of current software stacks, hardware, and networking platforms makes this a significant challenge, as do time-to-market pressures, which may cause products to be launched commercially before they have been thoroughly tested.
We perform research into different ways of mastering software complexity, e.g.,
- by using programming languages that are particularly well suited to produce reliable, efficient, or reusable code,
- by using virtualization techniques to allow applications with different criticality or real-time demands to coexist, without interference, alongside less demanding and more complex application software, and
- by using verification, testing, and machine learning techniques to extend the reach of current modeling and verification methods.
Highlights from some current research activities in this field
Driven by multicore processors and heterogeneous hardware platforms as well as large-scale distributed systems, research is being conducted into programming languages, models, and systems aiming to simplify concurrency and distribution, providing scalability and safety properties. This research ranges from theoretical foundations to programming systems proven in wide commercial use.
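One safety property such languages and systems aim for is the absence of data races, often achieved by replacing shared mutable state with message passing between isolated components. As a minimal illustration of that idea (not a sketch of any specific system mentioned above), the following Python fragment confines all mutable state to one worker thread and communicates with it only through queues:

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Owns its state exclusively; communicates only via messages."""
    total = 0
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down and report the result
            outbox.put(total)
            return
        total += msg             # no shared mutable state, so no locks needed

inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in range(1, 6):
    inbox.put(n)                 # send work as messages
inbox.put(None)
t.join()
result = outbox.get()
print(result)                    # sum of 1..5
```

Because the worker never shares its `total` variable, the usual concurrency hazards (races, lock ordering) cannot arise; languages such as Erlang and Rust enforce variants of this discipline at the language level rather than by convention.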
The construction of software yields much more than the final product. Throughout the development process the complex ecosystem of developers, methods and tools generates large amounts of software meta-data. The differences between versions over time, the pace and amount of change, the activity in the issue tracker, and interaction between networks of developers are now commonly captured in software repositories along with the software itself. This provides an opportunity to perform repository mining to help understand software construction and evolution in practice, develop models and make predictions about software construction, and exploit this knowledge in planning future development.
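A simple example of such repository mining is computing code churn per file from version-control history, a common change-proneness signal. The sketch below parses a hypothetical excerpt in the style of `git log --numstat` output (the sample data is invented for illustration):

```python
from collections import Counter

# Hypothetical excerpt in `git log --numstat` style: lines added, lines
# deleted, and file path, tab-separated.
SAMPLE_LOG = """\
12\t3\tsrc/parser.py
5\t0\tsrc/parser.py
1\t1\tREADME.md
40\t22\tsrc/vm.py
2\t2\tsrc/parser.py
"""

def churn_per_file(log: str) -> Counter:
    """Total lines added plus deleted per file across all commits."""
    churn: Counter = Counter()
    for line in log.splitlines():
        added, deleted, path = line.split("\t")
        churn[path] += int(added) + int(deleted)
    return churn

ranking = churn_per_file(SAMPLE_LOG).most_common()
print(ranking)  # files sorted by churn, highest first
```

In practice such counts are combined with issue-tracker activity and developer-network data to build predictive models of defect-prone or fast-evolving modules.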
The expanding use of software, the increasing value of information, and the rising cost of malfunctions drive the need to develop correct software. A key strategy for making this task tractable is to partition the system into small, trustworthy components with limited functionality running alongside large commodity software components that provide little or no assurance. Our long-term vision is the development of secure, formally verified and certified virtualization as a core building block to guarantee component isolation, which can be used to bootstrap the desired system properties even if the commodity software is completely compromised.
With the advent of autonomous systems controlled by software, the requirements on dependability are rapidly increasing. With this, the willingness to pay the increased price of applying formal techniques is slowly making its way into industrial practice, and is being reflected in the safety standards of the avionics and automotive industries. Research is ongoing into techniques and tools that support practical formal verification, as well as into the theoretical foundations of multi-agent and autonomous systems.
Testing has long been the standard technique for software quality assurance, widely accepted and used in industry. However, the ongoing digitalization of society requires testing techniques that scale to geographically distributed and complex systems-of-systems such as autonomous vehicles. Testing must take the whole software stack into account, including networked communication and even hardware. Test automation will play a vital role in effective and rigorous future test processes; one approach under investigation is to apply machine learning to testing.
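One way learning enters test automation is test-case prioritization: running the tests most likely to fail first, so regressions surface early. As a minimal sketch (using a simple historical failure rate as a stand-in for a learned predictor, with invented pass/fail data), prioritization can look like this:

```python
def prioritize(history: dict) -> list:
    """Order test names by observed failure rate, highest first.
    The failure rate here is a stand-in for a learned failure predictor."""
    def failure_rate(test: str) -> float:
        outcomes = history[test]          # True means the test failed
        return sum(outcomes) / len(outcomes)
    return sorted(history, key=failure_rate, reverse=True)

# Hypothetical outcomes over the last four CI runs (True = failed).
history = {
    "test_login":   [False, False, False, False],
    "test_payment": [True,  False, True,  False],
    "test_export":  [False, True,  False, False],
}
order = prioritize(history)
print(order)  # most failure-prone test first
```

A real ML-based prioritizer would replace the failure-rate function with a model trained on richer features (code churn near the tested module, test duration, coverage overlap), but the surrounding loop of predict-then-schedule stays the same.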