TCS Seminar Series

Please tell Stephan Gocht if you want to give a seminar in the series. Seminars are ideally held on Mondays after lunch, but exceptions are possible.

See also information about other seminars, such as our in-depth Theory reading group.

TCS Seminar Series Spring 2018

  • 10 Jul 2018 at 13:15 in 4523
    Posimodular Function Optimization
    (Kaz Makino, Professor at Research Institute for Mathematical Sciences, Kyoto University)

    A function f: 2^V -> R on a finite set V is posimodular if f(X) + f(Y) >= f(X - Y) + f(Y - X) for all X, Y ⊆ V. Posimodular functions often arise in combinatorial optimization; undirected cut functions are a standard example. We consider the problem of finding a nonempty subset X minimizing f(X), when the posimodular function f is given by oracle access. We show that posimodular function minimization requires exponential time, in contrast to the polynomial-time solvability of submodular function minimization, which forms another generalization of cut functions. On the other hand, the problem is fixed-parameter tractable in terms of the size of the image (or range) of f.

    In more detail, we show that Omega(2^{0.3219n} T_f) time is necessary and O(2^{0.92n} T_f) sufficient, where T_f denotes the time for one function evaluation. When the image of f is D = {0, 1, ..., d}, O(2^{1.271d} n T_f) time is sufficient and Omega(2^{0.1609d} T_f) necessary. We can also generate all sets minimizing f in time 2^{O(d)} n^2 T_f.

    Finally, we also consider the problem of maximizing a given posimodular function, showing that it requires at least 2^{n-1} T_f time in general, while it has time complexity Theta(n^{d-1} T_f) when D = {0, 1, ..., d} is the image of f, for integer d.

    This is joint work with Magnus Halldorsson, Toshimasa Ishii, and Kenjiro Takazawa.
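
    As a small sanity check of the definition (our illustration, not part of the talk), the snippet below verifies the posimodular inequality for the cut function of a small, made-up undirected graph:

```python
from itertools import combinations

# Cut function of a small undirected graph on V = {0, 1, 2, 3}:
# f(X) = number of edges with exactly one endpoint in X.
EDGES = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
V = range(4)

def cut(X):
    X = set(X)
    return sum((u in X) != (v in X) for u, v in EDGES)

def subsets(vs):
    vs = list(vs)
    for r in range(len(vs) + 1):
        yield from combinations(vs, r)

# Posimodularity: f(X) + f(Y) >= f(X - Y) + f(Y - X) for all X, Y.
assert all(
    cut(X) + cut(Y) >= cut(set(X) - set(Y)) + cut(set(Y) - set(X))
    for X in subsets(V) for Y in subsets(V)
)
print("cut function is posimodular on this graph")
```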

  • 09 Jul 2018 at 13:15 in 4523
    Counterfactual Mean Embedding: A Kernel Method for Nonparametric Causal Inference
    (Krikamol Muandet, Research group leader at the Empirical Inference Department, Max Planck Institute for Intelligent Systems, Germany)

    In this talk, I will introduce a novel Hilbert space representation of a counterfactual distribution---called counterfactual mean embedding (CME)---with applications in nonparametric causal inference. Counterfactual prediction has become a ubiquitous tool in machine learning applications, such as online advertising, recommender systems, and medical diagnosis, whose performance relies on certain interventions. To infer the outcomes of such interventions, we propose to embed the associated counterfactual distribution into a reproducing kernel Hilbert space (RKHS) endowed with a positive definite kernel. Under appropriate assumptions, the CME allows us to perform causal inference over the entire landscape of the counterfactual distribution. The CME can be estimated consistently from observational data without requiring any parametric assumption about the underlying distributions. We also derive a rate of convergence which depends on the smoothness of the conditional mean and the Radon-Nikodym derivative of the underlying marginal distributions. Our framework can deal not only with real-valued outcomes, but potentially also with more complex and structured outcomes such as images, sequences, and graphs. Lastly, I will demonstrate how we can apply the proposed estimator to off-policy evaluation tasks in online advertising and recommender systems.

    This is joint work with Motonobu Kanagawa (MPI-IS), Sorawit Saengkyongam (Agoda), and Sanparith Marukatat (NECTEC).

  • 02 Jul 2018 at 13:15 in 4523
    Heat kernels in graphs: A journey from random walks to geometry, and back
    (He Sun, senior lecturer, University of Edinburgh)

    Heat kernels are one of the most fundamental concepts in physics and mathematics. In physics, the heat kernel is a fundamental solution of the heat equation and connects the Laplacian operator to the rate of heat dissipation. In spectral geometry, many fundamental techniques are based on heat kernels. In finite Markov chain theory, heat kernels correspond to continuous-time random walks and constitute one of the most powerful techniques in estimating the mixing time. In this talk, we will briefly discuss this line of research and its relation to heat kernels in graphs. In particular, we will see how heat kernels are used to design the first nearly-linear time algorithm for finding clusters in real-world graphs. Some interesting open questions will be addressed in the end.
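
    As a minimal illustration (ours, not from the talk), the snippet below computes the heat kernel H_t = exp(-t L) of a small path graph with numpy and checks that its rows are probability distributions, i.e., that H_t is exactly the transition matrix of the continuous-time random walk mentioned in the abstract:

```python
import numpy as np

# Path graph on 4 vertices: Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

def heat_kernel(L, t):
    # H_t = exp(-t L), via eigendecomposition of the symmetric Laplacian.
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(np.exp(-t * lam)) @ U.T

H = heat_kernel(L, t=1.0)
# Row i of H_t is the distribution of the rate-1 continuous-time random
# walk at time t, started at vertex i: rows sum to 1 (since L @ 1 = 0)
# and all entries are nonnegative.
assert np.allclose(H.sum(axis=1), 1.0)
assert (H >= -1e-12).all()
print("row sums:", H.sum(axis=1).round(6))
```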

  • 21 Jun 2018 at 09:00 in 4523
    Models and systems in motion: Verification and visual analytics for system executions
    (Cyrille Artho, TCS/EECS/KTH)

    Networked systems are difficult to verify. A single component cannot be fully tested in isolation; however, testing the whole system requires orchestrating different types of components and checking their behavior. Because each component is independent, knowledge of the system state is limited to local observations, which cannot be perfectly coordinated.

    We first present a scalable approach to model-based testing, which separates the semantics of a distributed system from its concurrent execution. Our work gives an easy-to-understand trace at a high level, and has found a very subtle defect in Apache ZooKeeper, a widely used cloud computing component. However, there is still a gap when trying to go from the high-level test model to the full complexity of the underlying system.

    To this end, we want to adapt visual analytics to execution traces that are generated by such systems. Unlike normal visualization, visual analytics combines visualization with filtering, and uses different types of graphs to show different aspects of a system. We show a prototype that embodies work in progress for analyzing traces generated by the Java Pathfinder tool. Early results show that we can indeed get a selective overview of events, and that what you hide is more important than what you show.

  • 19 Jun 2018 at 13:15 in room 1537
    Understanding Configuration Constraints
    (Thorsten Berger)

    Software systems often need to be customized -- commonly through configuration options, set at build or startup time, to tailor systems to specific needs (e.g., hardware, functionality, performance). Such highly configurable systems can easily have thousands of options, together with intricate configuration constraints among them. Declaring options and constraints in higher-level representations, so-called variability models, tackles this complexity. Such models facilitate development, configuration, and validation and verification of large, highly configurable systems. Unfortunately, creating and maintaining variability models is laborious and error-prone, given the complexity of the constraints. Automated techniques would be helpful.

    In the talk, I will present our work on extracting configuration constraints from highly configurable systems using static analysis. I will explain the semantics of variability models, how their constraints (the main ingredients of variability models) are represented in source code, and how static analyses to extract them can be conceived. I will show the implementation of the static analysis tooling we conceived as well as the results of empirical studies trying to understand how many and what constraints we can extract from code. Time permitting, I will also show some recent tooling of other researchers building upon our foundations.
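
    To make "options plus constraints" concrete, here is a tiny, made-up kconfig-like example (our illustration, unrelated to the talk's tooling); real systems have thousands of options, which is why automated extraction and solving are needed:

```python
from itertools import product

# Hypothetical Linux-kconfig-style options and constraints, invented
# for illustration: each constraint is a predicate over a configuration.
OPTIONS = ["USB", "USB_STORAGE", "SCSI"]
CONSTRAINTS = [
    lambda c: not c["USB_STORAGE"] or c["USB"],   # USB_STORAGE depends on USB
    lambda c: not c["USB_STORAGE"] or c["SCSI"],  # ... and requires SCSI
]

def valid_configs():
    # Enumerate all assignments and keep those satisfying every constraint.
    for values in product([False, True], repeat=len(OPTIONS)):
        config = dict(zip(OPTIONS, values))
        if all(check(config) for check in CONSTRAINTS):
            yield config

configs = list(valid_configs())
print(f"{len(configs)} of {2 ** len(OPTIONS)} configurations are valid")
```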

  • 18 Jun 2018 at 13:15 in 4523
    Online Bipartite Matching with Amortized O(log^2 N) Replacements
    (Aaron Bernstein, TU Berlin)

    In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one by one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes a shortest augmenting path from the newly inserted vertex (denoted SAP) uses at most amortized O(log^2 n) replacements per insertion, where n is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for any strategy, almost matching the Ω(log n) lower bound. The previous best strategy achieved amortized O(√n) replacements [Bosek, Leniowski, Sankowski, Zych, FOCS 2014]. For SAP in particular, nothing better than the trivial O(n) bound was known except in special cases.

    Joint work with Eva Rotenberg and Jacob Holm at the University of Copenhagen.
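
    A minimal sketch of the SAP strategy in Python (the arrival encoding and replacement counting below are our illustrative choices, not the paper's code): each arriving left vertex triggers a BFS for a shortest augmenting path, and every matched edge changed by the augmentation counts as one replacement.

```python
from collections import deque

def sap_online_matching(arrivals):
    """Online bipartite matching with replacements via SAP.
    arrivals[i] lists the right-side neighbours of left vertex i, which
    arrives at step i.  Returns (left->right matching, #replacements)."""
    match_l, match_r = {}, {}
    replacements = 0
    adj = []
    for u, nbrs in enumerate(arrivals):
        adj.append(list(nbrs))
        # BFS for a shortest augmenting path from the new left vertex u.
        parent = {u: None}   # BFS tree over left vertices
        via = {}             # via[l]: right vertex on the edge parent[l]--l
        q = deque([u])
        free_right = last_left = None
        while q and free_right is None:
            l = q.popleft()
            for r in adj[l]:
                if r not in match_r:          # free right vertex: path found
                    free_right, last_left = r, l
                    break
                l2 = match_r[r]
                if l2 not in parent:
                    parent[l2], via[l2] = l, r
                    q.append(l2)
        if free_right is None:
            continue                          # u stays unmatched for now
        # Augment along the path, counting re-matched (replaced) edges.
        l, r = last_left, free_right
        while l is not None:
            if l in match_l:
                replacements += 1             # l's matched edge is replaced
            match_l[l], match_r[r] = r, l
            l, r = parent[l], via.get(l)
    return match_l, replacements

matching, repl = sap_online_matching([[0, 1], [0]])
print(matching, repl)   # {0: 1, 1: 0} 1
```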

  • 12 Jun 2018 at 12:00 in 4523
    Regression Proving in Large-Scale Verification Projects: Theory and Practice
    (Karl Palmskog, The University of Texas at Austin)

    Many large-scale verification projects rely on proof assistants, such as Coq, to construct and check formal proofs. However, such proofs must be checked after every change to a project to ensure that properties still hold. This process of regression proving can require substantial machine time, which is detrimental to productivity and trust in evolving projects.

    In this talk, I will present some novel techniques to speed up regression proving in large-scale verification projects. Our techniques are based on tracking dependencies between files, definitions, and proofs, and re-proving, in parallel, only those files or proofs affected by a change. Our work is inspired by previous work on analogous techniques for Java-like programming languages; in particular, we build on an analogy between _proofs_ and _tests_.

    As I will describe, we have implemented our techniques in a tool, piCoq, which supports Coq projects. Using piCoq, we measured the speedups compared to proof checking from scratch and when using file-based selection (e.g., using Makefiles) for the revision histories of several large-scale open source Coq projects. The speedups range from 2x all the way up to 28x, depending on the project and degree of parallelism. Our results indicate that our techniques and tool can substantially increase the productivity of proof engineers, especially when they perform regression proving using continuous integration services such as Travis CI on GitHub.

    Based on work published in ASE '17, and work to appear in ICSE-Demo '18, and ISSTA '18.
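
    The file-level selection idea can be sketched as a reverse-dependency closure: re-check exactly the changed files and everything that transitively depends on them. The snippet below is our own illustration (the Coq file names are made up, and this is not piCoq's actual API):

```python
def affected(deps, changed):
    """deps maps each file to the files it depends on.  Return the set of
    files whose proofs must be re-checked after `changed` files changed:
    the changed files plus everything that transitively depends on them."""
    # Invert the dependency graph: who depends on me?
    rdeps = {}
    for f, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, set()).add(f)
    todo, seen = list(changed), set(changed)
    while todo:
        f = todo.pop()
        for g in rdeps.get(f, ()):
            if g not in seen:
                seen.add(g)
                todo.append(g)
    return seen

# Hypothetical Coq project layout (names invented for illustration):
deps = {
    "Lists.v": [],
    "Sorting.v": ["Lists.v"],
    "Permutations.v": ["Lists.v"],
    "Main.v": ["Sorting.v", "Permutations.v"],
}
print(sorted(affected(deps, {"Sorting.v"})))  # ['Main.v', 'Sorting.v']
```

    Everything outside the closure (here Lists.v and Permutations.v) can be skipped or, with finer proof-level tracking, re-checked in parallel.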

  • 04 Jun 2018 at 14:30 in 4523
    Model Checking: the Interval Way
    (Angelo Montanari, University of Udine, Italy)

    Model checking with interval temporal logics is emerging as a viable alternative to model checking with standard point-based temporal logics, such as LTL, CTL, CTL*, and the like. The behavior of the system is modelled by means of (finite) Kripke structures, as usual. However, while temporal logics which are interpreted "point-wise" describe how the system evolves state-by-state, and predicate properties of system states, those which are interpreted "interval-wise" express properties of computation stretches, spanning a sequence of states. A proposition letter is assumed to hold over a computation stretch (interval) if and only if it holds over each component state (homogeneity assumption).

    The most well-known interval temporal logic is Halpern and Shoham's modal logic of time intervals HS, which features one modality for each possible ordering relation between a pair of intervals, apart from equality. In the seminar, we provide an overview of the main results on model checking with HS and its fragments under the homogeneity assumption. In particular, we show that the problem turns out to be non-elementarily decidable and EXPSPACE-hard for full HS, but it is often computationally much better for its fragments. We conclude by briefly comparing the expressiveness of HS in model checking with that of LTL, CTL, and CTL*, and by outlining a generalization of the proposed model checking framework that allows one to use regular expressions to define the behavior of proposition letters over intervals in terms of the component states.

  • 28 May 2018 at 13:15 in 4523
    Correctness Attraction: A Study of Stability of Software Behavior Under Runtime Perturbation
    (Martin Monperrus, KTH)

    Can the execution of a software be perturbed without breaking the correctness of the output? In this paper, we devise a novel protocol to answer this rarely investigated question. In an experimental study, we observe that many perturbations do not break the correctness in ten subject programs. We call this phenomenon "correctness attraction". The uniqueness of this protocol is that it considers a systematic exploration of the perturbation space as well as perfect oracles to determine the correctness of the output. To this extent, our findings on the stability of software under execution perturbations have a level of validity that has never been reported before in the scarce related work. A qualitative manual analysis enables us to set up the first taxonomy ever of the reasons behind correctness attraction.
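
    A toy version of the protocol (our own illustration, far simpler than the paper's study): perturb one comparison in a running-max computation and ask a perfect oracle whether the output is still correct. Some perturbation points break the output, others are "attracted" back to correctness.

```python
def my_max(xs, flip_at=-1):
    """Running max with an optional one-shot perturbation: the comparison
    numbered `flip_at` (0-based) has its outcome inverted."""
    best, comparisons = xs[0], 0
    for x in xs[1:]:
        taken = x > best
        if comparisons == flip_at:
            taken = not taken          # the injected perturbation
        if taken:
            best = x
        comparisons += 1
    return best

xs = [1, 5, 3, 9, 2]
oracle = max(xs)                       # "perfect oracle" for correctness
outcomes = [my_max(xs, k) == oracle for k in range(len(xs) - 1)]
print(outcomes)   # [True, True, False, False]
```

    Here half of the systematically explored perturbation points leave the output correct, because later iterations re-establish the invariant.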

  • 15 May 2018 at 13:15 in 4523, Lindstedtsvägen 5 - floor 5
    Spoon - Source Code Analysis and Transformation for Java
    (Martin Monperrus, KTH)

    Spoon is an open-source library to transform and analyze Java source code. Spoon provides a complete and fine-grained metamodel where any program constructs (classes, methods, fields, statements, expressions...) can be accessed both for reading and modification. Spoon can be used for checking coding and architectural rules, instrumentation before dynamic analysis, code generation, software metrics, transpilation and more. This talk will present the key concepts and usages of the Spoon library.

  • 14 May 2018 at 13:15 in 4523
    Excitement, frustration and hope in today’s machine learning area
    (Saikat Chatterjee)

    All of us are probably aware of the term 'deep learning'. Deep learning, mainly in the form of deep neural network architectures, provides unprecedented performance in classification and prediction. The excitement is evident in all conferences and workshops. However, it comes with great challenges in understanding the material, and eventual frustration. We find little use of mathematics and of knowledge from the vast area of signals and systems. In this seminar, I will discuss some issues in neural networks: for example, why a neural network has a standard architecture; regularization and overfitting issues; why convolutional neural networks became so famous; certain things that are very difficult to understand (for example, dropout); a very simple network with very good performance that almost anybody can implement within 10 minutes (called the extreme learning machine), and still frustration; etc. Then we will look for some hope. I will show some examples where we can use knowledge from signals and systems to develop new architectures and systems.
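
    A sketch of the extreme learning machine mentioned in the abstract, in a few lines of numpy (the target function and the choice of 50 tanh features are our arbitrary illustration): the hidden layer is random and never trained, and only the output weights are learned, by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit y = sin(x) on [0, 2*pi] with 50 random tanh features.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

W = rng.normal(size=(1, 50))     # random input weights (never trained)
b = rng.normal(size=50)          # random biases (never trained)
H = np.tanh(X @ W + b)           # hidden activations

# Only the readout is learned: solve min_beta ||H beta - y||_2.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

err = np.sqrt(np.mean((H @ beta - y) ** 2))
print(f"training RMSE: {err:.4f}")
assert err < 0.1
```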

  • 09 May 2018 at 14:15 in 4523, Lindstedtsvägen 5 - floor 5
    Language support for geo-replicated systems
    (Xin Zhao, KTH)

    Geo-replicated systems play a crucial role in modern software infrastructure, but they are notoriously difficult to implement correctly and efficiently. Highly available distributed systems must weaken the consistency of distributed data due to fundamental impossibility results. However, programming with weakly consistent data is difficult, even for experts. What's more, many distributed applications are developed by composing existing systems that provide various consistency models, making it even more difficult to reason about the consistency of distributed data. This talk aims to address the difficulty of providing consistency guarantees for distributed systems and the gap between application development and system architecture. First, I will review existing approaches for distributed consistency protocols and languages. Then, I will introduce my ongoing work on a novel protocol for observable atomic consistency, including its definition, a system implementing the protocol, as well as initial experimental results. Finally, I will give an overview of my plans for future work.

    This is Xin's 30% seminar.

  • 09 May 2018 at 11:00 in OV6, floor 4, Osquldas väg 6
    [KTH Security Seminar] Browser fingerprinting: past, present and possible future
    (Pierre Laperdrix, Stony Brook University)

    This is a KTH Security Seminar.

    Browser fingerprinting has grown a lot since its debut in 2010. By collecting specific information in the browser, one can learn a lot about a device and its configuration. It has been shown in previous studies that it can even be used to track users online, bypassing current tracking methods like cookies.

    In this presentation, we will look at how this technique works and present an overview of the research performed in the domain. We will then see how this technique is currently used online before looking at its possible future.

  • 08 May 2018 at 13:15 in 1440 (library), Lindstedtsvägen 5 - floor 4
    Concurrent games and semi-random determinacy.
    (Stéphane Le Roux)

    Consider concurrent, infinite duration, two-player win/lose games played on graphs. If the winning condition satisfies some simple requirement, the existence of Player 1 winning (finite-memory) strategies is equivalent to the existence of winning (finite-memory) strategies in finitely many derived one-player games. Several classical winning conditions satisfy this simple requirement.

    Under an additional requirement on the winning condition, the non-existence of Player 1 winning strategies from all vertices is equivalent to the existence of Player 2 stochastic strategies winning almost surely from all vertices. Only a few classical winning conditions satisfy this additional requirement, but a fairness variant of omega-regular languages does.

  • 07 May 2018 at 14:30 in 4523
    Reaching the Unreached and their Empowerment through ICT and other Scalable Methods
    (Kannan Moudgalya, IIT Bombay)

    The speaker will present a summary of three successful projects carried out from IIT Bombay in the general area of ICT training.

    Spoken Tutorial is a method to provide ICT training through 10-minute-long videos created through screencasts. These tutorials are created for self-learning; the spoken part is dubbed into 22 Indian languages; and the tutorials can be used offline, which makes them accessible to students who do not have access to the Internet. We work only with open source software. Using this method, 4 million students have been trained in the past four years, in about 5,000 colleges in India, affiliated to about 100 universities, through 36,000 lab courses. It has received several awards, including a Google MOOCs Focused Research Award.

    FOSSEE stands for Free and Open Source Software for Education. While Spoken Tutorial looks at training the bottom of the pyramid, FOSSEE looks at training at a higher level, and also at content generation through crowdsourcing. This project works on niche software tools in various engineering domains, some examples being Scilab, Python, eSim (electronic circuit simulation), DWSIM (chemical process flowsheeting), OpenFOAM (computational fluid dynamics), OpenModelica (general purpose modelling), and Osdag (steel structure design). Through Textbook Companions, students from across the country have generated one of the largest sources of solved examples using Scilab and Python.

    The talk will also introduce an affordable laptop that the speaker has been promoting. At Rs. 10,000, it is an excellent laptop for most engineering students. A demo of this laptop will be given during the talk.

  • 07 May 2018 at 13:15 in Room 4523, Lindstedtsvägen 5
    The Verificatum Project 10-year Anniversary
    (Douglas Wikström, KTH)

    Register for cake before May 2.

    This is a KTH Security Seminar.

    A mix-net is a way for a group of servers to decrypt a set of ciphertexts in such a way that it is infeasible to find any correspondence between the individual inputs and output plaintexts. This idea originates in the seminal work of Chaum from 1982, and the main application is electronic elections and related notions.

    The Verificatum Mix-Net (VMN) was the first fully distributed implementation of a provably secure mix-net. The first version was completed in 2008. Today it is licensed under the AGPL.

    Some key cryptographic protocols were developed at KTH, but the main contribution is what can be learned from the project as a whole and its application in real electronic voting systems. VMN is completely faithful to cryptographic theory, yet adopts best practices from real-world software engineering.

    This was one of the first research projects with this focus (2007) and paved the way to what today is often referred to as real-world crypto. It is the only implementation of a mix-net from the research community that has been considered fit for use by national election authorities.

    We first give an overview of a class of electronic voting systems and explain the role of mix-nets in the greater whole. Then we describe the state of the project today. We also give a historical account with a focus on mistakes and what we can learn from them.

    Then we take a pause. The historical account leads us in a natural way into difficult tradeoffs and the uncharted territory where theory and practice are supposed to meet. This may be a Q&A session or a more mutual discussion, depending on what the audience prefers.

    You are all very welcome!

  • 04 May 2018 at 11:00 in 4523, Lindstedtsvägen 5 - floor 5
    On symbolic heaps modulo permission theories
    (Stéphane Demri)

    In program verification, proving properties related to pointer manipulation and memory management is usually quite cumbersome due to possible aliasing. Separation logic is a framework for proving heap-manipulating programs that aims at simplifying proofs of such programs. It is also a program logic for concurrent heap-manipulating programs in which the "ownership" of a memory region is formalized by separation. Separation logic with permissions is an extension of separation logic that can express that the ownership of a given heap region is shared with other threads. In this work, we study the entailment problem for separation logic with symbolic heaps admitting list predicates and permissions. Herein, we design new decision procedures that are parameterised by the permission theories. This permits the use of solvers dealing with the permission theory at hand, independently of the shape analysis. We prove that the entailment problem is coNP-complete when the entailment problem for permission formulae is in coNP, assuming the write permission can be split into as many read permissions as desired. This is joint work with Etienne Lozes (I3S, Nice) and Denis Lugiez (LIS, Marseille).

  • 25 Apr 2018 at 13:15 in room 4523, Lindstedtsvägen 5 - floor 5
    ASTOR: A Program Repair Library for Java
    (Matias Martinez, University of Valenciennes)

    In recent years, the software engineering research community has proposed approaches for automatically repairing software bugs. Unfortunately, many software artifacts born from this research are not available for repairing Java programs, and reimplementing those repair approaches from scratch is costly. In this talk we will present Astor, a publicly available program repair library that includes Java implementations of 6 repair approaches, including one of the pioneers: GenProg. Astor provides 12 extension points that form the explicit decision space of program repair. Researchers have already been defining new approaches over Astor, proposing improvements of the built-in approaches via the extension points, and executing approach implementations from Astor in their evaluations. The repair approaches built on Astor are capable of repairing, in total, 98 real bugs from 5 large Java programs.

  • 24 Apr 2018 at 13:15 in 4523, Lindstedtsvägen 5
    Tensor rank is hard to approximate
    (Joseph Swernofsky, KTH)

    My group researches approximation algorithms and hardness of approximation. Tensor rank is an NP-hard generalization of matrix rank. In this talk I will show that it is also NP-hard to approximate the rank of a tensor to within some constant factor C. I do this by a reduction from bounded-occurrence MAX-2SAT.

    This is original research from my first year and a half at KTH and counts as my 30% seminar.

  • 16 Apr 2018 at 10:55 in Lindstedtsvägen 3, room 1440 (library)
    Test suite minimization and tree-structured test models in combinatorial testing
    (Takashi Kitamura, Senior Researcher at AIST, Japan)

    In this talk, we explain two features of our combinatorial testing tool, called Calot. The first feature is minimization of combinatorial t-way test suites. The size of a test suite is an important practical concern, as the number of test cases directly or indirectly affects the cost of testing. We propose a technique to minimize combinatorial t-way test suites using SAT solving, in particular incremental SAT solving, and we evaluate its performance experimentally against existing techniques. The second feature is a tree-structured modeling technique. Tree-structured modeling languages, such as classification trees, are widely used in industry to describe test models in combinatorial testing. However, in our case studies the current tools that support such tree-structured modeling do not perform well in test generation, e.g., with respect to computation time and the sizes of generated test suites. Moreover, the actual algorithms for such tree-structured models are not made explicit in existing studies. We explain our algorithm, called the flattening algorithm, and apply a tool based on it to an actual ticket gate system that is used by several large transportation companies in Japan. Experimental results show that our technique outperforms existing techniques.
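
    To make the t-way notion concrete for t = 2, here is an illustrative Python check (our sketch, unrelated to Calot's implementation) that a test suite achieves pairwise coverage: every value pair of every parameter pair must appear in some test.

```python
from itertools import combinations, product

def covers_all_pairs(suite, domains):
    """True iff `suite` covers every value pair of every parameter pair."""
    k = len(domains)
    for i, j in combinations(range(k), 2):
        needed = set(product(domains[i], domains[j]))
        got = {(t[i], t[j]) for t in suite}
        if needed - got:
            return False
    return True

# 3 binary parameters: 4 tests suffice for pairwise coverage,
# while an exhaustive suite would need 2^3 = 8 tests.
domains = [(0, 1)] * 3
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert covers_all_pairs(suite, domains)
assert not covers_all_pairs(suite[:3], domains)
print("4 tests give full pairwise coverage of 3 binary parameters")
```

    Minimization asks for the smallest such suite; SAT-based approaches encode "a suite of size N covers all pairs" as a satisfiability query and shrink N.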

  • 16 Apr 2018 at 10:00 in Lindstedtsvägen 3, room 1440 (library)
    Using a satisfiability solver to analyze cascading failures in electric power systems
    (Tatsuhiro Tsuchiya, Professor at Osaka University, Japan)

    A Boolean satisfiability solver is a versatile tool for combinatorial problems. Its applications include model checking, test case generation, digital tomography, etc. In this talk, we propose a new application: an analysis of cascading failures in electric power systems. We consider a power system consisting of two interdependent networks, namely, a power line network and a SCADA network. We assume a failure model that represents failure propagation as a result of the interaction between these two networks. Given a condition on the initial attack, we first construct a Boolean formula that symbolically represents all possible failure propagation scenarios; then we identify the worst-case scenario by solving the satisfiability of the formula.
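
    A toy version of the approach (the encoding below is a made-up propagation rule for illustration, not the encoding from the talk): represent failure propagation as clauses and ask a brute-force satisfiability check whether a scenario is possible.

```python
from itertools import product

def satisfiable(clauses, n):
    """Tiny brute-force SAT check over n variables.  Clauses are lists of
    non-zero ints: literal v means variable v is true, -v means false
    (DIMACS-style).  A real analysis would call an off-the-shelf solver."""
    for bits in product([False, True], repeat=n):
        def holds(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(holds(l) for l in c) for c in clauses):
            return True
    return False

# Hypothetical propagation rule: node 2 fails if node 1 fails and their
# SCADA link is down.  Variables: 1 = node 1 fails, 2 = link down,
# 3 = node 2 fails.  Encode (x1 AND x2) -> x3 plus the initial attack x1.
clauses = [[-1, -2, 3], [1]]
assert satisfiable(clauses, 3)                     # some scenario exists
# "Node 2 survives although the link is down" contradicts the rule:
assert not satisfiable(clauses + [[2], [-3]], 3)
print("the cascade to node 2 is unavoidable once the link is down")
```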

  • 13 Apr 2018 at 11:00 in 4523, Lindstedtsvägen 5
    Code Defenders: Crowdsourcing Effective Tests and Subtle Mutants with a Mutation Testing Game
    (José Miguel Rojas, University of Leicester)

    Writing good software tests is difficult and not every developer’s favorite occupation. Mutation testing aims to help by seeding artificial faults (mutants) that good tests should identify, and test generation tools help by providing automatically generated tests. However, mutation tools tend to produce huge numbers of mutants, many of which are trivial, redundant, or semantically equivalent to the original program; automated test generation tools tend to produce tests that achieve good code coverage, but are otherwise weak and have no clear purpose. In this paper, we present an approach based on gamification and crowdsourcing to produce better software tests and mutants: The Code Defenders web-based game lets teams of players compete over a program, where attackers try to create subtle mutants, which the defenders try to counter by writing strong tests. Experiments in controlled and crowdsourced scenarios reveal that writing tests as part of the game is more enjoyable, and that playing Code Defenders results in stronger test suites and mutants than those produced by automated tools.
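
    The mutant-versus-test duel at the heart of the game can be illustrated in a few lines (a hypothetical example of ours, not Code Defenders code): a subtle boundary mutant survives a weak test but is killed by a test that probes the boundary.

```python
def original(a, b):
    return a <= b

def mutant(a, b):
    return a < b                    # subtle mutant: '<=' became '<'

def kills(test, impl):
    """A test kills a mutant if it passes on the original but fails on
    the mutant, i.e. it observes the behavioural difference."""
    return test(original) and not test(impl)

weak_test   = lambda f: f(1, 2) and not f(3, 2)   # misses the boundary
strong_test = lambda f: f(2, 2)                   # probes a == b

assert not kills(weak_test, mutant)   # mutant survives the weak test
assert kills(strong_test, mutant)     # killed: outputs differ on a == b
print("only the boundary test kills the off-by-one mutant")
```

    In the game, attackers aim for mutants that survive (like the one above against weak tests) and defenders for tests that kill them.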

  • 26 Mar 2018 at 13:15 in Room 4523, Lindstedtsvägen 5 - floor 5
    An ownership-based order reduction theorem for concurrent systems.
    (Jonas Oberhauser, Saarland University)

    The vast number of preemptive interleavings in concurrent code makes reasoning about such code hard, if not impossible. Instead one would like to have cooperative schedulers, where a thread is only preempted when it reaches designated interleaving points, thereby considerably reducing the number of interleavings to be considered. Sadly, multi-core processors do not behave this way, and can preempt the execution of a thread at instruction level; these additional interleavings often introduce new behaviours that may break the program.

    We show a set of software conditions under which a program that has been analyzed under the assumption of a cooperative scheduler will not show new behaviours when executed by a preemptive scheduler. The conditions only have to be satisfied when the program is run by a cooperative scheduler, and so preemptive scheduling becomes completely invisible.

    Our conditions also work in the presence of interrupt handler threads for inter-processor interrupts, which in a preemptive scheduler can preempt the interrupted thread after any instruction but on a cooperative scheduler only at the designated interleaving points. Such interrupt handler threads are particularly exciting because unlike other threads, they access and modify local state of the preempted thread, and some of these modifications remain visible when the preempted thread is resumed. One of our key results is a new ownership formalism, which is both more permissive than existing ownership policies for order reduction, and allows us to deal effectively with interrupt handler threads.

  • 27 Feb 2018 at 13:15 in 4523, Lindstedtsvägen 5 - floor 5
    System-level non-interference of constant-time cryptography
    (Juan Diego Campo)

    Cache-based attacks are a class of side-channel attacks that are particularly effective in virtualized or cloud-based environments, where they have been used to recover secret keys from cryptographic implementations. One common approach to thwart cache-based attacks is to use constant-time implementations, which do not branch on secrets and do not perform memory accesses that depend on secrets. However, there was no rigorous proof that constant-time implementations are protected against concurrent cache attacks in virtualization platforms; moreover, many prominent implementations are not constant-time.

    We will show a new information-flow analysis that checks if an x86 application executes in constant-time and prove that constant-time programs do not leak confidential information through the cache to other operating systems executing concurrently on virtualization platforms. The soundness proofs are based on new theorems of independent interest, including isolation theorems for virtualization platforms, and proofs that constant-time implementations are non-interfering with respect to a strict information flow policy which disallows that control flow and memory accesses depend on secrets. We formalize our results using the Coq proof assistant and we demonstrate the effectiveness of our analyses on cryptographic implementations, including PolarSSL AES, DES and RC4, SHA256 and Salsa20.
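
    The constant-time discipline itself can be illustrated in a few lines of Python (an illustration of the idea only; the implementations analysed in the talk are x86 code): no branch or memory access may depend on secret data.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time leaks the length of the matching prefix,
    # a secret-dependent branch that a cache/timing attacker can exploit.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # Constant-time style: always scan everything and accumulate the
    # differences, so control flow does not depend on the secret bytes.
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y
    return acc == 0

secret = b"s3cr3t-mac-tag!!"
assert ct_equal(secret, secret)
assert not ct_equal(secret, b"s3cr3t-mac-tag!?")
# The standard library offers a vetted constant-time comparison:
assert hmac.compare_digest(secret, secret)
```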

  • 22 Feb 2018 at 15:00 in 4523, Lindstedtsvägen 5 - floor 5
    Optimal Quasi-Gray Codes: Efficient Generation using Simplicity of Even Permutations
    (Diptarka Chakraborty)

    A quasi-Gray code of dimension $n$ and length $\ell$ over an alphabet $\Sigma$ is a sequence of distinct words $w_1,w_2,\dots,w_\ell$ from $\Sigma^n$ such that any two consecutive words differ in at most $c$ coordinates, for some fixed constant $c>0$. In this paper we are interested in the read and write complexity of quasi-Gray codes in the cell-probe model, where we measure the number of symbols read and written in order to transform any word $w_i$ into its successor $w_{i+1}$.

    We present a construction of quasi-Gray codes of dimension $n$ and length $3^n$ over the ternary alphabet $\{0,1,2\}$ with worst-case read complexity $O(\log n)$ and write complexity $2$. This generalizes to arbitrary odd-size alphabets. For the binary alphabet, we present quasi-Gray codes of dimension $n$ and length at least $2^n - 20n$ with worst-case read complexity $6+\log n$ and write complexity $2$.

    Our results significantly improve on previously known constructions and for the odd-size alphabets we break the $\Omega(n)$ worst-case barrier for space-optimal (non-redundant) quasi-Gray codes with constant number of writes. We obtain our results via a novel application of algebraic tools together with the principles of catalytic computation.
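    To illustrate the definition (a minimal sketch, not the construction from the talk): the classic binary reflected Gray code is a quasi-Gray code with $c = 1$, but naively computing the successor reads the whole word, i.e. $n$ symbols in the worst case; beating that read complexity is exactly what the construction above achieves.

```python
def reflected_gray_code(n):
    # Classic binary reflected Gray code: enumerates all 2^n words so that
    # consecutive words differ in exactly one coordinate (c = 1).
    return [i ^ (i >> 1) for i in range(2 ** n)]

def hamming_distance(a, b):
    # Number of coordinates in which two words differ.
    return bin(a ^ b).count("1")
```

    For example, `reflected_gray_code(4)` lists all 16 binary words of length 4, each differing from its successor in a single bit.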

  • 21 Feb 2018 at 09:15 in Room 4523, Lindstedtsvägen 5
    Students’ Experiences of Assessment and Pair Programming
    (Emma Riese, KTH)

    As teachers, we can use assessment and make certain methods mandatory in order to guide the students to the most important parts of the course material and to give them tools for working with the material. How we use assessment and which methods we introduce could, therefore, be said to have an impact on the students' learning. Understanding the students' experiences of the assessment and methods introduced could give us insights into how to further improve our courses in order to enhance learning among our students. During this seminar, Emma will present the results from two of her studies. Both studies focus on students' experiences, but from different aspects and in different contexts. The first study uses a phenomenographic approach to categorize students' experiences of assessment throughout an online course. The second study maps out differences in students' experiences from a voluntary and a mandatory introduction, respectively, to pair programming in two campus courses.

  • 14 Feb 2018 at 14:00 in Room 1537, Lindstedtsvägen 3
    Trustworthy execution and temporal isolation
    (Andreas Lindner)

    Software is becoming a pervasive part of modern cyber-physical systems. Examples are control systems in mobile phones, cars, avionics, space equipment and even nuclear reactors, all of which require high assurance. The cost-effective implementation of such systems requires extensive sharing of hardware platforms, leading to mixed-criticality systems. These allow for reusing off-the-shelf software and mixing software components of different provenance. Our approach to provide high assurance for such systems is the application of formal methods to obtain machine-checkable proofs of sound separation of criticality domains on real hardware. This task is generally laborious, and we therefore develop automation strategies for reasoning about the existing complex hardware execution models. As a first step towards this goal, we designed and implemented a memory isolation platform based on existing memory protection hardware for microcontrollers. The platform's minimal design and support for real-time execution make formal verification feasible. As a second step, we designed and implemented a proof-producing transpiler for assisting the binary analysis of machine code in a theorem prover. This tool eases the manual steps involved in reasoning about code running on hardware. Unlike other approaches such as verifying compilers, we can use standard build tools and reason about code that changes hardware configurations. The latter is required for system software that changes the memory configuration. Currently, we are extending the transpiler with an analysis algorithm and applying it to an AES encryption binary to establish functional correctness. In this talk, I will first discuss our results so far, then introduce my plans for periodically scheduled systems and discuss them in the context of temporal isolation. The expected outcome is a temporal isolation platform enabling mixed-criticality applications.

  • 05 Feb 2018 at 13:15 in room 4523
    Securing Applications by Information Flow Tracking
    (Musard Balliu, KTH)

    Society increasingly relies on applications that handle sensitive information: private individuals and businesses all use software applications that manipulate confidential or untrusted data. My research makes it easier to build applications that handle sensitive data securely and uncover vulnerabilities in existing applications. To achieve this, I use programming language concepts and methods to develop tools and techniques that allow programmers to express application-specific security policies and enforce those policies efficiently, ultimately providing security guarantees that rely on solid foundations.

    In this talk I will give an overview of my research interests with special focus on web application security. I will present some recent work on how to achieve end-to-end web application security by tracking information flows through the client, the server, and the database. Afterwards, I will discuss a combination of formal and empirical methods to find vulnerabilities in existing web applications. Finally, I will conclude with a few highlights of my future research directions in the areas of security and privacy. The talk is self-contained and, for the most part, no prior knowledge is required.

  • 26 Jan 2018 at 11:30 in 4423
    Residual Investigation: Predictive and Precise Bug Detection
    (Christoph Reichenbach)

    There are many static program analyses that can identify potential program faults. However, such analyses are notorious for producing false positives, which substantially reduces their utility to end users. To address this problem, we introduce "residual investigation", a technique for refining static analysis results through predictive dynamic analysis.

    The purpose of residual investigation is to find evidence that indicates whether the statically predicted program fault is likely to be realisable and relevant. The key feature of a residual investigation is that it has to be much more precise (i.e., with fewer false warnings) than the static analysis alone, yet significantly more general (i.e., reporting more errors) than the dynamic tests in the program's test suite that are pertinent to the statically reported error. That is, a good residual investigation encodes dynamic conditions that, when considered in conjunction with the static error report, increase confidence in the existence or severity of an error without needing to directly observe a fault resulting from the error.

    We have enhanced the static analysers FindBugs and JChord with residual investigations, appropriately tuned to the analysers' static error patterns. Examining a number of open source systems, we find that residual investigation is effective at reducing the false positive rate.

  • 25 Jan 2018 at 13:15 in 4423
    Sum of squares lower bounds from symmetry and a good story
    (Aaron Potechin)

    The sum of squares hierarchy is a hierarchy of semidefinite programs which has the three advantages of being broadly applicable (it can be applied whenever the problem can be phrased in terms of polynomial equations over R), powerful (it captures the best known algorithms for several problems including max cut, sparsest cut, and unique games), and in some sense, simple (all it is really using is the fact that squares are non-negative over R). The sum of squares hierarchy can also be viewed as the Positivstellensatz proof system.
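    A toy example (not from the talk) of how "squares are non-negative over R" yields a certificate: to prove that a polynomial is globally nonnegative, it suffices to write it as a sum of squares, e.g.

```latex
% A degree-2 sum-of-squares certificate: expanding the right-hand side
% recovers the left-hand side, and each summand is a square, so the
% polynomial is nonnegative everywhere on R^2.
\[
  2x^2 - 2xy + y^2 \;=\; (x - y)^2 + x^2 \;\ge\; 0 .
\]
```

    The sum of squares hierarchy searches for such certificates of bounded degree via semidefinite programming.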

    However, the sum of squares hierarchy does have a few known weaknesses. One such weakness, as shown by Grigoriev's sum of squares lower bound on the knapsack problem, is that the hierarchy is poor at capturing integrality arguments. In this talk, we further explore this weakness for symmetric problems. In particular, we discuss the following Turán-type problem, whose true solution was proved by Razborov: minimize the number of triangles a graph of a given density contains. We then describe a pretend solution (called pseudo-expectation values) for this problem and describe the features of our pretend solution which allow us to show that it fools the sum of squares hierarchy.

    Note: Prior knowledge of the sum of squares hierarchy is helpful but will not be assumed.

  • 19 Jan 2018 at 14:15 in 1440 (library), Lindstedtsvägen 5 - floor 4
    Verifying Concurrent Programs with VerCors
    (Wytse Oortwijn, University of Twente)

    In this talk, I will give an overview of the VerCors tool set and of research that is related to the VerCors project. VerCors is a tool set for practical mechanised verification of parallel and concurrent software under different concurrency models, notably heterogeneous and homogeneous threading. VerCors has verification support for multiple high-level languages with concurrency features, including Java, OpenCL (i.e. GPU kernels), and OpenMP for C (i.e. compiler directives for automated parallelisation). During the talk I will explain the verification capabilities of VerCors and demonstrate this on several example verification problems.

  • 19 Jan 2018 at 11:30 in 4523, Lindstedtsvägen 5
    Scalable Fine-Grained Dynamic Binary Analysis -- Methods and Applications
    (Ulf Kargén)

    Being able to analyze binary (i.e. machine code) programs is important, for example when auditing closed-source applications for security flaws, or for defending against malicious code. Static analysis is challenging to apply on the binary level due to e.g. the lack of type information and structured control-flow. Dynamic analysis, on the other hand, can provide fine-grained insight into the behavior of a binary. However, scalability often becomes an issue when performing fine-grained dynamic analysis due to the large data volumes produced, even for modestly sized programs and inputs. In this talk, I will briefly describe some of our work on scalable dynamic analysis and its applications, with a focus on security. I will devote some extra time to our work on aligning instruction traces, which was recently presented at ASE’17. I will also discuss potential approaches to generalize our trace alignment method to align e.g. Java bytecode traces.

  • 18 Jan 2018 at 13:15 in Room 4523
    Expected time complexity of QuickSelect (Hoare’s selection)
    (Bengt Aspvall, Blekinge Tekniska högskola)

    Analyzing the expected time complexity of QuickSelect (Hoare's selection), as used in order statistics, is quite involved. Many researchers have analysed the algorithm over the years. Averaged over all element ranks, the expected number of comparisons is 3n + O(log n). We show how the result can be derived in a simple and intuitive way when the algorithm is introduced. Our approach is "self-contained", quite short, and easy enough to be included in a lecture or a textbook.
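    For reference, a minimal Python sketch of the algorithm under analysis (random pivot, list-based three-way partitioning chosen for clarity; the talk analyses the number of comparisons, which this sketch does not count):

```python
import random

def quickselect(a, k):
    # Hoare's selection: return the k-th smallest element of a (0-indexed).
    # A uniformly random pivot gives expected linear time; averaged over
    # all ranks k, the expected number of comparisons is 3n + O(log n).
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    n_equal = sum(1 for x in a if x == pivot)
    if k < len(less):
        return quickselect(less, k)
    if k < len(less) + n_equal:
        return pivot
    return quickselect([x for x in a if x > pivot], k - len(less) - n_equal)
```

    For example, `quickselect([5, 1, 4, 2, 3], 2)` returns the median, 3.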

Previous years' seminars.
