TCS Seminar Series

Please tell Stephan Gocht if you want to give a seminar in the series. Seminars are ideally held on Mondays after lunch, but exceptions are possible.

See also information about informal PhD student seminars.

TCS Seminar Series 2017

  • 18 Dec 2017 at 13:15 in 4523, Lindstedtsvägen 5
    Verification of data-centric dynamic systems (and some related aspects)
    (Riccardo De Masellis)

    Data-centric dynamic systems are theoretical frameworks composed of agents and a data component. Each agent is characterized by a (possibly different) lifecycle model specifying its behavior, namely, when it can perform its actions, and the data component is a relational database or a knowledge base. At each time instant, the system is described by the current state of the agents as well as the state of the data: thus, when actions modify the data, the system evolves to a new state. Given their generality, data-centric dynamic systems can be used for modeling multi-agent systems, robots, (company) business processes or software components. The goal is to verify that a data-centric dynamic system meets temporal specifications of interest (expressed as temporal logic formulas): this can be done offline (à la model checking) or at runtime. The challenges are both theoretical and technical: for example, one needs to abstract from the infinitely many values that can populate the system. Also, in many practical cases agent lifecycles eventually terminate, making the traditional infinite-trace semantics for temporal logics inadequate. Finally, actual tools are needed to perform the verification tasks. The seminar will provide a high-level overview of data-centric dynamic system frameworks, their challenges and solutions, and some application scenarios.

  • 05 Dec 2017 at 15:15 in room 4523
    Model Checking Strategic Ability: Why, What, and Especially: How?
    (Wojtek Jamroga)

    Automated verification of discrete-state systems has been a hot topic in computer science for over 35 years. Model checking of temporal and strategic properties is one of the most prominent and most successful approaches here. I will present a brief introduction to the topic, and mention some relevant properties that one might like to verify this way. Then, I will talk about some recent results on approximate model checking and model reductions, which can be applied to facilitate verification of notoriously hard cases.

  • 24 Nov 2017 at 11:15 in Seminar room 4523, Lindstedtsvägen 5, level 5
    Machines that read and reason: Multi-hop reading comprehension across documents
    (Pontus Stenetorp, Senior Research Associate at University College London (UCL) and former student at KTH)

    While being one of the most active areas of research in Natural Language Processing, existing Reading Comprehension (RC) datasets — SQuAD, TriviaQA, etc. — are dominated by queries that can be answered based on the content of a single paragraph or document. However, enabling models to combine pieces of textual information from different sources would drastically extend the scope of RC – effectively allowing models to even answer queries whose answers are never explicitly stated. In this talk, we will introduce a novel Multi-hop RC task, [1] where a model has to learn how to find and combine disjoint pieces of textual evidence, effectively performing multi-step (alias multi-hop) inference. We present two datasets, WikiHop and MedHop, from different domains — both constructed using a unified methodology. We will then discuss the behaviour of several baseline models, including two established end-to-end RC models, BiDAF and FastQA. For example, one model is capable of integrating information across documents, but both models struggle to select relevant information. Overall the end-to-end models outperform multiple baselines, but their best accuracy is still far behind human performance, leaving ample room for improvements. It is our hope that these new datasets will drive future RC model development, leading to new and improved applications in areas such as Search, Question Answering, and scientific text mining.

    [1]: https://arxiv.org/abs/1710.06481

  • 10 Oct 2017 at 13:15 in room 4618
    Formal goal-oriented development of adaptable and resilient systems
    (Elena Troubitsyna, Åbo Akademi, Finland)

    Goal-oriented development facilitates structuring complex requirements. To ensure resilience, the designers should guarantee that the system achieves its goals despite changes, e.g., those caused by failures of system components. In my talk, I will present a formalisation of the basic concepts of goal-oriented multi-agent systems and their essential properties. I will focus on reasoning about resilience and discuss different mechanisms for enhancing system adaptability and resilience. The proposed formalisation allows us to establish connections between goals at different levels of abstraction, the system architecture and agent responsibilities. The proposed systematic treatment of the involved concepts provides designers with guidelines for a systematic, structured development of adaptive and resilient systems. In my talk, I will also discuss how, by augmenting the proposed models with stochastic information, we can quantitatively evaluate system resilience.

  • 26 Sep 2017 at 13:15 in room 1440 (library)
    Free-form and code-to-code search - Leveraging Q&A Data towards Accurate Matching of Diverse Solutions
    (Tegawendé Bissyandé, Research Associate @SnT / Univ. Luxembourg)

    Code search is an unavoidable activity in software development. Although various approaches have been explored in the literature to support code search tasks, two major issues challenge the accuracy and scalability of state-of-the-art techniques.

    First, source code terms such as method names and variable types are often different from conceptual words mentioned in a free-form search query. This vocabulary mismatch problem can make code search inefficient. We will introduce COde voCABUlary (CoCaBu), an approach to resolving the vocabulary mismatch problem when dealing with free-form code search queries. Our approach leverages common developer questions and the associated expert answers to augment user queries with the relevant, but missing, structural code entities in order to improve the performance of matching relevant code examples within large code repositories.

    Second, a large body of research on searching for code clones has focused on identifying (nearly) similar pieces of code either (1) statically by recognizing syntactic and structural similarities or (2) dynamically by observing when code fragments produce the same outputs for the same inputs or present similar execution traces. Unfortunately, the former approaches cannot detect code fragments that are functionally similar although syntactically different, while the latter approaches cannot scale to the search for (partially) equivalent code fragments in large code bases. We present FaCoY (Find Another Code other than Yours), a novel approach for statically finding code fragments which may be semantically similar to user input code. FaCoY implements a query alternation strategy: instead of directly matching code query tokens with code in the search space, FaCoY first attempts to identify other tokens which may also be relevant in implementing the functional behavior of the input code.

  • 11 Sep 2017 at 13:15 in room 4523, Lindstedtsvägen 5
    Pseudo-Boolean Constraints: Reasoning and Compilation
    (Romain Wallon, Université d'Artois)

    Pseudo-Boolean constraints, and their associated cutting-planes proof system, are theoretically more powerful than the usual resolution approach. However, even though pseudo-Boolean solvers benefit from some of the latest improvements made in SAT solvers, the practical results are disappointing compared to the promises of the theory.

    To gain a better understanding of this phenomenon, an interesting point of view is to consider pseudo-Boolean constraints as a compilation language.

    In propositional logic, the Knowledge Compilation Map proposed by Adnan Darwiche and Pierre Marquis in 2002 makes it possible to determine which compilation language is suitable for representing knowledge, depending on what one intends to do with it.

    In this talk, we will extend this map by adding pseudo-Boolean constraints to it, and use the results to look for ways to improve pseudo-Boolean reasoning.
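
    As a concrete illustration of the theoretical gap mentioned above (a standard textbook example, not taken from the abstract), a single pseudo-Boolean cardinality constraint

        x_1 + x_2 + \dots + x_n \geq k

    is equivalent to the clause set

        \bigwedge_{S \subseteq \{1,\dots,n\},\ |S| = n-k+1} \ \bigvee_{i \in S} x_i,

    which contains \binom{n}{n-k+1} clauses when no auxiliary variables are used. Likewise, the pigeonhole principle requires exponential-size resolution refutations but admits short cutting-planes refutations, a classic separation between the two proof systems.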

  • 27 Jun 2017 at 13:15 in room 4523, Lindstedtsvägen 5
    Rethinking Large-Scale Consensus through Blockchains
    (Rafael Pass, Cornell)

    The literature on distributed computing (as well as the cryptographic literature) typically considers two types of players---honest ones and corrupted ones. Resilience properties are then analyzed assuming a lower bound on the fraction of honest players. Honest players, however, are not only assumed to follow the prescribed protocol, but also assumed to be *online* throughout the whole execution of the protocol. The advent of “large-scale” consensus protocols (such as, e.g., the blockchain protocol), where we may have millions of players, makes this assumption unrealistic. In this work, we initiate a study of distributed protocols in a “sleepy” model of computation, where players can be either online (awake) or offline (asleep), and their online status may change at any point during the protocol. The main question we address is:

    Can we design consensus protocols that remain resilient under “sporadic participation”, where at any given point, only a subset of the players are actually online?

    As far as we know, all standard consensus protocols break down under such sporadic participation, even if we assume that 99% of the online players are honest. Our main result answers the above question in the affirmative, assuming only that a majority of the online players are honest. Perhaps surprisingly, our protocol significantly departs from the standard approaches to distributed consensus, and we instead rely on key ideas behind Nakamoto’s blockchain protocol (while dispensing with “proofs-of-work”).

    Based on joint work with Elaine Shi.

  • 09 Jun 2017 at 11:00 in room 4523, Lindstedtsvägen 5
    Pointer chasing via triangular discrimination
    (Amir Yehudayoff, Technion)

    We shall see how to use information theory to prove lower bounds for the distributional communication complexity of the pointer chasing problem. A key part of the proof is using triangular discrimination instead of total variation distance; this idea may be useful elsewhere.
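
    For reference (standard definitions included here for convenience, not part of the abstract), for distributions P and Q over a finite domain the total variation distance and the triangular discrimination are

        TV(P,Q) = \frac{1}{2} \sum_x |P(x) - Q(x)|,
        \qquad
        \Delta(P,Q) = \sum_x \frac{(P(x) - Q(x))^2}{P(x) + Q(x)},

    and, up to constant factors, \Delta is sandwiched between TV^2 and TV (namely 2\,TV(P,Q)^2 \leq \Delta(P,Q) \leq 2\,TV(P,Q)).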

  • 30 May 2017 at 13:15 in room 4523, Lindstedtsvägen 5
    Revisiting Diffusion Process for Hypergraph Laplacian
    (Hubert Chan, University of Hong Kong)

    The hypergraph Laplacian has been defined via a diffusion process [Louis STOC 2015]. The spectral properties of this Laplacian are used to achieve a variant of Cheeger's inequality for hyperedge expansion and higher-order Cheeger-like inequalities for multi-way expansion.

    In this talk, we take a closer look at this diffusion process, in which each hyperedge tries to transfer measure from vertices having maximum measure to those having minimum measure. In order for the diffusion process to be well-defined, we see that the hyperedges must be coordinated to essentially solve a densest subset problem. Consequently, we can recover the basic Cheeger's inequality, but higher-order spectral properties of the Laplacian do not hold in hypergraphs in general.

    This is joint work with Anand Louis, Zhihao Gavin Tang and Chenzi Zhang.

  • 29 May 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    Unified and optimal lower bounds for monotone computation
    (Robert Robere, University of Toronto)

    A classic counting argument due to Claude Shannon shows that almost all Boolean functions have high complexity — more formally, all but an exponentially small fraction of Boolean functions on n bits require strongly exponential (i.e. 2^Ω(n)) size circuits. On the other hand, the best lower bound on circuit size that we have proven for any explicit function is on the order of (5 - o(1))n, which is not even superlinear! The state of affairs is not much better for Boolean formulas, for which we merely have cubic lower bounds. Even for monotone circuits and formulas, which are much better understood, the best lower bounds for explicit functions are weakly exponential (i.e. of the form 2^{n^c} where c < 1).
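
    For reference, the counting argument can be sketched as follows (standard material, not part of the abstract): there are 2^{2^n} Boolean functions on n bits, while the number of circuits of size s over a fixed finite basis is at most 2^{O(s \log s)} (for s \geq n). Circuits of size s can therefore compute at most a 2^{O(s \log s)} / 2^{2^n} fraction of all functions, which is exponentially small unless

        s \log s = \Omega(2^n), \qquad \text{i.e.,} \qquad s = \Omega\!\left(2^n / n\right) = 2^{\Omega(n)}.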

    In this talk, we discuss some recent work in which we have nearly matched Shannon's lower bound for computing an explicit function by monotone circuit models. In particular, we prove that a monotone variant of the SAT problem requires monotone formulas of size 2^{c*n} for some universal constant c > 0. Our lower bounds are tight (up to the constant c) for any monotone Boolean function, and thus represent the first example of any explicit monotone Boolean function with strongly exponential size lower bounds. Furthermore, our techniques are very general and apply to a wide variety of monotone circuit models.

    This work is joint with Toniann Pitassi.

  • 24 May 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    "Secure" execution platforms for application software — some definitions of "secure"
    (Frank Piessens, Katholieke Universiteit Leuven)

    Software applications run on top of execution platforms consisting of hardware (processors, devices, communication networks, and so forth) as well as software (operating systems, compilers, virtual machines, language runtimes, databases, and so forth). It is obviously desirable that these platforms are "secure", but it is less obvious what "secure" means in this context. It is an empirical fact that attacks against application software often rely at least to some extent on aspects of the execution platform, but often the possibility of an attack arises jointly from application code vulnerabilities and execution platform characteristics. For instance, a memory safety vulnerability is only a security concern if the compiler or operating system does not implement adequate countermeasures against exploitation of such vulnerabilities. As another example, an attacker sniffing the network is less of a concern if either the application or the platform provides adequate cryptographic protection.

    So what exactly should we require of a platform for it to be secure? This seminar will discuss the problem of platform security by proposing and analyzing a number of different formal definitions for platform security, building on classical concepts from programming language theory.

  • 22 May 2017 at 10:00 in room 1537
    Part I: Future-Proof Software and Part II: How to test the universe? - Efficient Testing of Software Product Lines
    (Ina Schaefer, Technische Universität Braunschweig, Germany)

    Part I: Modern software systems are extremely long-lived and evolve over time in order to adapt to changing user requirements, application contexts or technological changes. This leads to challenges for software design, implementation, quality assurance and re-engineering in order to make those software systems future-proof. In this talk, I will present an overview of ongoing research in my group at the Institute of Software Engineering and Automotive Informatics at TU Braunschweig addressing the challenges of long-living and evolving software systems. I will focus on two areas of our work: (1) applying machine learning techniques for efficient system-level regression testing and (2) similarity-based variability model mining in order to transform grown software systems into well-structured software product lines.

    Part II: Software product line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis, test and verification techniques in their quest to ensure correctness and reliability of software. Simply creating and testing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. In this talk, I present different strategies for software product line testing and focus on our recent work on incremental test case selection and test case prioritization for efficient product line testing.

  • 16 May 2017 at 13:00 in Room 4423, Lindstedtsvägen 5
    Decremental Shortest Paths and Beyond
    (Danupon Nanongkai, Theoretical Computer Science (TCS) Department, KTH)

    This lecture focuses on a fundamental problem in dealing with graphs that evolve over time, called decremental single-source shortest paths. This problem concerns maintaining the distances and shortest paths between a given source node and every other node in a graph undergoing edge deletions. In this talk, I will discuss the recent development on this problem which leads to a randomized (1+ε)-approximation algorithm with near-linear total update time on undirected graphs. I will also discuss the main technique, called hop sets, which has found many new applications beyond the decremental shortest paths problem. The talk will also touch on some lower bound developments and other variations of the problem.

  • 16 May 2017 at 10:15 in room 4523, Lindstedtsvägen 5
    An interactive journey through combinatorial optimization: resilient solutions, multiple objectives, and logic-based approaches for timetabling
    (Emir Demirović, Technische Universität Wien)

    In this presentation, a range of combinatorial optimization topics will be covered, where the audience will dictate the flow according to their interest. Therefore, it is meant to be an interactive presentation. The topics are as follows:

    1) Solving the general high school timetabling (XHSTT) problem using logic-based approaches: maxSAT, SMT-bitvector theory, and a hybrid algorithm which combines maxSAT with large neighborhood search (metaheuristic algorithm). These approaches are state-of-the-art for XHSTT, outperforming the competition on numerous benchmarks.

    2) Computing the "representative" subset of all Pareto optimal solutions for multi-objective optimization problems is known to be Σ^p_2-hard. We will see a shift in complexity to NP-hardness when considering an approximation using the notion of egalitarian solutions, and discuss algorithms for its computation.

    3) For practical applications, it is important to compute solutions that remain "stable" after unexpected changes to the problem happen (e.g. new constraints are added post-solving). We will formally define the problem for set covering and set partitioning, prove Σ^p_3- and Σ^p_2-hardness (respectively), and analyze algorithms for computing such "resilient" solutions.

  • 15 May 2017 at 13:15 in room 4523, Lindstedtsvägen 5
    Representations of Monotone Boolean Functions by Linear Programs
    (Mateus de Oliveira Oliveira)

    We introduce the notion of monotone linear-programming circuits (MLP circuits), a model of computation for partial Boolean functions. Using this model, we prove the following results.

    1. MLP circuits are superpolynomially stronger than monotone Boolean circuits.

    2. MLP circuits are exponentially stronger than monotone span programs.

    3. MLP circuits can be used to provide monotone feasibility interpolation theorems for Lovász-Schrijver proof systems, and for mixed Lovász-Schrijver proof systems.

    4. The Lovász-Schrijver proof system cannot be polynomially simulated by the cutting-planes proof system. This is the first result showing a separation between these two proof systems.

    Finally, we discuss connections between the problem of proving lower bounds on the size of MLPs and the problem of proving lower bounds on extended formulations of polytopes.

  • 08 May 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    Information-theoretic thresholds
    (Amin Coja-Oghlan, Goethe University)

    Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we prove a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. Joint work with Florent Krzakala, Will Perkins and Lenka Zdeborova.

  • 02 May 2017 at 10:15 in room 4523, Lindstedtsvägen 5
    A meta-algorithm for Bayesian estimation
    (Sam Hopkins, Cornell University)

    We propose a simple and efficient meta-algorithm for Bayesian estimation problems (i.e. hidden variable, latent variable, or planted problems). Our algorithm uses low-degree polynomials together with new and highly robust tensor decomposition methods. We focus on the question: for a given estimation problem, precisely how many samples (up to low-order additive terms) do polynomial-time algorithms require to obtain good estimates of hidden variables? Our meta-algorithm is broadly applicable, and achieves statistical or conjectured computational sample-complexity thresholds for many well-studied problems, including many for which previous algorithms were highly problem-specific.

    We discuss our meta-algorithm in two concrete contexts. In the stochastic block model -- a widely studied family of random graphs used to model community structure in networks -- we recover and unify the best-known algorithms for the so-called "partial recovery" problem, and obtain (conjecturally) tight sample complexity results for more realistic blockmodel variants. For sparse PCA, a useful but computationally challenging primitive in high-dimensional statistics, we prove an optimality result for our meta-algorithm, showing that it is at least as powerful as a very strong family of convex programs called the Sum-of-Squares hierarchy.

    Based on joint works with Pravesh Kothari, Aaron Potechin, Prasad Raghavendra, Tselil Schramm, and David Steurer.

  • 24 Apr 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    Dynamic (Minimum) Spanning Forest with Worst-case Update Time
    (Thatchaphol Saranurak, TCS Group, KTH)

    We will discuss recent algorithms for dynamically maintaining a (minimum) spanning forest of a graph undergoing edge insertions and deletions. This problem has played a central role in the study of dynamic graph algorithms.

    We provide the "first polynomial improvement" over the long-standing O(sqrt{n}) bound of [Frederickson STOC’83, Eppstein et al. FOCS’92] for the dynamic spanning forest problem. Independently, Wulff-Nilsen shows the first polynomial improvement for the dynamic minimum spanning forest problem. Both works will be presented at STOC'17.

    The crucial new techniques concern expanders: 1) an algorithm for decomposing a graph into a collection of expanders in near-linear time, and 2) an algorithm for "repairing" the expansion property of an expander after some of its edges are deleted. These techniques may be of independent interest.

  • 21 Apr 2017 at 12:00 in room 4523
    Witnessing: From Optimization to Verification to Optimization
    (Lenore Zuck, University of Illinois, Chicago)

    An optimizing compiler is commonly structured as a sequence of passes, each transforming a source program into a target one. The proof that such a transformation pass is correct consists of a relation between source and target, often with the aid of invariants. Those invariants also enable further optimizations. Thus, the two tasks of proving the validity of a transformation and propagating invariants to enable further optimizations are closely related. Both can be addressed by augmenting each transformation with a witness — a stuttering simulation relation between the target and source programs which guarantees correctness for that pass. This stuttering simulation relation forms a sound and complete witness format. Witnesses can be generated either automatically or by (sound) external static analyzers.

    The talk will describe both witness generation and witness propagation, and demonstrate the feasibility of the approach on the LLVM compiler. Each optimization pass is augmented with a witness generator. This makes it possible to construct an independent translation validation framework for the compiler as well as to generate an independent verification for each of its runs. The automatically generated invariants, as well as some obtained by Frama-C, can then be propagated to enhance optimizations. This is demonstrated in experiments where propagated witnesses are used to substantially improve the effectiveness of several optimizations.

  • 10 Apr 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    Complexity results in automated planning
    (Meysam Aghighi, Linköping University)

    Planning is the problem of finding a sequence of actions that achieve a specific goal from an initial state. This problem is computationally hard in the general case. Propositional planning is PSPACE-complete and first-order planning is undecidable.

    Cost-optimal planning (COP) is a type of planning where each action has a cost and the goal is to find a satisfying plan with minimal cost. We show how under different restrictions, COP varies from being polynomial-time solvable to PSPACE-complete. Net-benefit planning (NBP) is a type of planning where the goal is to find a plan that maximizes the achieved utility minus the cost spent. We present a method for efficiently compiling NBP into the ordinary plan existence problem.

    We also show the effect of numeric domains for action costs in COP using parameterized complexity. Planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We distinguish between these cases and show that COP is W[2]-complete for positive integer costs, but it is para-NP-hard if the costs are non-negative integers or positive rationals.

  • 27 Mar 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    Complexity theory beyond deterministic exponential time and applications
    (Igor Carboni Oliveira, Charles University)

    In the first part of the talk, I'll survey a variety of connections between non-uniform circuit lower bounds, non-trivial proof systems, non-trivial learning algorithms, and variants of natural properties. In particular, I plan to discuss how one can prove new unconditional lower bounds for standard complexity classes such as REXP, ZPEXP, BPEXP, and NEXP by designing algorithms with a non-trivial running time.

    After the break, we will focus on a recent application of some of the complexity-theoretic techniques behind these results. I'll describe in more detail recent progress on the problem of efficiently generating large canonical prime numbers, via pseudodeterministic pseudorandomness. If there is enough time and interest, I'll also discuss some non-constructive aspects of the result, and connections to derandomization.

    Based on joint work with Rahul Santhanam.

  • 20 Mar 2017 at 13:15 in Biblioteket 1440, Lindstedtsvägen 3
    Logical modelling, specification, verification and synthesis of multi-agent systems: an introduction
    (Valentin Goranko, Stockholm University)

    In this talk I will introduce concurrent game models for multi-agent systems and the Alternating-time temporal logic ATL as the currently most popular logical system for formalizing and reasoning about strategic abilities in such systems.

    I will illustrate the semantics of ATL with some examples and will discuss briefly the logical decision problems of model checking and constructive satisfiability testing of ATL formulae, and how these can be used for specification, verification, and synthesis of concurrent game models.
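
    As a small illustrative example (not taken from the abstract), a typical ATL specification has the form

        \langle\langle A \rangle\rangle \, \Diamond \, \mathit{goal},

    stating that the coalition A has a joint strategy which guarantees that goal is eventually reached, no matter how the agents outside A behave. Model checking then asks whether such a formula holds in a given concurrent game model, while constructive satisfiability asks to construct a model in which it holds.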

  • 16 Mar 2017 at 13:15 in room 4523, Lindstedtsvägen 5
    Dag-like communication and its application
    (Dmitry Sokolov, St. Petersburg Department of the V. A. Steklov Institute of Mathematics)

    In this talk we consider a generalization of communication protocols to the dag-like case (instead of the classical one, where protocols correspond to trees). We prove an analogue of the Karchmer-Wigderson Theorem for this model and Boolean circuits. We also consider a relation between this model and the communication PLS games proposed by Razborov in 1995, and simplify the proof of Razborov's analogue of the Karchmer-Wigderson Theorem for PLS games.

    We also consider a dag-like analogue of real-valued communication protocols and adapt a lower bound technique for monotone real circuits to prove a lower bound for these protocols. We use real-valued dag-like communication protocols to generalize the ideas of the "feasible interpolation" technique, which helps us to prove a lower bound for the Cutting Planes proof system (CP) and adapt it to "random CP" by using a recent result of Krajíček.

    Our notion of dag-like communication games allows us to use a Raz-McKenzie transformation from the paper of Göös and Pitassi, which yields a lower bound on the real monotone circuit size for the CSPSAT problem.

  • 15 Mar 2017 at 10:00 in room 4523
    On polynomial approximations to AC0
    (Prahladh Harsha, Tata Institute of Fundamental Research)

    In this talk, we will discuss some questions related to polynomial approximations of AC0. A classic result due to Tarui (1991) and Beigel, Reingold, and Spielman (1991) states that any AC0 circuit of size s and depth d has an ε-error probabilistic polynomial over the reals of degree at most (log(s/ε))^{O(d)}. We will take another look at this construction and show how to improve the bound to (log s)^{O(d)}⋅log(1/ε), which is much better for small values of ε.
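
    To get a feel for the improvement (a back-of-the-envelope comparison, not taken from the abstract), consider a polynomial-size circuit, s = poly(n), and exponentially small error, \varepsilon = 2^{-n}. The old and new degree bounds then become

        (\log(s/\varepsilon))^{O(d)} = (n + O(\log n))^{O(d)} = n^{O(d)}
        \qquad \text{versus} \qquad
        (\log s)^{O(d)} \cdot \log(1/\varepsilon) = (\log n)^{O(d)} \cdot n,

    so for constant depth d the new bound is n·polylog(n), rather than a polynomial in n whose degree grows with d.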

    As an application of this result, we show that (log s)^{O(d)}⋅log(1/ε)-wise independence fools AC0, improving on Tal's strengthening of Braverman's theorem that (log(s/ε))^{O(d)}-wise independence fools AC0. Time permitting, we will also discuss some lower bounds on the best polynomial approximations to AC0.

  • 13 Mar 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    A nearly tight sum-of-squares lower bound for the planted clique problem
    (Aaron Potechin, Institute for Advanced Study)

    The sum of squares (SoS) hierarchy, a method for deciding the feasibility of polynomial equations over the reals, is an exciting frontier of complexity theory. SoS is exciting because it can be applied to a wide variety of combinatorial optimization problems and is in fact conjectured to be optimal for many important problems. However, its performance is only partially understood.

    In this talk, I will describe recent work proving sum of squares lower bounds for the planted clique problem, a classic average case problem. In the first part of the talk, I will describe the sum of squares hierarchy, planted clique, and pseudo-calibration, a key idea for the lower bound. In the second part of the talk, I will describe the main technical ideas needed in the proof.

  • 06 Mar 2017 at 13:15 in room 4523, Lindstedtsvägen 5
    SAT-based learning to verify liveness of randomised parameterised systems
    (Philipp Rümmer, Uppsala University)

    We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosophers problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. We introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method constructs a progress relation by means of a suitable Boolean encoding and incremental SAT solving. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosophers Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully-automatic method that can prove liveness for randomised protocols.

  • 25 Jan 2017 at 10:00 in room 4523, Lindstedtsvägen 5
    The State of the Art of Automatic Program Repair
    (Martin Monperrus, University of Lille & INRIA France)

    Automatic software repair is the process of fixing software bugs automatically and is a very active research area. Before having a large impact on practice, current research tries to understand the fundamental strengths and weaknesses of the best-of-breed repair algorithms. This talk presents the results of a recent experiment on repairing 224 real Java bugs from open source projects.

    Ref:

    Automatic Repair of Real Bugs in Java: A Large-Scale Experiment on the Defects4J Dataset (Matias Martinez, Thomas Durieux, Romain Sommerard, Jifeng Xuan and Martin Monperrus), In Springer Empirical Software Engineering, 2016.

    Nopol: Automatic Repair of Conditional Statement Bugs in Java Programs (Jifeng Xuan, Matias Martinez, Favio DeMarco, Maxime Clément, Sebastian Lamelas, Thomas Durieux, Daniel Le Berre and Martin Monperrus), In IEEE Transactions on Software Engineering, 2016.

  • 23 Jan 2017 at 12:00 in room 4523, Lindstedtsvägen 5
    Cumulative space in black-white pebbling and resolution
    (Susanna F. de Rezende, Theory Group, KTH)

    The space usage of a computation is usually defined as the maximum memory it requires, and this is essentially everything one would want to know if the computational model is static. But what if we are computing in an environment where we can allocate memory dynamically? Then it makes a big difference whether we only have one single spike of high memory consumption or whether we consistently need large amounts of memory throughout the whole computation.

    In this talk we will explore a complexity measure, namely cumulative space, which allows us to measure this difference by accounting for the sum of memory usage over all time steps rather than just the maximum. It was introduced for pebble games by [Alwen and Serbinenko '15] and used to propose a more robust definition of cryptographic functions that are hard to invert (memory-hard functions).
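
    In symbols (following the informal description above), if Space(t) denotes the memory in use at time step t of a computation running for T steps, then

        \text{maximum space} = \max_{1 \le t \le T} Space(t)
        \qquad \text{versus} \qquad
        \text{cumulative space} = \sum_{t=1}^{T} Space(t),

    so a computation with a single short spike of high memory usage can have large maximum space but comparatively small cumulative space.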

    We will present some results regarding the cumulative space of another pebble game, namely the black-white pebble game, and relate this to the cumulative space complexity of refuting formulas in the resolution proof system. This is joint work with Joël Alwen, Jakob Nordström and Marc Vinyals.

    This presentation will also serve as Susanna's 50% PhD ladder seminar.

  • 18 Jan 2017 at 12:00 in room 1537, Lindstedtsvägen 3
    The QRAT proof system
    (Martina Seidl, Johannes Kepler University Linz)

    We present QRAT, a novel proof system for quantified Boolean formulas. With QRAT it is now possible to produce certificates for all reasoning techniques implemented in state-of-the-art QBF preprocessors. Such certificates are not only useful to efficiently validate the correctness of a result, but they also allow the extraction of Skolem functions, i.e., winning strategies for satisfiable formula instances.

    In this talk, we first introduce the rules of QRAT, then we show how they can be used to express some powerful reasoning techniques implemented in modern preprocessors, and finally we outline how to extract a Skolem function from a QRAT proof.


Previous years' seminars.
