talks

Artur Garcez

Neural-symbolic systems combine symbolic reasoning and neural computation. They have been shown to outperform both purely symbolic and purely connectionist machine learning by combining symbolic background knowledge with data-driven learning in neural networks. A neural-symbolic system translates symbolic knowledge into the initial structure of a neural network, which can then be trained from examples in the usual way. Learning expands or revises that knowledge, and knowledge extraction closes the cycle by producing a revised symbolic description of the network. Neural-symbolic systems exist for logic programming, answer-set programming, modal and intuitionistic logic, nonmonotonic logic, temporal logic, first-order logic, and more. In this talk, I will briefly review developments in the area and show how neural-symbolic systems can be combined with model checking to adapt system descriptions. I will also show how system properties described in temporal logic can be encoded in a neural network, which can then be used for run-time monitoring of a system in the presence of deviations from its specification. I will conclude with a brief presentation of some recent achievements and discuss the challenges of knowledge extraction from neural networks, relational learning in neural networks, and symbolic reasoning in deep networks.
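As a flavour of how symbolic knowledge can seed a network's structure, here is a minimal sketch in the spirit of neural-symbolic translation (the rule, the weight scheme, and the threshold below are illustrative, not the encoding of any particular system):

```python
# Minimal sketch: encoding the rule  C :- A, B  (A AND B implies C)
# as a threshold neuron, in the spirit of neural-symbolic translation.
# Weights and threshold are illustrative.

def make_and_neuron(n_inputs, weight=1.0):
    """Return a neuron (weights, threshold) that fires iff all inputs fire."""
    weights = [weight] * n_inputs
    threshold = weight * n_inputs - weight / 2  # exceeded only when every input is 1
    return weights, threshold

def fire(neuron, inputs):
    weights, threshold = neuron
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > threshold else 0

rule_c = make_and_neuron(2)          # C :- A, B.
assert fire(rule_c, [1, 1]) == 1     # A and B true  -> C fires
assert fire(rule_c, [1, 0]) == 0     # B false       -> C silent
```

Training such a network from examples can then revise the weights, and hence the encoded rule, which is where knowledge extraction re-enters the cycle.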

Slides: garcez2017.pdf

Iury Bessa

We present a novel, complete, and flexible optimization algorithm, which relies on recursive executions that incrementally constrain a model-checking procedure based on Satisfiability Modulo Theories (SMT). This SMT-based optimization technique can optimize a wide range of functions, including non-linear and non-convex problems. Although SMT-based optimization is not a new technique, this work is the first to solve non-linear and non-convex problems with SMT; previous applications could only handle integer and rational linear problems. We compare the proposed algorithm to traditional optimization techniques. Experimental results show its efficiency and effectiveness: it finds the optimal solution in all evaluated benchmarks, while traditional techniques are usually trapped in local minima.
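The overall loop can be sketched as follows. This is a simplified illustration in which a naive grid search plays the role of the SMT decision procedure, and the objective g and its bounds are invented for the example:

```python
# Sketch of the core loop behind SMT-based optimization: repeatedly ask a
# decision procedure "is there a point with objective value below `bound`?"
# and tighten the bound. A naive grid search stands in for the SMT solver
# here; in the actual technique the query is discharged symbolically.

def find_below(f, bound, lo, hi, steps=1000):
    """Decision query: return some x in [lo, hi] with f(x) < bound, or None."""
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if f(x) < bound:
            return x
    return None

def smt_style_minimize(f, lo, hi, lb, ub, tol=1e-4):
    """Binary search on the objective value between bounds lb and ub."""
    witness = None
    while ub - lb > tol:
        mid = (lb + ub) / 2
        x = find_below(f, mid, lo, hi)
        if x is None:
            lb = mid                 # "UNSAT": the optimum is at least mid
        else:
            witness, ub = x, f(x)    # "SAT": a better point was found
    return witness, ub

# A non-convex objective whose local minimum near x = 1 traps descent methods;
# the global minimum lies near x = -1.
g = lambda x: (x * x - 1) ** 2 + 0.3 * x
x_opt, v_opt = smt_style_minimize(g, -2.0, 2.0, lb=-1.0, ub=g(-2.0))
```

Because each query only asks a yes/no question about the whole search space, a local minimum never blocks progress, which is what distinguishes this scheme from gradient-style methods.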

Slides: sbesc2016.pdf

Lucas Cordeiro

We present ESBMC-GPU, an extension of the Efficient SMT-Based Context-Bounded Model Checker (ESBMC) aimed at verifying Graphics Processing Unit (GPU) programs written for the Compute Unified Device Architecture (CUDA) platform. ESBMC-GPU verifies CUDA-based programs against an operational model, i.e., an abstract representation of the standard CUDA libraries that conservatively approximates their semantics. It explicitly explores the possible interleavings (up to the given context bound) while treating each interleaving itself symbolically. Additionally, ESBMC-GPU employs monotonic partial order reduction and a two-thread analysis to prune the state-space exploration. Experimental results show that ESBMC-GPU successfully verifies 82% of all benchmarks while keeping a low rate of false results. Going further than previous attempts, ESBMC-GPU detects more property violations than other existing GPU verifiers thanks to its ability to find errors in the program's execution flow and to detect array out-of-bounds accesses and data races.
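The explicit exploration of interleavings up to a context bound can be sketched in a few lines (a plain-Python illustration with explicit rather than symbolic states; the two-instruction threads and the bounds are invented for the example):

```python
# Sketch of context-bounded interleaving exploration: enumerate all
# schedules of two thread traces with at most `bound` context switches.
# The tool treats each interleaving symbolically; here states are explicit.

def interleavings(t0, t1, bound, last=None, switches=0):
    """Yield interleavings of traces t0, t1 with <= bound context switches."""
    if not t0 and not t1:
        yield []
        return
    for tid in (0, 1):
        trace = t0 if tid == 0 else t1
        if not trace:
            continue
        cost = 0 if last is None or last == tid else 1  # switching threads costs 1
        if switches + cost > bound:
            continue
        nxt = (t0[1:], t1) if tid == 0 else (t0, t1[1:])
        for tail in interleavings(nxt[0], nxt[1], bound, tid, switches + cost):
            yield [(tid, trace[0])] + tail

def run(schedule):
    """Execute a schedule of a non-atomic x = x + 1 in each thread."""
    x = 0
    reg = {0: 0, 1: 0}
    for tid, op in schedule:
        if op == "load":
            reg[tid] = x
        else:                        # "store": write back incremented register
            x = reg[tid] + 1
    return x

thread = ["load", "store"]           # non-atomic increment
results = {run(s) for s in interleavings(thread, thread, bound=2)}
assert results == {1, 2}             # bound 2 exposes the lost update (x == 1)
```

Note how raising the bound from 1 to 2 is exactly what exposes the racy outcome, which is why the context bound matters for bug finding.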

Slides: ccpe2016.pdf

Daniel Poetzl

We present a sound static deadlock analysis approach for real-world concurrent C/Pthreads programs. The method consists of a pipeline of several analyses that build on a new context- and thread-sensitive abstract interpretation framework. We further present a lightweight dependency analysis to identify statements relevant to deadlock analysis and thus speed up the overall analysis. In our experimental evaluation, we proved deadlock freedom for 292 programs from the Debian GNU/Linux distribution, totaling 2.3 MLOC, in 4 hours.
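The lock-order-graph idea that such deadlock analyses build on can be sketched as follows (illustrative only; the actual method is a context- and thread-sensitive abstract interpretation over program code, not a trace-based check):

```python
# Classic lock-order-graph check: record, per thread, which locks are held
# when another lock is acquired; a cycle in the resulting graph indicates a
# potential deadlock. Traces and lock names are illustrative.

def lock_order_graph(traces):
    """traces: per-thread lists of ('acquire'|'release', lock). Returns edges."""
    edges = set()
    for trace in traces:
        held = []
        for op, lock in trace:
            if op == "acquire":
                for h in held:
                    edges.add((h, lock))   # h is held while lock is acquired
                held.append(lock)
            else:
                held.remove(lock)
    return edges

def has_cycle(edges):
    """DFS-based cycle detection on the lock-order graph."""
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    visited, stack = set(), set()
    def dfs(n):
        visited.add(n)
        stack.add(n)
        for m in succ.get(n, ()):
            if m in stack or (m not in visited and dfs(m)):
                return True
        stack.remove(n)
        return False
    return any(n not in visited and dfs(n) for n in list(succ))

# Thread 1 takes A then B; thread 2 takes B then A: the classic ABBA deadlock.
t1 = [("acquire", "A"), ("acquire", "B"), ("release", "B"), ("release", "A")]
t2 = [("acquire", "B"), ("acquire", "A"), ("release", "A"), ("release", "B")]
assert has_cycle(lock_order_graph([t1, t2]))
```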

Slides: ase16.pdf

Anna Trostanetski

Given two closely related programs and the syntactic differences between them, we aim to summarize the semantic changes in their behavior, namely the differences in their input-output relations. We present a modular method that relies on syntactic similarities between program versions and on abstract interpretation to guide symbolic execution towards potential changes in behavior. Our technique computes both an over- and an under-approximation of the inputs that produce different results in matched functions.
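The notion of an under-approximation of changed inputs can be illustrated with a toy enumeration (v1, v2, and the input domain are invented; the actual method uses symbolic execution guided by abstract interpretation rather than enumeration):

```python
# Illustrative sketch: enumerating a small domain yields an
# under-approximation of the inputs on which two program versions disagree.
# The two versions and the domain below are made up for the example.

def v1(x):
    return x + 1 if x >= 0 else -x

def v2(x):                    # the guard changed from >= to >
    return x + 1 if x > 0 else -x

def differing_inputs(f, g, domain):
    """Under-approximate the inputs on which the two versions disagree."""
    return {x for x in domain if f(x) != g(x)}

changed = differing_inputs(v1, v2, range(-5, 6))
assert changed == {0}         # only the boundary input changed behaviour
```

A symbolic method would instead characterise such sets by constraints (here, "x == 0"), giving both a sound over-approximation and a precise under-approximation without enumerating inputs.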

We present v2c, a tool for translating Verilog to C. The tool accepts synthesizable Verilog as input and generates a word-level C program as output, which we call a software netlist. The generated program is cycle-accurate and bit-precise. The translation is based on the synthesis semantics of Verilog. There are several use cases for v2c, ranging from hardware property verification and co-verification to simulation and equivalence checking. This paper gives the details of the translation and demonstrates the utility of the tool.

Slides: tacas16_talk.pdf

When optimizing a thread in a concurrent program (whether manually or in the compiler), it must be guaranteed that the resulting thread is a refinement of the original one. Most definitions of refinement are formulated in terms of valid syntactic transformations on the program code, or in terms of valid transformations on thread execution traces. We present a new theory formulated instead in terms of state transitions between synchronization operations. Our method shows refinement in more cases and leads to simpler, more efficient procedures for refinement checking. We develop the theory for the SC-for-DRF execution model (using locks for synchronization) and show that its application in compiler testing yields speedups of, on average, more than two orders of magnitude over a previous approach.

Slides: tacas16.pdf

Unmanned aerial vehicles (UAVs) are becoming increasingly popular for both recreational and industrial applications, which also raises safety concerns. Autonomous systems such as UAVs are typically hybrid systems consisting of a low-level continuous control part and a high-level discrete decision-making part. We discuss using the agent programming language AgentSpeak to model the high-level decision-making part, and present a translation from AgentSpeak to C that bridges the gap between high-level decision making and low-level control code for safety-critical systems. This allows code to be written in a more natural high-level language, reducing overall complexity and easing maintenance, while still conforming to safety guidelines. As an exemplar, we present an autopilot for a UAV. The generated code is evaluated on a simulator and on a Parrot AR.Drone, demonstrating the flexibility and expressiveness of AgentSpeak as a modeling language for UAVs.

Slides: agentspeaktoc.pdf

Paper: taros2015paper.pdf

Translator and autopilot (including instructions) available at http://www.cprover.org/UAVs/TAROS2015/

Encoding problems into Boolean satisfiability is a common technique in state-of-the-art program verification tools and theorem provers. Every problem can be encoded in many different ways, and the choice of encoding can have a major effect on performance. Unfortunately, there are few tools, theoretical or practical, for comparing encodings or designing better ones. This talk introduces an ongoing programme of work that addresses this problem. We show how the language of abstract interpretation can capture the notion of the propagating power of an encoding and how this can be used to automatically generate optimal encodings. This not only improves solver performance but also gives us a language in which to explore problem difficulty and understand encodings.
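One concrete way to observe the propagating power of an encoding is to count what unit propagation fixes after a single assertion; the sketch below does this for the pairwise at-most-one encoding (illustrative code, not the talk's abstract-interpretation framework):

```python
# Measuring "propagating power" informally: after asserting one literal,
# how many others does unit propagation fix? Shown on the pairwise
# at-most-one encoding. Variable numbering is illustrative.

def unit_propagate(clauses, assignment):
    """clauses: lists of ints (negative = negated var). Extends assignment."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                       # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if len(unassigned) == 1:           # unit clause: force the literal
                l = unassigned[0]
                assignment[abs(l)] = l > 0
                changed = True
    return assignment

def pairwise_amo(variables):
    """At-most-one via all pairwise mutual exclusions."""
    return [[-a, -b] for i, a in enumerate(variables) for b in variables[i + 1:]]

cnf = pairwise_amo([1, 2, 3, 4])
result = unit_propagate(cnf, {1: True})
assert result == {1: True, 2: False, 3: False, 4: False}
```

Here setting x1 immediately propagates every other variable to false; an encoding where the same assertion propagates nothing would force the solver to discover those facts by search instead.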

Slides: oxford_150423.pdf

Concurrent systems are notoriously difficult to analyze, and technological advances such as weak memory architectures greatly compound this problem. This has renewed interest in partial order semantics as a theoretical foundation for formal verification techniques. Among these, symbolic techniques have been shown to be particularly effective at finding concurrency-related bugs because they can leverage highly optimized decision procedures such as SAT/SMT solvers. This talk is about new fundamental results on partial order semantics for SAT/SMT-based symbolic encodings of weak memory concurrency. In particular, we give the theoretical basis for a decision procedure that can handle a fragment of concurrent programs endowed with least fixed point operators. In addition, we show that a certain partial order semantics of relaxed sequential consistency is equivalent to the conjunction of three extensively studied weak memory axioms by Alglave et al. An important consequence of this equivalence is an asymptotically smaller symbolic encoding for bounded model checking which has only a quadratic number of partial order constraints compared to the state-of-the-art cubic-size encoding.

Slides: https://www.cs.ox.ac.uk/people/alex.horn/group-talk-2015-04-30.pdf

Interrupt-driven software is difficult to test and debug, especially when interrupts can be nested and subject to priorities. Interrupts can arrive at arbitrary times, leading to an explosion in the number of cases to be considered. We present a new formal approach to verifying interrupt-driven software based on symbolic execution. The approach leverages recent advances in the encoding of the execution traces of interacting, concurrent threads. We assess the performance of our method on benchmarks drawn from embedded systems code and device drivers, and experimentally compare it to conventional formal approaches that use source-to-source transformations. Our experimental results show that our method significantly outperforms conventional techniques. To the best of our knowledge, our technique is the first to demonstrate effective formal verification of low-level embedded software with nested interrupts.

Website: http://www.cprover.org/interrupts/

Paper: date15_paper.pdf

Slides: date15_slides.pdf

Linearizability is the de facto correctness criterion for concurrent data structures and consensus-based distributed systems. In this talk, I present work in progress on a new tool for testing linearizability. We show experimentally that our tool can handle problems on which existing implementations time out or run out of memory. Our experiments include shared-memory programs (Intel's TBB and Siemens' EMBB library) and a large-scale distributed system (etcd) tested using Aphyr's testing framework (Jepsen).
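The specification side of a linearizability check can be sketched as a brute-force search over a tiny history (illustrative only; real testers avoid enumerating permutations, and the register specification below is invented for the example):

```python
# Minimal sketch of a linearizability check: search for a sequential ordering
# of a concurrent history that respects real-time precedence and matches a
# sequential specification (a single register here).
from itertools import permutations

def linearizable(history, spec_run):
    """history: list of (start, end, op, arg, result) events."""
    for order in permutations(history):
        # real-time order: if e2 ended before e1 started, e1 cannot precede e2
        ok = all(not (b[1] < a[0])
                 for i, a in enumerate(order) for b in order[i + 1:])
        if ok and spec_run([(op, arg, res) for _, _, op, arg, res in order]):
            return True
    return False

def register_spec(seq):
    """Sequential register: reads must return the latest written value."""
    value = 0
    for op, arg, res in seq:
        if op == "write":
            value = arg
        elif res != value:
            return False
    return True

# write(1) overlaps a read returning 1: fine, the read linearizes after it.
h = [(0, 2, "write", 1, None), (1, 3, "read", None, 1)]
assert linearizable(h, register_spec)
```

The factorial blow-up of this search is precisely what makes scalable linearizability testing hard, and what the tool's experiments stress.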

Slides: https://www.cs.ox.ac.uk/people/alex.horn/linearizability-tester-group-talk-2015-02-05.pdf

Proving termination for bitvector programs is tough. We can make it easier by reducing the problem to a program synthesis problem, which we solve using a refinement loop.
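The shape of the reduction can be illustrated by checking a candidate ranking function over a small bitvector domain (a sketch: the 8-bit countdown loop, the candidates, and the exhaustive check are illustrative stand-ins for the synthesis and verification engines):

```python
# Sketch of the refinement loop's verification step: a candidate ranking
# function must be bounded below and strictly decrease on every loop step.
# A counterexample refutes the candidate; none over the whole (finite)
# bitvector domain proves termination. The 8-bit loop is illustrative.

WIDTH = 8
MOD = 1 << WIDTH

def step(x):                 # loop body: while x != 0: x = (x - 1) mod 2^8
    return (x - 1) % MOD

def guard(x):
    return x != 0

def check_ranking(rank):
    """Return a counterexample state refuting `rank`, or None if it proves
    termination of the loop above."""
    for x in range(MOD):
        if guard(x) and not (rank(x) >= 0 and rank(step(x)) < rank(x)):
            return x
    return None

assert check_ranking(lambda x: x) is None   # x itself is a valid ranking function
```

In the full approach, a synthesizer proposes candidates and each counterexample is fed back to refine the next proposal, hence the refinement loop.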

Slides: http://www.cs.ox.ac.uk/people/matt.lewis/termination-slides.pdf

Modern architectures rely on memory fences to prevent undesired weakenings of memory consistency. As the fences’ semantics may be subtle, the automation of their placement is highly desirable. But precise methods for restoring consistency do not scale to deployed systems code. We choose to trade some precision for genuine scalability: our technique is suitable for large code bases. We implement it in our new musketeer tool, and detail experiments on more than 350 executables of packages found in Debian Linux 7.1, e.g. memcached (about 10,000 LoC).

The slides can be found online: http://www.cs.ox.ac.uk/people/vincent.nimal/talk_cav_2014.pdf

In model checking, a well-known source of state-space explosion (SSE) is the explicit representation of concurrent actions by their interleavings. Partial-order reductions attempt to address this by constructing an equivalent state space in which irrelevant executions of the original are discarded. A comparatively less well-known approach emerges from the use of partial-order semantics, where the state space is instead represented by a partial order in which concurrent actions are simply left unordered. Petri net unfoldings are arguably the most prominent verification technique to emerge from this idea.

This talk will include an introduction to Petri nets (which I have been asked for many times in recent weeks) and three different semantics for them, the last of which will be the aforementioned unfolding semantics. We will then see how unfoldings can be employed in practical verification and, time permitting, how to improve the method to address additional sources of SSE in practice.
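For reference, the token-game semantics of a Petri net fits in a few lines; the net below has two concurrent transitions, exactly the situation where an unfolding keeps the events unordered instead of enumerating both interleavings (net and marking invented for illustration):

```python
# A tiny Petri net interpreter: places hold tokens, a transition is enabled
# when all its input places are marked, and firing moves tokens. The net
# below is illustrative.

def enabled(marking, transition):
    pre, _ = transition
    return all(marking.get(p, 0) >= 1 for p in pre)

def fire(marking, transition):
    pre, post = transition
    m = dict(marking)
    for p in pre:
        m[p] -= 1                     # consume a token from each input place
    for p in post:
        m[p] = m.get(p, 0) + 1        # produce a token in each output place
    return m

# Two concurrent transitions: t1 consumes p1, t2 consumes p2.
t1 = (["p1"], ["p3"])
t2 = (["p2"], ["p4"])
m0 = {"p1": 1, "p2": 1}

m1 = fire(fire(m0, t1), t2)
m2 = fire(fire(m0, t2), t1)
assert m1 == m2                       # both interleavings reach the same marking
```

An interleaving semantics explores both firing orders separately; the unfolding records t1 and t2 as causally unrelated events once, which is the source of its compactness.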

The slides can be found online: http://www.cs.ox.ac.uk/people/cesar.rodriguez/talks/20140220-groupseminar.pdf

When given a SAT problem, there are two major tasks: (i) modelling the problem in Conjunctive Normal Form (CNF), and (ii) solving the model with a SAT solver. The SAT community is strongly motivated to continuously improve SAT solver performance. However, much remains to be done with respect to SAT encodings.

In this talk, we will show how to encode formulas that are not naturally expressed in CNF. We will give some insight into good encoding techniques, such as different ways to represent numbers, symmetry breaking, and the use of incrementality. Finally, we will show how different modelling decisions can have a direct impact on the solver's performance.
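As an example of encoding a formula that is not naturally in CNF, here is the Tseitin transformation for a small formula (the variable numbering and the little model checker are illustrative):

```python
# Tseitin transformation: introduce a fresh variable per subformula so the
# resulting CNF stays linear in the size of the formula. Variable numbers
# are illustrative.

def tseitin_and(out, a, b):
    """Clauses for out <-> (a AND b)."""
    return [[-out, a], [-out, b], [-a, -b, out]]

def tseitin_or(out, a, b):
    """Clauses for out <-> (a OR b)."""
    return [[out, -a], [out, -b], [-out, a, b]]

# Encode  f = (x1 AND x2) OR x3  with fresh variables 4 (the AND) and 5 (f),
# then assert f with the unit clause [5].
cnf = tseitin_and(4, 1, 2) + tseitin_or(5, 4, 3) + [[5]]

def satisfied(clauses, model):
    """Check a total assignment (var -> bool) against the CNF."""
    return all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses)

model = {1: True, 2: True, 3: False, 4: True, 5: True}
assert satisfied(cnf, model)
```

Note that the encoding has 7 clauses for this formula and grows linearly with the formula, whereas naive distribution into CNF can blow up exponentially.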

The slides can be found online: http://www.cs.ox.ac.uk/people/ruben.martins/talks/sat-encodings.pdf

In recent years, Boolean Satisfiability (SAT) solvers have been successfully used in many practical applications. The widespread use of SAT is the result of SAT solvers being very effective in practice. Even though real-world problems tend to be large, SAT solvers can now solve many instances with hundreds of thousands of variables and millions of clauses in just a few minutes.

In this talk, we present how SAT solvers have evolved from the Davis-Putnam-Logemann-Loveland (DPLL) algorithm to Conflict-Driven Clause Learning (CDCL) SAT Solvers. We will focus on the key techniques that make SAT solvers work in practice, namely, clause learning and non-chronological backtracking, search restarts, lazy data structures, and conflict-guided branching. We will show the relative contribution of these key techniques to the performance of SAT solvers and give some insight on why SAT solvers work in practice.
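The DPLL skeleton that CDCL solvers extend can be sketched compactly (illustrative code: unit propagation plus chronological backtracking only, with none of the CDCL techniques discussed in the talk):

```python
# A compact DPLL sketch: unit propagation plus chronological backtracking.
# CDCL adds clause learning, non-chronological backtracking, restarts and
# lazy data structures on top of this skeleton.

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (var -> bool) or None if unsatisfiable."""
    assignment = dict(assignment or {})
    while True:                               # unit propagation to fixpoint
        unit = None
        for c in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in c):
                continue                      # clause already satisfied
            lits = [l for l in c if abs(l) not in assignment]
            if not lits:
                return None                   # conflict: empty falsified clause
            if len(lits) == 1:
                unit = lits[0]
                break
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    free = {abs(l) for c in clauses for l in c} - set(assignment)
    if not free:
        return assignment
    v = min(free)
    for value in (True, False):               # branch; backtrack on failure
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
model = dpll([[1, 2], [-1, 2], [-2, 3]])
assert model[2] and model[3]
```

Everything the talk covers, from clause learning to conflict-guided branching, can be seen as instrumenting this loop so that each conflict prunes far more of the search space.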

The slides can be found online: http://www.cs.ox.ac.uk/people/ruben.martins/talks/sat-solvers.pdf

We present abstract acceleration techniques for computing loop invariants for numerical programs with linear assignments and conditionals. Whereas abstract interpretation techniques typically over-approximate the set of reachable states iteratively, abstract acceleration captures the effect of the loop with a single, non-iterative transfer function applied to the initial states at the loop head. In contrast to previous acceleration techniques, our approach applies to any linear loop without restrictions. It derives symbolic expressions for the entries of the matrix modeling the effect of n>=0 iterations of the loop. The entries of such a matrix depend on n through polynomial, exponential and trigonometric functions. We introduce an abstract domain for matrices that captures the linear inequality relations between these expressions. This results in an abstract matrix for describing the fixpoint semantics of the loop. Our approach integrates smoothly into standard abstract interpreters and clearly outperforms existing approaches in terms of precision.
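The contrast between iterating a linear loop and applying a single accelerated transformer can be illustrated with exact matrix powers (a toy: the paper's contribution is an abstract matrix domain that soundly captures all n >= 0 at once, not a fixed n):

```python
# Toy illustration of acceleration: instead of iterating the loop body,
# apply one closed-form transformer for n iterations (the exact matrix
# power here; the loop and initial state are illustrative).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """Matrix power by binary exponentiation."""
    result = [[1 if i == j else 0 for j in range(len(A))] for i in range(len(A))]
    while n:
        if n & 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        n >>= 1
    return result

def apply(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1], [0, 1]]          # loop body: x += y (y unchanged)
x0 = [0, 3]

accelerated = apply(mat_pow(A, 10), x0)   # one transfer-function application
iterated = x0
for _ in range(10):                       # versus ten iterations of the body
    iterated = apply(A, iterated)
assert accelerated == iterated == [30, 3]
```

For this loop the entries of A^n are the polynomials 1 and n; the abstract matrix domain of the paper captures the linear inequalities relating such entries so the effect of every n >= 0 is summarised in a single abstract transformer.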

The slides can be found online: http://www.cs.ox.ac.uk/people/peter.schrammel/acceleration/popl14_slides.pdf

I'll talk about recent work on automated test case generation for synchronous, reactive systems, like Simulink models. Testing such systems is challenging because long input sequences are often needed to drive them into a state at which a desired feature can be tested. This is particularly problematic in on-target testing, where a system is tested in its real-life application environment and the time required for resetting is high. For many reactive systems, there actually exists a single test case that covers all test goals. The question is how we can find such a test case and minimise the number of execution steps it consists of. And, I'll show how we can (ab)use CBMC to implement it…

The slides can be found online: https://www.cs.ox.ac.uk/people/peter.schrammel/chaincover/chaincover_talk20130628.pdf

This is an informal talk about SMT Kit, an open-source C++11 library for many-sorted logics such as SMT-LIB 2.0. We'll discuss the library's usefulness for experimental research into symbolic analysis techniques. These insights were gained through our work on a new encoding of axiomatic concurrency semantics as a quantifier-free first-order logic formula whose size is only quadratic, instead of cubic.

The slides can be found online on the project website: http://ahorn.github.io/smt-kit/

The video can be found here:

Talk video, approx 1 hour / 1Gb

The whiteboard at the end of the presentation looked like this:

talks.txt · Last modified: 2017/03/10 23:16 by luciro