ARW 5 Abstracts of Contributed Talks


  • Mark Adams, DERA - "Pen and Paper" Reasoning Facilities for Theorem Provers.
    I shall begin by covering the requirement in industry for sound theorem provers capable, at a minimum, of carrying out reasoning steps at the same level as those of a rigorous pen-and-paper mathematical proof.

    I shall then go on to summarise a one-year research and development programme that I carried out at DERA Malvern to meet this minimum requirement. In particular, I shall detail the present shortcomings of the ProofPower theorem prover, explain how I went about extending ProofPower to meet these requirements, and indicate what implementation work remains to be done.

  • David Duffy, University of York - Partial Functions and Proof by Induction in Z

    Reasoning inductively about partial functions introduces special problems for proof systems for the Z language, such as CADiZ. Firstly, the value of a partial-function application to objects outside the function's domain has to be determined: should it be possible for such an application to ``return a value'' outside the function's range, and how is consistency maintained if that range is a Z type? Secondly, typical approaches to proving properties of partial functions use inference rules such as computational induction and allow non-monotonic inferences, whereas CADiZ currently relies on structural induction and has a monotonic inference system.

    Allowing a function's application to return ``undefined'' values enables the specifier to leave the domain of the function unstated, which is perhaps the principal benefit of such functions. To maintain consistency with Z's type system, we propose either modifying our specifications, translating each type into a subset of a type, or modifying our proof systems, ensuring that we cannot always prove that an expression is a member of its type, thus distinguishing `having a type T' from `being a member of a type T'.

    One possible solution to the problem of CADiZ's use of structural induction and monotonic reasoning is the generation of ``closure axioms'' for each function definition, stating that the function is defined only over the cases explicitly specified. This introduces problems for refinement, where we typically require the most general function satisfying our specification; the closure axioms might therefore apply only at the specification level at which they are introduced, and be discarded subsequently. A benefit of the closure-axiom approach is that it makes explicit the conditions necessary for a proof, beyond the usual ``when the partial functions are defined''.
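
    As a hypothetical illustration (not an example from the talk): if a partial function f is specified only by the cases

      f(0) = 1
      f(n + 1) = (n + 1) * f(n)    for n >= 0

    then the generated closure axiom would state

      dom f = { n : Z | n >= 0 }

    i.e., f is defined over exactly the explicitly specified cases and nowhere else.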

  • Dirk van Heule, University of Ghent, Belgium - An Automated Tautology Checker for the Partial Predicate Calculus
    In the history of logic, three-valued logic plays an important role, not only as a foundation for other multiple-valued logics, but also in applications in software verification and hardware design.

    The main goal of our research is to integrate the Partial Predicate Calculus (PPC) in an automated theorem-proving tool (Isabelle) and to test it on some applications. The automated tautology checker (ATC/PPC) is one of the results we have obtained.

    The three-valued logic PPC differs from most other logics in its definition of validity. It was originally designed to cover undefinedness in mathematics, but it turned out to be useful in handling partial predicates in programming languages. The calculus of PPC is a kind of sequent calculus, and because of the correspondence between the operational inference rules of the sequent calculus and the rules of the tableau calculus, PPC can easily be adapted to the method of semantic tableaux.
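
    For illustration (a sketch assuming strong Kleene connectives and reading ``tautology'' as ``true under every assignment''; PPC's own connectives and its notion of validity differ in detail), a three-valued tautology check can be written in a few lines of OCaml:

      type tv = T | F | U

      (* Strong Kleene connectives -- an assumption for this sketch. *)
      let neg = function T -> F | F -> T | U -> U
      let conj a b = match a, b with
        | F, _ | _, F -> F
        | U, _ | _, U -> U
        | _, _ -> T
      let disj a b = match a, b with
        | T, _ | _, T -> T
        | U, _ | _, U -> U
        | _, _ -> F

      (* A formula over n variables is a function from an assignment
         (an array of truth values) to a truth value.  It is a
         tautology if it evaluates to T under all 3^n assignments. *)
      let tautology n f =
        let rec go i asn =
          if i = n then f asn = T
          else List.for_all (fun v -> asn.(i) <- v; go (i + 1) asn) [T; F; U]
        in
        go 0 (Array.make n T)

      (* Example: p \/ ~p is not a three-valued tautology
         (it takes the value U when p = U). *)
      let _ = tautology 1 (fun a -> disj a.(0) (neg a.(0)))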

  • Andrew Ireland, Heriot-Watt University
    and Andrew Cook, Heriot-Watt University - On the Synthesis of Performance-Enhancing Eureka Steps

    Within the Dependable Systems Group we are investigating a hybrid approach to the automatic translation of Standard ML (SML) prototypes into Parallel-C. The approach builds upon techniques from program instrumentation, performance modelling, program transformation and automated theorem proving. Here we focus on the transformation and theorem proving aspects.

    Program transformation techniques provide us with a means of maximising the opportunities for parallelism within functional prototypes. That is, drawing upon a library of transformations, we attempt to replace costly computations with program patterns that are directly amenable to parallelisation. The patterns of interest are higher-order functions, which are well known to correspond closely to generic parallel constructs.
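
    As a toy illustration of such a transformation (a sketch in OCaml rather than SML; the function names are hypothetical), explicit recursion is replaced by the higher-order function map, which corresponds directly to a generic data-parallel map construct:

      (* Explicit recursion: costly to analyse for parallelism. *)
      let rec squares = function
        | [] -> []
        | x :: xs -> (x * x) :: squares xs

      (* The same computation as a higher-order pattern: a compiler
         can implement List.map with a generic parallel map. *)
      let squares' xs = List.map (fun x -> x * x) xs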

    A limitation of this approach is that no library of transformations can be complete. The task of discovering a missing transformation is a hard problem which is typically seen as a eureka step. We are exploring the possibility of using proof plans, in particular a proof plan for mathematical induction, to automate the synthesis of such eureka steps. In addition to contributing to proof-plans research, we believe that this work will significantly enhance the generality, extensibility and dependability of our SML to Parallel-C compiler.

  • Paul Jackson, University of Edinburgh - Formally Modelling a Garbage Collection Algorithm

    I'll talk about an experiment I carried out recently, using the PVS interactive theorem prover to formally model and verify properties of a garbage collection algorithm. The experiment involved setting up a state transition system for both the algorithm and its environment, and using linear temporal logic (LTL) to specify and verify safety and liveness properties.
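
    For a garbage collector, such properties typically take the following shape (given here only as an illustration; the talk's precise formulation may differ): safety, that only garbage is ever collected, e.g. G (collected(c) -> garbage(c)); and liveness, that every garbage cell is eventually collected, e.g. G (garbage(c) -> F collected(c)).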

    I'll describe briefly the transition system model I set up, and will give an overview of the proofs I carried out.

    I'll then summarise the main problems I came across in this work and how I dealt with them. These problems concerned:

  • suitability of PVS's specification language,
  • interaction of fairness requirements and nondeterminism,
  • separation of reasoning about flow of control and propagation of data invariants,
  • adequacy of automation in PVS.
    The experiment is part of a larger project to develop and assess formal models and verification techniques for a variety of collector designs. The project currently involves myself, Healf Goguen and Rod Burstall at Edinburgh, and is being carried out in consultation with the Memory Management group at the software house Harlequin.

  • Anthony McIsaac, SGS-Thomson Microelectronics Limited - Model checking, abstraction and proof in a microprocessor design project
    An account of the way formal verification was applied in a large-scale industrial microprocessor development project, centred on a case study of the internal bus: the formal specification of the protocol and the arbiter; the verification strategy, namely the construction of abstract models of the behaviour of the arbiter from the point of view of one or two of the bus users; the use of model checking to prove that these models were indeed abstractions of the implemented design; and general reasoning to deduce from this that the implementation met its specification.
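
    The soundness of this strategy rests on a standard property-preservation result (stated here in general terms, not as it appeared in the talk): if an abstract model A simulates the implementation M, then any property in a universal fragment of temporal logic (such as ACTL*) that holds of A also holds of M, so model checking the abstractions yields conclusions about the implemented design.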

  • Dirk Nowotka, University of Warwick
    Sara Kalvala and Marco Benini, University of Warwick - Holly - An Approach to Object Code Verification

    Formal verification of software typically consists of checking programs in a high-level language against an abstract specification. Confidence in the correctness of the actual program running on a machine then also relies on the soundness of the translation from the high-level language to machine code, i.e., the correctness of a compiler, including its code optimisation techniques.

    If one wishes to have confidence in the final code, another approach is object code verification, i.e., verification of the program code at assembly level. The confidence in the implementation of a program on an actual machine is then more direct, relying on the correctness of the hardware model and no other software.

    Dealing with object code also has a price, however. Such code is, in a sense, further from its specification, because it has to deal with a more concrete machine, with registers, limited memory, and so on. There is a vast shift in granularity between the code and its specification.

    The technique for object code verification we are developing emphasises the use of abstraction, aided by a process-algebraic framework. The concept of simulation provides the tool with which abstractions of the process-algebraic representation of the program can be formalised.
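
    For reference, the standard definition (not specific to Holly): P < Q holds when there is a simulation relation R with P R Q, i.e., whenever P R Q and P can make a transition P --a--> P', there is a matching transition Q --a--> Q' with P' R Q'. Thus Q can match, step by step, every behaviour of P.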

    However, the correctness proof takes place in higher-order logic (HOL), which we believe presents a more amenable proof environment. Abstraction at the logical level is obtained by unlifting results from the CCS level using the CCS semantics. More precisely, let < be a simulation relation, and let [[ ]]^s be the lifting operator for a particular sequence s of states; then:

    If P < Q then, for all s, [[P]]^s implies [[Q]]^s.

    The interaction between the logical and the algebraic level is fundamental for our methodology.

    We have implemented this approach in a proof system we call Holly. We use Isabelle/HOL, in which we have embedded a variation of Milner's Calculus of Communicating Systems (CCS) as a process algebra. When verifying a program, its object code is translated into both a representation in HOL and an equivalent representation in CCS. At any point in a proof, either deductions in HOL or simulations in the process algebra can be used.

    Holly demonstrates the concurrent use of a logical and an algebraic model to make object code verification feasible. In this talk, we expand on the interaction between these models and illustrate it within an example proof in Holly.

  • Alan Smaill, University of Edinburgh - Derived inference and search techniques.

    Recent work in Edinburgh on the use of a higher-order relational programming language suggests an architecture within which two aspects of effective theorem proving can be combined:

  • on the one hand, Logical Framework approaches to reasoning allow the construction of derived inference rules, which are assured to be sound;
  • on the other hand, tactic-based systems, perhaps guided by proof planners, achieve a more procedural abstraction of the problem.
    This is a sketch of how notions of binding and modularisation in a relational setting can help here, by allowing the development of theories of derived inference and search in a uniform setting.

  • Jamie Stark, Heriot-Watt University - Refinement of failed proof attempts involving loop invariants.
    Verifying simple loop programs is completely automatic if a correct specification of the loop, by way of an invariant, is given. Coming up with an invariant for a loop is seen as a ``eureka'' step when reasoning about loop programs: even for the simplest programs, a suitable invariant is not always obvious.

    Proof planning offers the chance to analyse the failure of proof attempts. By recognising what we want the invariant to look like, schematically, at a particular point in the proof, we are then able to patch the proof attempt and refine our original guess of the invariant.
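
    A minimal example of the eureka step involved (a hypothetical program, not an example from the talk): to verify the loop below, one must supply an invariant relating the partial sum to the portion of the array processed so far.

      (* Summing an array with a while loop, in OCaml. *)
      let sum (a : int array) : int =
        let s = ref 0 and i = ref 0 in
        (* Invariant (the eureka step):
             !s = a.(0) + ... + a.(!i - 1)  and  0 <= !i <= Array.length a.
           On exit !i = Array.length a, so the invariant yields the
           postcondition: !s is the sum of the whole array. *)
        while !i < Array.length a do
          s := !s + a.(!i);
          incr i
        done;
        !s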

    Reference: A. Ireland and J. Stark. ``On the automatic discovery of loop invariants.'' In Proceedings of the 4th NASA Langley Formal Methods Workshop, NASA Conference Publication 3356, 1997.

  • Andrew Stevens, University of Oxford - Mechanising a Unifying Theory
    Recent work by C.A.R. Hoare and J. He has confirmed a widely held conjecture that all computational paradigms can be embedded in the single mathematical theory of relations. A general ``unifying theory'' has been found which projects into more particular instances corresponding to a wide variety of computational paradigms. In this talk I will briefly introduce the unifying theory and its proposed applications, and address the question of how it (and similar formalisms based on alphabetised relations) may be realised with conventional theorem provers based on higher-order logic.
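
    To give a concrete feel for the relational reading (a sketch under simplifying assumptions of our choosing, not Hoare and He's formulation), a program can be modelled as a relation between before- and after-states, with sequential composition as relational composition and refinement as containment of relations. States are restricted to a small finite set below so that the operators are directly executable in OCaml:

      (* A toy state: a single integer variable ranging over 0..7. *)
      type state = int
      type prog = state -> state -> bool

      let states = List.init 8 (fun i -> i)

      (* x := f(x), read as a relation between before- and after-states. *)
      let assign f : prog = fun s s' -> s' = f s

      (* Sequential composition is relational composition. *)
      let seq (p : prog) (q : prog) : prog =
        fun s s'' -> List.exists (fun s' -> p s s' && q s' s'') states

      (* Nondeterministic choice is relational union. *)
      let choice (p : prog) (q : prog) : prog =
        fun s s' -> p s s' || q s s'

      (* Refinement: p refines q when every behaviour of p is allowed by q. *)
      let refines (p : prog) (q : prog) : bool =
        List.for_all
          (fun s -> List.for_all (fun s' -> not (p s s') || q s s') states)
          states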

  • Willem Visser, University of Manchester - Practical Model Checking
    Model checking is a popular method for the formal verification of software and hardware systems. A model checker determines whether a model of a system is correct with respect to a specific behaviour described in a formal specification language. Temporal logic, which can express properties of event orderings in time without explicitly introducing time, is often used as the specification language. We will describe a practical model checking algorithm for the branching-time temporal logic CTL*. We use an automata-theoretic approach, in which the model checking problem is reduced to checking the nonemptiness of alternating automata. We show that the nonemptiness test can be described as a two-player game, and, to make this game efficient, we introduce the novel concept of playing ``new games''. We relate our work to existing algorithms for model checking sublogics of CTL*, and show that our approach is suitable for model checking industrial-scale systems.
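
    To illustrate the game-theoretic ingredient in its simplest form (a generic sketch of a reachability game and its attractor computation, in OCaml; the games arising from nonemptiness of alternating automata for CTL*, and the ``new games'' idea, refine this considerably):

      (* Positions are 0 .. n-1; owner.(v) is 0 or 1; succ.(v) lists the
         moves from v.  Compute the positions from which player 0 can
         force the play into the target set: the classical attractor. *)
      let attractor n (owner : int array) (succ : int list array)
          (target : int list) : bool array =
        let win = Array.make n false in
        List.iter (fun v -> win.(v) <- true) target;
        let changed = ref true in
        while !changed do
          changed := false;
          for v = 0 to n - 1 do
            if not win.(v) then begin
              let ok =
                if owner.(v) = 0 then
                  (* Player 0 needs some move into the winning region. *)
                  List.exists (fun w -> win.(w)) succ.(v)
                else
                  (* Player 1 must be unable to avoid it (and must move). *)
                  succ.(v) <> [] && List.for_all (fun w -> win.(w)) succ.(v)
              in
              if ok then (win.(v) <- true; changed := true)
            end
          done
        done;
        win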