TORACLE 2021

The 1st International Workshop on Test Oracles

Tue 24 August 2021 (online event)

co-located with ESEC/FSE 2021




About TORACLE


TORACLE 2021 is the 1st International Workshop on Test Oracles, co-located with ESEC/FSE 2021: The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

The challenge of distinguishing correct from incorrect test executions is called the (test) oracle problem, and it is recognised as one of the fundamental challenges in software testing. For the first time, TORACLE 2021 will bring together researchers and practitioners working on techniques, approaches and tools for alleviating and automating the oracle problem. The workshop will enable participants to present ideas, discuss recent results and develop a shared agenda for future research.
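
To make the term concrete, here is a minimal sketch (illustrative only, not taken from any workshop material): in a conventional unit test, the oracle is the assertion that decides whether the observed output is correct, and producing such assertions automatically is one of the goals of oracle research. The function celsius_to_fahrenheit below is a hypothetical example.

    # Minimal sketch (illustrative): the assertions below act as the test
    # oracle, deciding whether an execution of the hypothetical function
    # under test is correct.
    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    def test_celsius_to_fahrenheit():
        # The expected values encode the oracle: they distinguish correct
        # from incorrect executions.
        assert celsius_to_fahrenheit(0) == 32
        assert celsius_to_fahrenheit(100) == 212

    test_celsius_to_fahrenheit()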

TORACLE is a one-day workshop, which will consist of two sessions followed by a discussion panel.


IMPORTANT DATES

Paper submission deadline: June 1, 2021
Notification: July 1, 2021
Camera ready: July 8, 2021
Workshop: August 24, 2021

Program and Registration

Registration is now open!
Please register via the ESEC/FSE registration page (all co-located workshops are included with the Main Conference Registration).

https://2021.esec-fse.org/attending/registration

Early registration rates end August 6, 2021.

Keynote

Prof. Alastair Donaldson, Imperial College London, UK

August 24, 10:10 - 11:10 EEST (UTC +3)

Title: Oracles, Test Case Reduction and Undefined Behaviour in Randomized Testing

Abstract: Randomized testing is a cheap and effective technique for finding bugs in or increasing test coverage of software systems. It involves running a system under test (SUT) on an endless stream of inputs that are either randomly generated from scratch, or randomly mutated based on existing inputs. For randomized testing to be useful, a test oracle must be available: an oracle is essential for finding bugs, and new tests that improve coverage serve little purpose if we cannot determine whether they pass or fail. Randomized testing also requires a means for shrinking generated test cases to a size that makes them actionable: a test case that exposes a bug must be small enough to facilitate debugging, while a test case that achieves new coverage must be simple enough that it can be added to a regression test suite. I will first give an overview of arguably the most successful application of randomized testing: fuzzing for security-critical vulnerabilities in C++ programs. This is a sweet spot because the bugs detected are critical, the rules of the programming language provide a general-purpose oracle, and test case reduction is relatively straightforward. I will then explain why the oracle and test case reduction problems are much harder when using randomized testing to find functional errors, and why undefined behaviour exacerbates both problems. Finally, I will give examples from my recent work (with collaborators) of how metamorphic testing can be used to circumvent the oracle problem, provide easy test case reduction, and facilitate finding bugs and generating high coverage test suites, in the domains of compilers and software libraries.
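
For readers unfamiliar with metamorphic testing, here is a minimal sketch (illustrative only, not drawn from the keynote): instead of checking an output against a known expected value, a metamorphic test checks a relation between outputs for related inputs, so no reference output is needed. The example uses math.sin purely as a stand-in system under test, with the identity sin(x) = sin(pi - x) as the metamorphic relation.

    # Minimal metamorphic-testing sketch: the relation sin(x) == sin(pi - x)
    # serves as the oracle, so randomly generated inputs can be checked
    # without knowing the expected value of sin(x) itself.
    import math
    import random

    def test_sin_metamorphic(trials=1000):
        for _ in range(trials):
            x = random.uniform(-100.0, 100.0)
            # Tolerance absorbs floating-point rounding error.
            assert math.isclose(math.sin(x), math.sin(math.pi - x),
                                abs_tol=1e-9)

    test_sin_metamorphic()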

Bio: Alastair Donaldson is a Professor in the Department of Computing at Imperial College London and a Visiting Researcher at Google. At Imperial he leads the Multicore Programming Group, investigating novel techniques and tool support for programming, testing and reasoning about highly parallel systems. He was Founder and Director of GraphicsFuzz Ltd., a start-up company specialising in metamorphic testing of graphics drivers, which was acquired by Google in 2018. He was the recipient of the 2017 BCS Roger Needham Award and has published 90+ articles on programming languages, formal verification, software testing and parallel programming. Alastair was previously a Visiting Researcher at Microsoft Research Redmond, an EPSRC Postdoctoral Research Fellow at the University of Oxford and a Research Engineer at Codeplay Software Ltd. He holds a PhD from the University of Glasgow, and is a Fellow of the British Computer Society.

Program

Accepted Paper

Long paper: "Using Machine Learning to Generate Test Oracles: A Systematic Literature Review" by Afonso Fontes and Gregory Gay (Chalmers and the University of Gothenburg)



Displayed time zone: Athens EEST (UTC +3)

10:00 - 10:10 Opening
10:10 - 11:10 Keynote speech by Prof. Alastair Donaldson
11:10 - 11:20 Break (10 minutes)
11:20 - 11:50 Paper presentation: Using Machine Learning to Generate Test Oracles: A Systematic Literature Review by Afonso Fontes and Gregory Gay (Chalmers and the University of Gothenburg)
11:50 - 12:20 Invited Paper presentation: Metamorphic Security Testing for Web Systems by Phu X. Mai, Fabrizio Pastore, Arda Goknil, and Lionel Briand (SnT Centre for Security, Reliability and Trust, University of Luxembourg, Luxembourg)
12:20 - 12:30 Break (10 minutes)
12:30 - 13:20 Panel discussion on the test oracle problem
13:20 - 13:30 Closing

Call For Papers

TORACLE calls for high-quality papers related to the test oracle problem.

We accept the following types of contributions:

  • Long papers: 10-page papers (including references and appendices) that present authors’ original research work and results.
  • Short papers: 4-page papers (including references and appendices) that present one of the following:

    • Position and Vision Paper: to propose radical, innovative new ideas, arguments, and research directions.
    • Work-in-Progress Paper: to describe novel work that is not at the same level of research maturity as a full submission.
    • Industrial and Experience Paper: to describe the experience gained from applying/evaluating techniques to alleviate the oracle problem in an industrial context.
    • Demonstration paper: to describe new research tools, data, and other artifacts related to the test oracle problem. We also encourage the submission of demonstrations of already published research techniques. However, a demonstration paper submission must not have been previously published in a demonstration form. Authors of accepted papers can choose to give either a presentation or a live demonstration.

Each submission will be reviewed by three members of the program committee. The main evaluation criteria include the relevance and quality of the submission in terms of technical soundness, novelty, relevance for the TORACLE audience, and presentation quality.

Accepted contributions will be included in the ESEC/FSE 2021 proceedings.

How to Submit

TORACLE 2021 will employ a double-blind review process. Submitted papers must not reveal the authors’ identities in any way: the authors’ names must be omitted from the submission, and references to their own prior work should be made in the third person. The only exception is demonstration papers, which will undergo a single-blind review process and therefore do not need to hide the authors’ identities.

All papers must conform to the ESEC/FSE 2021 formatting instructions. The page limit is strict. Submitted papers must be written in English and in PDF format. They must be unpublished and not submitted for publication elsewhere. Papers must be submitted electronically through the HotCRP submission site: http://esecfse2021-toracle.hotcrp.com/. The submission deadline is June 1, 2021.

Topics Of Interest

  • Automated generation of test case assertion oracles. Automated generation of contracts, program assertions, invariants, pre- and post-conditions.
  • Metrics and techniques for assessing the quality of oracles.
  • Oracles for specific types of software systems such as deep learning systems, machine learning systems, cyber-physical systems, web applications, and mobile applications.
  • Automated improvement of test oracles.
  • Automated extraction of oracles from various natural language sources.
  • Metamorphic oracles.
  • Oracles for non-functional properties, such as security and performance.
  • Oracle-based coverage criteria.
  • Empirical evaluations of oracle-related techniques in real-world scenarios.
  • Techniques for reducing the cost of manually deriving test oracles.
  • Theoretical formalisation and analysis of the oracle problem.

If you have doubts about your submission being in the scope of the workshop, do not hesitate to contact the organisers.

Committee Members

PC Members

Earl Barr, University College London, UK
Alessandra Gorla, The IMDEA Software Institute, Spain
Facundo Molina, Universidad Nacional de Río Cuarto, Argentina
Arianna Blasi, Università della Svizzera italiana, Switzerland
Phil McMinn, University of Sheffield, UK
Gregory Gay, University of Gothenburg, Sweden
Shing-Chi Cheung, The Hong Kong University of Science and Technology, Hong Kong SAR
Leonardo Mariani, University of Milano-Bicocca, Italy
Jeff Offutt, George Mason University, USA
Michal Young, University of Oregon, USA
Reza Shahamiri, University of Auckland, New Zealand