Tutorials are 90 minutes long and will be held on the afternoon of Wednesday, September 25. There will be two concurrent tutorial sessions.
A Practical Introduction to Formal Development and Verification of High-Assurance Software with SPARK
Benjamin Brosgol, Claire Dross, Yannick Moy (AdaCore)
This hands-on tutorial will show attendees how to use formal methods in developing and verifying high-assurance software. It will cover the benefits and costs of formal methods technology, describe its capabilities and limits, summarize how to adopt formal methods at varying levels depending on assurance requirements, show how to combine formal methods with traditional testing-based techniques, and highlight industrial experience.
The SPARK language (a subset of Ada 2012) will be used as the vehicle for explaining formal methods. The techniques presented can be applied to other language technologies, and the tutorial will compare the SPARK and Frama-C approaches.
Demonstrations will use the GNATprove toolset, with examples based on open source projects including kernels with critical security requirements. Hands-on exercises will be drawn from the SPARK section of the learn.adacore.com site.
Deploying Secure Multi-Party Computation on the Web Using JIFF
Kinan Dak Albab, Rawane Issa, Andrei Lapets, Peter Flockhart, Lucy Qin, Ira Globus-Harris (Boston University)
Implementing Differential Privacy Securely
Simson Garfinkel, Phil Leclerc (US Census Bureau)
Differential privacy is a mathematical approach to privacy-preserving data publishing and privacy-preserving data analysis. At its core, differential privacy adds statistical noise (randomness) to all publications based on confidential data. This non-determinism allows a database curator to control the extent to which an attacker can reverse-engineer a view of the confidential data from an analysis of public data publications. Since its invention in 2006, considerable research has focused on improving the efficiency with which differentially private mechanisms release useful aggregates while controlling the information such releases leak about the underlying confidential data records.
This tutorial starts with a brief overview of differential privacy, including the 2006 definition of epsilon-differential privacy, the Laplace mechanism, and efficient mechanisms for common types of data publication summaries, such as counting queries. We present two models for publishing data: the “trusted curator” model for publishing summary statistics, and the “local model” for publishing privacy-protected microdata. Finally, we present lessons that we have learned in developing and validating differential privacy implementations that are currently in use at the US Census Bureau and that will be used to publish the results of the 2020 Census.
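A classic instance of the local model is randomized response, where each respondent perturbs their own answer before it leaves their device, so no trusted curator ever sees the raw data. The following sketch (our own illustration, not code from the tutorial) shows the mechanism and the standard unbiased estimator for the population rate:

```python
import random

def randomized_response(true_bit, p=0.75, rng=random):
    """Report the true bit with probability p, otherwise its flip.
    This satisfies epsilon-DP with epsilon = ln(p / (1 - p))."""
    return true_bit if rng.random() < p else 1 - true_bit

def estimate_rate(reports, p=0.75):
    """Unbiased estimate of the true rate of 1s from the noisy reports."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulate 100,000 respondents whose true rate of 1s is about 0.3.
rng = random.Random(0)
true_bits = [1 if rng.random() < 0.3 else 0 for _ in range(100_000)]
reports = [randomized_response(b, rng=rng) for b in true_bits]
estimate = estimate_rate(reports)
```

Each individual report is deniable, yet the aggregate estimate converges to the true rate as the number of respondents grows.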
We will use a Jupyter notebook for hands-on work with differential privacy.
LLVM for Security Practitioners
John Criswell (University of Rochester)
Many security researchers need to build tools that analyze and transform code. For example, researchers may want to build security hardening tools, tools that find vulnerabilities within software, or tools that prove that a program is invulnerable to attack. This tutorial will guide attendees through creating extensions to the LLVM compiler that perform simple analysis and transformation operations.
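LLVM passes themselves are written in C++ against the LLVM API, but the underlying pattern — walk a program representation and gather facts about it — can be sketched in a language-neutral way. The toy analysis below (our own analogy, not from the tutorial) counts call sites per callee over Python's own AST:

```python
import ast

class CallCounter(ast.NodeVisitor):
    """A simple analysis pass: count call sites per callee name."""
    def __init__(self):
        self.counts = {}

    def visit_Call(self, node):
        # Record direct calls like g(x); skip attribute calls for simplicity.
        if isinstance(node.func, ast.Name):
            self.counts[node.func.id] = self.counts.get(node.func.id, 0) + 1
        self.generic_visit(node)

source = """
def f(x):
    return g(x) + g(x + 1)
"""
counter = CallCounter()
counter.visit(ast.parse(source))
print(counter.counts)  # {'g': 2}
```

An LLVM analysis pass follows the same shape: iterate over functions, basic blocks, and instructions in the IR, accumulating results that later transformation passes can consume.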