This repository contains the Rocq mechanization of the results from the paper "Algorithmic Conversion with Surjective Pairing: A Syntactic and Untyped Approach".
While the documentation is written in org mode, we recommend using the
generated README.html file so you can follow the hyperlinks into the
generated coqdoc files and navigate the development in your web browser.
To do so, launch a local web server using python or darkhttpd. Note that
if you instead open the file directly in your browser, the toggle-proof
functionality of coqdocjs will not work because of the CORS restrictions
on JavaScript modules!
# starting a local server with python listening to port 8080
python -m http.server --bind 127.0.0.1 8080 --directory .
# alternatively, starting a local server with darkhttpd listening to port 8080
darkhttpd . --addr 127.0.0.1 --port 8080
You can then access the document from your web browser at the URL
http://127.0.0.1:8080.
As mentioned in the paper, while the natural numbers are omitted from the text, they can be found in the mechanization and the specification pdf file.
You can skip this section if you are compiling the proof scripts from the configured VM.
Inside a fresh opam
switch, run the
following commands to install the required dependencies.
opam update
opam install -y coq-hammer-tactics coq-stdpp coq-autosubst-ocaml
eval $(opam env)
The termination proof can be brittle because we rely on inversion to
produce the subproofs of the domain relation that make the termination
checker happy.
Here are the versions of the Rocq packages that are known to work.
Note that if you don't plan to modify syntax.sig and regenerate the
syntax.v file using autosubst, you can choose not to install
coq-autosubst-ocaml. However, you need to be careful not to accidentally
run make deepclean, which will delete the auto-generated syntax files.
Instead, run make clean if you want to keep the auto-generated files.
After the dependencies are installed, run the following command to
validate the proof development. The -j2 flag allows make to validate up
to 2 files in parallel. According to our testing, going beyond -j2 does
not reduce the compilation time.
make -j2
It is expected that the compilation process will generate a lot of warnings about notations and the ssreflect library. The compilation is successful as long as the command terminates with the following output.
# ... lots of warnings ...
make[1]: Leaving directory '****'
On a Linux laptop with an Intel 12700k, the compilation takes 67 seconds. If you are running the command inside the attached VM and your host device has an ARM CPU (e.g. Mac M-series), then it'll take around 10x longer since emulating x86 instructions can be quite slow.
Term syntax (PTm)
The grammar specification in higher-order abstract syntax (HOAS) can be found in syntax.sig. We use the autosubst-ocaml tool to turn the syntax file into the Rocq file syntax.v, which contains not only the de Bruijn grammar but also definitions of renaming and substitution functions and their associated lemmas.
Due to the many reduction relations used in our proof, we organize each
untyped reduction, equality, and subtyping relation inside its own
module, where the relation itself is simply named R. Thus, to avoid
ambiguity, those modules are never meant to be imported, and each
relation should always be referred to in its qualified form. For
example, the β-reduction relation is defined as the inductive type R in
the module RRed. Outside the module, the β-reduction relation is
referred to by its fully qualified name RRed.R.
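The convention can be illustrated with a minimal sketch; the toy relation below stands in for the actual constructors of RRed.R, which are of course different.

```coq
(* Sketch of the module-per-relation convention. The relation inside
   each module is always named R; the module is never imported. *)
Module RRedSketch.
  (* Hypothetical stand-in for a one-step reduction relation. *)
  Inductive R : nat -> nat -> Prop :=
  | Step n : R (S n) n.
End RRedSketch.

(* Outside the module, only the qualified name is used. *)
Check RRedSketch.R 1 0.
```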
Given a module containing a reduction relation named Red, the module
Reds contains the properties about its reflexive and transitive closure.
Instead of defining the reflexive and transitive closure of RRed.R as
RReds.R, we directly refer to it as rtc RRed.R, where rtc is a relation
operator that outputs the reflexive and transitive closure of its input.
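For example, the rtc operator from the stdpp library (one of the installed dependencies) can be applied to any step relation; the toy relation and lemma below are illustrative, not part of the development.

```coq
From stdpp Require Import relations.

(* A toy one-step relation standing in for a reduction relation. *)
Inductive step : nat -> nat -> Prop :=
| step_pred n : step (S n) n.

(* rtc step is its reflexive-transitive closure, so 2 reduces to 0
   in two steps. rtc_l and rtc_once are stdpp lemmas. *)
Lemma steps_two : rtc step 2 0.
Proof.
  eapply rtc_l; [constructor|].
  apply rtc_once. constructor.
Qed.
```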
Some relations are standard and therefore not included in the text for concision, but we include them here for completeness.
Normal form predicates (Fig. 5)
ishf and ishne
Note that we sometimes use HRed.nf in place of the above definition of whnf, where the former means the term would no longer reduce with weak-head reduction. These two definitions coincide for precisely the set of terms that are non-stuck.
Also, instead of defining nf and ne as inductive predicates, we define them as mutually recursive fixpoints. For the definition to be accepted by the termination checker, the injection from ne to nf is proven a posteriori as the lemma ne_nf.
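The shape of these definitions can be sketched on a toy lambda-calculus grammar; the real fixpoints range over PTm and cover more constructors.

```coq
(* Toy grammar standing in for PTm. *)
Inductive Tm := Var (n : nat) | Abs (a : Tm) | App (a b : Tm).

(* Mutually recursive boolean predicates: ne recognizes neutral
   terms, nf recognizes normal forms. *)
Fixpoint ne (a : Tm) : bool :=
  match a with
  | Var _ => true
  | Abs _ => false
  | App a b => ne a && nf b
  end
with nf (a : Tm) : bool :=
  match a with
  | Var _ => true
  | Abs a => nf a
  | App a b => ne a && nf b
  end.

(* The injection from ne to nf is proven a posteriori, mirroring the
   lemma ne_nf in the development. *)
Lemma ne_nf_sketch a : ne a = true -> nf a = true.
Proof. destruct a; simpl; intros H; try exact H; discriminate H. Qed.
```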
Reductions
Strong normalization (Sec. 3.2)
Joinability and Subtyping
Joinability (w.r.t βη-reduction, Def. 3.1): DJoin
Joinability (w.r.t η-reduction): EJoin
One-step subtyping (Page 12): Sub1
Untyped subtyping (Def. 3.2): Sub
Untyped subtyping (w.r.t η-reduction): ESub
Note that ESub holds when two terms can be related by one-step subtyping after η-reduction. It is not mentioned in the paper but is convenient to have around in the mechanization for automation purposes.
Coquand's algorithm (Sec. 4.1)
Coquand's algorithm is one of the exceptions to the above naming scheme, and the actual formal definition is slightly different from the text presentation. Notably, the algorithmic equality for head normal forms is split into two relations: one that handles the case where both terms are neutral, and one that handles the cases where at least one term is not neutral.
The relations are all formulated on arbitrary terms. The neutral and normal form restrictions in f0 ∼ f1 are proven a posteriori as lemmas in the mechanization (e.g. coqeq_no_hred).
Subtyping works similarly, though there is no need to split the relation as the neutral case is handled by equalities.
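The splitting can be sketched as a pair of mutually inductive relations; all names and constructors below are illustrative, not the development's actual definitions.

```coq
(* Toy grammar standing in for PTm. *)
Inductive Tm := Var (n : nat) | Abs (a : Tm) | App (a b : Tm).

(* coqeq_neu relates neutral terms; coqeq handles the remaining
   cases and falls back to coqeq_neu when both sides are neutral. *)
Inductive coqeq_neu : Tm -> Tm -> Prop :=
| CE_Var n : coqeq_neu (Var n) (Var n)
| CE_App a0 b0 a1 b1 :
    coqeq_neu a0 a1 -> coqeq b0 b1 ->
    coqeq_neu (App a0 b0) (App a1 b1)
with coqeq : Tm -> Tm -> Prop :=
| CE_Abs a b : coqeq a b -> coqeq (Abs a) (Abs b)
| CE_Neu a b : coqeq_neu a b -> coqeq a b.
```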
As mentioned in the text, the definition of the logical predicate cannot be written in Rocq directly as it consists of an inductive definition nested in a fixpoint definition over universe levels.
The technique we adopt to encode the logical relation is described in detail in the write-up by Liu and Weirich. To make the code more readable, we specify a module type LogRel that includes the introduction and induction principles that fully characterize the logical predicate. The module LogRelImpl shows how the logical predicate is actually defined through the inductive definition InterpExt and the fixpoint InterpUniv, the latter of which is the logical predicate satisfying the abstract properties we actually need.
By encapsulating the Rocq-specific workarounds behind the module signature, the properties about the logical predicate (found in LogRelFactsImpl) can be implemented purely in terms of the clean interface specified in LogRel.
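The encapsulation pattern can be sketched as follows; the field names and the trivial implementation are illustrative only, playing the roles of LogRel and LogRelImpl respectively.

```coq
(* Abstract interface: clients see only these declarations. *)
Module Type LR_SIG.
  Parameter Interp : nat -> (nat -> Prop) -> Prop.
  Parameter cumulative :
    forall i j P, i <= j -> Interp i P -> Interp j P.
End LR_SIG.

(* Implementation: how Interp is built stays hidden behind the
   opaque ascription ": LR_SIG". *)
Module LRImpl : LR_SIG.
  Definition Interp (i : nat) (P : nat -> Prop) : Prop := True.
  Lemma cumulative :
    forall i j P, i <= j -> Interp i P -> Interp j P.
  Proof. intros. exact I. Qed.
End LRImpl.
```

Because LRImpl is sealed with the module type, any client module (as LogRelFactsImpl does with LogRel) can only use the declared interface, so Rocq-specific workarounds never leak.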
To avoid introducing propositional and functional extensionality axioms, we add rule InterpUniv_Conv to our mechanization to ensure that the logical predicate operates on predicates that are extensionally equivalent (denoted by ≐ in the mechanization). These artifacts introduced by avoiding the axioms are noted in the development and can be safely ignored.
The relational definition of algorithmic conversion is not immediately executable. The decidability result (Theorem 4.1) is justified by defining a total function that returns true precisely when its two inputs are convertible.
Here, we give links to the definition of the algorithm and the Bove-Capretta domains to handle termination checking.
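The Bove-Capretta technique can be sketched on a toy function; the development applies the same idea to the conversion checker. The domain predicate, inversion lemma, and function below are illustrative, and the inversion lemma is written so that it extracts a structural subterm of the domain proof, which is what makes the termination checker accept the fixpoint.

```coq
(* Domain predicate for a toy non-structural recursion on div2. *)
Inductive dom : nat -> Prop :=
| dom0 : dom 0
| domS n : dom (Nat.div2 n) -> dom (S n).

(* Inversion lemma returning a structural subterm of the proof. *)
Definition dom_inv n (h : dom (S n)) : dom (Nat.div2 n) :=
  match h in dom m
        return match m with 0 => True | S k => dom (Nat.div2 k) end
  with
  | dom0 => I
  | domS _ h' => h'
  end.

(* The function recurses on the domain proof, not on n. *)
Fixpoint depth (n : nat) (h : dom n) {struct h} : nat :=
  match n return dom n -> nat with
  | 0 => fun _ => 0
  | S m => fun h' => S (depth (Nat.div2 m) (dom_inv m h'))
  end h.
```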
The completeness and soundness of the computable functions with respect to their relational counterparts are not explicitly included in the paper, but they are linked in the mechanization by the following lemmas.
The termination of algorithmic conversion is implied by the above completeness and soundness results (split into two separate lemmas).
We claim that our development is axiom-free. To validate that claim, one
can use the Print Assumptions command on the theorems and confirm that
no axioms are displayed.
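For example, on a self-contained lemma the command reports that no axioms are involved; the lemma below is illustrative, not one of the development's theorems.

```coq
(* A lemma proved without any axioms. *)
Lemma example_no_axioms : forall n : nat, n + 0 = n.
Proof. induction n; simpl; congruence. Qed.

(* Prints "Closed under the global context" when no axioms are used. *)
Print Assumptions example_no_axioms.
```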
An alternative method is to run coqchk, which can be invoked on all .vo
files by running make validate. However, coqchk doesn't work well with
module types and will report axioms that we didn't actually use in the
development.
* Theory: Set is predicative
* Theory: Rewrite rules are not allowed
* Axioms:
DecSyn.logrel.LRFacts.functional
DecSyn.logrel.LRFacts.Bind_inv_nopf
DecSyn.logrel.LRFacts.back_clos
DecSyn.logrel.LRFacts.Bind_nopf
DecSyn.logrel.LRFacts.adequacy
DecSyn.logrel.LRFacts.back_closs
Coq.Logic.FunctionalExtensionality.functional_extensionality_dep
Coq.Reals.ClassicalDedekindReals.sig_not_dec
DecSyn.logrel.LRFacts.join
DecSyn.logrel.LRFacts.case
DecSyn.logrel.LRFacts.sub
Coq.Reals.ClassicalDedekindReals.sig_forall_dec
DecSyn.logrel.LRFacts.cumulative
DecSyn.logrel.LRFacts.Univ_inv
DecSyn.logrel.LRFacts.SNe_inv
DecSyn.logrel.LRFacts.Bind_inv
Coq.Logic.Eqdep.Eq_rect_eq.eq_rect_eq
DecSyn.fp_red.NoForbid_FactSN.P_RReds
DecSyn.fp_red.NoForbid_FactSN.P_EPars
DecSyn.logrel.LRFacts.Nat_inv
* Constants/Inductives relying on type-in-type: <none>
* Constants/Inductives relying on unsafe (co)fixpoints: <none>
* Inductives whose positivity is assumed: <none>
make[1]: Leaving directory '****'
Again, all the axioms reported here are false positives, and you should
always trust Print Assumptions when it comes to axiom usage. Still, the
output provides the useful information that our development does not
rely on any of the dangerous or inconsistent features that would make
our theorems trivially true.