Z3 Axiom Profiler

Getting It

The Z3 Axiom Profiler ships with VCC, so you can simply install the binary release of VCC (see Installation Prerequisites). It is also included in the VCC sources, so you can compile it yourself; see Build Instructions for details.

Usage

The usage is either:

Z3AxiomProfiler.exe


(and then load the log file through the menu) or

Z3AxiomProfiler.exe file1.bpl file2.bpl /l:logfile


Currently /l: can usually be skipped (the log file will be detected automatically). If you do not have the BPL files, you can specify just the log file. When things go wrong it is generally advisable to run the profiler from the command line, because some warnings are printed there.

The option /c:<N> makes the profiler select trace number <N> when the log file contains multiple traces.
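
For example, to select the second trace (the file names here are just placeholders):

Z3AxiomProfiler.exe file1.bpl file2.bpl /l:logfile /c:2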

The log file can be obtained by running Z3 v2 with the TRACE=true option. The trace is written to the file z3.log; this can be overridden with the TRACE_FILE_NAME Z3 option. Note that for string parameters Z3 needs to see the quotes, so in Unix-like shells you need to write something like z3 ... TRACE_FILE_NAME='"xyz"' (note the single and double quotes), whereas in cmd just single quotes are enough.
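
For example, under a Unix-like shell the following writes the trace to mytrace.log (the input file name is just a placeholder):

z3 TRACE=true TRACE_FILE_NAME='"mytrace.log"' input.smt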

The performance overhead is between 1.5x and 3x in most cases. The option should not alter the search direction.

Quantifiers

The top-level entries in the tree are quantifiers. Hover over an entry to get a tooltip referring to the BPL file (which needs to be specified on the command line or in the loading form). All information available through mouse hover is also displayed in the bottom pane, provided it is enabled (button on the right).

In each quantifier entry the first number is the "cost", while the number after "direct:" is the number of instances of that quantifier. The cost is roughly the number of quantifier instances that the current quantifier (or instance) caused.

After expanding a top-level entry you can choose whether to see:
  • DEEP instances, i.e. instances that appeared only after long chains of dependencies
  • COSTLY instances, i.e. instances that caused a lot of other instances to be generated
  • FIRST and LAST instances from the log file
  • ALL instances of that quantifier from the log file

For each instance you get the following information:

<name of quantifier> <depth> (<weighted depth>) / |<size of bindings>| @<line in log file>  <cost>  <head of binding 0> |<size of binding 0>| ...


The size of a term is given as two numbers: the first is the number of symbols (function applications and constants), the second is the maximal depth of the term.

The size of the bindings is given as the sum of the binding sizes and the maximal depth over all bindings.
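
As a purely hypothetical example (the exact punctuation may differ), an entry like

MyAxiom 5 (12) / |34, 7| @1234  17.5  f |11, 4| ...

would describe an instance of quantifier MyAxiom at depth 5 with weighted depth 12, bindings of total size 34 and maximal depth 7, recorded at line 1234 of the log file, with cost 17.5, whose first binding has head symbol f and size 11 with maximal depth 4.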

After expanding an instantiation you get access to:
  • the terms that are BLAMED for the instantiation
  • the terms that are BOUND to free variables of the quantifier
  • YIELDS, i.e. the instantiations that blame the current instantiation for their existence
  • and the instantiations that we can blame for the current instantiation (they are given directly, at the top)

The depth of an instantiation is 1 plus the maximum of the depths of the instantiations it blames. The weighted depth of an instantiation is its weight (as determined by the Boogie {:weight ...} attribute) plus the maximum of the weighted depths of the blamed instantiations.
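
For example (made-up numbers), an instantiation that blames two instantiations of depths 3 and 5 has depth 1 + max(3, 5) = 6; if its quantifier has weight 2 and the blamed instantiations have weighted depths 4 and 7, its weighted depth is 2 + max(4, 7) = 9.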

The initial cost of an instantiation is 1. If an instantiation has cost C and blames N other instantiations for its existence, C/N is added to the cost of each of the blamed instantiations.
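
For example (made-up numbers), an instantiation with cost 6 that blames 3 other instantiations adds 6/3 = 2 to the cost of each of those 3, on top of their own initial cost of 1 and whatever they receive from the instantiations that blame them in turn.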

Tree nodes corresponding to complex terms can be unfolded. The "C: ..." entry is more or less a C-programming-language transliteration of the term.

Conflicts

The top-level node labeled CONFLICTS gives access to all conflicts created during the search. The node labeled 100 CONFLICTS gives a random, weighted sample of the conflicts. The cost of a conflict is the number of quantifier instantiations needed to deduce it; it is given as the first number in the conflict description. Looking at the SCOPES output is generally more useful than looking at bare conflicts.

Scopes

Scopes reflect the way the SMT solver performs the search: it makes a number of decisions (case splits), finds a conflict clause, and then backtracks over some of the decisions. One can therefore think of the search as a tree, where nodes are labeled with literals and leaves are additionally labeled with conflicts. There are, however, very long sequences of nodes with just one child. We collapse each such sequence into a single node and label it with the entire sequence of literals. The SCOPE tree is exactly this tree with some additional information.

Each decision can cause other literals to be asserted. Those are called implied literals.

The asserted literals can give rise to quantifier instantiations. Such instantiations are attributed to the current scope.

The scope entries are described as follows:

Scope: <total instances> / <current-level instances>, <decisions> lits, <child scope#> children [rec: <descended scope#>, <cost> inst/cnfl]


where <total instances> is the total number of instances for this scope and its child scopes, <current-level instances> is the number of instances in the current scope only, <decisions> is the number of decision literals, <child scope#> is the number of child scopes (children in the search tree) of the current scope, <descended scope#> is the same number but computed transitively, and finally <cost> is the average cost, in instances, of a conflict generated from this scope (that is, <total instances>/<descended scope#>).
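
As a made-up example, an entry such as

Scope: 1200 / 50, 3 lits, 4 children [rec: 60, 20 inst/cnfl]

would describe a scope with 1200 instances in itself and its descendants, 50 of them in this scope alone, 3 decision literals, 4 direct child scopes, 60 scopes below it transitively, and an average of 1200/60 = 20 instances per conflict.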

When you unfold a scope you get its list of literals and its child scopes.

Each literal has again several nodes attached as children:
  • its explanation (which is irrelevant for decision literals, and kind of cryptic anyhow)
  • the literals that it implied
  • and, if it was a decision literal made during the left-to-right case-splitting search, an entry showing the clause that contained the literal

Proofs

The Z3 Axiom Profiler can also parse proofs produced by Z3. To produce a proof, specify DISPLAY_PROOF=true PROOF_MODE=2 in addition to TRACE=true. This will, however, disable certain optimizations and alter the search (which will generally be much slower). With a proof available, the Z3 Axiom Profiler can figure out which of the instances created during the search were actually useful.
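
Under a Unix-like shell this might look as follows (the input file name is just a placeholder):

z3 TRACE=true DISPLAY_PROOF=true PROOF_MODE=2 TRACE_FILE_NAME='"mytrace.log"' input.smt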

The proof can be browsed (at the node labeled th-lemma or unit-resolution), but I have not yet met anyone able to make sense of it.

Instead, at the node labeled PROOF-INST one can search for quantifier instantiation chains that were particularly deep AND used in the proof. The list is sorted by weighted depth. The greyed-out entries are those with a weight of zero.

Models

The Z3 Axiom Profiler can also read models produced by Z3 v2, as well as the ones that Boogie prints (which are in Z3 v1 format). To get such a model from Boogie, use:

Boogie myprelude.bpl file.bpl /printModel:1 /printModelToFile:mymodel.log


and then run:

Z3AxiomProfiler.exe myprelude.bpl file.bpl /l:mymodel.log


The models are displayed in entries labeled MODEL #n, one per model found.

A model consists of a number of partitions (equivalence classes) and a table of function symbols with their results when applied to tuples of partitions. We call a function applied to a tuple (including the 0-tuple) of partitions a model entry.
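
For example (purely illustrative notation), if *1 and *2 denote partitions, a model entry f(*1) -> *2 states that applying f to any term in partition *1 yields a term in partition *2; a constant c is a model entry with an empty argument tuple whose result is the partition that c belongs to.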

The model description is divided into sections for literals (integers, true, false, etc.), constants (uninterpreted functions of arity zero) and other functions.

Entries for literals and constants unfold to show:
  • the list of model entries in the partition of the literal/constant (EQ [n], where n is the number of such entries)
  • the list of entries that use the partition of the literal/constant in question

Entries for function applications (grouped by function symbol) unfold to additionally show the list of arguments.

Screenshots

axprof1.png

axprof2.png

axprof3.png
