## Introduction

The general idea is that you can create a claim with

```c
claim_t c = claim(o1, ..., oN, P);
```
where o1 to oN are objects and P is a property (a Boolean expression) that follows from the invariants of o1 to oN.
A claim can be disposed of with

```c
unclaim(c, o1, ..., oN);
```

We say that the claim guarantees P and references the objects o1 to oN; references to each object are counted. If you have a claim referencing o and guaranteeing P, then you claim P and have a claim on o. The claim ensures that the objects o1 to oN stay closed until it is disposed of (there is a precondition on unwrap(o) checking the reference count of the object o). The property P can also depend on your local state at the point where the claim is made. After you make a claim, you can share it with your caller by returning it:

```c
#include <vcc.h>

struct vcc(claimable) Foo
{
  int x;
  invariant(x > 1)
};

spec(
claim_t get_claim(struct Foo *o)
  requires(wrapped(o))
  writes(o)
  ensures(wrapped(result) && claims(result, closed(o) && o->x >= 0))
{
  claim_t c = claim(o, o->x > 0);
  return c;
}
)
```

As you can see, you need to list the claimed property again in the postcondition. However, you may specify a weaker property there. Also, the fact that the referenced objects stay closed is included in the claimed property automatically when the claim is made, but when passing the claim out, it needs to be spelled out explicitly.

claims(c, P) translates to forall states: closed(c) ==> P. So, by default, P is evaluated in the context of an arbitrary (quantified-over) state.

## Atomic

Because the claim talks about an arbitrary state, if you want to apply it to a particular state, you need to go through a special song and dance, by saying assert(valid_claim(c)). valid_claim(c) translates to closed(c) plus the condition that the current state is a full-stop state. Full-stop states are the ones that represent steps of the physical machine. For all intents and purposes every state is a full-stop state, except inside an atomic block (though the states at the beginning of an atomic block and right after it are full-stop states).
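In sketch form (this is illustrative VCC annotation, not a complete program; P stands for whatever the claim c guarantees):

```c
// outside any atomic block, the current state is a full-stop state
assert(valid_claim(c)); // c is closed and the current state is full-stop
assert(P);              // the guarantee, quantified over all states,
                        // can now be instantiated in this state
```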

In the place where you most often actually need valid_claim(c), it is applied (semi-)automatically: the atomic block.

```c
atomic (c1, c2, o1, o2, o3) {
  o1->x = 42;
  spec( o2->y = o1->x + 314; )
}
```

The atomic keyword is followed by a list of claims and pointers (you can have an arbitrary number of them, in any order). The pointers are required to point to objects that are closed just before the atomic block. Usually you know this from the claims that you list there. So for claims we implicitly assert valid_claim(c), while for objects we implicitly assert closed(o).

The atomic block is allowed to perform at most one atomic, physical read or write operation and any number of ghost state updates (including the creation of new claims). You're allowed to read and write the spans of the listed objects. After the atomic block we check that the two-state invariants of those objects were satisfied by the entire atomic transition. VCC checks accesses to physical memory and reports an error if more than one physical memory location is accessed (read or written) by the non-specification code in the atomic block. Atomic machine instructions can be simulated using the vcc(atomic_inline) mechanism; see also the entry on Atomic Instructions.

## Lock free increment

As an additional convenience we define two macros:

```c
#define always(c,e) \
  requires(wrapped(c) && valid_claim(c) && claims(c, e)) \
  ensures(wrapped(c) && valid_claim(c))
#define claimp(n) spec(claim_t n)
```

Because claims are themselves objects, you can also create a claim on a claim.
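For example, a claim on a claim might look like this (an illustrative sketch in VCC annotation syntax; c is assumed to be a wrapped claim):

```c
spec( claim_t cc; )
// cc references the claim c itself; while cc exists, c stays closed,
// so whatever c guarantees remains in force
spec( cc = claim(c, true); )
// ... later, dispose of the claim on the claim
spec( unclaim(cc, c); )
```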

Now we're ready for an example:

```c
#include <vcc.h>

struct vcc(claimable) A {
  volatile int x;
  invariant( old(x) == x || old(x) + 1 == x )
};

void LockFreeIncr(struct A *a claimp(c))
  writes(c)
  ensures(unchanged(ref_cnt(c)))
  always(c, closed(a))
{
  int y;
  int z;
  spec( claim_t c1, c2; )

  atomic(c, a) {
    y = a->x;
    spec( c1 = claim(c, y <= a->x); )
  }

  if (y >= 0x7fffffff) {
    unclaim(c1, c);
    return;
  }

  atomic(c, c1, a) {
    // inlined InterlockedCompareAndExchange
    if (a->x == y) {
      a->x = y + 1;
    }
    spec( c2 = claim(c, y < a->x); )
  }

  atomic(c, c2, a) {
    z = a->x;
  }

  assert(y < z);

  unclaim(c1, c);
  unclaim(c2, c);
}
```

Initially we get an object a and a claim that it is closed. We will also want to take additional claims on the claim we have been passed (because they will depend on the object staying closed).

First we do a volatile read of a->x, and then we claim that the value we have read and stored will be no larger than the actual value of a->x at any point where the claim c still exists. To do so, we first need to prove that the property holds in the state in which a->x was read, which is easy (the condition evaluates to a->x <= a->x).

So let's try to break it by saying claim(c, y < a->x). We get:

```
Verification of LockFreeIncr failed.
Assertion 'chunk y < a->x of the claim initially holds' did not verify.
Location of assertion: c:\felt\Vcc2\Vcc2Host\ex1.c(18)
```

The second thing you need to prove is that the condition is stable under the invariants, assuming that all the claimed-on objects stay closed. So if we change the claim to claim(c, y == a->x), we get:

```
Verification of LockFreeIncr failed.
Assertion 'chunk y == a->x of the claim holds after a step of the machine' did not verify.
Location of assertion: c:\felt\Vcc2\Vcc2Host\ex1.c(18)
```

Similarly, if we had another, unrelated claim bad_c flying around, and we said claim(bad_c, y <= a->x), we would get exactly the same message, as bad_c doesn't guarantee that a will stay closed, which is required for the claim to be stable.

Going further in our example, we check that the increment will not overflow. If the check fails, we dispose of our temporary claim c1 (so that the original claim's reference count stays unchanged) and return. If the check succeeds, we can perform an interlocked compare-and-exchange operation (which, for the time being, we just inline). If the update actually happened, we have the claim that y < a->x. If the update did not happen, then from the previous claim that y <= a->x and the fact that y != a->x (from the if statement), we can also conclude that y < a->x.

With a third atomic read, and using the previous claim, we can conclude the relative values of y and z.

Before returning, we make sure that the temporary claims c1 and c2 are disposed of, guaranteeing that the reference counter of the claim c is unchanged (this is convenient for eventually disposing of c).

## When claimed

In cases where you want to refer to a particular state inside of claims(...), you can use when_claimed(expr). Which state it refers to depends on where when_claimed(expr) is used: in an ensures clause it refers to the post-state of the method; at the creation of a claim (claim(c, when_claimed(expr))) it refers to the state in which the claim was created.

```c
#include <vcc.h>

struct vcc(claimable) A {
  volatile int x;
  invariant( old(x) == x || old(x) + 1 == x )
};

void sample(struct A *a, int *res claimp(c) claimp(*cres))
  writes(c, res, cres)
  always(c, closed(a))
  ensures(claims(*cres, a->x >= when_claimed(*res)))
{
  spec( claim_t c1; )

  atomic(c, a) {
    *res = a->x;
    spec( c1 = claim(c, when_claimed(*res) <= a->x); )
  }
  spec( *cres = c1; )
}
```

The function sample returns a value *res and a claim *cres. The claim guarantees that from now on the value of a->x will be no less than the value *res had at the time the function returned. Similarly, when_claimed(*res) is used when making the claim to refer to the time when the claim was made. Note that these two values don't have to be the same; to illustrate that, let's look at the following example:

```c
#include <vcc.h>

struct vcc(claimable) A {
  volatile int x;
  invariant(x > 0)
  invariant(unchanged(x) || old(x) + 1 == x)
};

void sample(struct A *a, int *res claimp(c) claimp(*cres))
  writes(c, res, cres)
  always(c, closed(a))
  ensures(claims(*cres, a->x > when_claimed(*res)))
{
  spec(claim_t c1;)

  atomic(c, a) {
    *res = a->x;
    spec(c1 = claim(c, when_claimed(*res) <= a->x);)
  }
  *res -= 1;
  spec(*cres = c1;)
}
```

Now the value of *res at the time of the claim and at exit differ, and indeed the condition at exit uses strict inequality.

## How to claim an object?

unwrap(o) has a precondition that there are no outstanding claims on o. However, it would be utterly annoying to worry about claims for each and every object that you work with. To make this a little more bearable, we provide a way to specify, per type, whether objects of that type can be claimed. This is done with the vcc(claimable) attribute.

TODO: the parts mentioning ref_cnt_ptr are outdated.

Once a type is marked as claimable, you gain access to the reference counts of its objects. In particular, when an object is wrapped, its reference count becomes mutable. For an object o the reference count is stored in memory at ref_cnt_ptr(o), and ref_cnt(o) is (more or less) shorthand for *(ref_cnt_ptr(o)). After the object o becomes nested, maintaining the reference count becomes the responsibility of the owner of o. Therefore ref_cnt_ptr(o) is no longer mutable and (logically) goes into the span of the owner, i.e., adding or removing references to o requires checking the invariant of owner(o).

Let's illustrate that with an example:

```c
#include <vcc.h>
#include <stdlib.h>

struct vcc(claimable) A {
  volatile int x;
  invariant( old(x) == x || old(x) + 1 == x )
};

void sample(struct A *a, int *res claimp(c) claimp(*cres))
  writes(c, res, cres)
  always(c, closed(a))
  // there will be a single new child of c
  ensures(ref_cnt(c) == old(ref_cnt(c)) + 1)
  // the claim we return will be a child of c
  ensures(claims_obj(*cres, c))
  // it will be wrapped
  ensures(wrapped(*cres))
  // it will be childless (so we'll be free to kill it)
  ensures(ref_cnt(*cres) == 0)
  // and it will guarantee a condition about a->x
  ensures(claims(*cres, a->x >= when_claimed(*res)))
  // and it was not allocated before
  ensures(is_fresh(*cres))
  // we don't free any of our out parameters
  ensures(mutable(res))
  ensures(mutable(cres))
{
  spec( claim_t c1; )
  atomic(c, a) {
    *res = a->x;
    spec( c1 = claim(c, when_claimed(*res) <= a->x); )
  }
  spec( *cres = c1; )
}

void use_case()
{
  struct A *a;
  int tmp;
  spec( claim_t c; )
  spec( claim_t c2; )
  a = malloc(sizeof(*a));
  if (a != NULL) {
    a->x = 0;
    wrap(a);
    spec( c = claim(a, true); )
    sample(a, &tmp spec(c) spec(&c2));
    assert(valid_claim(c2));
    assert(tmp <= a->x);
    unclaim(c2, c);
    unclaim(c, a);
    unwrap(a);
    free(a);
  }
}
```

The function use_case allocates an object a. Then it takes a claim c on a; the claim only guarantees that a stays closed (which is implicit, because we claim on a). We are able to take the claim because ref_cnt_ptr(a) is mutable. Then we pass the claim to sample(...), which requires the claim to guarantee that a stays closed. sample(...) then creates a child claim on c, which is described by its postconditions (see the comments for details).

When we get the claim back, we can make use of it. The skip_wf(...) form shuts down warnings about things inside of it not being thread-local (a->x is volatile).

## Disposal of claims

Once you have a claim you can use it, but, as all good things come to an end, eventually you want to get rid of it. This is when you use unclaim(c, o1, ..., oN). For each object oi listed there, you will need to prove that c claims it (i.e., claims_obj(c, oi)). You will also need to prove that the different oi are pairwise distinct. Finally, exactly as when taking a claim, you will need to prove that you either have write access to ref_cnt_ptr(oi), or that you check the invariant of the owner of oi.

Once you remove all the claims to an object, you are free to unwrap it.

So let's try to modify our example by removing unclaim(c2, c). What we get is:

```
Verification of $admissibility_A succeeded.
Verification of sample succeeded.
Verification of use_case failed.
Call '_vcc_unclaim(c, a)' did not verify.
Precondition: the claim has no outstanding references
Location of call: c:\felt\Vcc2\Vcc2Host\test.c(54)
```

Similarly if we remove both unclaims, we get:

```
Verification of $admissibility_A succeeded.
Verification of sample succeeded.
Verification of use_case failed.
Call '_vcc_unwrap(a, _vcc_typeof(a))' did not verify.
Precondition: the object has no outstanding claims
Location of call: c:\felt\Vcc2\Vcc2Host\test.c(55)
```

You will get similar messages, if you forget about some postconditions of sample(...).

## Dynamic claims

So far we have only discussed the case where claims come and go wrapped. However, it is sometimes necessary to create claims on nested objects (in particular, on nested claims). This usually happens inside concurrency-control primitives like rundowns or reader-writer locks.

We will take the rundown as an example. A rundown is essentially a reference counter. If you want to use a shared resource, you first reference the rundown, and after you're done with the resource, you dereference the rundown. The owner can only get rid of the resource once the rundown has gone down to zero. In real code there is also a phase in which the rundown can no longer increase (the owner is waiting for all the users to give up the resource), but we will not model that here (yet).

First we define a rundown data structure:

```c
typedef struct vcc(claimable) _Protector {
  int dummy;
} Protector;

typedef struct vcc(claimable) vcc(volatile_owns) _Rundown {
  spec( volatile claim_t self_claim; )
  spec( obj_t protected_obj; )
  volatile unsigned int count;

  spec( volatile bool alive; )
  spec( volatile bool enabled; )
  spec( Protector enabled_protector; )
  invariant( old(closed(&enabled_protector)) ==> unchanged(enabled) && unchanged(alive) )

  invariant( !alive ==> !enabled && count == 0 )
  // cannot set both count and alive in one step
  invariant( old(count) > 0 && old(alive) ==> alive )
  invariant( alive ==>
    set_in(protected_obj, owns(this)) &&
    set_in(self_claim, owns(this)) &&
    claims_obj(self_claim, this) &&
    ref_cnt(self_claim) == count )
  invariant( old(alive) ==> unchanged(self_claim) )

  invariant( !enabled ==> count <= old(count) )

} Rundown;
```

self_claim is supposed to claim the rundown itself, while protected_obj is the resource that we want to protect with the rundown; it will be owned by the rundown. It would also work if the rundown owned a claim guaranteeing closed(protected_obj). The only physical member of the rundown is the count of outstanding references. Then we have two flags describing the life cycle of the rundown.

TODO: the rest of this section (Dynamic claims) is outdated

The invariant can be divided into two parts: one covers the case when the rundown is not yet fully initialized (i.e., protected_obj == NULL). In that case there should be no references and self_claim should also be NULL. The rest of the invariant talks about the case when the rundown is already fully initialized. Then it owns both self_claim and protected_obj, and states that the count and the number of claims on self_claim are the same. It also says that self_claim references the rundown.

Finally, we say that protected_obj and self_claim stay the same once initialized. This currently prevents the rundown from ever being destroyed. To allow destruction we would need another part of the invariant, saying what happens when the rundown is disabled.

So let's have a look at how we take a reference to the rundown:

```c
void ReferenceRundown(Rundown *r claimp(*res) claimp(rdi))
  // we want a claim to our initialized (protected_obj != NULL) rundown
  always(rdi, closed(r) && r->protected_obj != NULL)
  // we will write *res, but not free it
  writes(*res)
  ensures(mutable(res))
  // we will give out a fresh claim
  ensures(wrapped(*res) && is_fresh(*res))
  // the ref_cnt will be writable (so you can clone the claim) and initially zero
  ensures(ref_cnt(*res) == 0 && mutable(ref_cnt_ptr(*res)) && is_fresh(ref_cnt_ptr(*res)))
  // the claim will reference r->self_claim, and will guarantee that the protected_obj is closed and the rundown is initialized
  ensures(claims(*res, claims_obj(when_claimed(*res), r->self_claim) && closed(r->protected_obj) && r->protected_obj != NULL))
{
  spec( claim_t c; )

  atomic(r, rdi) {
    assume(r->count < 0xffffffff);
    spec( c = claim(r->self_claim, closed(r->protected_obj) && r->protected_obj != NULL && when_claimed(r->self_claim) == r->self_claim); )
    r->count = r->count + 1;
  }

  spec( *res = c; )
}
```

Giving up a claim on the rundown is easier:

```c
void DereferenceRundown(Rundown *r claimp(h))
  // we write the claim and its ref_cnt; what will we do with it? we're not telling.
  writes(h, ref_cnt_ptr(h))
  // we want a claim to self_claim guaranteeing that the rundown is fully initialized
  always(h, claims_obj(h, r->self_claim) && r->protected_obj != NULL)
  // with no outstanding references
  requires(ref_cnt(h) == 0)
{
  atomic(h, r) {
    unclaim(h, r->self_claim);
    r->count = r->count - 1;
  }
}
```

Now we get to the initialization of the rundown, which isn't really all that pretty:

```c
void InitializeRundown(Rundown *r spec(obj_t obj) claimp(*rdi))
  writes(extent(r), set_singleton(obj), *rdi)
  requires(emb(rdi) != r)
  ensures(mutable(rdi))
  requires(wrapped(obj))
  ensures(claims_obj(*rdi, r))
  ensures(wrapped(*rdi) && claims(*rdi, r->protected_obj == obj))
  ensures(ref_cnt(r) == 2)
{
  spec(claim_t s1, s2, s3;)

  r->count = 0;
  set_owns(r, set_empty());
  spec( r->protected_obj = NULL;
        r->self_claim = NULL;
  )
  wrap(r);
  assert(not_shared(r));

  spec(
    atomic(r) {
      r->protected_obj = obj;
      s2 = claim(r, r->protected_obj != NULL);
      s3 = claim(r, obj != NULL && r->protected_obj == obj);
      r->self_claim = s2;
      set_closed_owner(obj, r);
      set_closed_owner(s2, r);
    }
    *rdi = s3;
  )
}
```

The initialization is performed in two steps. First the rundown is wrapped with NULLs as self_claim and protected_obj. Then, after wrapping it, we assert that it is not yet shared, which means it is wrapped and has no claims referencing it. After we do that, the compiler-introduced havoc at the beginning of the atomic block, which simulates other threads, will not havoc the volatile fields of r. This is actually required, because before setting protected_obj to obj we need to know it was NULL.

Then we take two claims on the rundown: one we return to the caller, guaranteeing the rundown to be initialized, while the other we use as self_claim. The procedure set_closed_owner(...) is used to manipulate ownership while the involved objects are closed. It can only be used when the owner is listed in the atomic block header.

Finally we can use the rundown, which luckily is easy:

```c
typedef struct _A {
  volatile int x;
  invariant( can_be_claimed() )
} A;

void UseRundown(A *a, Rundown *r claimp(rdi))
  always(rdi, closed(r) && r->protected_obj == a && a != NULL)
{
  spec( claim_t ac; )
  ReferenceRundown(r spec(&ac) spec(rdi));
  atomic(rdi, ac, a) {
    a->x = 12;
  }
  DereferenceRundown(r spec(ac));
}
```

We require whoever calls us to tell us what the rundown protects and that it is initialized.

## Ownership transfers with closed objects

Normally, ownership transfers happen when wrapping and unwrapping an object (the owned objects move between the owns set of the thread and the object being wrapped/unwrapped). For closed objects, ownership transfers are performed using special functions. The call set_closed_owner(p, owner) transfers ownership of p from me() to owner. The call giveup_closed_owner(p, owner) transfers ownership of p from owner to me().

Finally, the call set_closed_owns(owner, owns) sets the owns set of owner to be owns, transferring objects between the owns sets of me() and owner as necessary. Note that you need write access to all objects being added to owns(owner). This is useful when a complex ownership transfer is required, as in the following example:

```c
#include <vcc2test.h>

typedef struct Node {
  struct Node *next;
  int data;
} Node, *PNode;

struct vcc(volatile_owns) Stack {

  invariant(forall(PNode n; {&n->next} set_in(n, owns(this)) ==> n->next == NULL || set_in(n->next, owns(this))))
};

void drain(struct Stack *s claimp(c))
  always(c, closed(s))
{
  atomic(c, s) {
    set_closed_owns(s, set_empty());
  }
}
```

A more involved example, where the owns set is not simply drained, could be:

```c
void move(struct Stack *s1, struct Stack *s2 claimp(c))
  always(c, closed(s1) && closed(s2))
{
  struct Node *n, *n2;
  spec( ptrset s1_owns; )

  atomic(c, s1) {
  }

  atomic(c, s1) {
    spec(
      if (n2 == n) {
        s1_owns = owns(s1);
        set_closed_owns(s1, set_empty());
      }
    )
  }

  if (n2 == n) {
    atomic(c, s2) {
      set_closed_owns(s2, s1_owns);
    }
  }
}
```

## by_claim

The non-volatile fields of objects, by definition, do not change while the objects are closed. Therefore, if you hold a claim on an object, it should be possible to sequentially read its non-volatile fields. This is done with the help of the by_claim(...) special form.

by_claim(c, o->f) evaluates to o->f during normal compilation. During verification, it generates two assertions: (1) valid_claim(c) and (2) claims(c, closed(o)). These two facts imply that o is closed and therefore we can read from it. Moreover, there are axioms tying together the value of o->f and the version of the claim c. This means that the second time you read o->f through the same claim, it will have the same value.

Refer to the section on Inference to see how by_claim(...) can be generated automatically.

The following example uses implicit inference of by_claim:

```#include "vcc.h"

struct vcc(claimable) A {
int x;
volatile int vol_x;
};

struct B {
int y;
struct A *a;
spec(claim_t ca;)

invariant(keeps(ca) && claims(ca, closed(a)))
invariant(y == a->x)
};

{
assert(valid_claim(x->ca));
havoc_others(x);
}

void write_int(int *x)
weak_out_param(x);

void ok1(struct A *a claimp(c))
always(c, closed(a))
{
int q;
int z = a->x;
int zz;

write_int(&q);
zz = a->x;
assert( z == zz );
}
```

While the explicit way looks like this:

```#include "vcc.h"

struct vcc(claimable) A {
int x;
volatile int vol_x;
};

vcc_attr("no_infer", "always_by_claim")
struct B {
int y;
struct A *a;
spec(claim_t ca;)

invariant(keeps(ca) && claims(ca, closed(a)))
invariant(y == by_claim(ca, a->x))
};

{
assert(valid_claim(x->ca));
havoc_others(x);
}

void write_int(int *x)
weak_out_param(x);

vcc_attr("no_infer", "always_by_claim")
void ok1(struct A *a claimp(c))
always(c, closed(a))
{
int q;
int z = by_claim(c, a->x);
int zz;

write_int(&q);
zz = by_claim(c, a->x);
assert(z == zz);
}
```

## begin_update()

You should think of an atomic block as containing two consecutive sequences of statements: the first part comes before a special marker called begin_update(), the second part after it. If no explicit begin_update() is given, an implicit one is assumed at the beginning of the atomic block, so the first part is then empty.

The first part is executed sequentially, in several steps of the machine, but without the possibility of other threads interleaving. It does not have any additional write permissions compared to the surrounding code. old(...) also refers to the same thing as in the enclosing code (the beginning of the function or loop). You can wrap and unwrap objects there, claim and unclaim them, and perform ghost state updates. You cannot do physical updates or call functions (this is not yet checked). The transitive closure of the two-state invariants of all closed objects is maintained by that part.

The second part is supposed to be a single state transition, i.e., the two-state invariants of all closed objects should be maintained between the state right after begin_update() and the end of the atomic block. In this part old(...) refers to the state right after begin_update(). You can perform ghost state updates and at most one physical state update (this is not yet checked). Also, you can claim and unclaim things, and use set_closed_owner() and giveup_closed_owner() to change (volatile) ownership of atomic objects. You cannot wrap, unwrap, or call functions.

You can always use assert() and assume(), as they do not change the state.

In other words:

```c
atomic(o) {
  S1;
  begin_update();
  S2;
}
```

is translated to:

```
atomic_havoc();
S1;
c := current_state();
allow writing to o {
  S2 [ old := c ]
}
assert inv2(c, current_state(), o)
```

where atomic_havoc() simulates the actions of other threads by havocking non-thread-local data (see below).

Last edited Feb 11, 2011 at 7:56 PM by MarkHillebrand, version 16