Feb 8, 2012 at 11:56 AM
Edited Feb 8, 2012 at 12:01 PM
I am not completely up to date on atomic_read, but this looks more likely to be related to the fact that you are using a global volatile variable of primitive type.
For example, take a look at the following, where the volatile global is moved into a global structure (struct A below), but nothing else changes:

struct A {
    volatile unsigned int value;
    _(ghost unsigned int last_value;)
};

struct A val;

unsigned int bar(void)
{
    _(atomic &val) val.value = 0;
    while (_(atomic_read &val) val.value == 0)
        ;
    return _(atomic_read &val) val.value;
}
This gives you the expected behaviour. If you want to get back the behaviour you get with your code, you will need to add an invariant to the type of val stating that the owner must approve all changes made to value:

_(invariant \approves(\this->\owner, value))

When the owner is the thread (which is the case here), this essentially means that no other thread can modify the volatile variable, which therefore essentially behaves sequentially.
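Put together, the annotated struct would look something like this (a sketch only; the struct name A is mine, and I am inferring the exact placement of the invariant from VCC's usual conventions):

```c
// Sketch: the same struct as above, now with an owner-approval
// invariant on the volatile field. With this invariant, every
// change to value needs the owner's approval, so while the current
// thread owns val the field effectively behaves sequentially.
struct A {
    volatile unsigned int value;
    _(ghost unsigned int last_value;)
    _(invariant \approves(\this->\owner, value))
};

struct A val;
```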
I suspect that the global embedding structure does have such an approval invariant for all global volatiles.
My rule of thumb is "if VCC complains that there's a missing \bump_volatile_version() then the variable behaves sequentially".
Apologies if this is unclear: I am only inferring the inner workings of VCC from its superficial behaviour (which is also why I failed to comment earlier). Maybe someone with a bit more insight will chime in?
PS: I am not aware of any way to prevent VCC from generating an approval invariant for volatile globals. However, you can make this approval invariant a bit less restrictive by dropping the \wrapped(\gemb(&value)) pre-condition and instead having another structure (a lock, say) own the object. The owner-approval invariant then simply ensures that all changes to value made while its owner remains closed respect the owner's two-state invariant. Message me or reply to this if you want an example.
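In case it is useful straight away, here is a minimal sketch of such a lock, following the spin-lock pattern from the VCC tutorial (the names Lock and protected_obj are my own, and the details may well differ for your code):

```c
// Sketch: a lock object that owns the structure containing the
// volatile field, instead of the thread holding it sequentially
// via \wrapped(\gemb(&value)). While locked == 0 the lock owns the
// protected object; a thread that acquires the lock takes ownership,
// and the lock's two-state invariant is what constrains changes to
// the protected data while the lock stays closed.
struct Lock {
    volatile int locked;              // 0 = free, 1 = held
    _(ghost \object protected_obj;)   // the object this lock protects
    _(invariant locked == 0 ==> \mine(protected_obj))
};
```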