Model Checker Collection for the Model Checking Contest @ Petri nets
The model checker collection is a tool built to compete in the Model Checking Contest @ Petri nets using the tools that already compete in it.
Its purpose is to allow research on the algorithms that choose the best tool, depending on the model, examination, and formula characteristics.
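As a rough illustration of the idea (not the tool's actual implementation), such an algorithm can be seen as a classifier trained to map characteristics to the best tool; the sketch below uses scikit-learn with made-up characteristics and tool names:

# A minimal sketch, assuming made-up characteristics and tool names;
# this is not mcc4mcc's actual code.
from sklearn.ensemble import RandomForestClassifier

# One row of boolean characteristics per (model, examination) pair,
# labeled with the tool that performed best on it:
characteristics = [
    [True,  False, True ],   # e.g. Safe, Live, Deadlock
    [False, True,  True ],
    [True,  True,  False],
]
best_tool = ["lola", "tapaal", "itstools"]

classifier = RandomForestClassifier()
classifier.fit(characteristics, best_tool)

# Predict the best tool for an unseen model:
print(classifier.predict([[True, False, False]]))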
This tool can be installed easily with pip from the sources:
$ git clone https://github.com/cui-unige/mcc4mcc.git \
&& cd mcc4mcc \
&& pip install .
We currently do not distribute a packaged version.
The MCC Submission Kit can be downloaded automatically, and models extracted from it using the following command:
$ ./prepare
The submission kit is put in the ToolSubmissionKit directory, and models are copied into the models directory.
Help can be obtained through:
$ python3 -m mcc4mcc \
--help
The following command extracts known and learned data from the results of the 2017 edition of the Model Checking Contest:
$ python3 -m mcc4mcc \
extract \
--year=2017
It creates several files that are used to choose the correct tool to run:
<prefix>-configuration.json
<prefix>-known.json
<prefix>-learned.json
<prefix>-learned.<algorithm>.p
<prefix>-values.json
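These files could be inspected as follows; this is only a hedged sketch, and the algorithm name in the pickled file is an assumption:

# A hedged sketch for inspecting the generated files; "knn" below
# is an assumed algorithm name, not documented behavior.
import json
import pickle

prefix = "7e556e9247727f60"  # the prefix used in the run example below

with open(prefix + "-known.json") as handle:
    known = json.load(handle)  # results observed during the contest

with open(prefix + "-learned.knn.p", "rb") as handle:
    classifier = pickle.load(handle)  # the learned machine-learning model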
The following command runs mcc4mcc with the StateSpace examination on the model stored in ./models/TokenRing-PT-005.tgz. The --prefix option tells the tool to use files generated with the given prefix. It allows users to generate files for several extraction parameters, and to use them by giving their prefix.
$ python3 -m mcc4mcc \
run \
--examination=StateSpace \
--input=./models/TokenRing-PT-005.tgz \
--prefix=7e556e9247727f60
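Conceptually, run can be thought of as looking up the known results first, and falling back to the learned classifier otherwise. The sketch below illustrates this idea with an assumed structure for the known data; it is not the tool's documented behavior:

# A conceptual sketch, not mcc4mcc's actual code. It assumes the
# known data maps an examination and a model name to the best tool.
def choose_tool(known, classifier, examination, model, characteristics):
    tool = known.get(examination, {}).get(model)
    if tool is not None:
        return tool  # a result is already known for this pair
    # Otherwise, predict a tool from the model characteristics:
    return classifier.predict([characteristics])[0]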
The following command tests if docker images can be run on some examinations and models of the Model Checking Contest:
$ python3 -m mcc4mcc \
test \
--year=2017
In order to create more collisions between models given a set of
characteristics, it can be interesting to forget some characteristics
during machine learning.
The --forget option allows us to do it, for instance:
$ python3 -m mcc4mcc extract \
--duplicates \
--year=2017 \
--forget="Deadlock,Live,Quasi Live,Reversible,Safe"
It is also interesting to remove some models during machine learning, in order to check that the algorithm is still able to obtain a good score. The --training option allows us to do it:
$ python3 -m mcc4mcc extract \
--duplicates \
--year=2017 \
--training=0.25
The command above keeps only 25% of the models for learning,
but still computes the score using all the models.
The --duplicates option allows the tool to keep duplicate lines during machine learning.
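The evaluation idea can be sketched as follows, with made-up data and tool names; mcc4mcc itself may proceed differently:

# A hedged sketch of the training-split idea, using scikit-learn
# with made-up characteristics and tool names.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

characteristics = [[0, 1], [1, 0], [1, 1], [0, 0]]
best_tool = ["lola", "tapaal", "lola", "itstools"]

# Keep only 25% of the models for learning, as --training=0.25 does:
train_x, _, train_y, _ = train_test_split(
    characteristics, best_tool, train_size=0.25, random_state=0)

classifier = KNeighborsClassifier(n_neighbors=1)
classifier.fit(train_x, train_y)

# The score is still computed using all the models:
print(classifier.score(characteristics, best_tool))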
This repository provides scripts to automatically build the docker images using the virtual machines of the previous edition of the Model Checking Contest. This is not optimal: tool developers should provide their own docker images for both simplicity and efficiency.
The following command builds all images and uploads them to the mccpetrinets docker image repository:
$ ./build
The following command creates a virtual machine containing mcc4mcc
dedicated to the Model Checking Contest:
$ ./create-vm
The virtual machine is created as mcc4mcc.vmdk.