# Stochastic Petri nets

### Discovering random variables that best approximate the distribution of transition times

From a log and a Petri net, it is possible in PM4Py to discover the random variables that best approximate the distribution of transition times. Let’s provide an example, starting by importing the log:

``````import os
from pm4py.objects.log.importer.xes import factory as xes_importer

log = xes_importer.import_log(os.path.join("tests","input_data","running-example.xes"))
``````

And discovering a Petri net using Alpha Miner:

``````from pm4py.algo.discovery.alpha import factory as alpha_miner

net, initial_marking, final_marking = alpha_miner.apply(log)
``````

Then it is possible to discover the stochastic map, which associates to each transition in the model the random variable that best approximates its transition times:

``````from pm4py.objects.stochastic_petri import map as stochastic_map

smap = stochastic_map.get_map_from_log_and_net(log, net, initial_marking, final_marking)
``````

If we print the variable smap, the following representation is obtained:

``````{register request: IMMEDIATE, check ticket: EXPONENTIAL 6.442745406963898e-06, examine casually: EXPONENTIAL 9.836710358251032e-06, examine thoroughly: UNIFORM 2879.999968175999;168780.00015271484, decide: UNIFORM 1799.9944315511989;576840.0224847647, pay compensation: UNIFORM 261779.99994853878;497520.0000676266, reinitiate request: UNIFORM 10799.99988674513;85800.00016545928, reject request: UNIFORM 92639.99999843462;203432.0460426052}
``````

The random variable is selected automatically by maximizing the log likelihood. The random variables that are currently implemented and taken into account are:

• CONSTANT0 -> random variable associated with immediate transitions
• UNIFORM -> random variable in which each value in an interval has the same probability
• NORMAL -> random variable in which times follow a normal distribution
• EXPONENTIAL -> random variable in which times follow an exponential distribution
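The selection can be illustrated with a simplified, self-contained sketch (toy durations and only two candidate distributions, so this is not PM4Py code): for each candidate we plug in the maximum-likelihood parameters and keep the candidate with the highest log likelihood.

```python
import math

def exp_loglik(samples):
    # MLE rate for an exponential distribution is n / sum(samples)
    lam = len(samples) / sum(samples)
    return sum(math.log(lam) - lam * x for x in samples)

def uniform_loglik(samples):
    # MLE interval for a uniform distribution is [min(samples), max(samples)]
    width = max(samples) - min(samples)
    return -len(samples) * math.log(width)

# made-up transition durations (heavy-tailed, so the exponential fits better)
durations = [5.0, 10.0, 20.0, 40.0, 500.0]
candidates = [("EXPONENTIAL", exp_loglik(durations)),
              ("UNIFORM", uniform_loglik(durations))]
best = max(candidates, key=lambda c: c[1])
print(best[0])  # EXPONENTIAL
```

PM4Py performs the same kind of comparison over its full set of supported distributions.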

It is possible to force the discovery of random variables of a given distribution by providing the force_distribution parameter. For example, it is possible to force an exponential distribution:

``````from pm4py.objects.stochastic_petri import map as stochastic_map

smap = stochastic_map.get_map_from_log_and_net(log, net, initial_marking, final_marking, force_distribution="EXPONENTIAL")
``````

If we print the variable smap, the following representation is obtained:

``````{register request: IMMEDIATE, examine casually: EXPONENTIAL 9.836710358251032e-06, check ticket: EXPONENTIAL 6.442745406963898e-06, examine thoroughly: EXPONENTIAL 1.1646866992778915e-05, decide: EXPONENTIAL 4.6891120196330676e-06, reject request: EXPONENTIAL 6.483402734309982e-06, reinitiate request: EXPONENTIAL 1.5060241318247362e-05, pay compensation: EXPONENTIAL 1.9449198456594224e-06}
``````

### Importing and exporting Stochastic Petri nets

Stochastic Petri nets can be imported and exported using the existing importer/exporter by specifying additional parameters. For example, when importing it is mandatory to pass the boolean parameter return_stochastic_information set to True.

``````import os
from pm4py.objects.petri.importer import pnml as pnml_importer

pnml_path = os.path.join("tests","input_data","stochastic_running_example.pnml")
net, initial_marking, final_marking, smap = pnml_importer.import_net(pnml_path, return_stochastic_information=True)
``````

If we print the variable smap, the following representation is obtained:

``````{reinitiate request: UNIFORM 10799.99988674513;85800.00016545928, register request: IMMEDIATE, examine casually: EXPONENTIAL 1.69176114501161e-05, pay compensation: UNIFORM 261779.99994853878;497520.0000676266, decide: UNIFORM 1799.9944315511989;576840.0224847647, check ticket: EXPONENTIAL 6.5853843650133505e-06, examine thoroughly: UNIFORM 2879.9999331715608;92940.00006706228, loop_3: IMMEDIATE, tau_1: IMMEDIATE, skip_7: IMMEDIATE, reject request: UNIFORM 92639.99999843462;203432.0460426052, skip_5: IMMEDIATE, skip_4: IMMEDIATE, loop_2: IMMEDIATE}
``````

For exporting, it is mandatory to pass the stochastic map to the exporter:

``````from pm4py.objects.petri.exporter import pnml as petri_exporter

petri_exporter.export_net(net, initial_marking, "output.pnml", final_marking=final_marking, stochastic_map=smap)
``````

### Discovering reachability graph and tangible reachability graph from Petri net

Calculating the reachability graph and the tangible reachability graph is important in Stochastic Petri net applications. The reachability graph is a transition system built upon the Petri net, where the states are the markings of the Petri net and the transitions correspond to the transitions of the Petri net. We provide an example where a log is imported, Alpha Miner is applied, and then the reachability graph and the tangible reachability graph are obtained. First, the log is imported:

``````import os
from pm4py.objects.log.importer.xes import factory as xes_importer

log = xes_importer.import_log(os.path.join("tests","input_data","running-example.xes"))
``````

Then, a Petri net could be discovered using Alpha Miner:

``````from pm4py.algo.discovery.alpha import factory as alpha_miner

net, initial_marking, final_marking = alpha_miner.apply(log)
``````

The reachability graph could be obtained by using the following instructions:

``````from pm4py.objects.petri.reachability_graph import construct_reachability_graph

reachab_graph = construct_reachability_graph(net, initial_marking)
``````

And the graph could be displayed on the screen if needed:

``````from pm4py.visualization.transition_system import factory as ts_vis_factory

viz = ts_vis_factory.apply(reachab_graph, parameters={"show_labels": True, "show_names": False})
ts_vis_factory.view(viz)
``````

The tangible reachability graph is formed by the states of the reachability graph from which only timed transitions exit. The stochastic distribution of transitions (forcing an exponential distribution) could be obtained as before with the following instructions:

``````from pm4py.objects.stochastic_petri import map as stochastic_map

smap = stochastic_map.get_map_from_log_and_net(log, net, initial_marking, final_marking, force_distribution="EXPONENTIAL")
``````

And the tangible reachability graph could be built from the reachability graph and the stochastic map with the following code:

``````from pm4py.objects.stochastic_petri import tangible_reachability

tang_reach_graph = tangible_reachability.get_tangible_reachability_from_reachability(reachab_graph, smap)
``````

And it could be displayed on the screen:

``````from pm4py.visualization.transition_system import factory as ts_vis_factory

viz = ts_vis_factory.apply(tang_reach_graph, parameters={"show_labels": True, "show_names": False})
ts_vis_factory.view(viz)
``````
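The tangibility criterion above can be sketched on toy data (hypothetical state names, not PM4Py code): a state is tangible when every transition leaving it is timed, and vanishing when at least one immediate transition exits.

```python
# outgoing transition kinds per reachability-graph state (made-up data)
outgoing = {
    "s0": ["timed"],               # only timed exits -> tangible
    "s1": ["immediate", "timed"],  # an immediate exit -> vanishing
    "s2": ["timed", "timed"],      # only timed exits -> tangible
}
tangible = [s for s, kinds in outgoing.items()
            if all(k == "timed" for k in kinds)]
print(tangible)  # ['s0', 's2']
```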

### Continuous Time Markov Chains

Continuous Time Markov Chains are powerful models that are useful for performance analysis. Currently, it is possible to convert a tangible reachability graph into the Q-matrix of the CTMC with the following code:

``````from pm4py.objects.stochastic_petri import ctmc

Q_matrix = ctmc.get_q_matrix_from_tangible_exponential(tang_reach_graph, smap)
``````

Printing the Q matrix, we obtain:

``````[[-4.68911202e-06  0.00000000e+00  4.68911202e-06  0.00000000e+00
0.00000000e+00  0.00000000e+00]
[ 1.16468670e-05 -2.14835774e-05  0.00000000e+00  0.00000000e+00
0.00000000e+00  0.00000000e+00]
[ 0.00000000e+00  0.00000000e+00 -2.34885639e-05  1.94491985e-06
0.00000000e+00  1.50602413e-05]
[ 0.00000000e+00  0.00000000e+00  0.00000000e+00 -0.00000000e+00
0.00000000e+00  0.00000000e+00]
[ 6.44274541e-06  0.00000000e+00  0.00000000e+00  0.00000000e+00
-6.44274541e-06  0.00000000e+00]
[ 0.00000000e+00  6.44274541e-06  0.00000000e+00  0.00000000e+00
9.83671036e-06 -2.79263228e-05]]
``````

On the CTMC, it is possible to do transient analysis from a state, evaluating the probability that the model is in each state after a given amount of time. To do transient analysis, a state needs to be picked:

``````states = sorted(list(tang_reach_graph.states), key=lambda x: x.name)
state = states[0]
print(state)
``````

The chosen state is checkticketdecide1examinecasuallyexaminethoroughlydecide1. Then, the following instructions may be provided to do transient analysis after just 1 day:

``````transient_result = ctmc.transient_analysis_from_tangible_q_matrix_and_single_state(tang_reach_graph, Q_matrix, state, 86400)
``````

Printing the transient result, the following is obtained:

``````Counter({checkticketdecide1examinecasuallyexaminethoroughlydecide1: 0.7079833036711465, decidereinitiaterequestrejectrequestpaycompensation1: 0.14089285324453515, reinitiaterequestregisterrequestcheckticket1reinitiaterequestregisterrequestexaminecasuallyexaminethoroughly1: 0.06095861265263284, end1: 0.057958542860088356, examinecasuallyexaminethoroughlydecide1reinitiaterequestregisterrequestcheckticket1: 0.02196036158323918, checkticketdecide1reinitiaterequestregisterrequestexaminecasuallyexaminethoroughly1: 0.010246325988357974})
``````

So in most cases we have remained in the same state. If we do transient analysis after 10 days with the following code:

``````transient_result = ctmc.transient_analysis_from_tangible_q_matrix_and_single_state(tang_reach_graph, Q_matrix, state, 864000)
``````

We get the following result:

``````Counter({end1: 0.6805301008857455, checkticketdecide1examinethoroughlyexaminecasuallydecide1: 0.1848166726292765, examinethoroughlyexaminecasuallydecide1reinitiaterequestregisterrequestcheckticket1: 0.06269221968927016, deciderejectrequestreinitiaterequestpaycompensation1: 0.04049912689975318, reinitiaterequestregisterrequestcheckticket1reinitiaterequestregisterrequestexaminethoroughlyexaminecasually1: 0.02361166192956106, checkticketdecide1reinitiaterequestregisterrequestexaminethoroughlyexaminecasually1: 0.007850217966393435})
``````

So in most cases we have reached the end state after 10 days.
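Under the hood, transient analysis computes p(t) = p(0)·exp(Qt). A minimal self-contained sketch (a toy two-state absorbing chain with a made-up rate, not the PM4Py implementation) approximates the matrix exponential by a truncated Taylor series and matches the known closed form:

```python
import math

lam, t = 0.5, 3.0
Q = [[-lam, lam], [0.0, 0.0]]  # state 1 is absorbing

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# exp(Qt) ~= sum_{k=0}^{N} (Qt)^k / k!
Qt = [[q * t for q in row] for row in Q]
term = [[1.0, 0.0], [0.0, 1.0]]        # identity = (Qt)^0 / 0!
expQt = [row[:] for row in term]
for k in range(1, 30):
    term = [[v / k for v in row] for row in mat_mul(term, Qt)]
    expQt = [[expQt[i][j] + term[i][j] for j in range(2)] for i in range(2)]

p0 = [1.0, 0.0]                        # start in state 0
p_t = [sum(p0[i] * expQt[i][j] for i in range(2)) for j in range(2)]
print(p_t)  # ~ [exp(-lam*t), 1 - exp(-lam*t)]
```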

To do steady state analysis, that is, to evaluate the probability distribution over the states after an infinite amount of time, the following code could be used:

```
steady_state = ctmc.steadystate_analysis_from_tangible_q_matrix(tang_reach_graph, Q_matrix)
```

We always reach the end state after infinite time:

```
{end1: 1.0000000000000002}
```
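Steady-state analysis amounts to solving pi·Q = 0 with the entries of pi summing to 1. For a toy two-state chain with made-up rates (not tied to any PM4Py model), the balance equations can be solved by hand and checked:

```python
a, b = 2.0, 3.0                # rates state0 -> state1 and state1 -> state0 (made up)
Q = [[-a, a], [b, -b]]
# the balance equation a*pi0 = b*pi1, together with pi0 + pi1 = 1, gives:
pi0 = b / (a + b)
pi1 = a / (a + b)
# verify pi @ Q == 0 (up to floating-point rounding)
balance = [pi0 * Q[0][j] + pi1 * Q[1][j] for j in range(2)]
print(pi0, pi1, balance)  # pi0=0.6, pi1=0.4, balance ~ [0, 0]
```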

### Linear Programming performance bounds

Finding performance bounds means finding lower and upper bounds for place occupation and transition throughput in the steady state (i.e. over an infinite period of time) through linear programming techniques.

The inputs of the problem are:

• A Markovian Stochastic Petri net (obtained by forcing the distribution type to “EXPONENTIAL”).
• An average case arrival time (assuming that the case arrival time can be modelled by an exponential distribution).

A sound workflow net extracted by a process discovery algorithm (for example, the SIMPLE algorithm) is transformed into an unbounded net in which only timed transitions appear: a hidden timed transition representing the case arrival time is added, and another hidden timed transition consumes tokens from the final marking.

An example transformed network (extracted from the Road Traffic Fine Management log) is this:

The conditions imposed on the linear problem are taken from the paper “Performance analysis of stochastic timed Petri nets using linear programming approach”.

In particular, the following conditions are imposed:

• Throughput equation
• Flow-Balance equation
• Second moment equation
• Population covariance
• Liveness equation
• Sample path conditions
• Little’s Law
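Little’s Law, the last condition, can be illustrated numerically (made-up numbers, not PM4Py code): the long-run average number of tokens in a place equals the arrival rate into it multiplied by the average time a token spends there.

```python
lam = 0.5    # hypothetical arrival rate: tokens (cases) per hour
W = 4.0      # hypothetical average time a token spends in the place, in hours
L = lam * W  # Little's Law: average token population of the place
print(L)     # 2.0
```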

The variables of the problem are the following:

• x_p: for each place, the average token occupation in the long run.
• q_t: for each transition, the fraction of time in which the transition is enabled.
• theta_t: for each transition, its throughput.
• y_pt: for each pair (place, transition), the expected value of the token occupation multiplied by the indicator that the transition is enabled.

The objective function to minimize (or maximize) could involve one or more of the previous variables.

Let’s see a bit of code.

The Running Example log could be loaded:

```import os
from pm4py.objects.log.importer.xes import factory as xes_importer

log_name = os.path.join("tests", "input_data", "running-example.xes")
log = xes_importer.apply(log_name, variant="nonstandard")
```

A model could be extracted from the log using the SIMPLE algorithm:

```from pm4py.algo.discovery.simple.model.log import factory as simple_extraction_factory
from pm4py.util.constants import PARAMETER_CONSTANT_ACTIVITY_KEY, PARAMETER_CONSTANT_ATTRIBUTE_KEY

activity_key = "concept:name"
parameters = {PARAMETER_CONSTANT_ACTIVITY_KEY: activity_key, PARAMETER_CONSTANT_ATTRIBUTE_KEY: activity_key}
net, initial_marking, final_marking = simple_extraction_factory.apply(log, classic_output=True, parameters=parameters)
```

The case arrival time could be deduced from the log:

```from pm4py.statistics.traces.log.case_arrival import get_case_arrival_avg

avg_time_starts = get_case_arrival_avg(log)
print("avg_time_starts real=", avg_time_starts)
```

The Stochastic map (describing the distribution for the timed transitions) could be retrieved from the log:

```from pm4py.objects.stochastic_petri import map as stochastic_map

smap = stochastic_map.get_map_from_log_and_net(log, net, initial_marking, final_marking,
                                               force_distribution="EXPONENTIAL")
print("smap=", smap)
```

Then, the object managing the linear programming bounds is calculated, and the transformed Petri net could be retrieved:

```from pm4py.objects.stochastic_petri.lp_perf_bounds import LpPerfBounds

perf_bound_obj = LpPerfBounds(net, initial_marking, final_marking, smap, avg_time_starts)
net1, imarking1, fmarking1 = perf_bound_obj.get_net()
```

And each variable of the problem could be minimized/maximized:

```for var in perf_bound_obj.var_corr:
    corr = perf_bound_obj.var_corr[var]

    minimum = perf_bound_obj.solve_problem(var, maximize=False)
    maximum = perf_bound_obj.solve_problem(var, maximize=True)

    print(var, minimum[corr], maximum[corr])
```
