How Do People Create Process Models?

Over the last 7 years, I have been collaborating with colleagues on a number of experiments in which we investigated how people create process models. In particular, we wanted to see where and how modelers differ and whether their personal, unique “modeling style” has an impact on model quality. In this – rather long – blog post, I want to summarize what we found out and point to the different studies that we published. (To be honest, I collected this information for a Master's student who wants to replicate some of these studies, but I might as well share it with others.) So, here we go.

First experiment: organize your process description!

In 2010, we conducted a first structured experiment on the quality of modeling outcomes. We investigated how the organization of an informal requirements document impacts the quality of the created model (modelers get a text about a process and have to create a graphical model – say, in BPMN). Spoiler: a breadth-first description of the process works best.


Models were created more accurately when the process description was given in breadth-first order.

Jakob Pinggera, Stefan Zugal, Barbara Weber, Dirk Fahland, Matthias Weidlich, Jan Mendling, Hajo A. Reijers: How the Structuring of Domain Knowledge Helps Casual Process Modelers. ER 2010: 445-451 http://dx.doi.org/10.1007/978-3-642-16373-9_33

The conceptual background for this and subsequent experiments was provided by two papers investigating how modeling languages use particular modeling concepts to structure knowledge about a process.

  • Dirk Fahland, Daniel Lübke, Jan Mendling, Hajo A. Reijers, Barbara Weber, Matthias Weidlich, Stefan Zugal: Declarative versus Imperative Process Modeling Languages: The Issue of Understandability. BMMDS/EMMSAD 2009: 353-366 http://dx.doi.org/10.1007/978-3-642-01862-6_29
  • Dirk Fahland, Jan Mendling, Hajo A. Reijers, Barbara Weber, Matthias Weidlich, Stefan Zugal: Declarative versus Imperative Process Modeling Languages: The Issue of Maintainability. Business Process Management Workshops 2009: 477-488 http://dx.doi.org/10.1007/978-3-642-12186-9_4

Visualizing how people model

In 2011, we published a paper describing a software platform for recording and analyzing modeling actions on a canvas. We also describe the visualization of modeling actions in a time-series diagram where specific phases in the modeling process (creating elements, arranging existing elements, deleting elements, thinking about the process) can be identified and highlighted as illustrated below.


In the experiments, we could observe significant differences in how different modelers approach the same modeling task – manifesting themselves in remarkably distinct modeling phase diagrams.


Jakob Pinggera, Stefan Zugal, Matthias Weidlich, Dirk Fahland, Barbara Weber, Jan Mendling, Hajo A. Reijers: Tracing the Process of Process Modeling with Modeling Phase Diagrams. Business Process Management Workshops (1) 2011: 370-382 http://dx.doi.org/10.1007/978-3-642-28108-2_36

Identifying modeling styles

In 2012, we analyzed these differences in how modelers approach a modeling task in more detail. We plotted the number of creation, deletion, and re-arranging actions on the canvas as a time series. We binned these modeling actions into segments of 10 seconds length; each segment has a particular “modeling profile” of creation, deletion, and re-arranging actions. We then clustered users based on their “modeling profiles”, i.e., typical occurrences of create/delete/move actions throughout their modeling, and identified three distinct clusters of “modeling profiles”. Below is the “modeling profile” of the cluster showing many creation operations early in the modeling process and few delete operations.
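The binning step described above can be sketched as follows (a minimal illustration; the event representation and action names are hypothetical, not the actual recording format of our tool):

```python
from collections import Counter

def modeling_profiles(events, bin_size=10):
    """events: list of (timestamp_in_seconds, action) pairs with action in
    {'create', 'delete', 'move'}. Returns a profile (action counts) per
    time segment of bin_size seconds."""
    profiles = {}
    for t, action in events:
        segment = int(t // bin_size)
        profiles.setdefault(segment, Counter())[action] += 1
    return profiles

# Four actions: two creates in segment 0, a move and a create in segment 1.
events = [(1, "create"), (4, "create"), (12, "move"), (13, "create")]
profiles = modeling_profiles(events)
```

The resulting per-segment profile vectors are what the clustering then operates on.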


Jakob Pinggera, Pnina Soffer, Stefan Zugal, Barbara Weber, Matthias Weidlich, Dirk Fahland, Hajo A. Reijers, Jan Mendling: Modeling Styles in Business Process Modeling. BMMDS/EMMSAD 2012: 151-166 http://dx.doi.org/10.1007/978-3-642-31072-0_11

We then conducted a subsequent, more detailed analysis of these clusters and also investigated the modeling phase diagrams of each cluster. First, we could establish that there are statistically significant differences between the three clusters in (1) the speed of adding modeling elements, (2) the duration of phases spent improving the model layout and the number of element moves in such a layouting phase, and (3) the time between adding model elements, thinking about the model, and adding further model elements. Altogether, we could then characterize three distinct modeling styles from these clusters:

  1. Quick modelers who (after some initial deliberation on the process) create an almost correct model right away and need only minimal adjustments of the model layout and few thinking pauses,
  2. modelers who model at a slower pace and take regular and longer layouting breaks (possibly to plan their next modeling steps), and
  3. modelers who also model at a slower pace but require less layouting than the previous group.

This analysis also gave us a first idea of which factors influence how people approach a modeling task. The two central factors are (1) the cognitive load created by the modeling task, largely influencing the efficiency with which the model is created, and (2) tool support for layouting, largely influencing the amount of time spent on organizing the model on the canvas.

Jakob Pinggera, Pnina Soffer, Dirk Fahland, Matthias Weidlich, Stefan Zugal, Barbara Weber, Hajo A. Reijers, Jan Mendling: Styles in business process modeling: an exploration and a model. Software and System Modeling 14(3): 1055-1080 (2015) http://dx.doi.org/10.1007/s10270-013-0349-1

Modeling style vs model quality

In a second line of analysis, we investigated how the way modelers create their models impacts the quality of the resulting model. By analyzing modeling operations at a more fine-grained level and also considering the modeling elements themselves, we could compare modeling processes in more detail. Below, we see visualizations of four different modelers creating the same model (visualized using the DottedChart plugin of ProM). Each line corresponds to a modeling element (node or arc); green dots show creation operations, blue dots show move operations, and red dots show delete operations.


By analyzing the location of modeling elements on the canvas, and the time between different modeling activities, we could confirm three hypotheses:

  1. Structured modeling (e.g., in clearly defined blocks) is linked to better model quality,
  2. lots of movement of modeling objects is linked to lower model quality, and
  3. low modeling speed is linked to low model quality.

Jan Claes, Irene T. P. Vanderfeesten, Hajo A. Reijers, Jakob Pinggera, Matthias Weidlich, Stefan Zugal, Dirk Fahland, Barbara Weber, Jan Mendling, Geert Poels: Tying Process Model Quality to the Modeling Process: The Impact of Structuring, Movement, and Speed. BPM 2012: 33-48 http://dx.doi.org/10.1007/978-3-642-32885-5_3

The impact of structured modeling on model quality was analyzed further. In a further set of experiments, factors that impact the cognitive load of modelers were analyzed. In particular, the researchers looked for factors that help to reduce the cognitive load of the modeler, thus leaving more cognitive capacity to create correct models. Besides confirming and deepening the 2010 experiment (a structured, breadth-first organization of process knowledge improves model quality), the experiments also show that the characteristics of the modeler impact model quality: a modeler may have a preference for structuring knowledge in a particular way. If process knowledge is presented to them in a way that fits this preference, the individual cognitive load is lower and model quality increases. The image below shows “aspect-oriented” modeling, where a modeler first finishes a first aspect of the model, then works on a second aspect that may involve many modeling elements created earlier.


Jan Claes, Irene T. P. Vanderfeesten, Frederik Gailly, Paul Grefen, Geert Poels: The Structured Process Modeling Theory (SPMT) a cognitive view on why and how modelers benefit from structuring the process of process modeling. Information Systems Frontiers 17(6): 1401-1425 (2015) http://dx.doi.org/10.1007/s10796-015-9585-y

The following, longer journal paper summarizes several techniques for visually analyzing the process of process modeling from various angles.

Jan Claes, Irene T. P. Vanderfeesten, Jakob Pinggera, Hajo A. Reijers, Barbara Weber, Geert Poels: A visual analysis of the process of process modeling. Inf. Syst. E-Business Management 13(1): 147-190 (2015)  http://dx.doi.org/10.1007/s10257-014-0245-4

For the really interested, there are two PhD theses on the topic.


Tutorial: Automating Process Mining with ProM’s Command Line Interface

In this blog post I explain how to invoke the process mining tool ProM from the command line without using its graphical user interface. This allows you to run process mining analyses on several logs in batch mode without user interaction. Before you get too excited: there are quite a few limitations to this, which I will address at the end. The following instructions have been tested for the ProM 6.4.1 release.

Invoking the ProM Command Line Interface

The ProM command line interface (CLI) can be invoked through the class

 org.processmining.contexts.cli.CLI

To properly invoke the CLI for ProM 6.4.1, use the following command (a copy of the command in ProM641.bat with the main class changed).

java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI

The CLI itself has no interactive user interface. Instead, it executes scripts passed to it as command line parameters. To simplify your life, I suggest putting the command into a batch file ProM_CLI.bat or shell script ProM_CLI.sh that passes on two command line parameters. For instance

java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI %1 %2

A typical example of a script that the ProM CLI takes is the following script_alpha_miner.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Mining model");
net_and_marking = alpha_miner(log);
net = net_and_marking[0];
marking = net_and_marking[1];

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(net, net_file);

System.out.println("done.");

You can invoke it with the command

ProM_CLI.bat -f script_alpha_miner.txt

It will read the log file myLog.xes (stored in the current working directory), invoke the alpha miner, and write the resulting Petri net as a PNML file mined_net.pnml to the current working directory. (No, there is currently no way to pass file names as additional command line parameters to the script.)

Note: when running the above script, ProM will first produce a (large) number of messages on the screen during the startup phase, related to scanning for available packages and plugins; bear with it until it is ready.

Scripts for ProM

The language used for the scripts is basically Java, interpreted at runtime. In principle, you can put in any Java code that you would put into a method body (no class/method declarations). In case the Java reflection framework is able to infer the type, variables do not have to be declared, but can be used as in a dynamically typed language. For example, the variable log in script_alpha_miner.txt will be inferred to have type XLog.

In a script, you can directly invoke ProM plugins through special method names provided by the CLI; the method names are derived from the plugin names shown in ProM. For example, the plugin “Alpha Miner” is available as the method alpha_miner. You can get the full list of all ProM plugins available for script invocation with the command line parameter ‘-l’ (“dash lower-case L”):

ProM_CLI.bat -l

This will scan all installed packages for plugins that do not require the GUI to run and list them in the form name(input types) -> (output types). For example, if you have installed the AlphaMiner package, the following plugins will be listed (among many others):

alpha_miner(XLogInfo, LogRelations) -> (Petrinet, Marking)
alpha_miner(XLog) -> (Petrinet, Marking)
alpha_miner(XLog, XLogInfo) -> (Petrinet, Marking)

Use the ProM Package Manager to install plugins you do not find in the list of installed plugins.

The script_alpha_miner.txt uses the second method signature alpha_miner(XLog) -> (Petrinet, Marking) to discover a Petrinet and a Marking from an XLog. In case a plugin returns multiple objects, the return result is an Object[] array, in which you can access the individual components as usual, i.e., net_and_marking[0] contains the Petrinet and net_and_marking[1] contains the Marking.

Besides the typical plugins you already know from the ProM GUI, there are also plugins for loading files and saving files. Just browse the list of available plugins to find the right type.

I suggest storing the list of available plugins in a separate plugin_list.txt file for easier searching, using the following command:

ProM_CLI.bat -l > plugin_list.txt

Now you basically know everything you need to invoke ProM from the command line:

  1. Create the ProM_CLI.bat or ProM_CLI.sh.
  2. Run the ProM PackageManager to install your desired plugins. If you run the PackageManager for the first time, it will suggest a set of standard packages to install which cover most process mining use cases.
  3. Get the list of available plugins.
  4. Write a script.
  5. Invoke the script.
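Since the CLI takes no script parameters, batch runs over several logs require one script per log. The steps above can be automated with a small generator; the following is a minimal sketch in Python (the template mirrors script_alpha_miner.txt above; all file names are hypothetical):

```python
# Generate one ProM CLI script per event log by filling in a template;
# the CLI cannot receive file names as parameters, so each log gets its
# own script file, which is then run via ProM_CLI.bat -f <script>.

TEMPLATE = """\
log = open_xes_log_file("{log}");
net_and_marking = alpha_miner(log);
File net_file = new File("{net}");
pnml_export_petri_net_(net_and_marking[0], net_file);
"""

def make_script(log_file):
    """Fill the template for one log; the PNML name is derived from the log name."""
    net_file = log_file.rsplit(".", 1)[0] + ".pnml"
    return TEMPLATE.format(log=log_file, net=net_file)

for log_name in ["myLog.xes", "otherLog.xes"]:
    with open("script_" + log_name + ".txt", "w") as f:
        f.write(make_script(log_name))
```

Each generated script_<log>.txt can then be passed to ProM_CLI.bat -f in a loop.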

Known Caveats

The ProM CLI is not the primary user interface of ProM and as such does not get the same attention to usability as the GUI. Thus, it is better to consider the CLI an experimental feature where not everything works as you know it from the GUI and that may take a bit of time and effort to get running. Several factors you should consider:

  • You can only use plugins from the CLI which have been programmed to work without the GUI. Whether your favourite plugin is available depends on two aspects:
    1. Does the plugin require configuration of parameters for which no good default settings are available, so user feedback is required (for example, particular log filtering options)?
    2. Did the developer of that plugin have the time to implement a non-GUI version? We encourage developers to first develop the non-GUI version of a plugin and introduce GUI-reliant components only later. However, as ProM is an open platform with many contributing parties, individual developers may choose otherwise. If a particular plugin is not available on the CLI, please get in touch with the developer to ask whether this can be changed.
  • The CLI cannot invoke any code that requires the ProM GUI environment. Any plugin that attempts to do that on the side will terminate the CLI with an exception. That being said, you actually can invoke visualizer plugins that produce a JComponent and then create a new JFrame to visualize the JComponent; see below for an example. However, the functionality of these will be limited (e.g., export of pictures, interaction with the model, etc. most likely won’t work).
  • It may be that the ProM CLI does not terminate/close after the script completes. Workaround: include a System.out.println(“done.”); statement at the end of your script to indicate termination. When you see the “done.” line printed on the screen but ProM is still running, you can terminate it manually (CTRL+C) without losing data.
  • Log files must not be (g)zipped, i.e., the CLI can only load plain XES or MXML files.
  • PNML files produced by a mining algorithm in the CLI have no layout information yet. If you want to visualize such a PNML file, you have to open it in a tool that can automatically compute the layout of the model; opening the file in the ProM GUI will do. Invoking the plugin to visualize a Petri net will also trigger computation of the layout.
  • The plugins to load a file or save a file are named rather inconsistently across the different packages. You may have to look for various keywords like “load”, “open”, “import”, “save”, “export” to find the right load/save plugin.
  • Plugins to load a file always come with a signature that takes a String parameter as the path to the file to load. Plugins to save a file always require a File parameter. Thus, you first have to create a file handle myFile = new File(pathToSave); and then pass this handle to the “save file plugin”.
  • Files are read/written relative to the current working directory.
  • Even if the plugins you want to use are available for the CLI, executing them may throw exceptions because the plugin (although accessible from the CLI) assumes settings that can only be set correctly in a GUI dialog. The only workaround here is to extend your script with Java code that produces all the settings expected by the plugin. See below for more advanced examples.
  • Creating these scripts is certainly on the less convenient side of development. You have no development environment with syntax checking, code completion, etc. In case your script has an error, you will only notice at runtime, when a long evaluation error is thrown that tries to highlight the problematic part of the script but is typically hard to spot. You’ve been warned.

Advanced Examples

With all these restrictions in mind, here are some more advanced scripts to get more advanced ProM plugins running. The following script invokes the HeuristicsMiner with its default settings. It needs some additional code to properly pass the event classifiers to the HeuristicsMiner. The HeuristicsMiner typically produces nice results on real-life data. However, it does not use a standard process modeling notation as its target language; as a consequence, there is no serialization format. You can, however, invoke the visualization plugin and pass the result to a new JFrame to visualize it. File script_heuristics_miner.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Getting log info");
org.deckfour.xes.info.XLogInfo logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log);

System.out.println("Setting classifier");
org.deckfour.xes.classification.XEventClassifier classifier = logInfo.getEventClassifiers().iterator().next();

System.out.println("Creating heuristics miner settings");
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier);

System.out.println("Calling miner");
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);

System.out.println("Visualize");
javax.swing.JComponent comp = visualize_heuristicsnet_with_annotations(net);
javax.swing.JFrame frame = new javax.swing.JFrame();
frame.add(comp);
frame.setSize(400,400);
frame.setVisible(true);

System.out.println("done.");

Heuristics Miner run from the Command Line, visualizing the output in a new JFrame.

 

If you want to change the parameters of the HeuristicsMiner, you can do this via the HeuristicsMinerSettings object. However, here I have to refer you to the source code of the HeuristicsMiner package to study the details of this class. See https://svn.win.tue.nl/trac/prom/ as a starting point.

If you prefer to create a process model in a serializable format out of a HeuristicsNet, simply change your script to invoke another plugin that translates the heuristics net into a Petri net. Below is a script that also saves the resulting Petri net as a PNML file to disk. File script_heuristics_miner_pn.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Getting log info");
org.deckfour.xes.info.XLogInfo logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log);

System.out.println("Setting classifier");
org.deckfour.xes.classification.XEventClassifier classifier = logInfo.getEventClassifiers().iterator().next();

System.out.println("Creating heuristics miner settings");
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier);

System.out.println("Calling miner");
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);

System.out.println("Translating to PN");
pn_and_marking = convert_heuristics_net_into_petri_net(net);

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(pn_and_marking[0], net_file);

System.out.println("done.");

The last example I will show in this blog post is a script to invoke the very reliable InductiveMiner with default parameters, which include some basic noise handling capabilities. The resulting Petri net is written to disk. File script_inductive_miner_pn.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Creating settings");
org.processmining.plugins.InductiveMiner.mining.MiningParametersIMi parameters = new org.processmining.plugins.InductiveMiner.mining.MiningParametersIMi();

System.out.println("Calling miner");
pn_and_marking = mine_petri_net_with_inductive_miner_with_parameters(log, parameters);

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(pn_and_marking[0], net_file);

System.out.println("done.");

Take Away

It is possible to run process mining analyses in a more automated form using scripts and the ProM CLI. Many, but by far not all, plugins are available to run in a non-GUI context. The scripts are non-trivial and may require knowledge of the plugin code to prepare correct plugin settings. Luckily, all plugins are open source and ready to be checked. Start here: https://svn.win.tue.nl/trac/prom/

If you were wondering, yes, that’s how we run automated tests for ProM.

For those who prefer a less experimental environment for automated process mining analysis, I highly recommend the ProM integration with RapidMiner, available at http://www.rapidprom.org/

Feel free to post further scripts. In case you have problems with running a particular plugin, I suggest contacting the plugin author to make the plugin ready for the CLI environment.

Is my log big enough for process mining? Some thoughts on generalization

I recently received an email from a colleague who is active in specification mining (mining software specifications from execution traces and code artifacts) with the following question.

Do you know of process mining works that deal with the confidence one may have in the mined specifications given the set of traces, i.e., how do we know we have seen “enough” traces? Can we quantify our confidence that the model we built from the traces we have seen is a good one?

The property the colleague asked about is called generalization in process mining. As my reply to this question summarized some insights that I gained recently, I thought it was time to share this information further.

A system S can produce a language L. In reality, we only see a subset of these behaviors, which is recorded in a log K. Ideally, we want a model M that was discovered from K to reproduce L. We say that M generalizes K well if M also accepts traces from L that are not in K (the more, the better).

This measure is in tension with three other measures (fitness, precision, and simplicity): for instance, the most general model that accepts all traces is not very precise (M should not accept traces that are not in L). These measures are described informally in The Need for a Process Mining Evaluation Framework in Research and Practice (doi: http://dx.doi.org/10.1007/978-3-540-78238-4_10), and the paper On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery (doi: http://dx.doi.org/10.1007/978-3-642-33606-5_19) shows how they influence each other in practice.

There is currently no generic mathematical definition to compute, for a given log K, whether it is general enough (contains enough information to infer the entire language L from K). This usually depends on the algorithm, the kind of original system S / the language L, and the kind of model one would like to discover.

The most general result that I am aware of is that the log K has to be directly follows-complete. Action B of the system S directly follows action A if there is some execution trace …AB… of S where first A occurs and then directly B occurs. The log K is directly follows-complete iff for any two actions A, B where B directly follows A in S, there is a trace …AB… in K. Every system S with a finite number of actions has a finite log K that is directly follows-complete, even if the language L of S is infinite. For such logs, there are algorithms that ensure that S (or a system that is trace-equivalent to S) can be rediscovered. See for instance Discovering Block-Structured Process Models from Event Logs – A Constructive Approach (doi: http://dx.doi.org/10.1007/978-3-642-38697-8_17).
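To make this concrete, here is a minimal sketch of such a completeness check (assuming both the log K and a finite characterization of the language L are given as sets of traces):

```python
# Sketch: check directly follows-completeness of a log K against a known
# language L (both given as finite sets of traces; in practice L would be
# derived from a model of the system S).

def directly_follows(traces):
    """All pairs (a, b) such that b directly follows a in some trace."""
    return {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}

def is_df_complete(log, language):
    """K is directly follows-complete iff it covers every pair of L."""
    return directly_follows(language) <= directly_follows(log)

# Toy system: L allows repeating B; the single-trace log K still contains
# every directly-follows pair of L, so K is directly follows-complete.
L = [("A", "B", "C"), ("A", "B", "B", "C")]
K = [("A", "B", "B", "C")]
print(is_df_complete(K, L))  # → True
```

Note how K is much smaller than L here yet still complete – this is exactly why a finite log can suffice to rediscover a system with an infinite language.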

In general, if you have the original system S (or some finite characterization of L), then it is possible to compute whether the log K is directly follows-complete. If you do not have S or L, then we currently do not know of any means to estimate how complete K is. This is an open question that we are currently looking into. In essence, you have to estimate the probability that the information in log K suffices to explain particular language constructs that the original system has or may have.

We are currently looking deeper into these questions. If you have more pointers on this topic, feel free to drop a comment or an email.

noise canceling, or: what Beethoven has to do with your business

You know the problem. You are on your local commute, in a train, on a plane, and all you want to do to kill the time is listen to your favorite album, audio book, radio program, or latest TV episode. And while all this audio is there, coming to you via your headphones, you also hear the train rattling, the engines bustling, and people talking. So, all the experience you are looking for is dampened by inevitable noise.

Processes are the same. The only thing you want to do in your business is provide service to your customers, build a neat product, invent the next top-notch thing, or just pay some bills. And then reality comes and puts all that noise into your business: phone calls, non-working printers, unprovided services, telephone hotlines, late clients, ill staff, … And over all that dealing with life, you forget about what you are good at, and you don’t know where you lack support or where you could improve. The good news is that there is a neat technique around called process mining. Process mining is like a consultant that can speak to your IT equipment to tell you what your business is actually doing. The problem is that this consultant has a very sensitive ear. It hears far more noise than the Beethoven sonata you thought your business would be, and it will tell you not only about Beethoven but also about all the noise that it has heard.

Enter: the noise canceling headphones. They let you enjoy your favorite piece of audio in the average noisy environment by filtering your environment’s humming and chatter from the sound waves that reach your ear.

Last week, I accomplished something similar for our consultant with the overly sensitive ears. I built some noise filtering algorithms that work a little like noise canceling headphones for process mining. So, instead of telling you about a business process soaked in noise (on the left), you may actually get to know the actual process in your business (on the right). And the amazing thing is: this works on real data.
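This post does not go into the algorithmic details, but the simplest form of such a noise filter can be illustrated generically (this sketch is not the algorithm from the post): count how often each directly-follows pair occurs in the log and keep only the frequent ones.

```python
# Generic illustration of frequency-based noise filtering: count how often
# each directly-follows pair occurs in the log and drop infrequent pairs,
# so that rare, noisy behavior does not end up in the discovered model.
from collections import Counter

def frequent_df_pairs(traces, min_freq=2):
    counts = Counter(
        (t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)
    )
    return {pair for pair, n in counts.items() if n >= min_freq}

log = [("A", "B", "C")] * 10 + [("A", "X", "C")]  # one noisy trace
print(sorted(frequent_df_pairs(log)))  # the rare (A,X) and (X,C) pairs are dropped
```

Real filters are more sophisticated (relative thresholds, keeping the log connected), but the principle is the same: separate the dominant signal from the rare hum.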

filtering a mined process model

So, enjoy your Beethoven.