How Do People Create Process Models?

Over the last seven years, I have been collaborating with colleagues on a number of experiments in which we investigated how people create process models. In particular, we wanted to see where and how modelers differ and whether their personal, unique “modeling style” has an impact on model quality. In this – rather long – blog post, I want to summarize what we found out and point to the different studies that we published. (To be honest, I collected this information for a Master’s student who wants to replicate some of these studies, but I might as well share it with others.) So, here we go.

First experiment: organize your process description!

In 2010, we conducted a first structured experiment on the quality of modeling outcomes. We compared how the way an informal requirements document is organized impacts the quality of the created model (modelers get a text about a process and have to create a graphical model – say, in BPMN). Spoiler: a breadth-first description of the process works best.


Models were created more accurately when the process description was given in breadth-first order.

Jakob Pinggera, Stefan Zugal, Barbara Weber, Dirk Fahland, Matthias Weidlich, Jan Mendling, Hajo A. Reijers: How the Structuring of Domain Knowledge Helps Casual Process Modelers. ER 2010: 445-451

The conceptual background for this and subsequent experiments was laid by two papers investigating how modeling languages use particular modeling concepts to structure knowledge about a process.

  • Dirk Fahland, Daniel Lübke, Jan Mendling, Hajo A. Reijers, Barbara Weber, Matthias Weidlich, Stefan Zugal: Declarative versus Imperative Process Modeling Languages: The Issue of Understandability. BMMDS/EMMSAD 2009: 353-366
  • Dirk Fahland, Jan Mendling, Hajo A. Reijers, Barbara Weber, Matthias Weidlich, Stefan Zugal: Declarative versus Imperative Process Modeling Languages: The Issue of Maintainability. Business Process Management Workshops 2009: 477-488

Visualizing how people model

In 2011, we published a paper describing a software platform for recording and analyzing modeling actions on a canvas. We also described the visualization of modeling actions in a time-series diagram in which specific phases of the modeling process (creating elements, arranging existing elements, deleting elements, thinking about the process) can be identified and highlighted, as illustrated below.


In the experiments, we could observe significant differences in how different modelers approach the same modeling task – manifesting themselves in remarkably distinct modeling phase diagrams.


Jakob Pinggera, Stefan Zugal, Matthias Weidlich, Dirk Fahland, Barbara Weber, Jan Mendling, Hajo A. Reijers: Tracing the Process of Process Modeling with Modeling Phase Diagrams. Business Process Management Workshops (1) 2011: 370-382

Identifying modeling styles

In 2012, we analyzed these differences in how modelers approach a modeling task further. We plotted the number of creation, deletion, and re-arranging actions on the canvas as a time series and binned these modeling actions into segments of 10 seconds length; each segment thus has a particular “modeling profile” of creation, deletion, and re-arranging actions. We then clustered users based on their “modeling profiles”, i.e., the typical occurrences of create/delete/move actions throughout their modeling, and identified three distinct clusters of “modeling profiles”. Below is the “modeling profile” of the cluster showing many creation operations early in the modeling process and few delete operations.
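To make the binning concrete, here is a small sketch in plain Java. This is my own illustration, not the study’s actual code: the experiments used a dedicated recording platform, and the exact profile representation is an assumption. It turns a list of timestamped canvas actions into per-segment counts of create/delete/move operations:

```java
import java.util.*;

public class ModelingProfileSketch {

    // Action codes for canvas operations: 0 = create, 1 = delete, 2 = move.
    public static final int CREATE = 0, DELETE = 1, MOVE = 2;

    /**
     * Bin timestamped actions into segments of 10 seconds.
     * Each event is a pair {timestampMillis, actionCode}.
     * Returns one int[3] per segment: counts of create/delete/move actions.
     */
    public static int[][] binActions(List<long[]> events, long totalMillis) {
        int segments = (int) ((totalMillis + 9999) / 10000); // round up to full bins
        int[][] profile = new int[segments][3];
        for (long[] e : events) {
            profile[(int) (e[0] / 10000)][(int) e[1]]++;
        }
        return profile;
    }
}
```

Clustering would then group modelers whose sequences of segment profiles look alike (e.g., many creations early, few deletions).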


Jakob Pinggera, Pnina Soffer, Stefan Zugal, Barbara Weber, Matthias Weidlich, Dirk Fahland, Hajo A. Reijers, Jan Mendling: Modeling Styles in Business Process Modeling. BMMDS/EMMSAD 2012: 151-166

We then conducted a subsequent, more detailed analysis of these clusters and also investigated the modeling phase diagrams of each cluster. First, we could establish statistically significant differences between the three clusters in (1) the speed of adding modeling elements, (2) the duration of layouting phases and the number of element moves within them, and (3) the time between adding model elements, thinking about the model, and adding further model elements. Altogether, we could then characterize three distinct modeling styles from these clusters:

  1. Quick modelers who, after some initial deliberation on the process, create an almost correct model right away and need only minimal adjustments of the model layout and few thinking pauses.
  2. Modelers who model at a slower pace and take regular, longer layouting breaks (possibly to plan their next modeling steps).
  3. Modelers who also model at a slower pace but require less layouting than the previous group.

This analysis also gave us a first idea of which factors influence how people approach a modeling task. The two central factors are (1) the cognitive load created by the modeling task, which largely influences the efficiency with which the model is created, and (2) tool support for layouting, which largely influences the amount of time spent on organizing the model on the canvas.

Jakob Pinggera, Pnina Soffer, Dirk Fahland, Matthias Weidlich, Stefan Zugal, Barbara Weber, Hajo A. Reijers, Jan Mendling: Styles in business process modeling: an exploration and a model. Software and System Modeling 14(3): 1055-1080 (2015)

Modeling style vs model quality

In a second line of analysis, we investigated how the way modelers create their models impacts the quality of the resulting model. By analyzing modeling operations at a more fine-grained level and also considering the modeling elements themselves, we could compare modeling processes in more detail. Below, we see visualizations of four different modelers creating the same model (visualized using the DottedChart plugin of ProM). Each line corresponds to a modeling element (node or arc); green dots show creation operations, blue dots show move operations, and red dots show delete operations.


By analyzing the location of modeling elements on the canvas, and the time between different modeling activities, we could confirm three hypotheses:

  1. structured modeling (e.g., in clearly defined blocks) is linked to better model quality,
  2. lots of movement of modeling objects is linked to lower model quality, and
  3. low modeling speed is linked to low model quality.

Jan Claes, Irene T. P. Vanderfeesten, Hajo A. Reijers, Jakob Pinggera, Matthias Weidlich, Stefan Zugal, Dirk Fahland, Barbara Weber, Jan Mendling, Geert Poels: Tying Process Model Quality to the Modeling Process: The Impact of Structuring, Movement, and Speed. BPM 2012: 33-48

The impact of structured modeling on model quality was analyzed further. In a subsequent set of experiments, the researchers examined factors that impact the cognitive load of the modeler. In particular, they looked for factors that help reduce the modeler’s cognitive load, leaving more cognitive capacity for creating correct models. Besides confirming and deepening the 2010 experiment (a structured, breadth-first organization of process knowledge improves model quality), the experiments also show that the characteristics of the modeler impact model quality: a modeler may have a preference for structuring knowledge in a particular way. If process knowledge is presented to them in a way that fits this preference, their individual cognitive load is lower and model quality increases. The image below shows “aspect-oriented” modeling, where a modeler first finishes one aspect of the model and then works on a second aspect that may involve many modeling elements created earlier.


Jan Claes, Irene T. P. Vanderfeesten, Frederik Gailly, Paul Grefen, Geert Poels: The Structured Process Modeling Theory (SPMT) a cognitive view on why and how modelers benefit from structuring the process of process modeling. Information Systems Frontiers 17(6): 1401-1425 (2015)

The following, longer journal paper summarizes several techniques for visually analyzing the process of process modeling from various angles.

Jan Claes, Irene T. P. Vanderfeesten, Jakob Pinggera, Hajo A. Reijers, Barbara Weber, Geert Poels: A visual analysis of the process of process modeling. Inf. Syst. E-Business Management 13(1): 147-190 (2015)

For the really interested, there are two PhD theses on this topic:


Tutorial: Automating Process Mining with ProM’s Command Line Interface

In this blog post I explain how to invoke the process mining tool ProM from the commandline without using its graphical user interface. This allows you to run process mining analyses on several logs in batch mode without user interaction. Before you get too excited: there are quite some limitations to this, which I will address at the end. The following instructions have been tested with the ProM 6.4.1 release.

Invoking the ProM Commandline Interface

The ProM commandline interface (CLI) can be invoked through the class org.processmining.contexts.cli.CLI.


To properly invoke the CLI for ProM 6.4.1, use the following command (which is a copy of the command in ProM641.bat with a changed main class):

java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI

The CLI itself has no interactive user interface. Instead, it executes scripts passed to it as a commandline parameter. To simplify your life, I suggest putting the command into a batch file ProM_CLI.bat (or a shell script) that passes on two commandline parameters. For instance:

java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI %1 %2

A typical example script that the ProM CLI accepts is the following script_alpha_miner.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Mining model");
net_and_marking = alpha_miner(log);
net = net_and_marking[0];
marking = net_and_marking[1];

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(net, net_file);


You can invoke it with the command

ProM_CLI.bat -f script_alpha_miner.txt

It will read the log file myLog.xes (stored in the current working directory), invoke the alpha miner, and write the resulting Petri net as a PNML file mined_net.pnml to the current working directory. (No, there is currently no way to pass file names as additional commandline parameters to the script).

Note: when running the above script, ProM will first produce a (large) number of messages on the screen during the startup phase, related to scanning for available packages and plugins; bear with it until it is ready.

Scripts for ProM

The language used for the scripts is basically Java interpreted at runtime. In principle, you can put in any Java code that you would put into a method body (no class/method declarations). In case the Java reflection framework is able to infer the type, variables do not have to be declared but can just be used as in a dynamically typed language. For example, the variable log in script_alpha_miner.txt will be inferred to have type XLog.

In a script, you can directly invoke ProM plugins through special method names provided by the CLI; the method names are derived from the plugin names shown in ProM. For example, the plugin “Alpha Miner” is available as the method alpha_miner. You can get the full list of all ProM plugins available for script invocation with the commandline parameter ‘-l’ (“dash lower-case L”):

ProM_CLI.bat -l

This will scan all installed packages for plugins that do not require the GUI to run and list them in the form name(input types) -> (output types). For example, if you have installed the AlphaMiner package, the following plugins will be listed (among many others):

alpha_miner(XLogInfo, LogRelations) -> (Petrinet, Marking)
alpha_miner(XLog) -> (Petrinet, Marking)
alpha_miner(XLog, XLogInfo) -> (Petrinet, Marking)
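The name derivation appears to be a simple mechanical mapping. As a sketch (my own assumption about the rule – the CLI’s actual implementation may differ in corner cases):

```java
public class CliNameSketch {
    // Lowercase the plugin name and collapse non-alphanumeric runs into
    // underscores, e.g. "Alpha Miner" -> "alpha_miner".
    public static String toCliName(String pluginName) {
        return pluginName.trim().toLowerCase().replaceAll("[^a-z0-9]+", "_");
    }
}
```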

Use the ProM Package Manager to install plugins you do not find in the list of installed plugins.

The script_alpha_miner.txt uses the second method signature alpha_miner(XLog) -> (Petrinet, Marking) to discover a Petrinet and a Marking from an XLog. In case a plugin returns multiple objects, the return result is an Object[] array in which you can access the individual components as usual, i.e., net_and_marking[0] contains the Petrinet and net_and_marking[1] contains the Marking.
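In plain Java, this multi-return convention looks as follows (with placeholder objects instead of actual Petrinet/Marking instances, so the snippet is self-contained):

```java
public class MultiReturnSketch {
    // Stub standing in for a CLI plugin call that returns two objects.
    static Object[] minerStub() {
        return new Object[]{ "the-net", "the-marking" };
    }

    public static void main(String[] args) {
        Object[] netAndMarking = minerStub();
        Object net = netAndMarking[0];     // in ProM: the Petrinet
        Object marking = netAndMarking[1]; // in ProM: the Marking
        System.out.println(net + " / " + marking);
    }
}
```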

Besides the typical plugins you already know from the ProM GUI, there are also plugins for loading and saving files. Just browse the list of available plugins to find the right one.

I suggest storing the list of available plugins in a separate plugin_list.txt file for easier searching, using the following command:

ProM_CLI.bat -l > plugin_list.txt

Now, you basically know everything needed to invoke ProM from the commandline:

  1. Create the ProM_CLI.bat batch file or shell script.
  2. Run the ProM PackageManager to install your desired plugins. If you run the PackageManager for the first time, it will suggest a set of standard packages to install which cover most process mining use cases.
  3. Get the list of available plugins.
  4. Write a script.
  5. Invoke the script.

Known Caveats

The ProM CLI is not the primary user interface of ProM and as such does not get the same attention to usability as the GUI. It is thus better to consider the CLI an experimental feature: not everything works as you know it from the GUI, and it may take a bit of time and effort to get running. Several factors you should consider:

  • You can only use plugins from the CLI which have been programmed to work without the GUI. Whether your favourite plugin is available depends on two aspects:
    1. Does the plugin require configuration of parameters for which no good default settings are available, so that user feedback is required (for example, particular log filtering options)?
    2. Did the developer of that plugin have the time to implement a non-GUI version? We encourage developers to first create the non-GUI version of a plugin and introduce GUI-reliant components only later. However, as ProM is an open platform with many contributing parties, individual developers may choose otherwise. If a particular plugin is not available on the CLI, please get in touch with the developer to ask whether this can be changed.
  • The CLI cannot invoke any code that requires the ProM GUI environment. Any plugin that attempts to do so on the side will terminate the CLI with an exception. That being said, you actually can invoke visualizer plugins that produce a JComponent and then create a new JFrame to visualize the JComponent; see below for an example. However, the functionality of these will be limited (e.g., export of pictures, interaction with the model, etc. most likely won’t work).
  • It may be that the ProM CLI does not terminate after the script completes. Workaround: include a System.out.println("done."); statement at the end of your script to indicate termination. When you see the “done.” line printed on the screen but ProM is still running, you can terminate it manually (CTRL+C) without losing data.
  • Log files may not be (g)zipped, i.e., the CLI can only load plain XES or MXML files.
  • PNML files produced by a mining algorithm in the CLI have no layout information yet. If you want to visualize such a PNML file, you have to open it in a tool that can automatically compute the layout of the model; opening the file in the ProM GUI will do. Invoking the plugin to visualize a Petri net will also trigger computation of the layout.
  • The plugins to load a file or save a file are named rather inconsistently across the different packages. You may have to look for various keywords like “load”, “open”, “import”, “save”, “export” to find the right load/save plugin.
  • Plugins to load a file always come with a signature that takes a String parameter as the path to the file to load. Plugins to save a file always require a File parameter. Thus, you first have to create a file handle myFile = new File(pathToSave); and then pass this handle to the “save file plugin”.
  • Files are read/written relative to the current working directory.
  • Even if the plugins you want to use are available for the CLI, executing them may throw exceptions because the plugin (although accessible from the CLI) assumes settings that can only be set correctly in a GUI dialog. The only workaround here is to extend your script with Java code that produces all the settings expected by the plugin. See below for more advanced examples.
  • Creating these scripts is certainly on the less convenient side of development. You have no development environment with syntax checking, code completion, etc. In case your script has an error, you will only notice at runtime, when a long evaluation error is thrown that tries to highlight the problematic part of the script but is typically hard to read. You’ve been warned.

Advanced Examples

With all these restrictions in mind, here are some more advanced scripts to get more advanced ProM plugins running. The following script invokes the HeuristicsMiner with its default settings; it needs some additional code to properly pass the event classifiers to the heuristics miner. The HeuristicsMiner typically produces nice results on real-life data because it does not use a standard process modeling notation as target language. As a consequence, there is no serialization format. However, you can invoke the visualization plugin and pass the result to a new JFrame to visualize it. File script_heuristics_miner.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Getting log info");
logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log); // create the XLogInfo (assumed: OpenXES factory; the original line was truncated)

System.out.println("Setting classifier");
org.deckfour.xes.classification.XEventClassifier classifier = logInfo.getEventClassifiers().iterator().next();

System.out.println("Creating heuristics miner settings");
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier); // pass the classifier to the miner settings

System.out.println("Calling miner");
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);

javax.swing.JComponent comp = visualize_heuristicsnet_with_annotations(net);
javax.swing.JFrame frame = new javax.swing.JFrame();
frame.getContentPane().add(comp);
frame.setSize(800, 600);
frame.setVisible(true);


Heuristics Miner run from the Command Line, visualizing the output in a new JFrame.


If you want to change the parameters of the HeuristicsMiner, you can do so via the HeuristicsMinerSettings object. However, here I have to refer you to the source code of the HeuristicsMiner package to study the details of this class. See as a starting point.

If you prefer to create a process model in a serializable format out of a HeuristicsNet, simply change your script to invoke another plugin that translates the heuristics net into a Petri net. Below is a script that also saves the resulting Petri net as a PNML file to disk. File script_heuristics_miner_pn.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Getting log info");
logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log); // create the XLogInfo (assumed: OpenXES factory; the original line was truncated)

System.out.println("Setting classifier");
org.deckfour.xes.classification.XEventClassifier classifier = logInfo.getEventClassifiers().iterator().next();

System.out.println("Creating heuristics miner settings");
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier); // pass the classifier to the miner settings

System.out.println("Calling miner");
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);

System.out.println("Translating to PN");
pn_and_marking = convert_heuristics_net_into_petri_net(net);

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(pn_and_marking[0], net_file);


The last example I will show in this blog post is a script to invoke the very reliable InductiveMiner with default parameters, which include some basic noise handling capabilities. The resulting Petri net is written to disk. File script_inductive_miner_pn.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Creating settings");
org.processmining.plugins.InductiveMiner.mining.MiningParametersIMi parameters = new org.processmining.plugins.InductiveMiner.mining.MiningParametersIMi();

System.out.println("Calling miner");
pn_and_marking = mine_petri_net_with_inductive_miner_with_parameters(log, parameters);

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(pn_and_marking[0], net_file);


Take Away

It is possible to run process mining analyses in a more automated fashion using scripts and the ProM CLI. Many, but by far not all, plugins are available to run in a non-GUI context. The scripts are non-trivial and may require knowledge of the plugin code to prepare correct plugin settings. Luckily, all plugins are open source and ready to be checked. Start here:

If you were wondering, yes, that’s how we run automated tests for ProM.

For those who prefer a less experimental environment for automated process mining analysis, I highly recommend the ProM integration with RapidMiner, available at:

Feel free to post further scripts. In case you have problems with running a particular plugin, I suggest contacting the plugin author to make the plugin ready for the CLI environment.

Is my log big enough for process mining? Some thoughts on generalization

I recently received an email from a colleague who is active in specification mining (mining software specifications from execution traces and code artifacts) with the following question.

Do you know of process mining works that deal with the confidence one may have in the mined specifications given the set of traces, i.e., how do we know we have seen “enough” traces? Can we quantify our confidence that the model we built from the traces we have seen is a good one?

The property the colleague asked about is called generalization in process mining. As my reply to this question summarized some insights that I gained recently, I thought it was time to share this information further.

A system S can produce a language L. In reality, we only see a subset of these behaviors, which is recorded in a log K. Ideally, we want a model M that was discovered from K to reproduce L. We say that M generalizes K well if M also accepts traces from L that are not in K (the more, the better).

This measure is in tension with three other measures (fitness, precision, and simplicity): the most general model, which accepts all traces, is not very precise (M should not accept traces that are not in L). These measures are described informally in The Need for a Process Mining Evaluation Framework in Research and Practice (doi: and the paper On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery (doi: shows how they influence each other in practice.
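As a toy illustration of this tension (my own simplification, not the metrics from the cited papers): if we represent languages as finite sets of traces, we can take “fitness” as the fraction of logged traces the model accepts and “precision” as the fraction of the model’s traces that occur in the log.

```java
import java.util.*;

public class QualityToy {
    // Fraction of log traces the model accepts (trace-level "fitness").
    public static double fitness(Set<String> log, Set<String> model) {
        return (double) log.stream().filter(model::contains).count() / log.size();
    }

    // Fraction of model traces that occur in the log ("precision").
    public static double precision(Set<String> log, Set<String> model) {
        return (double) model.stream().filter(log::contains).count() / model.size();
    }
}
```

A “flower-like” model that accepts every trace scores a perfect fitness of 1.0 but a poor precision, which is exactly why the measures have to be balanced against each other.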

There is currently no generic mathematical criterion to decide, for a given log K, whether it is general enough (i.e., contains enough information to infer all of L from K). This usually depends on the algorithm, the kind of original system S and its language L, and the kind of model one would like to discover.

The most general result that I am aware of is that the log K has to be directly-follows-complete. Action B of the system S directly follows action A if there is some execution trace …AB… of S where first A occurs and then directly B occurs. The log K is directly-follows-complete iff for any two actions A, B that directly follow each other in L, there is a trace …AB… in K. Every system S with a finite number of actions has a finite log K that is directly-follows-complete, even if the language L of S is infinite. For such logs, there are algorithms that ensure that S (or a system that is trace-equivalent to S) can be rediscovered. See for instance Discovering Block-Structured Process Models from Event Logs – A Constructive Approach (doi:
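The directly-follows check itself is easy to sketch in code (my own illustration, not the algorithm from the cited paper; traces are represented as strings of single-letter actions):

```java
import java.util.*;

public class DirectlyFollowsSketch {
    // Collect all directly-follows pairs "AB" occurring in any trace.
    public static Set<String> dfPairs(Set<String> traces) {
        Set<String> pairs = new HashSet<>();
        for (String t : traces) {
            for (int i = 0; i + 1 < t.length(); i++) {
                pairs.add(t.substring(i, i + 2));
            }
        }
        return pairs;
    }

    // K is directly-follows-complete w.r.t. L if it covers all of L's pairs.
    public static boolean isDfComplete(Set<String> log, Set<String> language) {
        return dfPairs(log).containsAll(dfPairs(language));
    }
}
```

Note that this check needs L (or a finite characterization of it) as input; without it, completeness can only be estimated.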

In general, if you have the original system S (or some finite characterization of L), then it is possible to compute whether the log K is directly-follows-complete. If you do not have S or L, then we currently do not know of any means to estimate how complete K is. This is an open question that we are currently looking into. In essence, you have to estimate the probability that the information in log K suffices to explain the particular language constructs that the original system has or may have.

We are currently looking deeper into these questions. If you have more pointers on this topic, feel free to drop a comment or an email.

Mining Branching-Time Scenarios From Execution Traces

Over the last two years, I have been working with Shahar Maoz and David Lo on discovering high-level specifications of an application from its execution traces. This topic is also known as specification mining and was brought up around 2002, because we keep on writing humongous amounts of undocumented code that other people have to use, maintain, or deal with in a later iteration of the software. Getting into this “legacy” code is extremely time-consuming, and there is a high chance that you will break something because you do not understand how the existing code works.

Specification mining aims at extracting (mostly) visual representations of existing code that describe its essential architecture and workings at a higher level of abstraction, so that a developer can first get an overview before diving into the code.

We looked particularly at the problem of understanding object interplay in object-oriented code, where you often have many objects that invoke methods of other objects, essentially distributing one behavior over many classes. Everyone who has ever tried to understand code that combines a number of architectural patterns, such as factories and commands, knows what I mean.

Building on earlier works, we found a way to discover scenarios (something like UML sequence diagrams) that show how multiple objects interact with each other in particular situations. More importantly, we found a way to discover and distinguish two kinds of scenarios:

  • scenarios that describe behavioral invariants of your application (whenever the user presses this button, the application will open that window), and
  • scenarios that describe behavioral alternatives (when the user is on this screen, she may continue with this, that, or even that command).

These two kinds of scenarios combined give a good understanding of how an application works at a high level of abstraction. Here is the presentation showing how it works:

You can also get the full paper, our tool for discovering scenarios, and the data set that we used in our evaluation.

Drop me an email or a comment if you find this useful and/or would like to know more.


some historical statistics on modeling formalisms

Some historical statistics about the use of formalisms for modeling automated systems since 1800. Here is a chart of how frequently particular names of modeling formalisms were used in the literature; I’ve picked Automata, Petri nets, Process algebra, Statecharts, and UML.

You can get the full chart (and add your own favorite formalism) using Google Ngram. What I find surprising is that Petri nets were at some point as relevant in the literature as automata (which have been discussed since the 1800s already). I’m not surprised that UML peaks above all of them by far. On second thought, UML’s decline is also not surprising, as the hype returns to normal levels. What I do find surprising is that process algebras are much less referenced in the literature than even the very particular, though successful, technique of statecharts.


proper interface descriptions for your service

A service is, in computer science terms, a functionality (of a software system or of a device) that hides its implementation details from the user. To be usable, the service has to declare what it does at its interface. In the old days, you would get a manual; nowadays, you get descriptions in some fancy service description language.

Besides discussions of how to write down what a service does, I feel there should also be some thought about what exactly should be described about a service. It is quite easy to miss something important, as the following video from our new building shows.

Using a Coffee Machine Service

from the other side of the review form: why papers get rejected and what the BPM community can do about it

I’ve been reviewing papers for the International Conference on Business Process Management (BPM) over the last 3 or 4 years, in 2011 and 2012 as a member of the Program Committee. In that time, my evaluation of a submitted research paper has been to reject it in the vast majority of cases. This year the best score I gave was a borderline on one paper and a reject on all other papers (8 in total); other years were a bit better, but not much. In the following, I’d like to share my view on why the papers were rejected. Paper authors may find this interesting to learn how they can improve their chances of getting a paper accepted. Perhaps more importantly, it also sheds light on publishing standards within BPM research and what the BPM community as a whole can do to promote these standards.
