Tutorial: Automating Process Mining with ProM’s Command Line Interface

In this blog post I explain how to invoke the process mining tool ProM from the commandline without using its graphical user interface. This allows you to run process mining analyses on several logs in batch mode without user interaction. Before you get too excited: there are quite a few limitations to this, which I will address at the end. The following instructions have been tested with the ProM 6.4.1 release.

Invoking the ProM Commandline Interface

The ProM commandline interface (CLI) can be invoked through the class

 org.processmining.contexts.cli.CLI

To properly invoke the CLI for ProM 6.4.1, use the following command (a copy of the command in ProM641.bat with the main class changed).

java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI

The CLI itself has no interactive user interface. Instead, it executes scripts passed to it as commandline parameters. To simplify your life, I suggest putting the command into a batch file ProM_CLI.bat or shell script ProM_CLI.sh that passes on two commandline parameters. For instance

java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI %1 %2
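On Linux or Mac, a corresponding ProM_CLI.sh could contain the same command with the shell’s positional parameters instead of %1 and %2 (a sketch; it assumes ProM641.jar sits in the current directory, so adjust the path otherwise):

#!/bin/sh
java -da -Xmx1G -XX:MaxPermSize=256m -classpath ProM641.jar -Djava.util.Arrays.useLegacyMergeSort=true org.processmining.contexts.cli.CLI "$1" "$2"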

A typical example of a script that the ProM CLI accepts is the following script_alpha_miner.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Mining model");
net_and_marking = alpha_miner(log);
net = net_and_marking[0];
marking = net_and_marking[1];

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(net, net_file);

System.out.println("done.");

You can invoke it with the command

ProM_CLI.bat -f script_alpha_miner.txt

It will read the log file myLog.xes (stored in the current working directory), invoke the alpha miner, and write the resulting Petri net as a PNML file mined_net.pnml to the current working directory. (No, there is currently no way to pass file names as additional commandline parameters to the script.)

Note: when running the above script, ProM will first produce a (large) number of messages on the screen during the startup phase, related to scanning for available packages and plugins. Bear with it until it is ready.

Scripts for ProM

The language used for the scripts is basically Java interpreted at runtime. In principle, you can write any Java code that you would put into a method body (no class/method declarations). Where the Java reflection framework is able to infer the type, variables do not have to be declared, but can simply be used as in a dynamically typed language. For example, the variable log in script_alpha_miner.txt will be inferred to have type XLog.
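For example, a fragment like the following is valid in a script (a small sketch that only reuses plugin and library calls appearing elsewhere in this post):

// no declaration needed: the type of 'log' is inferred (here: XLog)
log = open_xes_log_file("myLog.xes");

// explicitly typed declarations with fully qualified class names work as well
org.deckfour.xes.info.XLogInfo logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log);

// ordinary Java statements are allowed, e.g. printing the number of traces
// (an XLog is a java.util.List of traces)
System.out.println("Number of traces: " + log.size());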

In a script, you can directly invoke ProM plugins through special method names provided by the CLI; the method names are derived from the plugin names shown in ProM. For example, the plugin “Alpha Miner” is available as the method alpha_miner. You can get the full list of all ProM plugins available for script invocation with the commandline parameter ‘-l’ (“dash lower-case L”):

ProM_CLI.bat -l

This will scan all installed packages for plugins that do not require the GUI to run and list them in the form name(input types) -> (output types). For example, if you have installed the AlphaMiner package, the following plugins will be listed (among many others).

alpha_miner(XLogInfo, LogRelations) -> (Petrinet, Marking)
alpha_miner(XLog) -> (Petrinet, Marking)
alpha_miner(XLog, XLogInfo) -> (Petrinet, Marking)

Use the ProM Package Manager to install plugins you do not find in the list of installed plugins.

The script_alpha_miner.txt uses the second signature, alpha_miner(XLog) -> (Petrinet, Marking), to discover a Petrinet and a Marking from an XLog. In case a plugin returns multiple objects, the result is an Object[] array in which you can access the individual components as usual, i.e., net_and_marking[0] contains the Petrinet and net_and_marking[1] contains the Marking.

Besides the typical plugins you already know from the ProM GUI, there are also plugins for loading files and saving files. Just browse the list of available plugins to find the right type.

I suggest storing the list of available plugins in a separate plugin_list.txt file for easier searching, using the following command

ProM_CLI.bat -l > plugin_list.txt

Now, you basically know everything to invoke ProM from the commandline.

  1. Create the ProM_CLI.bat or ProM_CLI.sh script.
  2. Run the ProM PackageManager to install your desired plugins. If you run the PackageManager for the first time, it will suggest a set of standard packages to install which cover most process mining use cases.
  3. Get the list of available plugins.
  4. Write a script.
  5. Invoke the script.

Known Caveats

The ProM CLI is not the primary user interface of ProM and as such does not get the same attention to usability as the GUI. Thus, it is better to consider the CLI an experimental feature where not everything works as you know it from the GUI, and which may take a bit of time and effort to get running. Several factors you should consider:

  • From the CLI, you can only use plugins that have been programmed to work without the GUI. Whether your favourite plugin is available depends on two aspects:
    1. Does the plugin require configuration of parameters for which no good default settings are available, so that user feedback is required (for example, particular log filtering options)?
    2. Did the developer of that plugin have the time to implement a non-GUI version? We encourage plugin developers to first develop the non-GUI version of a plugin and introduce GUI-reliant components only later. However, as ProM is an open platform with many contributing parties, individual developers may choose otherwise. If a particular plugin is not available on the CLI, please get in touch with the developer to see whether this can be changed.
  • The CLI cannot invoke any code that requires the ProM GUI environment. Any plugin that attempts to do that, even as a side effect, will terminate the CLI with an exception. That being said, you actually can invoke visualizer plugins that produce a JComponent and then create a new JFrame to display the JComponent; see below for an example. However, the functionality of these will be limited (e.g., export of pictures, interaction with the model, etc. most likely won’t work).
  • It may be that the ProM CLI does not terminate/close after the script completes. Workaround: include a System.out.println("done."); statement at the end of your script to indicate termination. When you see the “done.” line printed on the screen but ProM is still running, you can terminate it manually (CTRL+C) without losing data.
  • Log files may not be (g)zipped, i.e., the CLI can only load plain XES or MXML files.
  • PNML files produced by a mining algorithm in the CLI do not contain layout information yet. If you want to visualize such a PNML file, you have to open it in a tool that can automatically compute the layout of the model. Opening the file in the ProM GUI will do. Invoking the plugin to visualize a Petri net will also trigger computation of the layout.
  • The plugins to load a file or save a file are named rather inconsistently across the different packages. You may have to look for various keywords like “load”, “open”, “import”, “save”, “export” to find the right load/save plugin.
  • Plugins to load a file always come with a signature that takes a String parameter as the path to the file to load. Plugins to save a file always require a File parameter. Thus, you first have to create a file handle myFile = new File(pathToSave); and then pass this handle to the “save file plugin”.
  • Files are read/written relative to the current working directory.
  • Even if the plugins you want to use are available for the CLI, executing them may throw exceptions because the plugin (although accessible from the CLI) assumes settings that can only be set correctly in a GUI dialog. The only workaround here is to extend your script with Java code that produces all the settings expected by the plugin. See below for more advanced examples.
  • Creating these scripts is certainly on the less convenient side of development. You have no development environment with syntax checking, code completion, etc. In case your script has an error, you will only notice at runtime, when a long evaluation error is thrown that tries to highlight the problematic part of the script but is typically hard to read. You’ve been warned.

Advanced Examples

With all these restrictions in mind, here are some more advanced scripts that get more advanced ProM plugins running. The following script invokes the HeuristicsMiner with its default settings. It needs some additional code to properly pass the event classifiers to the heuristics miner. The HeuristicsMiner typically produces nice results on real-life data because it does not use a standard process modeling notation as its target language. As a consequence, there is no serialization format for its output. However, you can invoke the visualization plugin and pass the result to a new JFrame to visualize it. File script_heuristics_miner.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Getting log info");
org.deckfour.xes.info.XLogInfo logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log);

System.out.println("Setting classifier");
org.deckfour.xes.classification.XEventClassifier classifier = logInfo.getEventClassifiers().iterator().next();

System.out.println("Creating heuristics miner settings");
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier);

System.out.println("Calling miner");
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);

System.out.println("Visualize");
javax.swing.JComponent comp = visualize_heuristicsnet_with_annotations(net);
javax.swing.JFrame frame = new javax.swing.JFrame();
frame.add(comp);
frame.setSize(400,400);
frame.setVisible(true);

System.out.println("done.");

Heuristics Miner run from the Command Line, visualizing the output in a new JFrame.

 

If you want to change the parameters of the HeuristicsMiner, you can do this via the HeuristicsMinerSettings object. However, here I have to refer you to the source code of the HeuristicsMiner package to study the details of this class. See https://svn.win.tue.nl/trac/prom/ as a starting point.
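For illustration only, adjusting a parameter could look roughly like the snippet below, which reuses the log and classifier variables from the script above. Note that setDependencyThreshold is my recollection of one of the setters of this class, so treat it as an assumption and verify the exact names in the source.

// assumed setter name: verify against the HeuristicsMinerSettings source code
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier);       // as in script_heuristics_miner.txt
hms.setDependencyThreshold(0.95);    // assumed setter; raises the dependency threshold
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);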

If you prefer to create a process model in a serializable format out of a HeuristicsNet, simply change your script to invoke another plugin that translates the heuristics net into a Petri net. Below is a script that also saves the resulting Petri net as a PNML file to disk. File script_heuristics_miner_pn.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Getting log info");
org.deckfour.xes.info.XLogInfo logInfo = org.deckfour.xes.info.XLogInfoFactory.createLogInfo(log);

System.out.println("Setting classifier");
org.deckfour.xes.classification.XEventClassifier classifier = logInfo.getEventClassifiers().iterator().next();

System.out.println("Creating heuristics miner settings");
org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings hms = new org.processmining.plugins.heuristicsnet.miner.heuristics.miner.settings.HeuristicsMinerSettings();
hms.setClassifier(classifier);

System.out.println("Calling miner");
net = mine_for_a_heuristics_net_using_heuristics_miner(log, hms);

System.out.println("Translating to PN");
pn_and_marking = convert_heuristics_net_into_petri_net(net);

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(pn_and_marking[0], net_file);

System.out.println("done.");

The last example I will show in this blog post is a script that invokes the very reliable InductiveMiner with default parameters, which include some basic noise handling capabilities. The resulting Petri net is written to disk. File script_inductive_miner_pn.txt:

System.out.println("Loading log");
log = open_xes_log_file("myLog.xes");

System.out.println("Creating settings");
org.processmining.plugins.InductiveMiner.mining.MiningParametersIMi parameters = new org.processmining.plugins.InductiveMiner.mining.MiningParametersIMi();

System.out.println("Calling miner");
pn_and_marking = mine_petri_net_with_inductive_miner_with_parameters(log, parameters);

System.out.println("Saving net");
File net_file = new File("mined_net.pnml");
pnml_export_petri_net_(pn_and_marking[0], net_file);

System.out.println("done.");

Take Away

It is possible to run process mining analyses in a more automated fashion using scripts and the ProM CLI. Many, but by far not all, plugins are available to run in a non-GUI context. The scripts are non-trivial and may require knowledge of the plugin code to prepare correct plugin settings. Luckily, all plugins are open source and ready to be checked. Start here: https://svn.win.tue.nl/trac/prom/

If you were wondering, yes, that’s how we run automated tests for ProM.

For those who prefer a less experimental environment for automated process mining analysis, I highly recommend the ProM integration with RapidMiner, available at: http://www.rapidprom.org/

Feel free to post further scripts. In case you have problems with running a particular plugin, I suggest contacting the plugin author to make the plugin ready for the CLI environment.

how to always read facebook’s news feed in “most recent first” order

Facebook has been tampering with its news feed design over the last weeks and months, on its mobile apps and also on the website. As of version 10.0.0, the “most recent” order is no longer a default view in the app, but hidden several taps away in a sub-sub-menu. On the default facebook page, the feed regularly switches back to “top news” every one or two weeks, even if you choose “most recent”. The option to choose the sort order of the news feed has also disappeared from the mobile website (which you reach when opening facebook.com in a mobile browser).

You can still change the ordering of the news feed on the Desktop page and, once changed, the mobile website will inherit the setting. But I don’t like having to change the sort order from the Desktop page every two weeks.

Now, I just saw that facebook stores the sort order in the URL. You can use the following hard links to reach the news feed in the desired ordering:

  1. https://www.facebook.com/?sk=h_chr to read the news feed in chronological order “most recent first”, and
  2. https://www.facebook.com/?sk=h_nor to read the news feed in random order (“top news”).

I’ve placed this URL as a bookmark on the home screen of my smartphone and removed facebook’s mobile app. It actually loads much faster than the mobile app and I now read the feed in the preferred order. Let’s see how long these links work.

Is my log big enough for process mining? Some thoughts on generalization

I recently received an email from a colleague who is active in specification mining (mining software specifications from execution traces and code artifacts) with the following question.

Do you know of process mining works that deal with the confidence one may have in the mined specifications given the set of traces, i.e., how do we know we have seen “enough” traces? Can we quantify our confidence that the model we built from the traces we have seen is a good one?

The property the colleague asked about is called generalization in process mining. As my reply to this question summarized some insights that I gained recently, I thought it was time to share this information further.

A system S can produce a language L. In reality, we only see a subset of the behaviors, which is recorded in a log K. Ideally, we want a model M that was discovered from K to be able to reproduce L. We say that M generalizes K well if M also accepts traces from L that are not in K (the more, the better).

This measure is in conflict with three other measures (fitness, precision, and simplicity), as the most general model that accepts all traces is not very precise (M should not accept traces that are not in L). These measures are described informally in The Need for a Process Mining Evaluation Framework in Research and Practice (doi: http://dx.doi.org/10.1007/978-3-540-78238-4_10), and the paper On the Role of Fitness, Precision, Generalization and Simplicity in Process Discovery (doi: http://dx.doi.org/10.1007/978-3-642-33606-5_19) shows how they influence each other in practice.

There is currently no generic mathematical definition to compute, for a given log K, whether it is general enough (contains enough information to infer the entire L from K). This usually depends on the algorithm, the kind of original system S / the language L, and the kind of model one would like to discover.

The most general result that I am aware of is that the log K has to be directly follows-complete. Action B of the system S directly follows action A if there is some execution trace …AB… of S where first A occurs and then directly B. The log K is directly follows-complete iff for any two actions A, B where B directly follows A in some trace of L, there is also a trace …AB… in K. Every system S with a finite number of actions has a finite log K that is directly follows-complete, even if the language L of S is infinite. For such logs, there are algorithms that guarantee that S (or a system that is trace-equivalent to S) can be rediscovered. See for instance Discovering Block-Structured Process Models from Event Logs – A Constructive Approach (doi: http://dx.doi.org/10.1007/978-3-642-38697-8_17).
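To make the definition concrete, here is a minimal sketch in plain Java (not using any ProM API; traces are simply lists of action names, and L is assumed to be given as a finite sample or finite characterization) that extracts the directly-follows pairs of a set of traces and checks whether a log K is directly follows-complete with respect to L:

import java.util.*;

class DirectlyFollows {

    // collect all pairs (A, B) such that B directly follows A in some trace
    static Set<List<String>> directlyFollowsPairs(Collection<List<String>> traces) {
        Set<List<String>> pairs = new HashSet<>();
        for (List<String> trace : traces) {
            for (int i = 0; i + 1 < trace.size(); i++) {
                pairs.add(Arrays.asList(trace.get(i), trace.get(i + 1)));
            }
        }
        return pairs;
    }

    // K is directly follows-complete w.r.t. L iff every pair of L is also witnessed in K
    static boolean isDirectlyFollowsComplete(Collection<List<String>> logK,
                                             Collection<List<String>> languageL) {
        return directlyFollowsPairs(logK).containsAll(directlyFollowsPairs(languageL));
    }

    public static void main(String[] args) {
        List<List<String>> languageL = Arrays.asList(
                Arrays.asList("A", "B", "C"),
                Arrays.asList("A", "C", "B"));
        List<List<String>> logK = Arrays.asList(
                Arrays.asList("A", "B", "C"));
        // prints false: the pairs (A,C) and (C,B) of L are not witnessed in K
        System.out.println(isDirectlyFollowsComplete(logK, languageL));
    }
}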

In general, if you have the original system S (or some finite characterization of L), then it is possible to compute whether the log K is directly follows-complete. If you do not have S or L, then we currently do not know any means to estimate how complete K is. This is an open question that we are currently looking into. In essence, you have to estimate the probability that the information in log K suffices to explain particular language constructs that the original system has or may have.

We are currently looking deeper into these questions. If you have more pointers on this topic, feel free to drop a comment or an email.

Mining Branching-Time Scenarios From Execution Traces

Over the last two years, I have been working with Shahar Maoz and David Lo on discovering high-level specifications of an application from its execution traces. This topic is also known as specification mining and came up around 2002, because we keep on writing humongous amounts of undocumented code that other people have to use, maintain, or deal with in a later iteration of the software. Getting into this “legacy” code is extremely time consuming and there is a high chance that you will break something because you do not understand how the existing code works.

Specification mining aims at extracting (mostly) visual representations of existing code that describe its essential architecture and workings at a higher level of abstraction, so that a developer can first get an overview before diving into the code.

We looked particularly at the problem of understanding object interplay in object-oriented code, where you often have many objects that invoke methods of other objects, essentially distributing a behavior over many classes. Everyone who ever tried to understand code that uses a number of architectural patterns combined, such as factories and commands, knows what I mean.

Building on earlier works, we found a way to discover scenarios (something like UML sequence diagrams) that show how multiple objects interact with each other in particular situations. More importantly, we found a way to discover and distinguish two kinds of scenarios:

  • scenarios that describe behavioral invariants of your application (whenever the user presses this button, the application will open that window), and
  • scenarios that describe behavioral alternatives (when the user is on this screen, she may continue with this, that, or even that command).

These two kinds of scenarios combined give a good understanding of how an application works at a high level of abstraction. Here is the presentation showing how it works:

You can also get the full paper, our tool for discovering scenarios, and the data set that we used in our evaluation.

Drop me an email or a comment if you find this useful and/or would like to know more.

 

some historical statistics on modeling formalisms

Some historical statistics about the use of formalisms for modeling automated systems since 1800. Here is a chart showing how frequently the name of a particular modeling formalism was used in the literature. I’ve picked Automata, Petri nets, Process algebra, Statecharts, and UML.

You can get the full chart (and add your own favorite formalism) using Google Ngram. What I find surprising is that Petri nets were at some point as relevant in the literature as automata (which have been discussed since the 1800s already). I’m not surprised that UML peaks above all of them by far. On second thought, UML’s decline is not surprising either, as the hype returns to normal levels. What I do find surprising is that process algebras are much less referenced in the literature than even the very particular, though successful, technique of statecharts.

 

proper interface descriptions for your service

A service is, in computer science terms, a functionality (of a piece of software or of a device) that hides its implementation details from the user. To be able to use the service, the service has to declare at its interface what it does. In the old days, you would get a manual; these days you get descriptions in some fancy service description language.

Besides discussions of how to write down what a service does, I feel there should also be some thought about what exactly should be described about a service. It’s quite easy to miss something important, as the following video from our new building shows.

Using a Coffee Machine Service

15 minutes for everyone

Aside

If every person living on earth today wanted his/her 15 minutes of unrivaled fame, it would take ~193,778 years from now. We should get started.

Every day, 96 people can have their 15 minutes of fame. That means that, for an estimated 6.79 billion people living on earth today, we need about 70,729,167 days to get everyone famous. That’s just about 193,778 years.
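For the curious, a minimal sketch of the arithmetic (using 365-day years, as above):

class FameArithmetic {
    public static void main(String[] args) {
        long slotsPerDay = (24 * 60) / 15;   // 96 slots of 15 minutes per day
        double people = 6.79e9;              // estimated world population
        double days = people / slotsPerDay;  // ~70,729,167 days
        double years = days / 365;           // ~193,778 years
        System.out.printf("%.0f days, i.e. about %.0f years%n", days, years);
    }
}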