Business Process Technology

Previous Master’s Theses

2022

Execution Semantics of Process Models with Data
Maximilian König

Data is one of today’s most important currencies. Thus, maintaining an overview of its creation, usage, and manipulation within an organization is of utmost importance. While this fact has been recognized in the Business Process Management (BPM) community in general, leading to the emergence of disciplines such as process mining, the subfield of process modeling long paid little attention to it. BPM is a mature methodology whose objective it is to systematically model, analyze, optimize, and enact business processes. Among the central concepts of BPM are process models, i.e., representations of the different steps the various actors in an organization undertake to achieve the business goals. Extensive research has been conducted on the logical and temporal order of these steps, also called the control flow, specifically on how to visualize and analyze them. In doing so, the impact of data on control flow, e.g., that certain tasks require specific data to be executed, has been largely neglected. Even where process modeling languages have been extended to incorporate data concepts, formal semantics of these concepts that enable automated analysis and enactment are often absent or underspecified. This thesis approaches this gap by introducing a concise semantics for data concepts in BPMN, one of the most widely used process modeling languages in academia and industry. As a first step, a missing notation for some of these data concepts is introduced. The assignment of formal semantics then happens through a mapping of BPMN concepts to Petri nets, a well-understood language with exact execution semantics. The resulting models’ behavior is compared to the BPMN specification to ensure a correct mapping. Based on the defined formalism, analysis methods are presented to verify the correctness of the data flow of BPMN models. These include the soundness criterion, the detection of anti-patterns that indicate erroneous and undesired behavior, and data output analysis, which supports compliance and consistency checks.
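The core idea of mapping data concepts onto Petri nets can be illustrated with a toy sketch: when data objects become ordinary places, a task's data requirements reduce to token consumption, so enabledness checks cover data flow for free. The net, place names, and the claim example below are invented for illustration and are not the thesis' actual mapping.

```python
# Minimal Petri net with data objects modeled as places (illustrative only).
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (consume, produce)

    def add_transition(self, name, consume, produce):
        self.transitions[name] = (consume, produce)

    def enabled(self, name):
        consume, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in consume.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        consume, produce = self.transitions[name]
        for p, n in consume.items():
            self.marking[p] -= n
        for p, n in produce.items():
            self.marking[p] = self.marking.get(p, 0) + n


# Task "assess claim" needs control flow AND the data object claim[received].
net = PetriNet({"start": 1, "claim[received]": 0})
net.add_transition("receive claim", {"start": 1},
                   {"p1": 1, "claim[received]": 1})
net.add_transition("assess claim", {"p1": 1, "claim[received]": 1},
                   {"end": 1, "claim[assessed]": 1})

assert not net.enabled("assess claim")   # data object not yet written
net.fire("receive claim")
assert net.enabled("assess claim")
net.fire("assess claim")
```

A missing data token blocks the task exactly like missing control flow, which is what makes soundness and anti-pattern analysis applicable to data flow.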

A Framework for Business Process Sustainability Analysis
Finn Klessascheck

Business Process Management is a key discipline in enterprises to model, analyse, measure and improve business processes. Due to accelerations in the degradation of nature and ecosystems caused by human activity and the resulting threats to biodiversity and human health, considering the impacts of business processes on sustainability plays an increasingly important role. However, existing approaches are either too broad in scope, or take too few dimensions of impact into account to completely assess the environmental impact. Therefore, this thesis contributes a framework to analyse business processes and their environmental impact in a holistic manner. To this end, activity-based costing mechanisms, which assess costs of individual activities and thus, the cost of the overall process, are combined with life cycle assessment techniques. Through this, the concrete environmental impact of activities and processes can be determined. This is facilitated through the use of business process simulation, so that the environmental impact is determined based on simulated process behaviour, instead of a static consideration of the activities and the process. Further, this simulation allows for hypothesizing about the impact of changes to the process and its implementation. Thus, this framework can be applied by process analysts to determine the environmental impact of business processes and to reason about sustainability-oriented process re-design. The practical applicability of this framework is demonstrated through application on three exemplary processes and their re-design with a prototypical implementation.
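The combination of activity-based costing with life cycle assessment over simulated behaviour can be sketched as follows; the activity names, the per-activity impact factors (kg CO2e per execution), and the traces are fabricated for illustration, not taken from the thesis.

```python
# Aggregate per-activity environmental impact factors over simulated traces,
# analogous to activity-based costing (illustrative numbers only).
from collections import Counter

impact_factors = {"print form": 0.05, "ship parcel": 1.2, "archive": 0.01}

simulated_traces = [
    ["print form", "ship parcel", "archive"],
    ["print form", "print form", "ship parcel", "archive"],
]

def trace_impact(trace):
    """Environmental impact of one simulated case (sum over its activities)."""
    return sum(impact_factors[a] for a in trace)

def process_impact(traces):
    """Average impact per case, plus impact contribution per activity."""
    per_activity = Counter()
    for t in traces:
        for a in t:
            per_activity[a] += impact_factors[a]
    avg = sum(trace_impact(t) for t in traces) / len(traces)
    return avg, dict(per_activity)

avg_case_impact, breakdown = process_impact(simulated_traces)
```

Because the figures come from simulated traces rather than a static model, a hypothesized re-design (e.g., removing a rework loop) can be assessed by simply re-simulating and comparing the averages.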

Supporting Agile Processes with Artifact-centric Business Process Models
Demi Brabender
Towards Leveraging Textual Data for Explainable Predictive Business Process Monitoring
Christian Warmuth
A Structured Investigation of Flexibility in Imperative and Hybrid Process Models
Carolin Goerke

Business Process Model and Notation (BPMN) is widely used and can be considered the de-facto standard for process modeling. However, its control flow orientation and imperative representation are best suited for highly structured processes. In contrast, fragment-based Case Management (fCM) is arguably better suited for representing more flexible processes, where knowledge workers have the flexibility to decide how to proceed with the process at any time. fCM is a hybrid modeling language in which declarative, data-driven control flow elements extend imperative control flow primitives. Although fCM is most promising in flexible, knowledge-intensive domains, there has been no systematic research on the comprehensibility of the notation in the context of flexible business processes. This thesis seeks to fill this research gap in two aspects: First, a method for the systematic translation of fCM models into BPMN models is developed. The method elucidates the structural differences between the two notations and provides an opportunity to understand the semantics of fCM models in the context of BPMN. Second, the comprehensibility of the two notations is investigated in an empirical study. For this purpose, subjects were presented with either an fCM model or a comparable BPMN model of a flexible, knowledge-intensive process. Subsequent comprehension questions about the process show that subjects develop a better understanding of the process when it is presented to them as a BPMN model instead of an fCM model. This effect is independent of the subjects’ prior knowledge of specific modeling notations. Finally, it is discussed how an automatic translation of fCM models into BPMN models can improve the understanding of the depicted processes.

Supporting the Planning of Actions for Knowledge-intensive Processes
Anjo Seidel

Knowledge-intensive processes are goal-driven, emergent, unpredictable, and non-repeatable. They are executed by knowledge workers, who plan their actions according to upcoming goals. During execution, they need to decide which actions align best with their goals. Many different activities can be enabled, and their effect on the process outcome might not be apparent. Different approaches exist to provide decision support for traditional business processes. However, they are only applicable to knowledge-intensive processes to a limited extent. This thesis proposes a model-based approach from which recommendations can be derived that support knowledge workers in their decisions and, therefore, in their planning. The approach allows goals to be formalized as first-order logic formulae. Furthermore, it provides two techniques to analyze the state space of knowledge-intensive process models. They search for possible future execution sequences that satisfy the specified goals and calculate which of the next actions are well suited. Both techniques were evaluated through a proof-of-concept implementation and a user study that measured the performance of knowledge workers given different recommendations or none.
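The idea of recommending next actions via state-space search can be sketched in a few lines: an action is recommended if some execution sequence starting with it reaches a state satisfying the goal. The tiny transition system and goal below are invented; the thesis' techniques work on richer process models and first-order goals.

```python
# Goal-driven recommendation by breadth-first state-space search (toy model).
from collections import deque

# state -> {action: successor state}
transitions = {
    "draft":    {"review": "reviewed", "discard": "discarded"},
    "reviewed": {"approve": "approved", "rework": "draft"},
}

def reachable_goal(state, goal, max_depth=10):
    """BFS for a state satisfying the goal predicate, up to max_depth steps."""
    queue, seen = deque([(state, 0)]), {state}
    while queue:
        s, d = queue.popleft()
        if goal(s):
            return True
        if d < max_depth:
            for nxt in transitions.get(s, {}).values():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return False

def recommend(state, goal):
    """Next actions from which the goal is still reachable."""
    return [a for a, nxt in transitions.get(state, {}).items()
            if reachable_goal(nxt, goal)]

recs = recommend("draft", lambda s: s == "approved")
```

Here "discard" is filtered out because no future sequence from the discarded state can satisfy the goal, while "review" keeps the goal reachable.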

2021

Attribute-driven Case Notion Discovery for Unlabeled Event Logs
Tom Lichtenstein

Event logs extracted from information systems involved in the execution of business processes contain valuable information about the actual behavior of the processes. In recent decades, process mining techniques have emerged that allow business analysts to derive knowledge from event data contained in event logs (e.g., discover process models). Typically, these techniques require the presence of a case identifier that associates events with process instances. However, if the process involves information systems that do not record events in a process-oriented manner, a clear case identifier may be missing, resulting in an unlabeled event log. While some approaches already address the challenge of deriving case identifiers for unlabeled event logs, most of them provide limited support for cyclic behavior without additional inputs. This thesis proposes a three-step approach to correlate events with case identifiers for unlabeled event logs originating from processes with cyclic behavior, considering the unlabeled event log as the only input. The approach aims to identify sub-process instances associated with real-world entities involved in process executions by exploiting optional attributes contained in the unlabeled event log. Moreover, the sub-process instances are merged into process instances according to relationships between the entities. We evaluate the accuracy of our approach with two real-world event logs (MIMIC-IV and Road Traffic Fine Management) and show that, compared to existing approaches, our approach detects cyclic behavior and correlates events closer to the original process instances without requiring additional process-related inputs.
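The attribute-driven grouping and merging steps can be illustrated with a simplified sketch: events sharing a value for the same optional attribute form one sub-process instance, and instances that share an event are merged. The attribute names ("patient", "sample") and events are fabricated; the thesis' actual algorithm is more involved.

```python
# Simplified attribute-driven event correlation for an unlabeled log.
from collections import defaultdict

events = [
    {"activity": "admit",    "patient": "P1"},
    {"activity": "lab test", "sample": "S7", "patient": "P1"},
    {"activity": "admit",    "patient": "P2"},
    {"activity": "evaluate", "sample": "S7"},
]

def correlate(events, attribute):
    """Group event indices into sub-process instances by one attribute."""
    instances = defaultdict(list)
    for i, e in enumerate(events):
        if attribute in e:
            instances[e[attribute]].append(i)
    return dict(instances)

def merge(a, b):
    """Merge instances of b into a where they share at least one event,
    approximating the entity-relationship-based merging step."""
    merged = {k: set(v) for k, v in a.items()}
    for key, idxs in b.items():
        target = next((k for k, v in merged.items() if set(idxs) & v), key)
        merged.setdefault(target, set()).update(idxs)
    return merged

by_patient = correlate(events, "patient")   # {'P1': [0, 1], 'P2': [2]}
by_sample  = correlate(events, "sample")    # {'S7': [1, 3]}
cases = merge(by_patient, by_sample)        # S7's events join P1's case
```

Event 3 carries no patient attribute at all, yet ends up in P1's case because it shares sample S7 with event 1, which is the intuition behind merging via entity relationships.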

User Requirements for Process Models in Hospitals
Tobias Wuttke

Despite their potential to evoke process improvement discussions, process models in healthcare often lack readability and understandability for domain experts. To address this issue, this thesis aims to determine which requirements process models need to fulfill to meet healthcare practitioners’ information needs, particularly those of nurses and physicians. The phases of the Requirements Development process are followed to derive the requirements. For Requirements Elicitation, data is gathered in semi-structured interviews with stakeholders. During Requirements Analysis, the collected data is used to derive goals and information needs. The subsequent phase of Requirements Specification lists the intermediate requirements in the form of user stories. Then, for Requirements Validation, a questionnaire is sent to representatives of the stakeholder groups to validate the requirements. The result is a list of ten validated user requirements each for nurses and physicians. Finally, an example process model from the literature is analyzed regarding its fulfillment of the requirements to illustrate their impact. Based on the results of Requirements Development, physicians need process models to show as much medical information as possible. They also require details in the process models that help them plan the patients’ rehabilitation, prevent or address complications, and improve the well-being of their patients. Nurses, on the other hand, do not have one clear focus. They require process models to show information that helps them protect themselves, prevent and address complications, improve their patients’ well-being, ensure process compliance, make informed care decisions, and plan their patients’ journey. Process models in the literature already satisfy some of the requirements. However, they do not address all the information needs of nurses and physicians. Therefore, the thesis’ outcome can guide BPM researchers to create process models that provide domain experts with the information they require to evaluate healthcare processes. Process models that are helpful to nurses and physicians can ultimately contribute to improved healthcare processes and, therefore, to better health outcomes for patients.

Enhancing Robotic Process Automation with Decision Management
Simon Siegert

Robotic Process Automation (RPA) is a novel approach for automating digital workflows that employs software robots, also called RPA agents, to mimic human behavior, such as mouse and keyboard input. RPA agents are typically configured using graphical models, which users can create without separate IT training. While RPA enables the automation of everyday tasks on computers, there are only rudimentary ways to represent decisions in RPA models. Common approaches to representing decisions result in confusing models that are difficult to understand and manage. To improve this situation, an integration of decision management concepts from the field of business process management into the context of RPA is proposed. For this purpose, based on the RPA lifecycle, which includes all phases for the implementation of an RPA project, it is investigated which adaptations are necessary so that DMN (Decision Model and Notation), the notation standard for decision rules, can be used in RPA projects. For evaluation purposes, implementations of different approaches for representing decisions in RPA agents are created and compared with each other, and the advantages and disadvantages of each approach are discussed.

2020

Case History and Case Future - How to visualise executed and upcoming Case Management Activities?
Peter Schwarz

Business Process Management is a multidisciplinary part of operations management, of which executing and monitoring business processes are part. Knowledge-driven business processes place special requirements on these disciplines. On the one hand, execution needs to be very flexible to profit most from knowledge workers’ input. On the other hand, monitoring needs to put all relevant information in a nutshell so that the business process can easily be understood by a knowledge worker in order to make good decisions. Case Management is a concept which covers these aspects. A concrete implementation of the Case Management concept can be found in Fragment-based Case Management. Using so-called process fragments, business process units are modelled that can be combined dynamically at runtime. However, Fragment-based Case Management does not provide a suitable monitoring approach. A suitable monitoring approach would have to answer questions such as ’How often has action X been executed?’ or ’How many actions does it take at least to finish the case?’. Thus, Fragment-based Case Management has to be extended, which lays the foundation for this thesis. So far, only actions which are currently active are monitored. However, it is necessary to monitor actions of the past and future in order to answer the aforementioned questions. Therefore, this thesis introduces two concepts: unfolded case models and temporal phases. It will be shown that actions can be partitioned into three points in time: past, present, and future. To do so, a concept is needed that shows all possible execution paths of a case, which is why a one-to-one mapping from the unfolding concept of Petri nets to Fragment-based Case Management will be presented. By trading off several visualisation concepts against each other, a suitable visualisation is ensured. From a practical point of view, it is necessary to underline the presented theory with an implementation. Thus, an existing Fragment-based Case Management framework is used to implement the newly developed concepts.

A Framework for Context-Aware Resource Behaviour Analysis
Maximilian Völker
Deriving Decisive Case Characteristics in Process Performance Analysis
Jonas Beyer

Business processes are the foundation of a company’s day-to-day work. Nowadays, they are often implemented with the help of IT systems that keep track of data records and execution events. With the help of the data collected by those IT systems, process analysts can conduct retrospective analyses of the process execution to find leverage points for evolving and optimizing the company’s operations. In this master’s thesis, a framework for conducting such analyses is introduced, aimed at finding explanations for the behavior or performance of process instances by determining correlations between their characteristics and their corresponding outcome. First, the procedure of deriving such characteristics from event log data is investigated, building on analyses that have been performed in a wide range of existing research. Thereby, the focus of attention lies on finding a concise set of characteristics, enabling the analysis to return meaningful results quickly. Second, a framework is developed, treating the correlation analysis as a classification problem where the case characteristics are used to classify certain case outcomes. Overall, the goal is to provide a coherent user flow that enables business users to find insights into their processes without requiring a lot of data analysis knowledge. The findings are implemented in a prototype that is tested with two real-world event logs of different sizes, coming from different types of processes. The usefulness of the prototype is evaluated by conducting the analyses together with domain experts, where the correlations found in the data sets are discussed, and a number of challenges of the analysis and shortcomings of the derived characteristics are revealed. Finally, as an outlook on future work, a number of suggestions for improvement are made.
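Treating performance analysis as classification can be boiled down to a one-feature sketch: pick the threshold on a single case characteristic that best separates a binary outcome (a "decision stump"). The characteristic names, the outcome label, and the cases below are invented; the framework naturally uses a real classifier over many characteristics.

```python
# One-feature decision stump relating a case characteristic to an outcome.
cases = [
    {"num_rework_loops": 0, "channel": "web",   "late": False},
    {"num_rework_loops": 2, "channel": "phone", "late": True},
    {"num_rework_loops": 3, "channel": "web",   "late": True},
    {"num_rework_loops": 1, "channel": "web",   "late": False},
]

def best_threshold(cases, feature, label):
    """Pick the numeric threshold on one characteristic that best splits
    the binary outcome, i.e., maximizes correctly classified cases."""
    values = sorted({c[feature] for c in cases})
    best = (0, None)
    for t in values:
        correct = sum((c[feature] >= t) == c[label] for c in cases)
        best = max(best, (correct, t))
    return best  # (number of correctly classified cases, threshold)

correct, threshold = best_threshold(cases, "num_rework_loops", "late")
```

A rule such as "cases with two or more rework loops tend to be late" is exactly the kind of explanation the framework surfaces for discussion with domain experts.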

An Architecture for Multi-Chain Business Process Management Systems
Christian Friedow

Business Process Management Systems (BPMSs) assist the execution of business processes in many organizations. Since blockchain technology became popular, there has been a movement towards implementing business processes and choreographies, especially those involving mutually untrusting participants, using automated smart contracts. Research into implementing BPMSs based on blockchain technology has focused on a single, individual blockchain technology. This poses a problem, as there is a variety of different blockchains focused on different use cases. Some blockchains focus on privacy, others on security, currency, or throughput. With new approaches emerging quickly, different business processes will require different underlying blockchain technologies. Therefore, a BPMS which aims to be usable independently of specific cases and domains will have to provide the flexibility to select the blockchain technology dynamically according to the needs of the business process. In this thesis, we propose an architecture for BPMSs that abstracts from the specific underlying blockchain technology. To this end, we break prior BPMS architectures down to their key components, supplement them, and reassemble them to suit a multi-chain environment. This architecture is evaluated using software quality metrics and a proof-of-concept implementation. Furthermore, challenges and extensions regarding the implementation of this architecture are discussed.

2019

Specifying and Executing Active Choreographies on the Blockchain
Jan Ladleif

In today’s companies, recurring processes are increasingly modeled in a formal way. These formal models can be used to automate, monitor, and, in the long run, optimize processes. Current modeling and execution methods are primarily tailored to intra-organizational processes and often neglect communication between different companies. This is mainly because message exchange between external parties is hard to monitor and control without introducing a central coordinating authority. As a consequence, choreographies, as this kind of process is also called, can often only be modeled in a highly simplified form. With recent developments in the area of distributed ledger technology (DLT), for example Bitcoin, this situation is changing. A distributed ledger (DL), in combination with smart contracts, can be used to govern the communication between the parties and to enforce concrete procedures and patterns. In addition, data can be stored and processed in a decentralized manner to further enrich the choreography, without involving a third party or granting special privileges to individual participants. In this master’s thesis, we investigate the impact of these technical innovations on the modeling and execution of choreographies. To this end, we extend Business Process Model and Notation (BPMN) 2.0 choreography diagrams with new modeling concepts that exploit the possibilities of the new technologies. We also provide an approach for the formal description of the new models, with clearly defined syntax and semantics. We then present a prototypical implementation that can execute the models on top of the Ethereum blockchain. Finally, we evaluate our extensions and the prototype by analyzing two exemplary use cases.

Next-Activity Prediction with Long Short-Term Memory Recurrent Networks
Felix Wolff

Manual labor processes have seen a lot of automation in the past, but knowledge-intensive processes still proceed in a highly manual fashion. Systems to assist knowledge workers have been called for in recent literature reviews. Such assistance systems need to anticipate the development of a case to offer assistance in the right circumstances, which requires the capability to foresee the next activity in a process. Predicting the next activity in a running process is a young discipline called Predictive Process Monitoring, to which we contribute by evaluating neural network prediction models for this task. In this work, we connect next-activity prediction to sequence prediction. Thanks to this connection, we can adopt a prediction model from a natural language processing competition in which the next word in a sentence was to be predicted. Additionally, we augment this prediction model with a different feature set to produce a second approach. To have a direct comparison, we reimplement two published prediction models for business processes. Because several strategies are common for creating batches from sequential data of variable length, we include a comparison of these strategies in our evaluation. To make model training easier and facilitate the inclusion of the strategies into the training process, we also propose a model training framework. The framework allowed easy training of the four models with four different batch creation strategies on eight datasets. In the evaluation, we note a connection between process complexity and prediction accuracy. Furthermore, we find that one batching strategy delivers the most promising results and should be explored further. Finally, we compare our accuracies with numbers from three recent next-activity prediction approaches on two different real-life process event logs. All four of our models outperform these numbers.
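An LSTM is beyond the scope of a short sketch, but the task itself can be shown with the simplest possible baseline for next-activity prediction: predict the most frequent successor of the current activity, learned from historical traces. The traces below are invented; the thesis' neural models condition on the whole prefix rather than just the last activity.

```python
# First-order frequency baseline for next-activity prediction.
from collections import Counter, defaultdict

traces = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "archive"],
]

def train(traces):
    """Count direct successors of each activity across all traces."""
    counts = defaultdict(Counter)
    for t in traces:
        for cur, nxt in zip(t, t[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, activity):
    """Most frequent successor of the given activity, or None if unseen."""
    successors = counts.get(activity)
    return successors.most_common(1)[0][0] if successors else None

model = train(traces)
assert predict_next(model, "check") == "approve"   # 2 of 3 historical cases
```

Any serious sequence model must at least beat a baseline of this kind, which is why such comparisons appear throughout the predictive process monitoring literature.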

Checking Compliance for Fragment-based Case Management
Adrian Holfter

2018

Decision-Aware Compliance Checking
Stephan Haarmann
Evaluating Flexible Process Modeling Approaches
Michael Gottfried Hanser

Business Process Management is concerned with investigating how work in organizations is performed. It focuses on the design, management, support, improvement and execution of business processes. A business process describes a set of activities, which are executed in a particular order to reach a business objective. To provide support through information technology, processes are captured and represented in the form of process models. Knowledge- and decision-intensive processes, which are characterized by high variability and frequent changes, pose new challenges to effective process representation and support. Declarative and data-centric modeling approaches aim to address these new requirements by offering enhanced functionality and flexibility. In this thesis, three of these modeling techniques are compared and evaluated: Fragment-based Case Management, Declare and Dynamic Condition Response Graphs. The evaluation focuses on data support, representing structured and unstructured behavior, and usability. To this end, a feature-based evaluation framework is derived from existing literature. For evaluating usability, outcomes of a controlled user experiment are presented. The results show that no technique yet supports all identified features. Although users perceive the techniques as useful, their complexity raises doubts about their ease of use. Based on the results, suggestions for improving the approaches, as well as enhancing empirical method evaluations are derived and presented.

Generation of an Internet of Things Prototype with Business Process and Class Diagram Modeling
Jaspar Philipp Mang
Flexible Event Subscription in Business Processes
Dennis Wolf

2017

An Extensible BPMN Process Simulator
Tsun Yin Wong

Business process management has been introduced by organizations to make their daily workflows efficient and flexible with respect to their execution environment. Changes in business processes require accurate analysis of possible impacts. Business process simulation is an analysis technique which provides insights into costs, resource utilization, and time utilization during the execution of different business process alternatives. In academic research, the evaluation of new concepts is fundamental. Recent work in business process management addresses the role of data and the interaction of business processes with the environment, with the Business Process Model and Notation (BPMN) as the state-of-the-art language for the graphical representation of business processes. Researchers apply simulation to estimate the value of these concepts in real-world execution. However, existing BPMN process simulation software solutions do not provide well-defined interfaces for the integration of new concepts in a simulation environment. Furthermore, they do not support simulation at the organizational level, where several processes compete for the same resources. To tackle these issues, we present an open and extensible business process simulator. It is designed to support the simulation of multiple BPMN processes at a time and relies on the building blocks of discrete event simulation. The simulator is based on a plug-in architecture and implemented in the Java programming language. We evaluate its extensibility through the development of a plug-in for batch regions, which have recently been introduced into business process management to enable the simultaneous (i.e., batch) execution of multiple processes.
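The discrete-event-simulation building blocks the simulator relies on amount to a clock plus a priority queue of scheduled events; a minimal sketch follows (in Python rather than the simulator's Java, with an invented two-task process and durations).

```python
# Minimal discrete-event-simulation core: clock + event priority queue.
import heapq

class Simulation:
    def __init__(self):
        self.clock = 0.0
        self.queue = []   # heap of (time, seq, callback)
        self.seq = 0      # tie-breaker so callbacks are never compared
        self.log = []

    def schedule(self, delay, callback):
        heapq.heappush(self.queue, (self.clock + delay, self.seq, callback))
        self.seq += 1

    def run(self):
        # Repeatedly advance the clock to the earliest scheduled event.
        while self.queue:
            self.clock, _, callback = heapq.heappop(self.queue)
            callback(self)

def task_a(sim):
    sim.log.append(("A done", sim.clock))
    sim.schedule(2.0, task_b)   # task B starts once A finishes

def task_b(sim):
    sim.log.append(("B done", sim.clock))

sim = Simulation()
sim.schedule(1.5, task_a)
sim.run()
# sim.log == [('A done', 1.5), ('B done', 3.5)]
```

A plug-in architecture like the simulator's essentially lets extensions (e.g., batch regions) hook into this scheduling loop without modifying the core.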

Modeling, Discovering, and Monitoring Context of Business Processes using Micro Blogs
Susanne Bülow
Event-based Monitoring of Time Constraint Violations
Marius Eichenberg

The number of information sources providing real-time data has increased significantly in recent years. Complex Event Processing (CEP) offers methods to process and monitor these data. This monitoring is usually context-related, and time is crucial. Until now, temporal constraints have been monitored within specific domains, and individually engineered solutions can hardly be reused or abstracted into a cross-domain approach. This thesis closes that research gap and presents a domain-independent formal framework by introducing the time pattern of the common deadline. This deadline connects adjacent activities to monitor and predict the behaviour of the following activity in a sequential process flow. The formal framework is demonstrated in practice by a prototype that is used within a case study based on real-world data. Subsequently, the framework is extended conceptually to cover more complex process flows and temporal relations between non-adjacent activities.
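The deadline pattern between adjacent activities can be illustrated with a small sketch: the activity following a given one must occur within a fixed time window, otherwise a violation is reported. Activity names, timestamps (in minutes), and the threshold are invented; the thesis' framework is formal and more general.

```python
# Detect deadline violations between two adjacent activities in a stream.
def deadline_violations(events, source, target, max_delay):
    """Return (start, end) timestamp pairs where target followed source
    later than max_delay allows."""
    violations = []
    pending = None   # timestamp of the last unmatched source activity
    for activity, ts in events:
        if activity == source:
            pending = ts
        elif activity == target and pending is not None:
            if ts - pending > max_delay:
                violations.append((pending, ts))
            pending = None
    return violations

stream = [("triage", 0), ("treat", 45), ("triage", 60), ("treat", 70)]
late = deadline_violations(stream, "triage", "treat", max_delay=30)
assert late == [(0, 45)]   # first patient waited 45 min, limit was 30
```

In a CEP engine the same pattern is typically expressed as a query with a time window, so that violations are detected as events arrive rather than in retrospect.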

A methodology to structurally model and support the execution of managerial business decisions
Lucie Omar

Decisions are part of Business Process Management (BPM); yet, representing process logic and the actual decisions in the same model can often be cumbersome. The Decision Model and Notation (DMN) standard is an approach to separate these two concerns by providing an explicit decision model capability. In addition, approaches exist to easily model operational decisions, which are commonly well-structured; however, the less frequently executed managerial decisions are semi-structured, especially if executed in groups of decision makers. These managerial decisions have significant business value and are commonly executed by humans rather than automated. Because these decisions lack structure, rules of thumb and gut feeling are applied during decision execution. To counteract this lack of structure, the thesis presents a methodology to structurally model and support the execution of managerial business decisions. For supporting structured decision execution, three high-level dashboard approaches are proposed. Those approaches incorporate the consistently described and modeled decision process. Overall, the methodology is applied to two different real-world use cases from the travel industry. The application to the first use case enriches the methodology with real-world insights. It further serves as a basis for proposing the high-level support approaches. The second use case serves as the foundation to evaluate the methodology as well as the dashboard approaches.

2016

Fragmentbasiertes Case-Management: Spezifikation und translationale Semantik
Tim Sporleder

Case management is a data-driven paradigm for supporting specialized knowledge workers in handling processes with many variants. During the execution of a process, the knowledge worker can either choose from predefined partial solutions, in the case of production case management, or develop solutions themselves, in the case of adaptive case management, in order to meet the specific requirements of the case. Fragment-based case management, FCM for short, is a concrete realization of production case management and thus an approach for supporting semi-structured processes. Partial models, called fragments, are modeled and can be flexibly combined at runtime by the user, depending on the case data. Although a practical realization in the form of an execution engine already exists, no complete formal definition of the syntax and semantics has been available so far. The goal of this thesis is therefore to formalize fragment-based case management. To this end, FCM is first positioned with respect to the main currents of case management and distinguished from other approaches in order to highlight its characteristics. Subsequently, drawing on the fundamentals of process modeling as well as on existing work, an abstract syntax for FCM is developed, and the problems that arise as well as possible variants are discussed. Based on the abstract syntax and an informal description of the execution semantics, a translational semantics is then given. For this purpose, again starting from existing approaches, rules are developed for mapping FCM process models onto Petri nets. This thesis thus provides a theoretical foundation for the future development of the approach. The abstract syntax and translational semantics enable a formal treatment of fragment-based case management. Representing FCM models as Petri nets furthermore provides access to existing techniques for verification and analysis.

Automatische Generierung von Ereignisabfragen
Kerstin Günther

In the domain of complex event processing, there are many different event query languages for defining event queries. This master’s thesis presents an approach to define event queries in a graphical model that can be automatically translated into any event query language. After analyzing existing modeling languages, a new modeling language for event queries was developed: Event Query Model and Notation (EQMN). This modeling language is specified in detail with its abstract and concrete syntax. The approach was implemented in a prototype based on Camunda’s modeling tool bpmn.io. The prototype was evaluated for the languages Esper Event Query Language and Drools Rule Language.

Reevaluation of Decisions in Business Processes based on Events
Heiko Beck

Business process and decision automation is important for companies to perform their work in an efficient manner. While models can be used as blueprints for the execution of processes and decisions, the efficiency and success at runtime depend not only on the models but also on the data used to make decisions. The ability of Complex Event Processing (CEP) to process large streams of data in near real time seems well-suited to increase the quality of decisions by using more up-to-date input data. The differences between using data from classic databases and using unscheduled complex events are pointed out in this thesis. A pattern-based approach is introduced which considers the unpredictability of events. It enables the reevaluation of decisions as a reaction to new events. To decide for how long a reevaluation of a decision should be possible, a Point of no Return (PNR) can be defined. Context-sensitive event queries are generated at runtime in order to filter for events that should be used for a reevaluation with respect to the latest decision outcome. The approach is evaluated by an implementation based on the Camunda process engine, and the general idea is transferred to the concept of batch regions, an approach to integrate batch processing into business processes.

2015

Transforming Production Rules into Decision Models
Thomas Zwerg

The world is changing rapidly. Many companies are thus challenged to keep up with the top organizations in their field of business. To achieve this goal, they have to be flexible while complying with laws, regulations, and policies. To bring this flexibility and compliance into their business processes, an additional layer representing decision modeling can be applied. Decision modeling enables the decoupling of the process from the actual decisions. It becomes possible to change pricing rules on demand, every day or even multiple times a day, while the process remains unchanged. On the other hand, a number of companies have used business rules engines for years. Nevertheless, they lack the connection between their processes and the actual implementation of the production rules that technically represent the business rules. Therefore, this thesis presents an approach for transforming already implemented production rules into decision models. The illustrated concept is based on the open source production rule system Drools and uses the Decision Model and Notation standard. Based on a set of rules in productive use, it turned out that only 40% can be transformed directly, while two thirds can be converted using additional refactoring strategies. This is mainly due to the extensive usage of Java program code within rules.

On the Relationship between Decision Modeling and Process Modeling
Oliver Xylander

With the recently approved Decision Model and Notation standard (DMN 1.0), decision modeling has raised a lot of interest in academia and practice. This thesis investigates the relationship between process models and decision models and introduces the notion of consistency to this relationship. For this purpose, the points of contact between process model and decision model are exposed. Based on use cases encountered in practice, the corresponding models expressed in the industry standard notations BPMN and DMN are compared and discussed with regard to devised consistency criteria. A list of requirements is composed, which forms the foundation for the development of a prototypical implementation concept for both modeling decision models and checking the consistency between BPMN and DMN.

2014

Event Correlation for Business Processes on the Basis of Ontologies
Tobias Metzke

Process monitoring relies on the correlation of incoming process execution data to process instances in order to provide an up-to-date view on the current process landscape. Due to missing connections between such event data and process executions in distributed environments like logistics, the correlation can, however, become a complex task. Furthermore, the correlation of useful external data like weather and traffic information, which has no connection to process executions at all, is just as difficult. The approach presented in this thesis uses semantic technologies to automatically identify those process executions that are related to the data of an occurring event. It uses linked data principles and graph-based algorithms to detect relatedness of events and process instances. The approach allows for the inclusion of external data and the correlation of external events without relying on process-specific queries.

Framework for the Practical Evaluation of Modeling Techniques with Data Integration
Thomas Stoff
Design and Evaluation of a Process-Oriented User Interface Cockpit
Robert Böhme
Business Process Architecture Extraction with regard to Inter-Process Dependencies
Robert Breske

Business Process Architectures are a common tool in Business Process Management (BPM) to design the overall process landscape of a company. They provide a holistic view on a company's process model collection, describe the processes' interdependencies with each other, and provide guidelines to organize and structure those processes accordingly. Often the holistic view is achieved by examining a company's processes on different levels of semantic abstraction. The connection between these layers may be lost during ongoing modeling efforts if they run out of sync or are too loosely coupled to consistently fit together. BPA is a conceptual business process architecture approach to address that issue. We argue that an automated extraction of BPA models from a process model collection is a practical solution to further advance this idea. Based on interviews with industry partners, we identified use cases for automated architecture extraction in companies. We developed an automated extraction algorithm for BPA models, including a prototypical implementation. We found evidence that automated BPA extraction can be valuable support for companies' process architecture management efforts, but cannot, so far, cover all architectural levels. Furthermore, the diversity of approaches in process modeling complicates automated extraction with one general approach.

Proactive Decision Support During Business Process Execution
Kimon Batoulis

During the execution of business processes, enterprises regularly need to make decisions such as which activities to execute or what kind of resource to assign to a task. The decision-making process is often case-dependent and carried out under uncertainty and constraints such as service-level agreements. Furthermore, businesses operate in a rapidly changing environment that requires them to react in a timely manner. In order to make informed decisions under these circumstances, domain experts can be consulted. However, they are expensive, may not always make optimal decisions under uncertainty, and are not constantly available to deal with changes. In this thesis, we address these problems by presenting a proactive approach that eases decision making under uncertainty during business process execution. It is based on automatically generating a Bayesian network that statistically models the relationships between past process execution data variables. This allows us to make case-dependent predictions about these variables during process execution. In particular, there may be variables whose values are set by the decisions to be made during process execution, as well as variables measuring key performance indicators such as the process duration. We are thus able to search for specific assignments to the decision variables in order to optimize one or more key performance indicators. To this end, we transform the Bayesian network into an influence diagram that is capable of finding such optimal assignments. The decisions are then translated into business rules which can be used by a process engine during process execution. They are also dynamically adapted depending on the availability of finite resources involved in the decision. We evaluated our approach on two use cases and could show improvements regarding the key performance indicators of the processes.
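The core loop of this kind of decision support, i.e., estimating from past execution data how a KPI depends on case attributes and decision variables and then choosing the assignment that optimizes the KPI, can be sketched in plain Python. All case types, decisions, and durations below are invented for illustration, and the simple conditional-mean estimate stands in for the Bayesian network and influence diagram used in the thesis:

```python
from collections import defaultdict

# Hypothetical past process executions: (case_type, decision, duration_hours).
history = [
    ("standard", "manual_check", 10), ("standard", "auto_check", 4),
    ("standard", "auto_check", 6), ("complex", "manual_check", 12),
    ("complex", "auto_check", 20), ("complex", "manual_check", 14),
]

# Estimate E[duration | case_type, decision] from the execution log,
# playing the role of the learned probabilistic model.
sums, counts = defaultdict(float), defaultdict(int)
for case_type, decision, duration in history:
    sums[(case_type, decision)] += duration
    counts[(case_type, decision)] += 1

def best_decision(case_type):
    """Pick the decision assignment minimizing the expected KPI (duration)."""
    options = {d: sums[(case_type, d)] / counts[(case_type, d)]
               for (c, d) in counts if c == case_type}
    return min(options, key=options.get)

print(best_decision("standard"))  # auto_check (expected 5h vs. 10h)
print(best_decision("complex"))   # manual_check (expected 13h vs. 20h)
```

Note that the case-dependence is exactly what makes the optimal choice flip between the two case types here.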

2013

Kickoff Workshops for Scrum Projects - Co-Creating Story Prototypes to Foster Shared Requirements Understanding
Markus Güntert

Agile software development approaches reject extensive requirements engineering activities upfront of IT implementation projects. Instead, they rely on the concept of iteration in order to obtain early and continuous feedback from the customer, since the latter is generally not able to precisely state the intended requirements. Moreover, requirements are likely to change during the project. Among the agile approaches, Scrum in particular is growing in popularity. In interviews with Scrum practitioners, we have learned that the first iterations in a Scrum project are primarily used for detailed need-finding and scoping of the software to be developed. Program code is produced from the beginning in order to confront the customer with concrete results that enable a fruitful discussion. If requirements turn out to be misunderstood between the developers and the customer, however, program code tends to be abandoned. We name such program artifacts software prototypes. We intend to move the discussions that arise around concrete software prototypes forward to a kickoff workshop in which key stakeholders gather at the beginning of the project. In this work, we therefore propose a kickoff workshop method which enables customer representatives with a non-technical background to participate in the creation of formal models. For this purpose, we introduce a new modeling language building on the notion of user stories, which already serve as integral communication artifacts between developers and the customer. Since user stories describe isolated aspects of the software to be implemented and are mostly organized in a flat hierarchy, we enrich them with six control flow concepts adapted from BPMN in order to analyze dependencies in the allowed execution sequence between them. We name these models story prototypes. Our workshop method is centered on the didactics of modeling story prototypes and includes a holistic kickoff agenda guideline.
As part of the agenda guideline, we describe how to elicit and consolidate an eligible set of user stories. A vote determines potential key user stories, each of which defines the scope of a story prototype. As a subsequent step, the workshop participants are divided into smaller groups so that every group models a different story prototype at the same time. This allows for a parallelization of efforts, addresses participants individually, and fosters structured discussions about different aspects of the desired software system. Afterwards, the groups review each other's models and finally present their results. At the end of such a workshop, for every key feature of the system, a story prototype is available which depicts a coherent view on the corresponding requirements. The results are captured in a precise formalism and sharpen the understanding between the developers and the customer, since both parties actively engaged in the act of modeling. Our research approach follows the action research framework. As a first iteration, we have applied our workshop method with university students in a fictitious scenario, intended to yield a wealth of insights. Furthermore, we have presented the method to the same Scrum practitioners we initially consulted and gathered their feedback.

Automated Testing of Executable BPMN Processes
Jan-Felix Schwarz
Event Monitoring Point Binding and Configuration in Non-Automated Process Execution Environments
Egidijus Gircys

Companies and organizations strive to utilize the benefits of automated business process management, i.e., quality and sustainability assurance, execution prediction, exception tracking, etc. However, there are organizations (e.g., in healthcare) where process execution is still carried out manually, due to the processes' high dynamics and multi-disciplinary nature. Business process monitoring is a key enabler technology for the consumption of these benefits. Still, monitoring is aggravated by missing process execution log information. Herzberg et al. address this by introducing the concept of the event monitoring point. A foundation for business process monitoring is built by specifying event monitoring points for activity state transitions and binding them to an implementation. This thesis presents an approach for configuring event monitoring points by specifying the binding relations between the events in the organization's IT landscape and the event monitoring points. A binding architecture is introduced in which three components represent the event pulling, event binding, and event reception functionality. The event binding is realized via the publish/subscribe paradigm, where the resulting subscription is built as a composition of simple events, i.e., a complex event. A subscription set for the business process is derived through event clustering: a cluster of events is allocated to each activity. In addition, manual supervision and configuration of the subscriptions is supported. A prototype of the developed architecture was implemented based on the SOA paradigm. Healthcare organizations commonly have several organizational units with detached IT systems; therefore, the components were implemented as Web services, which enables a platform-independent and loosely coupled implementation. Furthermore, the quality of the suggested binding configuration was evaluated with business process technology and healthcare experts. The evaluation showed 87% recall and 79% precision for the suggested binding configuration.
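The reported recall and precision figures follow the standard set-based definitions over suggested versus actually correct bindings; a minimal sketch, with made-up binding identifiers rather than the thesis data:

```python
def precision_recall(suggested, correct):
    """Standard set-based precision and recall over two collections."""
    suggested, correct = set(suggested), set(correct)
    true_positives = len(suggested & correct)
    precision = true_positives / len(suggested) if suggested else 0.0
    recall = true_positives / len(correct) if correct else 0.0
    return precision, recall

# Hypothetical event monitoring point bindings (emp -> event type).
suggested = {"emp1->evtA", "emp2->evtB", "emp3->evtC", "emp4->evtD"}
correct = {"emp1->evtA", "emp2->evtB", "emp3->evtX", "emp5->evtE"}

p, r = precision_recall(suggested, correct)
print(p, r)  # 0.5 0.5
```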

BPM Governance in the Modeling Process
Christian Wiggert
A Process Model Search Platform
Christian Walter Reß

More and more organizations use business process modeling to organize their internal operations. The constant growth of business process modeling, and therefore also of organizations' business process model collections, requires innovative solutions. Business process model search is an active field of research addressing this problem by developing methods to find process models in ever-growing process model collections. Unfortunately, this research is rarely published in a form that allows users, researchers, or interested developers to directly interact with these new methods and use them in their work. In this thesis we develop a generic search platform to change this. Our platform offers generic interfaces for integrating modern search techniques and provides users, as well as developers of search techniques, with a visual search and evaluation interface aimed at their respective needs.

Regression-Based Repair of Faulty Workflow Logs
Robert Gurol

When business processes are implemented with the help of technical systems, many kinds of process execution data are collected. There tend to be the usual log files of various applications, or explicitly process-related data may even be stored in a workflow log. Whatever kind of data it is, it can give valuable insights into how a process, defined as an ideal model, is executed in reality, and whether it violates specified criteria such as service level agreements (SLAs). Thus, analysis is desired. In technically difficult data collection scenarios or where there is a media discontinuity, faults are prone to occur in the log data. Measurement values may be recorded multiple times or may be missing, and there may even be variations from the original process, all of which make it hard to associate the recorded data with the process model. Preprocessing of the raw data can help to recognize and remove such faults, enabling effective processing, e.g., during the analysis of the business process. For a given use case where data is collected with hand-held scanners, we investigate how structural properties of a defined process model may raise the difficulty of associating data and model, especially in the presence of data sets that cannot be distinguished from those of another activity. We investigate the feasibility of structure-based approaches, propose metrics to help identify potentially difficult process model definitions, and introduce an extensible algorithm for data repair.

2012

Process Evolution & Evaluation - Analyzing versioned execution data of clinical pathways
Philipp Maschke

Evaluating business processes based on their actual execution data is an important step within the business process life cycle. It allows for a detailed analysis of a process' performance and of all factors that influence this performance. The results of the evaluation are subsequently used to create an optimized version of that process, which will again be executed and evaluated. This process life cycle may create multiple versions of the same business process over time. Current business process evaluation techniques do not consider this fact, which may lead to incomplete or inaccurate evaluation results. This thesis examines the evaluation of business processes having multiple versions using the example of clinical pathways. These are descriptions of medical workflows, which can be represented as BPMN process diagrams. The thesis investigates requirements based on a real clinical pathway of a hospital in Germany and describes possibilities and challenges of multi-version evaluations. One of these challenges is the introduction of possibly unwanted influences caused by differing process versions when performing multi-version evaluations. This thesis introduces a technique to identify and remove such unwanted influences if requested by business analysts.

Approaching the Distributed Simulation of Related Business Processes
Felix Elliger
Data Consistency in Service Orchestrations
Daniel Meyer

2011

Visualization of Process Execution Metrics
Sven Wagner-Boysen

Business processes represent the core operations of an enterprise and illustrate them in a model. During process execution, detailed performance data is collected and can be used as a basis for the calculation of key figures. Available business intelligence solutions do not leverage the process context to evaluate measurements to a sufficient extent. The solution discussed here, in contrast, presents a concept that starts the analysis at Key Performance Indicators (KPIs) and continues to an arbitrarily detailed view for specific questions. The concept's use cases are derived from a provisioning process of an internet service provider. Key performance indicators, as an entry point to the analysis, create a connection to the business objectives. Furthermore, they connect the technical implementation to the business view in terms of the analysis. The concept is implemented in a prototype and tested with simulation data.

A BPMN Simulator for the Analysis of Clinical Pathways
Stefan Krumnow

The simulation of process models is an important task within the business process lifecycle. It allows for the examination and optimization of a process's performance prior to its implementation or adaptation on the basis of models. This thesis examines the simulation of Clinical Pathways, which describe medical workflows in BPMN process diagrams. It investigates the use case and its questions and derives requirements for a simulator for Clinical Pathways from them. The thesis evaluates and categorizes existing process simulation techniques and tools. Afterwards, a software system is designed and evaluated that is able to simulate BPMN models of Clinical Pathways. For this purpose, the models are transformed into a Petri net, which is then executed for a specified timeframe. During the execution, simulated events are logged, which allows for the analysis and optimization of the modeled Clinical Pathways.
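The transform-then-execute idea, playing the token game on the resulting Petri net and logging every fired transition, can be sketched as follows. The three-step net is a hypothetical clinical-pathway fragment, not a model from the thesis:

```python
import random

# A tiny Petri net as {transition: (input_places, output_places)}.
NET = {
    "admit":   (["start"], ["waiting"]),
    "examine": (["waiting"], ["diagnosed"]),
    "treat":   (["diagnosed"], ["done"]),
}

def enabled(marking):
    """A transition is enabled if every input place holds a token."""
    return [t for t, (ins, _) in NET.items()
            if all(marking.get(p, 0) > 0 for p in ins)]

def fire(marking, t):
    """Consume a token from each input place, produce one in each output place."""
    ins, outs = NET[t]
    for p in ins:
        marking[p] -= 1
    for p in outs:
        marking[p] = marking.get(p, 0) + 1

def simulate(marking):
    """Run the token game to completion, logging fired transitions."""
    log = []
    while (ts := enabled(marking)):
        t = random.choice(ts)  # resolve conflicts/choices randomly
        fire(marking, t)
        log.append(t)
    return log

print(simulate({"start": 1}))  # ['admit', 'examine', 'treat']
```

A real simulator would additionally attach timestamps and durations to each fired transition so that the resulting event log can be analyzed against a timeframe.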

Automation of Artifact-Driven and Flexible Processes
Ole Eckermann

Business process automation has become state of the art for many enterprise domains, for example, production planning or customer relationship management. For such domains, various approaches for process automation have been proposed, and standardization efforts like BPMN have become very popular. However, these approaches are not sufficient for the automation of highly creative processes as they can be observed in the field of innovative product design. Such processes require a high degree of flexibility at runtime, because they are influenced by unpredictable external events. Besides, the results of creative activities are manifold and require different succeeding steps. In this thesis, I focus on product design processes and propose a methodology for the implementation of suitable workflow support. I follow an artifact-centric approach for the technical specification that incorporates process models to cover the needs of business experts. Flexibility at runtime is achieved by the novel structure of the generated workflow model that is executed. Prototypes to support the different levels of the methodology have been developed. Finally, the applicability of my approach is demonstrated with an implemented scenario.

Allowing and Propagating Changes in Architectural Views
Alexander Koglin

In companies, a strategic focus is often placed on the steadily growing application landscapes supporting the business. Companies aim to get a holistic view on their architectures and processes, as subsumed under enterprise architecture management (EAM). Software maps of enterprise applications help to understand the business, but cannot yet be changed. The goal of this thesis is to develop a framework and a prototype to propagate changes from the visual maps to the underlying semantics wherever an unambiguous propagation is possible. It shall enable enterprise architects and process analysts to make changes to the documentation of enterprise architectures and process landscapes. This is done by tracking meaningful adaptations to software maps that cause changes of the corresponding semantic model, and by describing their effects through visualization rules. The thesis builds on existing research in Model-Driven Engineering (MDE) by defining a lifecycle dependency for the back-transformation of models. In line with state-of-the-art work, bidirectional model transformations, object-relational mappings, and change semantics are considered to allow transforming models in both directions. Issues like the view-update problem in relational databases are related to this problem domain. This work results in a framework that allows future implementations to semantically define adaptable software maps.

2010

Variant Management for Process Models
Willi Tscheschner

There are different reasons for the frequent variations of process models in the field of business process modeling. These include legal regulations in different countries, the support of different customer segments, the adaptation of standardized processes, or the reuse of predefined processes. Today, the standard practice is to use process models which are completely detached from each other, or models in which all the variations are included. The results are confusing, complex, isolated, or unmaintainable process models. This work addresses this issue and presents an approach for integrated variation management. For this purpose, real-life scenarios with specific requirements are used. These requirements show that it is ideal to use well-known modeling techniques, because they are already used and accepted in enterprises. In addition to the definition of variants, as well as the potential conflicts and their solutions, this work also describes the analysis of process variations. This makes it possible to extract and display differences and adjustments relative to the base process. Altogether, this work describes a holistic approach to managing and analyzing variants of process models.

Terminology-Based Quality Control of Labels in Business Processes
Nicolas Peters

Business process models are often used for human-to-human communication. To enable an efficient discussion, it is very important that all participants can easily understand the process models. The labels of process elements play a major role in their understandability. Because of the ambiguity of many terms, misinterpretations occur. Although label quality is so important, there are not many articles that investigate it or offer proposals on how to avoid common errors. This thesis introduces a method and tool support for managing all labels of a process repository as well as detecting and avoiding errors in labels. A glossary especially designed to support modeling processes takes center stage. In preparation, common errors that occur in labels are defined, followed by a discussion of other approaches.

Managing Variability in Process Models by Structural Decomposition
Maria Rastrepkina

2009

Querying the Data Perspective of Business Process Models
Steffen Ryll
Business Process Mashups - An Analysis of Mashups and their Value Proposition for Business Process Management
Matthias Kunze
Suitability of Enterprise Collaboration Software for Advanced Change Management Processes
Johannes Nicolai

To gain the benefits of distributed software development, companies have to overcome strategic, cultural, and technical challenges. Open Source projects that managed to master these challenges frequently make use of Collaboration Software. Tracker tools are key components of Collaboration Software and manage customer incidents, bug reports, feature requests, and other change management related artifacts. Industry has already adopted Open Source best practices and integrated tracker tool functionality into Enterprise Collaboration Software as well. Since most tracker tools were originally designed for light-weight Open Source change management processes, the question remains whether their workflow functionality is sufficient to support advanced change management processes in an enterprise context. This thesis elicits, models, and analyzes functional requirements of companies that use (or would like to use) tracker tools of Enterprise Collaboration Software to enforce their change management processes. The resulting requirements framework is validated against real-world enterprise survey data. Subsequently, the tracker workflow capabilities of seven Enterprise Collaboration Software solutions are matched against the ones modeled in the requirements framework. Based on the evaluation results, one approach to extending an existing tracker tool to realize the uncovered functionality is presented. This approach is finally validated by implementing it on top of a leading Enterprise Collaboration Software.

Generation of User Interfaces for Service Compositions
Falko Menge
Conditional Re-evaluation of Workflows
Bernd Schäufele
Resource Perspective in BPMN - Extending BPMN to Support Resource Management and Planning
Andreas Meyer

2008

xBPMN++ - Towards Executability of BPMN: Data Perspective and Process Instantiation
Torben Schreiter

The Business Process Modeling Notation (BPMN) has recently become very popular amongst business analysts as an easy-to-use yet powerful modeling notation for business processes. However, BPMN is not able to capture all the details necessary for automated execution by an engine. The Business Process Execution Language (BPEL), on the other hand, is directly executable by business process engines but lacks an intuitive graphical notation. In order to extend BPMN 1.0 towards direct executability, this thesis enriches the revised control flow concepts of xBPMN (by Alexander Grosskopf) with an orthogonal data flow perspective. Sophisticated and carefully defined data flow semantics are essential to enable orchestration on the execution level. In the course of this thesis, we describe use cases and introduce formal definitions of the following concepts: data object lifecycle, data scoping, data assignment, mediation and transformation, streaming/buffering, correlation, and process instantiation for xBPMN++.

WS-BPEL Import for a BPMN based Process Execution Engine
Philipp Sommer
A Virtual Machine for the π-Calculus for Implementing Resources with Dynamic Behavior
Olaf Märker
From Executable Process Languages to Process Models
Mathias Weidlich
xBPMN - Formal Control Flow Specification of a BPMN-based Process Execution Language
Alexander Lübbe
Modeling Telecommunication Payment Processes
Alexander Küchler

The telecommunication industry is changing tremendously, affecting both network operators and consumers. This master's thesis analyzes convergence aspects like the IP Multimedia Subsystem and Fixed-Mobile Convergence. The number of services in the telecommunication industry is increasing, and network operators require platforms such as the Service Delivery Platform to handle them. A definition of a general Service Delivery Platform is given, and the market for such platforms is summarized. Performing business on such platforms requires the efficient modeling of business processes. This thesis focuses on three phases of the service delivery chain: contract agreement, service usage, and payment. Payment types are classified on the basis of the relation between service usage and payment. The Business Process Modeling Notation is used to model various telecommunication scenarios that cover all three phases. It turns out that several details require a lot of modeling effort, e.g., the large number of interactions in business-to-business processes, the alternative direction of message exchange, the consideration of multiple participants of the same type, the modeling of optional behavior, and the recurrence of activities. Solutions for modeling details of telecommunication business processes are developed. The following extensions of the Business Process Modeling Notation are introduced in this thesis: Interaction Modeling using BPMN, Variability Mechanisms, and Typed Message Flows. They simplify business process diagrams by using fewer elements without losing essential information. These extensions are used in telecommunication scenarios to reduce the visual complexity and to improve the readability as well as the usability of business process diagrams.

2007

Similarity of Web Services - An Approach for Information Organization
Thomas Hille

This master's thesis deals with the similarity measurement of Web services and similar entities. The research is motivated by the increasing amount of data, e.g., Web services found in the World Wide Web and inside companies. The aim of the thesis is to provide a concept for finding predefined pieces of data, e.g., a distinct Web service in a pool of data. Furthermore, this concept should include the possibility of consolidating data pools. Similarity aspects should be considered in both the search and the consolidation case. Chapter one presents the aims of this thesis. It explains that the concept for similarity measurement consists of two parts, the so-called search scenario and the consolidation scenario. It also defines a number of requirements that should be the basis for benchmarking a similarity measurement framework. In the second chapter, some basic knowledge about Web service technologies is given. The third chapter deals with the definition of similarity. It groups similarity into three different layers and analyses a number of different similarity measurement approaches that can be applied to one of these layers. In addition, a new similarity measurement approach, derived from the data mining domain, is proposed. Finally, the third chapter evaluates existing similarity measurement frameworks. In view of the limitations of the frameworks evaluated in the third chapter, the fourth chapter develops a general concept for a similarity measurement framework. Furthermore, the concepts for two so-called modules that can be integrated into the framework are explained. One of these modules, called the algorithmic module, implements the data mining approach. The fifth chapter validates the concepts with concrete implementations. The important parts of the implementation are explained and the performance of the algorithmic module is analysed. Finally, the sixth chapter gives a summary of the thesis, provides some proposals for future research, and explains the contribution made by this thesis.

Composition and Coordination of Transactional Business Processes
Silvan Golega
A Statistical Approach to Analysis of Search Engine Ranking Algorithms
Sergey Vladimirovich Smirnov

Search engines have proved to be effective tools capable of coping with information overload and have become very popular among users of the World Wide Web. If a web page is highly ranked by a search engine, a high number of visits can be expected for it. Search engine optimization companies help web site publishers reach high positions for their web pages in search engine results. However, the methods employed for achieving this goal are usually not formal. This master's thesis proposes a formal approach which allows learning the criteria a search engine uses to rank web pages. The approach is illustrated with the example of the Google search engine. A polynomial search engine model is the basis of the approach; therefore, a hypothesis about the model used within the Google search engine is formulated. To check the hypothesis, an experiment is conducted: information about the ranking behavior of Google is collected and analyzed. The experiment is realized by means of an application developed within this research, and statistical methods are employed for the analysis of the experiment results. Finally, the results of the data analysis are presented and explained.
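A polynomial ranking model of the kind hypothesized here amounts to assuming the rank score is a polynomial in page features whose coefficients can be learned from observed rankings. A least-squares sketch with invented features and scores (the actual features, model, and experimental data are the subject of the thesis):

```python
import numpy as np

# Hypothetical per-page features: [keyword_density, inlink_count].
features = np.array([[0.9, 50], [0.5, 80], [0.2, 10], [0.7, 30]], dtype=float)
observed_scores = np.array([4.0, 3.5, 1.0, 3.0])  # stand-in for observed ranks

# Design matrix with linear and quadratic terms of each feature,
# i.e. score(x) = w1*density + w2*inlinks + w3*density^2 + w4*inlinks^2.
X = np.hstack([features, features ** 2])
weights, *_ = np.linalg.lstsq(X, observed_scores, rcond=None)

predicted = X @ weights
ranking = np.argsort(-predicted)  # page indices, best first
print(list(ranking))
```

With enough observed pages, the fitted coefficients indicate which features (and in what polynomial degree) best explain the observed ranking behavior.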

Value-Added Service Brokerage: Description of an Architecture for Brokering and Delivering Value-Added Services Using the Internet and the Web as Delivery Platform
Martin Breest
WebData - Definition of a Middleware for Exposing and Accessing Object-oriented Domain Models as Web Resources
Jan Schulz-Hofen
Applying Model-driven Engineering to Application Family Engineering
Dominik Tornow
Evaluation of a Novel Information Retrieval Model: eTVSM
Artem Polyvyanyy

This thesis presents the evaluation procedure and evaluation results of a novel Information Retrieval model, the enhanced Topic-based Vector Space Model (eTVSM). Although promising, this model still lacks quantitative evaluations and comparisons to other models; this work deals with closing that gap. Evaluating any Information Retrieval model is a highly heuristic task, primarily because of the human activities involved in the process. Thus, a formal procedure for the evaluation of Information Retrieval models is proposed. Then, statistical approaches for comparing Information Retrieval models are presented. Finally, the proposed techniques are applied to evaluate eTVSM, to compare it to other models, and to compare eTVSM under different internal configurations. eTVSM is an ontology-driven model, and its search results are highly sensitive to the ontology configuration. Hence, a great deal of effort is directed at deriving eTVSM ontology construction principles that make eTVSM competitive with other state-of-the-art Information Retrieval models. Due to eTVSM’s computational complexity, the feasibility of eTVSM for large-scale retrieval systems is also addressed.

xBPMN - Formal Control Flow Specification of a BPMN-based Process Execution Language
Alexander Grosskopf

The Business Process Modelling Notation (BPMN) is an emerging language to model, document, and communicate business processes, targeting all process stakeholders. The specification lacks a formal grounding and execution-relevant details; hence, BPMN cannot be used to specify executable processes. To take BPMN a step further towards an executable language, this thesis formally specifies a control flow semantics based on and aligned with BPMN. We set control flow requirements and assess the current language against them to identify the shortcomings of BPMN. Consequently, xBPMN is proposed as a revised control flow semantics. The xBPMN semantics is formally specified and validated against the requirements.

2006

Enterprise Service Discovery on the Basis of Metadata
Sebastian Stock
Service and Device Monitoring for a Smart Items Infrastructure
Matthias Michael Wiemann
Evaluation of Process Models: A Quantitative and Qualitative Assessment
Martin Jürgen Fritz Klewitz
Design and Implementation of a Web Frontend Layer for J2EE Applications and a Portal Frontend Prototype for ARCWAY COCKPIT
Lars Trieloff
Complexity-based Independent Software Metrics
Dennis Klemann
Model-Driven Configuration for Managing Business Process Variability in Enterprise Resource Planning Systems
Kay Dirk Hammerl
Supporting the Modeling of Business Processes Using Semi-Automated Web Service Composition Techniques
Jan Schaffner
Process Choreographies in Service-oriented Environments
Gero Decker
A Visual Environment for the Simulation of Business Processes based on the PI-Calculus
Anja Bog

2005

Development of a QoS Framework for Service-Oriented Architectures
Harald Carlos Schubert
Design and Implementation of Process Control Components in Service-Oriented Environments
Hagen Overdick
Conformance Testing: Measuring the Alignment between Event Logs and Process Models
Anne Rozinat

2004

Analysis and Evaluation of Integration Architectures
Maximilian Spiegel z. Diesenberg
Design and Implementation of an Integration Environment for Process Planning and Enactment
Christian Walter Günther