Projects
CQ4CD: Continuous Quality Control for Continuous Delivery Architectures – Systematic Engineering of Performance, Reliability, and Resilience
Continuous Delivery (CD) has become a core element of modern software engineering processes, aiming to automate and accelerate software build, quality assurance, and release pipelines. An extensive set of available technologies, such as CD tools, has also promoted the prevalence of CD. CD systems have become complex and business-critical infrastructures, just like the systems they deliver. While CD systems are widespread in practice, their software architectures and corresponding quality assurance approaches have not been studied extensively.
The goal of CQ4CD is to develop the foundations of performance, reliability, and resilience engineering for CD architectures. CQ4CD aims to reduce complexity and improve quality through rigorous CD architecture specifications and reduce risks and uncertainties by basing these specifications on established patterns and smells. Based on this foundation, it aims to provide means for automatically detecting conformance to these patterns and the absence of bad smells in CD architectures. Together with formal performance, reliability and resilience models and benchmarks, these contributions enable what-if or trade-off analysis for exploratory architecture evolution. Finally, the project aims to provide novel means for continuously optimizing and adapting a given CD architecture based on performance, reliability, and resilience measurement data. All project results will be evaluated in various empirical studies.
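To illustrate what an automated architecture-conformance check of the kind envisioned here could look like, the sketch below flags a CD pipeline that deploys before any test stage has run. The pipeline representation and the rule are hypothetical illustrations, not CQ4CD artifacts:

```python
# Hypothetical sketch: detecting a simple "smell" in a CD pipeline model.
# The pipeline representation and the rule are illustrative assumptions,
# not artifacts of the CQ4CD project.

def find_untested_deployments(pipeline):
    """Return deploy stages that are not preceded by any test stage."""
    smells = []
    seen_test = False
    for stage in pipeline:
        if stage["type"] == "test":
            seen_test = True
        elif stage["type"] == "deploy" and not seen_test:
            smells.append(stage["name"])
    return smells

pipeline = [
    {"name": "build", "type": "build"},
    {"name": "deploy-staging", "type": "deploy"},  # smell: no tests yet
    {"name": "unit-tests", "type": "test"},
    {"name": "deploy-prod", "type": "deploy"},
]
print(find_untested_deployments(pipeline))  # → ['deploy-staging']
```

A real conformance check would of course operate on a rigorous CD architecture specification rather than a flat stage list, but the principle of matching a specification against codified patterns and smells is the same.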
Sponsored by the German Research Foundation DFG (grant no. HO 5721/3-1) and the Austrian Science Fund FWF
Leading Partners: Universität Hamburg, University of Vienna
Contact: André van Hoorn
SustainKieker – Sustaining a Reusable High-Quality Monitoring Framework for Software Engineering Research
Application-level monitoring and dynamic analysis of software systems are a basis for various tasks in software engineering research, such as performance evaluation and architecture reconstruction for reverse engineering. The Kieker research software provides monitoring, analysis, and visualization support for these research purposes. Development started in 2006, and Kieker has evolved into open-source software used in various software engineering research projects. Several research groups form the open-source community that continues to develop the Kieker software framework.
With SustainKieker, we intend to maintain Kieker as a reusable, high-quality monitoring framework that can be used in other research areas and by a wider community. Enhanced interoperability is a goal of SustainKieker. To improve usability and familiarization with Kieker, we will also create tutorial demos for the new research areas. One goal is to integrate the user manuals with the online demo applications to create interactive tutorials. Enhanced automated quality assurance is another goal for SustainKieker. Specifically, we plan to significantly increase test coverage, introduce static architecture checks, expand regression benchmarking, and use GitOps for state-of-the-art automation. This way, SustainKieker can also serve as a blueprint for other research software projects aiming to automate their quality assurance. With SustainKieker, we want to create better conditions for further development by remodularizing the Kieker software architecture. The GitOps procedures will also improve the process for community contributions. With SustainKieker, we are also contributing to the field of research software engineering through software engineering research to establish research software science as a discipline. We will approach this goal through empirical software engineering research. As concrete measures, we will empirically evaluate the impact of tutorials with embedded online demos and the impact of hackathons. We know that controlled experiments with students and action research with hackathons have limited external validity. External validity is the extent to which one can generalize the results of a study to other situations. However, we consider this to be a viable and feasible first step towards research software science in the context of this proposed project.
Sponsored by the German Research Foundation DFG (grant no. HO 5721/4-1)
Leading Partners: Universität Hamburg, Kiel University
Contact: André van Hoorn
A Guide to the Sustainable Introduction of Technical Debt Management by Increasing and Maintaining Awareness
One goal in developing software systems is to implement them so that they remain easy to change in the long term. This modifiability, however, is limited by various suboptimal constructs in the implementation and architecture. In a technical metaphor to financial debt, introducing such a suboptimal construct is interpreted as taking on technical debt, and the resulting extra effort for changes to the system is interpreted as interest. Because of this accruing interest, actively managing technical debt is a prerequisite for the continued evolution of IT systems. However, a lack of awareness of technical debt and the absence of approaches for sustainably introducing a management process remain a problem.
The two main goals of this project are to investigate (1) how active management of technical debt can be introduced in practice and (2) how awareness of technical debt can be raised permanently. To this end, a guide will be developed that connects the individual activities around technical debt (avoiding, identifying, recording, monitoring, prioritizing, repaying) into a sustainable management process. Starting from a baseline version of the guide, a technical-debt management process tailored to a specific team will be developed and introduced in team workshops. By introducing such a process in different teams, we can identify commonalities and differences. We will reflect these in the guide: the commonalities form a base process, and the differences are marked as possible variations so that further teams can adapt the process to their own needs.
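The recording and prioritizing activities can be pictured with a minimal technical-debt register; the fields, numbers, and prioritization rule below are illustrative assumptions, not part of the project's guide:

```python
# Hypothetical sketch of the "record" and "prioritize" activities of a
# technical-debt management process; fields and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class DebtItem:
    description: str
    principal_h: float       # estimated effort to repay (hours)
    interest_h: float        # extra effort per change caused by the debt
    changes_per_month: int   # how often the affected code is changed

    def monthly_interest(self) -> float:
        return self.interest_h * self.changes_per_month

backlog = [
    DebtItem("God class in billing module", principal_h=40,
             interest_h=2.0, changes_per_month=8),
    DebtItem("Missing tests for export job", principal_h=16,
             interest_h=1.5, changes_per_month=2),
]

# Repay items with the highest accruing interest first.
for item in sorted(backlog, key=DebtItem.monthly_interest, reverse=True):
    print(f"{item.description}: {item.monthly_interest():.1f} h/month interest")
    # God class: 16.0 h/month; missing tests: 3.0 h/month
```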
The partners TRUMPF and DATEV provide several teams with which the guide will be developed and with which the final version will also be evaluated.
Sponsored by the Federal Ministry of Education and Research BMBF (grant no. 01IS24031)
In cooperation with Software Campus, TRUMPF SE + Co. KG, and DATEV eG
Contact: Marion Wiese, André van Hoorn
dqualizer: Domain-centric runtime quality analysis of business-critical application systems
The runtime quality of application systems, e.g., in terms of performance, reliability, and resilience, has a direct influence on the business success of companies in a wide range of domains. As a result, it is important to continuously monitor, evaluate, and, if necessary, improve runtime quality through analysis measures. Over the last few years, corresponding analysis measures such as load tests or monitoring have become widespread in practice, and mature commercial and open-source tools have been developed. However, these measures all operate at the technical level and are not interpreted at the domain level. At the same time, software architecture and software development approaches such as Domain-Driven Design (DDD), which are becoming increasingly widespread, essentially do not consider runtime quality concerns despite their criticality.
The research project dqualizer of Novatec Consulting GmbH and Universität Hamburg aims at closing this gap between the domain specificity of application systems and the (technical) measures and findings of quality assurance by means of a domain-centric approach. For this purpose, possibilities for modeling and monitoring runtime quality concerns are to be integrated into DDD-based techniques. From a domain perspective, meaningful load and resilience tests can be automatically generated and interpreted by dqualizer, and the links to technical monitoring can be established. The innovative concepts developed are to be implemented in an open-source tool and made available and usable in connection with existing tools.
The connection of the joint research project of Novatec Consulting GmbH and the Universität Hamburg with industry partners is intended to establish the link to practice and evaluate the approach using real case studies in the complex technical domains of insurance and payroll/tax accounting and the corresponding technical environments. The case studies will be provided by the associated application partners. The consortium is already working together in various sub-constellations. Novatec can expand its offering through dqualizer into both business-driven architecture consulting and runtime quality analysis, and in particular merge the two areas. For Universität Hamburg, there are extensive opportunities for exploiting the results in research and teaching.
Sponsored by the Federal Ministry of Education and Research (grant no. 01IS22007B)
Leading Partners: Universität Hamburg, Novatec Consulting GmbH
Partners: VHV Versicherungen, DATEV eG
Project Website: dqualizer.github.io
Contact: André van Hoorn, Sebastian Frank, Alireza Hakamian
SeaSchool – Software Engineering and Architecture in School
The SeaSchool project develops and evaluates a concept to convey to school students a realistic picture of the roles, tasks, and skills of software engineers and software architects. Software engineering is concerned with the engineering-style development of software, a prerequisite for constructing, operating, and maintaining the complex software systems that have entered our everyday lives in recent years and have become indispensable. Programming, often one of the first (and sometimes off-putting) associations with computer science, is only a small part of software engineering.
The strategic goal of the project is to get more school students excited about careers in IT and STEM. SeaSchool addresses the (growing) societal problem of the IT skills shortage (96,000 unfilled IT positions). Our approach: to attract more students to vocational or university IT/STEM education, we start during school education, before STEM or computer-science specializations have been chosen and it is too late. Through its innovative didactic concepts, the project first serves to motivate "classic" computer-science classes and to support students in choosing a STEM specialization. In particular, it aims to engage students who had not previously considered this specialization because of preconceptions about the profession, which applies especially to female students.
Core components of the concept are a cross-class workshop with students in grade 9 or 10 of secondary school as well as a subsequent, continuing computer-science elective course for interested students of that year, which we will accompany. Based on a real example, for which the development of a software system is to be planned, the students gain practical experience and are introduced to the methods and processes of software engineering and software architecture. University students from the computer-science programs are to be involved in all phases, in creating and evaluating the concept and materials as well as in supervising at the schools, since school students usually find it easier to approach university students.
Partner: HITeC e.V.
Contact: André van Hoorn, Thomas F. Düllmann, Sebastian Frank
Quantum Explorer – Pioneering Quantum Computing for Particle Physics and Computer Science
While quantum computers are expected to surpass their classical counterparts in the future, current devices are prone to high error rates. Methods to reduce the impact of these errors are crucial to enable reliable quantum computations. Quantum error mitigation can improve computational results significantly but relies on knowledge about the dominant types of errors in the system. Noise models offer a means to represent these errors mathematically. Our research in this project focuses on constructing noise models and benchmarking their quality.
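As a toy illustration of how a calibrated noise model supports mitigation (a generic textbook example, not the project's actual models): under single-qubit depolarizing noise of strength p, the expectation value of a traceless observable shrinks by a factor (1 - p), so knowing p allows rescaling the noisy result:

```python
# Toy example: depolarizing noise shrinks the expectation value of a
# traceless observable by (1 - p); a calibrated noise strength p lets us
# invert the effect. Generic textbook model, not the project's noise models.

def noisy_expectation(ideal: float, p: float) -> float:
    """Expectation value after a depolarizing channel of strength p."""
    return (1.0 - p) * ideal

def mitigated_expectation(noisy: float, p: float) -> float:
    """Rescale the noisy value using the calibrated noise strength."""
    return noisy / (1.0 - p)

ideal = 0.8   # ideal <Z> of some state
p = 0.1       # depolarizing strength, e.g. estimated via benchmarking
noisy = noisy_expectation(ideal, p)      # 0.72
print(mitigated_expectation(noisy, p))   # approximately 0.8 again
```

Benchmarking noise-model quality then amounts to asking how well such a model's predictions match the errors the real device actually exhibits.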
Part of DASHH – Data Science in Hamburg, HELMHOLTZ Graduate School for the Structure of Matter
Researcher: Tom Weber
Supervisors: Matthias Riebisch/André van Hoorn, Kerstin Borras (DESY, RWTH Aachen), Karl Jansen (DESY), Dirk Krücker (DESY)
GS-IMTR: Graduate School Intelligent Methods for Test and Reliability
The Graduate School’s scope includes topics such as design for test and diagnosis; post-silicon validation; test generation and optimization; robust device tuning; system-level test; lifetime test and reliability management; and test automation.
Project P8: Software Test Suite Optimisation for Complex High Data-Volume Software
Software plays a vital role in large-scale hardware testing. Test programs allow hardware testers to deal with the complexity of modern chips and enable them to automate the tests. A tester operating system and development environment (TOSDE) connects the customer test programs to the test system with the device under test (DUT). The TOSDE is a complex, high data-volume software system for which it is extremely challenging to provide and assure the requested level of correctness, robustness, and performance. This is because the ecosystem is manifold and only partially under the control of the automatic test equipment (ATE) platform developers. For instance, typical test system hardware executes millions of instructions per second (MIPS) while running embedded software that communicates with the tester operating system, and customers define and/or generate their own test programs. As a result of growing chip complexity (Moore's law), test program complexity and the resulting data volumes are growing exponentially. This leads to performance-related questions about the TOSDE, including data transfer rates to local disks and network drives. Moreover, the customer devices and the respective test programs are not available to the TOSDE team for intellectual-property reasons. It is vital that the software-intensive TOSDE is of high quality to support the effectiveness and efficiency of the hardware testers. In particular, it needs to be correct, robust to a wide variety of uses by hardware test programs, and perform efficiently to minimize test time.
The objective of the project is to develop and evaluate a novel approach to analyze and optimize software test suites for correctness, robustness, and performance. The particular focus is the support for testing high and exponentially growing data-volume software in a context in which unknown code (test programs) will run on top of this software, which has not been considered by previous approaches. Generating tests using fuzzing seems like a promising approach to tackle the vast space of possible test programs. Yet, to overcome the problems discussed above, we need to find a novel combination of tailored techniques from functional testing (e.g., coverage analysis, fuzzing, mutation testing) and non-functional testing (e.g., operational-profile-based scalability testing), as well as model-based performance analysis (e.g., anti-pattern detection, what-if analysis). This combination should allow us to decide what is interesting and important to test, so that enough performance is achieved to handle the high data volume while keeping test execution time feasible.
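To make the fuzzing ingredient concrete, the following minimal coverage-guided fuzz loop keeps exactly those generated inputs that reach new branches of a toy target. The target function and its notion of "branch coverage" are illustrative assumptions, not the actual TOSDE setup:

```python
# Minimal sketch of a coverage-guided fuzz loop, one ingredient of the
# envisioned test-suite approach. The toy target and its "branch coverage"
# are illustrative assumptions, not the actual TOSDE setup.
import random

def target(data: bytes) -> set:
    """Toy system under test; returns the set of branches it executed."""
    branches = set()
    if data.startswith(b"CMD"):
        branches.add("cmd")
        if len(data) > 5:
            branches.add("long_cmd")
    else:
        branches.add("other")
    return branches

def mutate(data: bytes) -> bytes:
    b = bytearray(data)
    if random.random() < 0.5 and b:      # flip one byte
        b[random.randrange(len(b))] ^= random.randrange(1, 256)
    else:                                # append 1-3 random bytes
        b += bytes(random.randrange(256) for _ in range(random.randrange(1, 4)))
    return bytes(b)

random.seed(0)
corpus, covered = [b"CMD"], set()
for _ in range(2000):
    candidate = mutate(random.choice(corpus))
    cov = target(candidate)
    if not cov <= covered:               # keep inputs reaching new branches
        corpus.append(candidate)
        covered |= cov
print(sorted(covered))
```

A production-grade setup would additionally have to weigh each kept input's contribution against its execution cost, which is exactly the effectiveness/test-time trade-off addressed in areas 1 and 2 below.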
The project will be structured into the following research areas:
- Area 1 (Test Suite Analysis) will investigate and propose methods to analyze the effectiveness of test suites in the context of complex, high data-volume software such as TOSDEs.
- Area 2 (Test Suite Optimisation) will investigate and propose methods to optimize test suites regarding the trade-off between effectiveness and test execution time.
- Area 3 (Test Case Generation) will utilize the effectiveness analysis results from area 1 to automatically generate test cases to achieve desired levels of the identified effectiveness metrics.
- Area 4 (Combination of Manual and Generated Test Cases) will provide the methods to support the adequate combination of manually created and automatically generated test cases. The goal is to provide methods to optimize the interplay by analyzing and optimizing these hybrid test suites, building on the methods from areas 1-3.
- Area 5 (Validation and Evaluation) comprises all activities conducted to assess the developed methods. The experimental evaluation will be conducted by using publicly available open-source systems as well as by applying them to an industry-leading TOSDE software.
Researcher: Maik Betka
Supervision: André van Hoorn (UHH), Steffen Becker (University of Stuttgart), Stefan Wagner (University of Stuttgart), Martin Heinrich (Advantest)
DiSpel – Data-driven specification and verification of resilience scenarios
Today’s (distributed) software systems are often exposed to unexpected events and changes, e.g., load peaks, autoscaling, or hardware failures. As a result, user satisfaction can be strongly affected, e.g., if the availability or performance of the software system is reduced. Therefore, such events should also be considered in the specification in order to make the resilience of the software system assessable. However, requirements of this kind are usually not specified or are specified incorrectly. In particular, stakeholders often find it difficult to quantify these requirements adequately, and the necessity and suitability of the requirements are often only recognized when an unexpected event actually occurs. The aim of the project is to develop an approach and a tool based on data-driven formal verification.
Stakeholders will be supported in the specification process by
- setting up known scenarios in a precise and quantifiable way,
- proposing new scenarios based on past events, and
- checking the relevance of already specified requirements.
Furthermore, it should be possible to predict the fulfillment of the established requirements (i) for individual past events, (ii) generalized for the system, and (iii) for potential future events. In a continuous process, the quality and quantity of the resilience requirements should thus be increased, thereby creating confidence in the behavior of the system.
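The kind of precise, quantifiable scenario targeted here can be illustrated by checking a requirement such as "after a disturbance, response time must return below 200 ms within 60 s" against recorded events; the data format and thresholds are illustrative assumptions:

```python
# Illustrative sketch: verifying a quantified resilience requirement
# ("recover below 200 ms within 60 s after a disturbance") against
# recorded measurements. Data format and thresholds are assumptions.

def recovery_time(samples, start, threshold_ms=200.0):
    """Seconds from `start` until response time first drops below threshold."""
    for t, rt in samples:
        if t >= start and rt < threshold_ms:
            return t - start
    return float("inf")

# (timestamp in s, response time in ms), measured around a load peak at t=100
samples = [(95, 120), (100, 450), (110, 380), (130, 240), (150, 150), (170, 130)]
rec = recovery_time(samples, start=100)
print(rec, rec <= 60.0)  # → 50 True
```

Running such checks over many past events is what allows the approach to judge whether an already specified requirement is realistic and relevant, or whether a new scenario should be proposed.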
Sponsored by the Federal Ministry of Education and Research (grant no. 01IS17051)
In cooperation with University of Stuttgart, Software Campus, and DATEV eG
Contact: Sebastian Frank, André van Hoorn
Completed Projects
ADAM – Autonomous Adapting Machines
In this project, funded by the Federal Ministry of Education and Research, we act as the coordinator. The project is carried out with German partners in the fields of modeling, artificial intelligence, factory automation, and the Internet of Things.
In mechanical and plant engineering, achieving flexibility is a general challenge: changes to a machine's requirements or operating conditions should be accommodated on site as far as possible. Changes to the machine and its configuration require cooperation between the operator and the machine builder and, if necessary, its suppliers, which costs time and effort due to communication and delivery channels.
The ADAM joint project develops adaptation capabilities that reduce this effort by enabling the machine to autonomously recognize sensible changes, prepare them, support them, and (as far as possible) carry them out. Such changes include, for example, adapting a machine's configuration or exchanging machine components.
To this end, so-called autonomous agents are developed: software components whose task is to monitor the machine and to adapt it when requirements change. The machine together with the autonomous agent forms the autonomously adapting machine. The agent can be regarded as an IT system for changing the machine, whereby the agent, i.e., the IT system, also adapts itself dynamically.
Models are developed on the basis of which the autonomous agent derives requirements for the machine and identifies optimization needs. If the resulting computations are too extensive for the machine's hardware, the agent can fall back on a database of solutions hosted in the cloud. Changes in the environment, such as disturbances and faults, are handled in a similar way to requirement changes. Undesired emergence that may arise from autonomous decisions is detected and prevented through verification by means of simulations and through knowledge-based monitoring. For this purpose, knowledge about the possible activities of the agent, the machine, and the environment is modeled; this makes it possible to analyze and reflect on the agent's actions while they are performed and thus to detect unsafe actions and interactions. For verification and testing of the project results, concrete application scenarios from the field of industrial drive technology are considered. A particular challenge under investigation is the change of business processes resulting from the novel structure and the technology transfer.
2019–2022
Sponsored by the Federal Ministry of Education and Research (grant no. 01IS18077A)
Partners: HITeC e.V., encoway GmbH, LENZE SE, Friedrich Remmert GmbH
Researcher: Matthias Riebisch, Pascal Pein, Leif Bonorden
RADON – Rational Decomposition and Orchestration for Serverless Computing
Emerging serverless computing technologies, such as function-as-a-service (FaaS) offerings, enable developers to virtualize the internal logic of an application, simplifying management of cloud-native applications and allowing cost savings through billing and scaling at the level of individual function calls. Serverless computing is therefore rapidly shifting the attention of software vendors to the problem of developing cloud applications that can use these platforms.
RADON aims to create a DevOps framework for defining and managing microservices-based applications that can optimally exploit serverless computing technologies while avoiding FaaS vendor lock-in. RADON applications will include fine-grained and independently deployable microservices that can efficiently exploit FaaS and container technologies. The end goal is to broaden the adoption of serverless computing technologies within the European software industry. The methodology will strive to tackle complexity, harmonize the abstraction and actuation of action-trigger rules, avoid FaaS lock-in, and optimize decomposition and reuse through model-based FaaS-enabled development and orchestration.
2019–2021, University of Stuttgart
RADON is sponsored through Horizon 2020 - Research and Innovation Framework Programme.
Contact: André van Hoorn
Orcas – Efficient Resilience Benchmarking of Microservice Architectures
The microservice architectural style is gaining more and more prevalence when constructing complex, distributed systems. One of its guiding principles is design for failure, which means that a microservice is able to cope with failures of other microservices and its surrounding software/hardware infrastructure. This is achieved by employing architectural patterns such as the circuit breaker and the bulkhead. Resilience benchmarking aims to assess failure tolerance mechanisms—for instance, via fault injection. Meanwhile, resilience benchmarking is not only conducted in development and staging environments, but also during a system’s production use. Existing resilience benchmarks for microservice architectures are ad-hoc and based on randomly injected faults.
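The circuit-breaker pattern mentioned above can be sketched in a few lines: after a number of consecutive failures, the breaker "opens" and fails fast instead of calling the unstable dependency, reclosing after a timeout. The thresholds are illustrative; this is a minimal sketch, not a production implementation:

```python
# Minimal sketch of the circuit-breaker resilience pattern: after
# max_failures consecutive failures the breaker opens and fails fast;
# after reset_timeout seconds one trial call is allowed ("half-open").
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                  # success closes the circuit again
        return result
```

Fault injection for resilience benchmarking then checks, for instance, that the breaker actually opens under a simulated dependency outage instead of letting requests pile up.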
The Orcas project aims for efficient resilience benchmarking of microservice architectures. Resilience vulnerabilities shall be detected more efficiently, i.e., faster and with fewer resources, by incorporating architectural knowledge as well as knowledge about the relationship between performance/capacity/stability (anti) patterns and suitable injections. The approach builds on existing works on model-based and measurement-based dependability evaluation of component-based software systems.
2017–2021, University of Stuttgart
Orcas is sponsored by the Baden-Württemberg Stiftung as part of the elite program for junior researchers.
Contact: André van Hoorn
PeCoH – Performance Conscious High Performance Computing
In PeCoH, we establish the Hamburg HPC Competence Center (HHCC), which coordinates and fosters joint performance engineering activities between the local compute centers DKRZ, RRZ, and TUHH. Together, we will implement user services to support performance engineering on a basic level and provide a basis for co-development, user education, and dissemination of performance engineering concepts. We will evaluate methods to raise user awareness of performance engineering and bring them into production environments. Specifically, we address cost awareness, provide success stories, advance HPC competence management resulting in a certification system, and assess the benefit of alternative workflows. Material and tools developed in the course of this project are made available under open-source licenses to foster re-use in the research community.
2016–2020
Sponsored by the German Research Foundation (project 320893435, research grants LU 1353/12-1, OL 241/2-1, and RI 1068/7-1)
Researcher: Matthias Riebisch, Thomas Ludwig, Stephan Olbrich, Sandra Schröder
SQuAT – Search Techniques for Managing Quality-Attribute Tradeoffs in Software Design Optimizations
Designing a software system in such a way that it meets the main quality-attribute requirements desired by the system stakeholders (e.g., performance, modifiability, and reliability, among others) is a complex, challenging, and error-prone activity, even for experienced software engineers. A factor that contributes to this complexity is the existence of multiple alternative solutions that satisfy the same requirements, making tradeoffs inevitable. This process can be seen as a search through a large design space, in which the solution space is n-dimensional and each dimension represents a different quality attribute to be optimized. Tool support is vital to assist engineers in exploring the design space and selecting “good-enough” solutions. Over the last few years, several tools, usually based on heuristic search techniques, have been developed. This research has focused on improving the tooling capabilities but has paid less attention to the quality-attribute tradeoffs of the solutions.
In SQuAT, the University of Stuttgart and the Universidad Nacional del Centro de la Provincia de Buenos Aires (UNICEN) jointly investigate new semi-automated techniques for managing quality-attribute tradeoffs in software design optimizations, particularly focusing on (i) distributed search strategies, (ii) modularization of design knowledge, (iii) incorporation of user preferences and uncertainty, and (iv) application of negotiation techniques for managing quality-attribute tradeoffs.
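The tradeoff view can be made concrete with a standard non-dominated (Pareto) filter over candidate designs scored on several quality attributes. The candidates and scores below are illustrative (lower is better); SQuAT's actual search operates on richer architecture models:

```python
# Standard Pareto filter over candidate designs scored on multiple quality
# attributes (lower is better); candidates and scores are illustrative.

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return {name: score for name, score in candidates.items()
            if not any(dominates(other, score)
                       for o_name, other in candidates.items() if o_name != name)}

# (response time in ms, modifiability effort score)
candidates = {
    "monolith":      (120, 8.0),
    "layered":       (140, 5.0),
    "microservices": (150, 3.0),
    "big-ball":      (160, 9.0),   # dominated by every other candidate
}
print(sorted(pareto_front(candidates)))  # → ['layered', 'microservices', 'monolith']
```

Every candidate on the front represents a different tradeoff; choosing among them is exactly where user preferences and negotiation techniques come into play.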
2015–2017, University of Stuttgart
SQuAT is sponsored by the German Federal Ministry of Education and Research under Grant No. 01DN15014.
Contact: André van Hoorn
ContinuITy – Automated Performance Testing in Continuous Software Engineering
Modern businesses require their application systems to perform in a responsive and cost-effective way. To develop such systems, performance regressions have to be identified early during the software development process, and especially before deployment. However, practice shows that load testing, as a relevant form of performance (regression) testing, is either not performed adequately or not at all. Load tests are usually not representative enough of the actual usage profile in production. This situation is caused by the additional effort and competence required to create load test scripts. In addition, these scripts become obsolete very quickly due to system and usage-profile changes, and their maintenance is associated with high costs. In the context of emerging continuous software development (DevOps), in which software changes are put into operation ever faster and more frequently, this becomes an even greater problem. Furthermore, because executing these tests takes a long time, it is not practical to carry out a load test for each change, e.g., to detect performance regressions. However, if regressions are detected in such aggregated changes, manual diagnosis is necessary to determine the responsible change in the software or the usage profile.
The ContinuITy research project, executed by NovaTec Consulting GmbH and the University of Stuttgart (Reliable Software Systems Group), aims to enable automated and efficient load testing by leveraging continuously recorded measurement data from production and integrating it into the continuous software development environment. Load tests are automatically extracted from APM (application performance management) data and evolved taking into account changes in the usage profile. To specify load tests, we use a modular description language, which can be extended with additional contextual information, e.g., with test type and objectives. As part of the automation of the software development process (continuous delivery), relevant load tests are selected to be executed in testing. The obtained results are used for the detection of regressions and their diagnosis. ContinuITy examines regressions to determine whether they are caused by changes in the implementation or in the usage profile and provides the results to the DevOps team.
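The extraction step can be pictured as deriving a usage profile (relative request frequencies) from production monitoring data and turning it into a weighted workload mix. The log format and endpoint names below are illustrative assumptions, not ContinuITy's actual models:

```python
# Illustrative sketch: derive a usage profile from production request data
# and emit a weighted workload mix for a load test. Log format and endpoint
# names are assumptions, not ContinuITy's actual models.
from collections import Counter

requests = [  # endpoint of each recorded production request
    "/search", "/search", "/search", "/item", "/search",
    "/item", "/checkout", "/search", "/item", "/search",
]

counts = Counter(requests)
total = sum(counts.values())
profile = {ep: n / total for ep, n in counts.items()}

for ep, share in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{ep}: {share:.0%} of load-test traffic")
    # /search gets 60%, /item 30%, /checkout 10%
```

Keeping this profile in sync with production is what makes the generated load tests representative even as the usage profile drifts.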
2017–2020, University of Stuttgart
ContinuITy is sponsored by the German Federal Ministry of Education and Research under Grant No. 01IS17010.
Contact: André van Hoorn
diagnoseIT – Expert-guided Automatic Diagnosis of Performance Problems in Enterprise Applications
Quality attributes of enterprise software applications such as performance, availability, and reliability have a significant impact on business-critical metrics of enterprises such as revenue and total cost of ownership. Application Performance Management (APM) processes and tools are adopted and integrated into the application lifecycle to monitor performance-relevant metrics of enterprise applications (e.g., response time, throughput, or resource utilization). APM is a necessity to detect and solve performance problems early. Experience shows, however, that comprehensive APM is seldom implemented in industry, resulting in unsatisfying quality of enterprise applications and low detection rates of performance problems. A major reason for the low adoption rate of APM is that its initial setup and maintenance are error-prone and require high manual effort and expertise.
In order to improve this situation, NovaTec Consulting GmbH and the University of Stuttgart (Reliable Software Systems Group) launched the collaborative research project diagnoseIT on “Expert-guided Automatic Diagnosis of Performance Problems in Enterprise Applications”. Formalized APM expert knowledge is used to systematically detect and diagnose performance problems. Therefore, diagnoseIT uses an APM-tool-independent approach to orchestrate available APM solutions, initially focusing on the open-source tools inspectIT and Kieker. diagnoseIT provides a goal-oriented root cause analysis, offering a starting point for problem resolution. The project results will be published under an open-source license.
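One way to picture formalized expert knowledge is as executable diagnosis rules over recorded traces, such as flagging the well-known "N+1 queries" anti-pattern. The trace format and threshold below are illustrative assumptions, not diagnoseIT's actual rule model:

```python
# Illustrative rule sketch: flag the "N+1 queries" anti-pattern in a
# recorded trace (many repetitions of the same database call within one
# request). Trace format and threshold are assumptions, not diagnoseIT's.
from collections import Counter

def detect_n_plus_one(trace, threshold=10):
    """Return SQL statements issued suspiciously often within one request."""
    stmts = Counter(span["sql"] for span in trace if span["type"] == "db")
    return [sql for sql, n in stmts.items() if n >= threshold]

trace = (
    [{"type": "http", "sql": None}]
    + [{"type": "db", "sql": "SELECT * FROM item WHERE id = ?"} for _ in range(25)]
    + [{"type": "db", "sql": "SELECT * FROM user WHERE id = ?"}]
)
print(detect_n_plus_one(trace))  # → ['SELECT * FROM item WHERE id = ?']
```

Encoding such rules once and applying them to every trace is what replaces the repeated, expert-only manual inspection described above.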
2015–2017, University of Stuttgart
diagnoseIT is sponsored by the German Federal Ministry of Education and Research under Grant No. 01IS15004.
Contact: André van Hoorn
Declare – Declarative Performance Engineering
Performance is of particular relevance to software system design, operation, and evolution because it has a major impact on key business indicators. Consequently, during the life cycle of a software system, performance analysts continuously need to answer and act on performance-relevant questions about response times, hardware utilization, bottlenecks, trends, anomalies, etc. Various established methods, techniques, and tools for modeling and evaluating performance properties have been proposed, ranging from model-based prediction and optimization in early design phases, over measurement-based approaches in phases where implementations exist, to solutions that continuously evaluate and control a system's performance during operation. However, these techniques are rarely applied in practice, in particular because it is hard to choose and parameterize the various available approaches, as well as to filter and interpret the obtained results.
In DECLARE, the University of Stuttgart and the University of Würzburg will jointly address these challenges by introducing a Declarative Performance Engineering approach. We aim to reduce the current abstraction gap between the level on which performance-relevant concerns are formulated and the level on which performance evaluations are actually executed. Performance queries and goals are formulated in a declarative modeling language. Performance evaluation methods, techniques, and tools are integrated into the DECLARE platform based on capability models and adapters, and are selected by corresponding decision algorithms. We will evaluate the results experimentally and empirically using benchmark systems and a large-scale industrial application system. We build on our expertise in combining model-based and measurement-based performance evaluation techniques, including preliminary results on Declarative Performance Engineering. DECLARE will be integrated into the second phase of the Priority Programme "Design for Future - Managed Software Evolution" (SPP 1593), including collaborations with other projects inside the SPP and contributions to the SPP case studies, focusing on CoCoME.
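A minimal sketch of the declarative idea, with invented names and an invented capability model (not the DECLARE platform itself): a performance concern is stated as a declarative query, and a decision step selects a suitable evaluation backend via capability matching.

```python
# Hedged sketch of Declarative Performance Engineering: the analyst
# states WHAT is to be evaluated; capability matching decides HOW.
# Backend, capability keys, and query fields are illustrative only.

def measurement_backend(query, data):
    """Answers queries from recorded measurements."""
    values = data[query["service"]][query["metric"]]
    if query["statistic"] == "mean":
        return sum(values) / len(values)
    return max(values)

CAPABILITIES = {
    # which backend can answer which (phase, metric) combination
    ("operation", "response_time"): measurement_backend,
}

def answer(query, data):
    """Dispatch a declarative query to a capable evaluation technique."""
    backend = CAPABILITIES[(query["phase"], query["metric"])]
    return backend(query, data)

measurements = {"checkout-service": {"response_time": [120.0, 140.0, 160.0]}}
query = {"phase": "operation", "metric": "response_time",
         "statistic": "mean", "service": "checkout-service"}
print(answer(query, measurements))  # → 140.0
```

In a fuller realization, the capability table would also list model-based predictors for design-time phases, so the same query could be answered before an implementation exists.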
2016–2020, University of Stuttgart
Declare is sponsored by the German Research Foundation (DFG) under Grant No. HO 5721/1-1 and KO 3445/15-1.
Contact: André van Hoorn
EfA – Design Methods for Automation Systems with Model Integration and Evaluation of Variants
EfA - Entwurfsmethoden für Automatisierungssysteme mit Modellintegration und automatischer Variantenbewertung
With the steadily growing complexity of manufacturing plants and production lines, the engineering phases of planning and operating such production systems face increasing challenges. To enable automation engineers to create an optimized production plan efficiently, the EfA project realizes a model-based approach in combination with automated planning and programming.
The main aim of this project is to develop methods, tools, and solutions that reduce the costs of designing and modifying complex, heterogeneous automation systems at the field, control, and supervision levels.
The major effort of SWK in this project is to adapt the methods of Software Product Lines (SPL) to the domain of automation systems. Using SPL techniques, the commonalities of all variants are developed in an effective and managed way. The derived variants can then be evaluated and validated against formally defined product requirements.
Some approaches to support a partially automated design process are listed below:
- Formalization of models for heterogeneous systems
- Mappings between different models based on a meta-model
- Integration of engineering models and tools
- Validation of the consistency of solutions
- Extension of models with explicit variant modeling
- Deriving variants from a common set of core assets with a tool configurator
- Evaluation of variants with model verification and virtual system integration
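The SPL-based variant derivation can be sketched as follows; the feature names, core assets, and constraints are invented for illustration and are not from the EfA project: a feature selection is validated against formally defined constraints before the corresponding core assets are assembled into a variant.

```python
# Illustrative SPL sketch: variants are derived from a common set of
# core assets by selecting features; each selection is checked against
# formally defined constraints. All names are invented examples.

CORE_ASSETS = {
    "base_control": "plc_base.c",
    "fieldbus_profinet": "io_profinet.c",
    "fieldbus_ethercat": "io_ethercat.c",
    "visualization": "hmi.c",
}

CONSTRAINTS = [
    # exactly one fieldbus alternative must be chosen
    lambda f: ("fieldbus_profinet" in f) != ("fieldbus_ethercat" in f),
    # every variant needs the base controller
    lambda f: "base_control" in f,
]

def derive_variant(features):
    """Return the asset set for a feature selection, or raise if invalid."""
    if not all(check(features) for check in CONSTRAINTS):
        raise ValueError(f"invalid feature selection: {sorted(features)}")
    return sorted(CORE_ASSETS[f] for f in features)

print(derive_variant({"base_control", "fieldbus_profinet", "visualization"}))
```

Real feature models add optionality, cardinalities, and cross-tree constraints, but the principle is the same: only constraint-satisfying selections yield buildable, verifiable variants.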
2012–2015
Sponsored by the Federal Ministry of Education and Research (Förderkennzeichen 01M3204C).
Partners: Fraunhofer IOSB-INA, HS-OWL Lemgo, LeiKon GmbH Herzogenrath, inpro Berlin, Lenze Automation GmbH
Researcher: Matthias Riebisch
MOPS – Adaptive Planning and Secure Execution of Mobile Processes in Dynamic Scenarios
Adaptive Planung und sichere Ausführung mobiler Prozesse in dynamischen Szenarien
The aim of the MOPS project is to develop solutions for transferring formerly static business processes into the world of mobile devices.
Subproject: Concepts for Platform Independent and Cross-Domain Description of Workflows
2009–2012, TU Ilmenau (cooperation with U Jena, FH Erfurt, Navimatix, NetSys.IT, the Agent Factory, and Godyo AG)
Sponsored by the Thüringer Aufbaubank.
Researcher: Matthias Riebisch
Traceability to Support Evolutionary Software Development
Unterstützung evolutionärer Softwareentwicklung durch Modellierung von Abhängigkeiten mittels Merkmalmodellen und Traceability-Links für die Domäne interaktiver Informationssysteme
Understandability and verifiability are important prerequisites for carrying out changes to software systems effectively, without errors, and while preserving their quality attributes. Today's established models do not represent the relations between requirements, design elements, and the implementation in a way that makes them traceable for developers and allows changes to be checked effectively with tool support. In practice, a frequent consequence of these deficits is an increase in software entropy and a loss of structure, and thus of maintainability, causing enormous economic damage through the shortened service life of these systems. The traceability links developed for such relations cannot be used effectively because of the large number required and the lack of methods for their creation and maintenance. The proposed project develops a methodology for creating, maintaining, and checking traceability links. To structure the requirements and to mediate between abstraction levels, feature models are used to connect requirements with design elements. The traceability links are intended to improve understandability for developers as well as the verifiability of the systems with respect to consistency and completeness, and thus their maintainability. The methodology is developed exemplarily for the domain of interactive software systems, with an object-oriented approach using UML modeling as the starting point.
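As a small illustration of the approach (all identifiers invented, not from the project): feature models mediate between requirements and design elements, and a tool-supported check over the traceability links reveals requirements that are not yet realized by any design element.

```python
# Illustrative sketch of traceability links mediated by a feature model:
# requirements link to features, features link to design elements, and a
# completeness check flags unrealized requirements. Names are invented.

REQ_TO_FEATURE = {"R1": "search", "R2": "export"}
FEATURE_TO_DESIGN = {
    "search": ["SearchController", "QueryParser"],
    "export": [],  # feature not yet realized in the design
}

def unrealized_requirements():
    """Requirements whose feature is not linked to any design element."""
    return [req for req, feat in REQ_TO_FEATURE.items()
            if not FEATURE_TO_DESIGN.get(feat)]

print(unrealized_requirements())  # the completeness check flags R2
```

The same link structure supports the reverse check (design elements not motivated by any requirement) and change-impact queries, which is what makes the links worth maintaining.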
2005–2008, TU Ilmenau
Sponsored by the German Research Foundation (project 13819100).
Researcher: Matthias Riebisch
UML-based Test Case Generation
Generierung von Anwendungstestfällen für statistische Testmethoden auf der Basis von UML-Modellen
Efficient and extensive software testing is a prerequisite for high software quality. Integrating automated testing into the iterative-incremental software development process is necessary to ensure, in an evolutionary manner, not only the required functionality but also software quality. Given the high degree to which object-oriented approaches using the Unified Modeling Language (UML) have become established for the design of commercial software systems, the question arises of how application test cases can be derived systematically from the UML models of a software system. The project develops an approach for generating application test cases for statistical testing methods on the basis of UML models. The approach is to be integrated into an iterative software process model. The generated test cases serve as a starting point for manual or automated testing. In combination with an assignment of test cases to user profiles, this enables goal-oriented statistical testing with high effectiveness.
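The statistical test-generation idea can be sketched with a usage model whose transition probabilities encode a user profile: a weighted random walk over the model yields application test sequences. The model below is an invented example, not taken from the project.

```python
# Illustrative sketch of statistical test-case generation: a UML-like
# usage model (states, actions, transition probabilities from a user
# profile) drives a weighted random walk producing test sequences.
import random

USAGE_MODEL = {
    "Start":    [("login", "LoggedIn", 1.0)],
    "LoggedIn": [("search", "Results", 0.7), ("logout", "End", 0.3)],
    "Results":  [("open_item", "LoggedIn", 0.8), ("logout", "End", 0.2)],
}

def pick(rng, transitions):
    """Choose a transition according to its probability."""
    r, acc = rng.random(), 0.0
    for action, target, p in transitions:
        acc += p
        if r <= acc:
            return action, target
    return transitions[-1][:2]  # guard against float rounding

def generate_test_case(rng, max_len=20):
    """Walk the usage model from Start until End (or a length cap)."""
    state, steps = "Start", []
    while state != "End" and len(steps) < max_len:
        action, state = pick(rng, USAGE_MODEL[state])
        steps.append(action)
    return steps

rng = random.Random(42)  # fixed seed for reproducible test suites
print(generate_test_case(rng))
```

Generating many such walks yields a test suite whose action frequencies converge to the user profile, which is exactly the property statistical testing exploits to estimate operational reliability.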
2002–2005, TU Ilmenau
Sponsored by the German Research Foundation (project 5385880).
Researcher: Matthias Riebisch