Information Systems Engineering
Research Projects
We conduct research together with renowned partners in industry and science worldwide. The following is an overview of current projects supported by third-party funding.

Current Projects

TEADAL

Trustworthy, Energy-Aware Federated Data Lakes Along the Computing Continuum

TOUCAN

Transparency in Cloud-Native Architecture and Engineering

GANGES

Guaranteeing Anonymity in Enterprise Streaming Applications (Gewährleistung von Anonymitäts-Garantien in Enterprise-Streaminganwendungen)

Past Projects

In past years, we conducted several national and European lighthouse projects connected to our research profile in the areas of cloud-native platforms and applications, blockchain technologies and innovations, and privacy engineering.

DaSKITA

The aim of the three-year project DaSKITA is the design and prototypical implementation of AI-based concepts, mechanisms, and tools that enable consumers to reach a higher level of awareness and self-determination in the context of data-driven services. Transparency and the right of access have always been an integral part of privacy and data protection regulations: to be able to act in a sovereign and self-determined way in everyday digital life, consumers need to know "who knows what, when, and on what occasion about them" (BVerfG 65, 1). In practice, however, both the exercise of such rights and the actual understanding of the provided information are subject to prohibitive hurdles: existing rights notwithstanding, consumers are usually not sufficiently informed to actually act in a sovereign and self-determined manner.

In close cooperation between computer science, legal and socio-political research, and corporate practice, the project DaSKITA develops concrete, AI-based technologies for the low-effort exercise of transparency and access rights, for the simplified reception of the respective information, and for its machine-readable provision by service providers. Thus, a sustainable contribution to consumer sovereignty in everyday digital life can be achieved. Following a concept of "privacy engineering" that goes well beyond mere data minimization and security, this shall provide a significant contribution to the technically mediated fulfillment of the requirements given in particular by the GDPR ("privacy/data protection by design").

The Department of Information Systems Engineering (ISE) at TU Berlin is the consortium lead and focuses on questions of the formal representation, machine-readable provision, and AI-based extraction of transparency information from privacy policies, the AI-supported exercise of the right of access, and the user-side presentation of the respective information.
The project partner is the iRights.Lab.

The project started on 01.01.2020 and was supported by funds of the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) based on a decision of the Parliament of the Federal Republic of Germany via the Federal Office for Agriculture and Food (BLE) under the innovation support programme.

SMILE

The continuous development and operation of distributed systems, including the management of cost, performance, and scaling, presents a major challenge. Concepts and technologies of cloud computing and service-oriented architectures are currently used to meet this challenge, and cloud computing is now used in almost every industry. A novel approach to cloud computing is serverless computing: users merely define the functionality of their application while building on environment services. All other levels of operation are invisible to the developer, since responsibilities such as on-demand scaling are taken over by the cloud provider. Companies such as Amazon, Google, and Microsoft market a version of this technology under the name "Function-as-a-Service" (FaaS), but also in the form of highly optimized data analysis systems such as BigQuery.

The aim of the SMILE project (Supporting MIgration to ServerLess Environments) is to provide methodological and technical support for migrating to serverless environments. The project aims to create a serverless migration framework (SMF) that allows parts of traditional (cloud) applications to be migrated to a serverless infrastructure, focusing initially on data-intensive analysis applications.

SMILE is carried out in cooperation with IAV GmbH, one of the leading development partners for the automotive industry in Germany. SMILE is a Software Campus project funded by the BMBF. The project started on 01.02.2020 and was funded under the High-Tech Strategy 2025 for 15 months. More information at: ise-smile.github.io.
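To illustrate the FaaS programming model described above, the sketch below shows what such a function might look like in Python: the developer supplies only the handler, while provisioning and on-demand scaling are left to the platform. The event shape and all names are illustrative assumptions, not code from the SMILE project.

```python
import json

def handler(event, context=None):
    """Minimal FaaS-style handler: the platform invokes this function
    on demand; everything below the function (servers, scaling,
    routing) is the provider's responsibility."""
    readings = event.get("readings", [])
    # A data-intensive analysis step, reduced here to a simple aggregate.
    total = sum(readings)
    mean = total / len(readings) if readings else 0.0
    return {
        "statusCode": 200,
        "body": json.dumps({"count": len(readings), "mean": mean}),
    }
```

In a real deployment, the provider would wire this handler to an HTTP endpoint or event queue; the migration question SMILE addresses is which parts of an existing application can be cut out and reshaped into such stateless functions.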

BIMcontracts

In the joint project BIMcontracts, funded by the German Federal Ministry for Economic Affairs and Energy, a consortium of industry and research is developing, over a three-year project phase, a reference architecture for automated and transparent contract and invoice management in the construction industry. Based on Building Information Modeling (BIM), digital construction models will simplify complex contractual constellations and avoid delays in the payment chain. For small and medium-sized service providers in the construction industry in particular, secure transaction processes reduce the usual liquidity and insolvency risks; in addition, the provision of services in construction projects from planning to implementation is improved. The basis is the use of modern blockchain technology in conjunction with smart contract concepts.

Our role in the project: "Automated payment and contract management in construction using blockchain technology and BIM (BIMCHAIN)". The project started on 01.01.2020 and was funded for 15 months.

BloGPV.Blossom

Blockchain technology is associated with distinctive properties whose introduction and safeguarding promise enormous opportunities for a wide variety of application areas and fields of use: acceleration of and cost savings in transaction processing; high transparency and tamper-resistance of completed transactions; and decentralized yet consistent data storage in business networks.

In the joint project BloGPV, the use of blockchain technology for the energy industry is being researched. The focus is on maintaining and increasing the economic efficiency of PV systems as an energy industry objective. New approaches to this goal are urgently needed, as a steadily growing number of PV systems are falling out of the EEG subsidy and, at the same time, direct marketing of energy is often not cost-covering for owners of PV systems due to low prices on the electricity exchange. How can PV system operators therefore increase their own consumption or trade surplus PV electricity at more attractive prices?

The project BloGPV develops and tests a blockchain-based virtual large-scale storage for PV system operators. The business processes for the operation of such a large-scale storage are characterized by a multitude of autonomous and decentralized actors that have to share and process the same database; controlling the storage, balancing energy, and handling billing therefore require secure IT solutions for business networks. Blockchain technology, designed for "trustless" interactions, represents precisely such an approach.

The Department of Information Systems Engineering (ISE) at TU Berlin, led by Prof. Tai, is addressing this challenge. In the BloGPV subproject Blossom (Blockchain Systems and Off-Chaining Methods), ISE is developing a decentralized and trustless data processing system, smart contracts and distributed transactions for billing and storage management. This will result in solution approaches for improving scalability, implementing complex business logic in smart contracts, and system integration between blockchain and non-blockchain systems.

The project started on 01.04.2018 and was funded for three years as part of the Smart Service World II initiative.

ZoKrates

Interest in blockchains like Bitcoin and Ethereum has risen considerably over the last years and many use cases are currently researched and implemented by academia and industry. However, limited scalability and concerns regarding transaction privacy and confidentiality of data in public blockchain networks are issues that require novel solutions in order to enable broad adoption of this new class of systems.

In the ZoKrates Project, we research approaches to address these challenges by efficiently off-chaining computations from the blockchain without impairing its desirable properties, e.g., trustlessness and immutable history.

We use zkSNARKs, a technique from the field of verifiable computations, to off-chain computations and efficiently verify their correctness on the blockchain. By removing the necessity for fully redundant transaction processing in the network, we can reduce the load put on the system and hence improve scalability. By exploiting zero-knowledge proofs, we furthermore enable improved confidentiality for transactions in blockchain systems.

The ZoKrates software, developed in this research project, hides significant complexity inherent to zero-knowledge proofs and provides a more familiar, higher-level programming abstraction to developers. For that purpose, it offers a toolbox to specify, integrate, and deploy off-chain computations. This toolbox consists of a domain-specific language, a compiler, and generators for proofs and verification smart contracts. Furthermore, it enables circuit integration, hence fostering adoption.
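To give a flavor of the domain-specific language, the canonical introductory example from the ZoKrates documentation proves knowledge of a square root of a public field element without revealing it. Note that the syntax varies between ZoKrates releases; this follows more recent versions, while early versions used a Python-like indentation syntax.

```zokrates
// Prove knowledge of a private field element `a` such that a * a == b,
// without revealing `a`. `b` is a public input checked on-chain by the
// generated verifier smart contract.
def main(private field a, field b) {
    assert(a * a == b);
    return;
}
```

Compiling this program yields an arithmetic circuit; ZoKrates then generates a proving key, a proof for concrete inputs, and a Solidity verifier contract that checks the proof on the Ethereum blockchain.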

The first implementation, targeting the Ethereum blockchain, was initially released at the Ethereum Devcon 3 in Cancun, Mexico, with an accompanying talk and workshop. Additionally, there is a podcast episode.

The research paper describing the project in more detail was published at IEEE Blockchain 2018 in Halifax, Canada and won the best paper award.

The project is fully open source and is available at: github.com/Zokrates/ZoKrates.

On-going development of ZoKrates is supported by the Ethereum Foundation.

Additionally, the ZoKrates project was selected as one of nine grantees from more than 100 proposals for the no-strings-attached Samsung NEXT Stack Zero grant. This donation further supports the development of ZoKrates through Open Collective.

Beyond that, we are encouraging open-source contributions as well as research collaborations. Get in touch!

Cloud Service Benchmarking

The advent of cloud computing has disruptively changed the way modern application systems are developed and delivered, but it has also shifted control over key parts of an application system to the cloud provider: Cloud consumers typically have to treat the cloud infrastructure that they are using as a black-box. As a consequence, the quality of the cloud infrastructure often is unpredictable, changes over time, and can vary significantly between different cloud providers.

In this project, we develop novel techniques and toolkits for cloud service benchmarking, especially of cloud storage services, to study complex service qualities and their interdependencies. We also aim to use this knowledge to help cloud consumers deal with variable cloud quality levels.

Current Activities:

  • In the sub-project BenchFoundry: A Framework for Cloud Database Benchmarking, we are working on a novel benchmarking middleware. Today, each benchmark needs to be implemented anew for each database platform. With BenchFoundry, we aim to provide a toolkit which can run arbitrary application-driven benchmarks against a range of cloud database services – from simple key-value stores up to full relational database systems. This sub-project is in cooperation with Akon Dey (University of Sydney, Awake Networks Inc.).
  • We are currently working on a book with the working title Cloud Benchmarking: Demystifying Quality of Cloud Infrastructure Services which is scheduled to appear in early 2017. With this book, we will offer the first comprehensive overview of cloud service benchmarking. Starting with a broad introduction to the field, this book aims to walk the reader step-by-step through the process of designing, implementing and executing a benchmark as well as understanding and dealing with results. The book will be co-authored by Erik Wittern (IBM Research).
  • We are also working on a number of concrete benchmarking sub-projects. For instance, we are currently benchmarking the quality impacts of enabling security features in cloud datastores or trying to understand the quality of web APIs and how they impact applications.
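As a minimal illustration of latency benchmarking (a sketch, not BenchFoundry itself), the following Python snippet measures per-operation put/get latencies of a dict-like store. Against a real cloud datastore, `store` would wrap the provider's client library; the interface and names here are illustrative assumptions.

```python
import time
import statistics

def benchmark_store(store, n_ops=1000):
    """Measure per-operation put/get latency of a dict-like key-value
    store and report median latencies in milliseconds."""
    put_latencies, get_latencies = [], []
    for i in range(n_ops):
        t0 = time.perf_counter()
        store[f"key-{i}"] = f"value-{i}"   # put
        put_latencies.append(time.perf_counter() - t0)
    for i in range(n_ops):
        t0 = time.perf_counter()
        _ = store[f"key-{i}"]              # get
        get_latencies.append(time.perf_counter() - t0)
    return {
        "put_p50_ms": statistics.median(put_latencies) * 1000,
        "get_p50_ms": statistics.median(get_latencies) * 1000,
    }
```

Real cloud benchmarks additionally have to control for warm-up effects, network variability, and time-of-day dependencies, which is part of what makes the methodology studied in this project non-trivial.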

EMIDD

The German Federal Ministry of Justice and Consumer Protection is funding a research project on consent management for the Internet of Things (IoT) at the Department of Information Systems Engineering. Individual, informed, specific and explicitly given consents are one of the main pillars of data protection law. Traditionally, such consents have been given, for example, in writing or by ticking a box under a privacy statement several pages long. In the context of the IoT - where, on the one hand, traditional user interfaces are not available and, on the other, data once collected can be used for a variety of other desirable purposes - this approach is obviously no longer viable. Instead, new, technically supported approaches to consent management are needed. In the EMIDD project ("Consent Management for the Internet of Things"), the possibilities of such technical approaches as well as their connectivity to current and future data protection law are being researched - over a project term of 15 months starting in April 2017.

DITAS

Today most applications utilize the cloud, but new data-intensive applications such as autonomous driving, e-health, or Industry 4.0 require faster response times and stronger privacy and security guarantees than current cloud providers can offer. Fog computing is one way to address these requirements: it distributes applications and data between the cloud and systems that are closer to the end user, the so-called edge. In the context of autonomous driving, edges could, for instance, be cell towers alongside a street. These edge locations have limited data and processing capacity, and it is therefore necessary to synchronize data between edge and cloud. This motivates one of the core questions of the DITAS project: Which data is stored at which time on either edge or cloud, when should which data be moved, and how shall this movement be executed?

In DITAS, we want to address these questions and develop new and innovative data management systems which can move data with its consumers, in order to guarantee data quality, such as fast response times or data consistency, for an optimal user experience. This data management system should also be aware of the resource limitations of edge locations and help to synchronize data between cloud and edge while still complying with privacy and security requirements. Information in the e-health context, for instance, should only be shared with research institutions, and only if the data has been anonymized and the patients have given their explicit agreement to share it.
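The core placement question can be sketched as a toy decision policy. All rules, thresholds, and names below are illustrative assumptions for exposition, not the DITAS system's actual logic.

```python
def place_data(item_size_mb, edge_free_mb,
               latency_sensitive, contains_personal_data, anonymized):
    """Toy edge-vs-cloud placement policy: privacy constraints dominate,
    then latency needs, subject to edge capacity."""
    if contains_personal_data and not anonymized:
        return "edge"   # keep raw personal data at the trusted location
    if latency_sensitive and item_size_mb <= edge_free_mb:
        return "edge"   # fast response needed and capacity is available
    return "cloud"      # default: ample capacity in the cloud
```

A real data management layer must additionally decide *when* to move data as consumers move, and *how* to execute the movement consistently, which is exactly the harder part of the research question.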

Smart Data Management with guarantee of quality

One research challenge lies in the dynamic access patterns created by moving end users as well as a large number of potential data sources and the resulting requirements for fault tolerance, availability, and consistency. Health data, for instance, should never be lost, and a system outage in the autonomous driving context might lead to accidents.

Implementation with the international consortium and market adoption

In DITAS, the industry partners IBM Research Haifa (Israel), ICCS (Greece), Atos (Spain), and CloudSigma (Switzerland) will implement a market-ready solution based on research results of TU Berlin and Politecnico di Milano (Italy). The project works closely with IK4-IDEKO (Spain), a company that is going to use DITAS for an Industry 4.0 application, and Ospedale San Raffaele (Italy), a large hospital in Milan which is going to deploy DITAS in its day-to-day operations. The DITAS project will therefore be practically evaluated together with these industry partners. The project started on the 1st of January 2017 and is funded by the European Union's Horizon 2020 research and innovation programme (ICT Call). TU Berlin receives 750,000 Euro from the total funding volume of 4.9 million Euro.

SPiCE

In the project SPiCE ("Security-Performance Trade-offs in Cloud Engineering"), we explore the complex interdependencies between security and different aspects of performance that modern, cloud-based systems are subject to.

In particular, we developed a structured method for rationalizing security-related configuration decisions, which in practice are rarely backed by authoritative (i.e., quantitative) criteria and data but are all too often made on a gut level. Following this method, we experimentally assess the impact of different security configurations on the performance of database systems broadly used in industry and research (Cassandra, HBase, etc.).

Through this approach, we identified manifold interdependencies between security- and performance-related characteristics. For instance, we found massive drops in throughput when certain security options of HBase are activated. As the same level of security may also be achieved by other means that do not impact HBase performance, accepting the additional costs of such measures may be the more efficient option, given that a certain target throughput can then be achieved with fewer HBase nodes.


PolyEnergyNet

Research in the funded project PolyEnergyNet (PEN) is centered around the improvement of the resiliency of power grids, in particular on medium and low voltage levels. To accomplish this goal, renewable sources of energy are used in conjunction with the gas- and district heating grids. These different 'nets' are linked together by a common IT infrastructure (IT net). Thus the name 'PolyEnergyNet'.

Goals

The growing introduction of decentralized power supply creates new challenges for a power grid that was originally designed for centralized supply. One of these challenges is maintaining the stability of the power grid. This is the motivation for the research conducted in PEN: a more resilient power grid. Resiliency of the power grid is a multidimensional property; it includes, for example, robustness and self-healing capabilities in case of attacks. These properties are improved by coupling a smart (power) grid with the adjacent grids through supporting software components. For example, the overload (and subsequent power outage) of a local subnet can be prevented by temporarily increasing load and separating the subnet from the higher-level grid. This can be achieved by having a common and up-to-date view of relevant data and the ability to dynamically control consumption, e.g., through batteries.

Approach

At the base of the approach in PEN is the 'Holon' model. 'Holon' derives from Greek and means a "part of a whole"; it describes a dynamic view on a set of loads and supplies, i.e., subnets, which can merge and separate as appropriate. The sets of components involved in those mergers and separations are not predetermined but rather dynamic in nature. A holon, by definition, has a total power output of zero, i.e., it is power-autonomous. By combining this theoretical view with actual conditions in a local test area, use cases are defined that represent possible faults or attacks and the intended reactions. Based on this set of use cases, hardware and software components are developed and evaluated against previously formulated requirements. Both simulated experiments and a field test are conducted to evaluate the developed systems.
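The holon definition above, a dynamic set of loads and supplies with net power of zero, can be sketched as a small data structure. Class names, signs, and the tolerance below are illustrative assumptions, not PEN's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """Toy holon model: supplies are positive power values (kW),
    loads are negative. A holon is power-autonomous when its net
    power is (close to) zero."""
    components: list = field(default_factory=list)

    def net_power(self):
        return sum(self.components)

    def is_autonomous(self, tolerance_kw=0.1):
        return abs(self.net_power()) <= tolerance_kw

    def merge(self, other):
        # Holons can merge dynamically as grid conditions change;
        # the merged set may or may not remain autonomous.
        return Holon(self.components + other.components)
```

In the project, deciding when subnets should merge or separate to restore this balance is precisely what the supporting hardware and software components are built for.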

ISE in PEN

We are developing a data management platform, which is conceptualized and implemented as a prototype and later evaluated using benchmarks and during the field test. The data management platform must meet multiple functional (e.g. data formats, capabilities of interfaces) and non-functional (e.g. availability, performance) requirements. It is particularly designed to be a common distribution layer for all metering data, gathered by sensors in the power grid. Further requirements and design goals include:

  • Distributed architecture
  • Support for heterogeneous data sources and sinks
  • Support for systems with near-real-time requirements
  • Development of metrics, methodology and tools to evaluate such a data management platform

OpenSense