We conduct research with renowned partners from industry, politics, and academia. Please visit the linked subpages for more information about our current projects.
Like other major cities worldwide, Berlin aims to reduce traffic emissions. A foremost mechanism for this is increasing the share of bicycle traffic. However, polls show that a perceived lack of safety and fear of accidents keep people from using their bikes more frequently. Furthermore, it is quite hard from a city-planning perspective to get a good overview of the location, time, frequency, and kind of hazards in bicycle traffic, since official accident statistics only cover crashes but provide no information on near crashes.
In this project, we collect – with a strong focus on data protection and privacy – data on such near crashes to identify when and where bicyclists are especially at risk. We also aim to identify the main routes of bicycle traffic in Berlin. To obtain such data, we have developed a smartphone app that uses GPS information to track routes of bicyclists and the built-in acceleration sensors to pre-categorize near crashes. After their trip, users are asked to annotate and upload the collected data, pseudonymized per trip.
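The pre-categorization idea can be illustrated with a minimal sketch: flag accelerometer samples whose magnitude deviates strongly from gravity as candidate near-crash events for later user annotation. The threshold value and the detection logic below are illustrative assumptions, not the app's actual algorithm.

```python
import math

# Hypothetical deviation threshold (m/s^2) above normal riding vibration;
# the real app's categorization logic is not specified here.
BRAKE_THRESHOLD = 6.0

def magnitude(ax, ay, az):
    """Euclidean magnitude of a 3-axis accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def pre_categorize(samples, gravity=9.81):
    """Flag samples whose acceleration deviates strongly from gravity,
    marking candidate near-crash events for user annotation after the trip."""
    events = []
    for t, (ax, ay, az) in samples:
        deviation = abs(magnitude(ax, ay, az) - gravity)
        if deviation > BRAKE_THRESHOLD:
            events.append((t, deviation))
    return events

# Example trip: two normal samples and one hard-braking spike at t = 0.2 s
trip = [(0.0, (0.1, 0.2, 9.8)), (0.1, (0.0, 0.1, 9.7)), (0.2, (2.0, 14.0, 9.5))]
print(pre_categorize(trip))  # flags only the sample at t = 0.2
```

A real detector would additionally look at the signal over a time window and combine it with GPS speed, but the principle of pre-filtering on-device before the user annotates remains the same.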
The collected data offers public administrators and other interested entities detailed insights into bicycle traffic in Berlin and into hazardous situations, so that the number of near crashes (and hence also crashes) can be reduced. The data can also be used to optimize traffic flows to make bicycle traffic more attractive. For the evaluation of the data, we plan to cooperate with interdisciplinary partners such as city planners, interested citizens, and the Berlin Senate Department for the Environment, Transport and Climate Protection.
The project is funded within the scope of TU Berlin’s citizen science initiative. Additional cooperation partners are welcome, and cities beyond Berlin can also participate. For this, we need a point of contact (a person or a group) in the respective city who will (i) recruit local cyclists to participate and (ii) make use of the collected and pre-analyzed data, e.g., to identify root causes on site and to work with the public administration to resolve the situation.
At the moment, the app is available for Android (6.0+) and iOS.
Up-to-Date Statistics & Results
Further information can be found on
Future 6G services and applications will generate an unprecedented amount of data for industry, media, and private users, which must be transmitted at a speed and reliability that today's mobile networks cannot achieve. The research project "6G Native Extensions for XR Technologies" (6G_NeXt) aims to develop an infrastructure whose integrated network and software layer enables new processing speeds and implements the dynamic distribution of complex computing tasks (split computing). The latest software technologies combine computing and connectivity into an overall system whose possibilities go far beyond the edge cloud known from 5G.
Our group has two main tasks in the project. First, we develop a software platform for a serverless edge cloud that can serve as a runtime environment for novel 6G applications. Second, we support the implementation of a use case in which a novel anti-collision system for aviation is implemented on our software platform using the example of drones at airports.
The project is funded by the German Federal Ministry of Education and Research for 3 years as part of the call "6G-Industrieprojekte zur Erforschung von ganzheitlichen Systemen und Teiltechnologien für den Mobilfunk der 6. Generation".
While a large number of sensors and associated data sources exist in today's transportation sector, this data is usually not directly available to applications. In particular, there is a need in all traffic sectors to a) warn travelers of events such as drones in an airport approach path and b) inform them of other events and conditions such as traffic flow disruptions. This should happen in a target-group-specific manner, precisely geolocated, and in real time.
The aim of this preliminary study is to develop an extensible IT system that can be used flexibly for all traffic areas and is able to distribute data from different data sources to different recipients in real time based on geofences, including geo-warnings. As an example, this will be implemented and evaluated in an air traffic use case using an XR application for data visualization.
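Geofence-based distribution can be sketched as follows: a message is delivered only to recipients whose last known position lies inside a fence, here simplified to a circle tested with the haversine distance. The coordinates and recipient IDs are illustrative assumptions, not project data.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def recipients_in_geofence(positions, center, radius_m):
    """Return the IDs whose last known position lies inside a circular
    geofence, e.g., to push a drone warning only to travelers near an
    airport approach path."""
    return [uid for uid, (lat, lon) in positions.items()
            if haversine_m(lat, lon, center[0], center[1]) <= radius_m]

# Hypothetical positions: one user near the fence center, one far away in Berlin
positions = {"pilot-1": (52.203, 13.156), "driver-7": (52.50, 13.40)}
print(recipients_in_geofence(positions, (52.2037, 13.1564), 2000))  # ['pilot-1']
```

A production system would use polygonal fences and a spatial index rather than a linear scan, but the matching step per message stays conceptually this simple.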
The project is being carried out in cooperation with Deutsche Telekom and the Schönhagen airfield and is being funded by the German Federal Ministry of Digital and Transport for 1 year as part of the mFUND funding line.
Function-as-a-Service (FaaS) is the next step in the evolution of cloud-based virtualization after virtual machine services and container services. A key benefit of FaaS is how easy it has made application development: developers only write small stateless functions which interact with platform services such as databases, streaming pipelines, or messaging -- all operational concerns are left to the FaaS provider, and a large part of the application functionality is delivered by platform services. Due to the attractiveness of the programming model as well as the clear separation of state and function management, FaaS is also a promising candidate when extending the cloud towards the edge.

While FaaS offers a powerful programming paradigm, it also suffers from a number of drawbacks. First, function placement usually does not consider data input and output, implying that FaaS platforms tend to ship data between servers. Second, the cold start problem, which occurs when an arriving request finds no idle function instance, causes latency outliers which add up in multi-function workflows. This is even more pronounced in geo-distributed fog environments, where the function code first needs to be retrieved from a (geo-)remote machine, further increasing cold start latency. Third, in the existing event-driven FaaS programming model, functions are triggered by single events. For modern applications, particularly in the Internet of Things, however, a more powerful event model which can trigger functions based on multi-event rules would be very useful; it would also increase the reusability of functions through loose function coupling and help close the paradigm gap between FaaS and stream processing systems. Fourth, existing composition frameworks are provider-specific and do not support multi-provider workflows, which, however, is crucial for cloud/edge/fog workflows. They are also orchestration-based, thus leading to double billing.
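The multi-event trigger idea can be made concrete with a small, CEP-inspired sketch: a rule fires a function only once all required event types have arrived within a time window. This is an illustrative simplification under assumed semantics, not the OptiFaaS design itself.

```python
import time

class MultiEventTrigger:
    """Fire a function only when all required event types arrive within
    a time window -- a simplified, CEP-inspired sketch of a multi-event
    trigger rule."""

    def __init__(self, required, window_s, fn):
        self.required = set(required)
        self.window_s = window_s
        self.fn = fn
        self.latest = {}  # event type -> (timestamp, payload)

    def on_event(self, etype, payload, now=None):
        now = time.time() if now is None else now
        self.latest[etype] = (now, payload)
        # Keep only events that are still inside the window
        fresh = {t: p for t, (ts, p) in self.latest.items()
                 if now - ts <= self.window_s}
        if self.required <= fresh.keys():
            self.fn(fresh)       # invoke the function with the matched events
            self.latest.clear()  # re-arm the rule

fired = []
trigger = MultiEventTrigger({"vibration", "noise"}, window_s=5.0, fn=fired.append)
trigger.on_event("vibration", 0.9, now=100.0)  # rule not yet satisfied
trigger.on_event("noise", 85, now=103.0)       # both within 5 s -> function fires
print(len(fired))  # 1
```

With single-event triggers, the correlation logic above would have to live inside the function itself; lifting it into the trigger keeps functions small and reusable.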
In OptiFaaS, we will research and design a FaaS platform which is ready for mixed cloud/edge/fog environments, addressing the problems outlined above. Namely, we will
(i) design a novel FaaS platform which can run in a federated way across cloud/edge/fog while integrating existing FaaS services,
(ii) design a novel multi-event trigger inspired by complex event processing,
(iii) design a cross-platform FaaS choreography framework,
(iv) design a smart function placement and scheduling approach which optimizes and adapts function placement at runtime regarding quality of service goals.
The approach will consider locations of data input and outputs as well as already deployed functions, resource usage in participating FaaS nodes, cold starts, monetary cost, and constraints such as privacy. For this, we will apply and extend model predictive control and distributed decision-making techniques to the domain of FaaS.
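As a toy illustration of such placement decisions, the sketch below scores candidate nodes by a weighted sum of data-transfer latency, expected cold start penalty, and monetary cost, with privacy as a hard constraint. The node attributes and weights are assumptions for illustration; the project's actual approach uses model predictive control, not this static scoring.

```python
def place_function(nodes, data_site, weights, privacy_required=False):
    """Pick the node with the lowest weighted cost for one function.
    Illustrative only: a static stand-in for runtime placement optimization."""
    best, best_cost = None, float("inf")
    for node in nodes:
        if privacy_required and not node["private"]:
            continue  # hard constraint, e.g., for GDPR-sensitive data
        cost = (weights["latency"] * node["rtt_ms"][data_site]
                + weights["cold_start"] * (0 if node["warm"] else node["cold_ms"])
                + weights["price"] * node["price_per_call"])
        if cost < best_cost:
            best, best_cost = node["name"], cost
    return best

# Hypothetical nodes: a cold but private edge box vs. a warm public cloud region
nodes = [
    {"name": "edge-1", "rtt_ms": {"sensor": 2}, "warm": False,
     "cold_ms": 400, "price_per_call": 0.5, "private": True},
    {"name": "cloud-eu", "rtt_ms": {"sensor": 40}, "warm": True,
     "cold_ms": 0, "price_per_call": 0.1, "private": False},
]
weights = {"latency": 1.0, "cold_start": 0.1, "price": 1.0}
print(place_function(nodes, "sensor", weights))        # warm cloud wins on cost
print(place_function(nodes, "sensor", weights, True))  # privacy forces edge-1
```

Even this toy version shows why cold starts and constraints must enter the objective: the nearest node is not automatically the best one.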
The project is a cooperation with Prof. Dr. Sergio Lucia (TU Dortmund) and is funded by the DFG for 3 years.
Today’s applications are typically cloud-based. However, the emerging fog computing paradigm, i.e., using cloud and edge resources but also small- to medium-sized data centers in the network between edge and cloud at the same time, promises additional benefits in terms of quality of service (QoS) for some applications. Especially emerging application domains such as the Internet of Things (IoT), autonomous and interconnected driving, future mobile networks (5G), or eHealth can benefit from or even heavily depend on these QoS improvements. The increasing degree of geo-distribution, however, requires rethinking system and application architectures. Data management systems in particular – both of the relational and NoSQL kind – are ill-prepared for this degree of geo-distribution and other characteristics of fog environments. In essence, this means that fog developers currently have to implement data management tasks at the application level.
With FogStore, we aim to close that gap. Namely, the core results of the project will include:
The project is funded by the DFG for three years; it started on September 1, 2019.
Typically, IoT applications rely on data produced by sensors to trigger actions on smart devices. As an example, wind, temperature, and brightness sensors in a smart home could be used to control window blinds or a smart factory might use vibration and noise sensors to shut off CNC machines before permanent damage occurs as well as to enable preventive maintenance.
In both scenarios, data is processed and decisions are taken either locally on edge devices or in cloud-based services. Often, there are also secondary uses or improved decision processes when collecting and correlating sensor data from various sources in the cloud. For IoT applications, both cloud and edge computing have their own advantages and disadvantages: while edge computing primarily suffers from capacity constraints on local devices, cloud services have much higher latencies and a higher probability of not being available locally due to network outages. Privacy concerns may further limit choices depending on the kind of data. For instance, personal data in EU smart home scenarios can be subject to the GDPR, and manufacturers may be reluctant to expose data on sensitive production processes to outsiders. All these aspects need to be weighed carefully so as to alleviate weaknesses and fully utilize strengths when building data management systems for IoT applications in fog environments, i.e., using cloud and edge computing as well as possible intermediary nodes within the core network at the same time.
In this project, we research novel data management and data distribution techniques specifically tailored to the needs of IoT applications. Such techniques need to efficiently distribute and move data across edge and cloud nodes while hiding as much complexity as possible from applications. A particular challenge in this project is that IoT applications can use both streaming and event-driven functions (‘serverless’) as computation approaches, depending on the respective use case. Additionally, there are scenarios where pub/sub-based data distribution provides much more flexibility than standard point-to-point communication. The goal of this project is to combine these different approaches and techniques into an integrated platform that offers a continuum of choices between functions and streams.
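How one data-distribution layer can feed both computation styles can be sketched with a minimal topic-based pub/sub broker: the same sensor reading is routed to a stream operator (here a sliding window) and to a function-style trigger (here a threshold alert). All names and values are illustrative assumptions, not the project's platform.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process topic-based pub/sub: each published message is
    delivered to every handler subscribed to its topic."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, msg):
        for handler in self.subs[topic]:
            handler(msg)

broker = Broker()

# Stream side: a window operator accumulates readings for later aggregation
window = []
broker.subscribe("temp", window.append)

# Function side: an event-driven trigger fires only on a condition
alerts = []
broker.subscribe("temp", lambda v: alerts.append(v) if v > 30 else None)

for reading in [21, 34, 25]:
    broker.publish("temp", reading)
print(len(window), alerts)  # 3 [34]
```

The point of the sketch is that neither subscriber knows about the other: pub/sub decouples producers from both streaming and serverless consumers, which is what makes a continuum between the two styles possible.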
The first four years of this project are funded by the Einstein Foundation; the project started in January 2018.