The DKSR OUP (Open Urban Platform) is a big data platform designed to process high-volume, fast-moving and diverse urban data sources. Its architecture makes it especially suitable for operating IoT-based applications with real-time functionality. Through the OUP, real-time data can be integrated, linked and applied as needed to address a wide range of urban needs, from managing road traffic and buildings to shared mobility services and parking.
OUP has been successfully deployed in over 40 German municipalities, including Freiburg, Cologne and Mainz. In BIPED, OUP is one of the core back-end solutions of the digital twin architecture, where its main responsibility is to ingest and process dynamic energy, weather and mobility data coming from external sources and the CDK platform.
OUP strives for minimal latency, keeping the time between data acquisition and data sharing as low as possible, in the range of milliseconds. In this way, big data can be processed, analysed and distributed in near real time and used to respond immediately to what is happening in a city, municipality or region. This is made possible by the DKSR OUP software architecture, which is completely event-based.
The architecture is based on DIN SPEC 91357 for Open Urban Platforms, the de facto European reference architecture for open urban platforms, which builds on the results of the Smart City Marketplace (formerly EIP-SCC) lighthouse projects. The OUP Core consists of Stream Processing, a Data Lake and a Big Data Analytics Module. Stream Processing is event-based and is executed by an event processor module; analyses can be performed in real time against the moving event stream. Analyses are either created automatically from predefined patterns or defined manually. The predefined patterns cover standard operations such as the minimum, maximum and average of the measured values, for which the event processor module offers an SQL-like syntax.
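The idea behind the predefined patterns can be sketched as a small aggregator evaluated against a moving window of events. This is an illustrative stand-in, not the OUP event processor's actual API; all class and method names here are hypothetical.

```python
from collections import deque

class StreamAggregator:
    """Toy sketch of a predefined min/max/average pattern evaluated
    over a moving window of event values (hypothetical stand-in for
    the event processor's built-in patterns)."""

    def __init__(self, window_size: int):
        # Only the most recent window_size events are kept.
        self.window = deque(maxlen=window_size)

    def on_event(self, value: float) -> dict:
        # Each incoming event updates the window and re-evaluates the
        # standard operations against the moving stream.
        self.window.append(value)
        return {
            "min": min(self.window),
            "max": max(self.window),
            "avg": sum(self.window) / len(self.window),
        }

agg = StreamAggregator(window_size=3)
agg.on_event(10.0)
agg.on_event(20.0)
result = agg.on_event(30.0)  # window now holds [10.0, 20.0, 30.0]
```

In the real platform, an analysis like this would be expressed in the event processor's SQL-like syntax rather than written by hand.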
[Figure: DKSR Open Urban Platform]
The Data Lake provides a configuration option for a hierarchical storage implementation. Its advantage is that different storage systems can be combined so that data is stored where it is needed. For example, the first level can be configured as an in-memory storage system to provide fast responses, while the second level can be a long-term storage system. In this way, the module can collect large amounts of data while still ensuring a fast response time when historical data is requested.
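The two-level lookup described above can be sketched as follows. This is a minimal illustration of the hierarchical-storage idea, assuming a write-through first level and promotion of historical data on access; the class and method names are illustrative, not the Data Lake's actual interface.

```python
class TieredStore:
    """Sketch of hierarchical storage: a fast in-memory first level
    backed by a slower long-term second level (here simulated by a
    plain dict)."""

    def __init__(self, long_term: dict):
        self.hot = {}          # level 1: in-memory, fast responses
        self.cold = long_term  # level 2: long-term storage

    def put(self, key, value):
        self.hot[key] = value
        self.cold[key] = value  # write through to long-term storage

    def get(self, key):
        if key in self.hot:
            return self.hot[key]
        value = self.cold.get(key)
        if value is not None:
            # Promote historical data to the fast level on access.
            self.hot[key] = value
        return value

# Historical data already sits in long-term storage.
store = TieredStore(long_term={"2023-01-01": 17.5})
store.put("2024-01-01", 21.0)
historical = store.get("2023-01-01")  # fetched from level 2, promoted
```

The promotion step is what keeps repeated requests for the same historical data fast, matching the trade-off described above.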
For big data processing, a Big Data Analytics Module is provided. The module is based on Apache Livy, which allows seamless communication with Spark within the modular architecture. Apache Livy acts as a bridge between user applications and the Spark cluster, enabling efficient job submission and monitoring and simplifying the interaction with and management of Spark jobs.
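Livy exposes Spark job submission over REST: a client POSTs a JSON body to Livy's batches endpoint. The sketch below only builds such a body; the field names (`file`, `className`, `args`) follow Livy's batch API, while the JAR path and class name are purely illustrative placeholders.

```python
import json

def build_livy_batch(jar_path: str, class_name: str, args=None) -> str:
    """Build the JSON body for a Spark job submission via Livy's
    POST /batches endpoint. jar_path and class_name are placeholders
    for a real deployment's values."""
    payload = {
        "file": jar_path,         # application JAR on a path Spark can reach
        "className": class_name,  # main class of the Spark job
    }
    if args:
        payload["args"] = args    # command-line arguments for the job
    return json.dumps(payload)

body = build_livy_batch(
    "hdfs:///jobs/analytics.jar",
    "city.analytics.TrafficJob",          # hypothetical job class
    args=["--date", "2024-01-01"],
)
```

A real client would POST this body to the Livy server and then poll the returned batch ID to monitor the job, which is the submission-and-monitoring role described above.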
Each module makes its functionality available to the other modules. The interaction between the modules is carried out by the Vert.x event bus, which enables asynchronous communication. In combination with REST APIs, the back-end can ultimately serve both event data for the real-time view and persistent data for the historical view.
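The publish/subscribe pattern that the Vert.x event bus provides between modules can be illustrated with a minimal asynchronous bus. This is a toy stand-in written for illustration, not the Vert.x API itself; the address name and message shape are invented.

```python
import asyncio
from collections import defaultdict

class EventBus:
    """Minimal asynchronous publish/subscribe bus illustrating how
    modules can exchange messages over named addresses."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def consumer(self, address: str, handler):
        # A module registers an async handler on a named address.
        self.handlers[address].append(handler)

    async def publish(self, address: str, message):
        # Deliver the message to every registered handler
        # concurrently, without blocking the publisher.
        await asyncio.gather(*(h(message) for h in self.handlers[address]))

async def main():
    bus = EventBus()
    received = []

    async def on_measurement(msg):
        received.append(msg)

    bus.consumer("sensors.traffic", on_measurement)          # subscribing module
    await bus.publish("sensors.traffic", {"speed_kmh": 42})  # publishing module
    return received

received = asyncio.run(main())
```

In the OUP, this event-driven path carries the real-time view, while REST APIs expose the persisted data for the historical view.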