1. Open Data Hub Architecture

The architecture of the Open Data Hub is depicted in Figure 1.1, which shows its composing elements together with its main goal: to gather data from Data Sources and make them available to Data Consumers. Data Consumers are usually third-party applications that use those data in any way they deem useful, including (but not limited to) studying the evolution of historical data or carrying out data analysis to produce statistical graphics.


Figure 1.1 The Open Data Hub architecture with the components (top) and the data format used (bottom) during the data transformation.

At the core of the Open Data Hub lies the Big Data Platform, a Java application that contains all the business logic and handles all the connections with the underlying database through the DAL. The Big Data Platform is composed of different modules; among them, a Writer, which receives data from the Data Sources and stores them in the Database using the DAL, and a Reader, which retrieves stored data (both are described below).

Communication with the Data Sources is guaranteed by the Data Collectors, which are Java applications built on top of the dc-interface that use a DTO for each different source to correctly import the data. Dual to the dc-interface, the ws-interface allows the export of DTOs to web services, which expose them to Data Consumers.

The bottom part of Figure 1.1 shows the data format used in the various steps of the data flow.

Records in the Data Sources can be stored in any format and are converted into JSON as DTOs. They are then transmitted to the Writer, which converts them and stores them in the Database using SQL. To expose data, the Reader queries the DB using SQL, transforms the results into JSON DTOs, and hands them to the Web Services, which serve the JSON to the Data Consumers.
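The format chain described above can be sketched in a few lines of Java. This is an illustrative toy, not bdp-core code: the class, table, and field names are assumptions, and a real Writer would use the DAL rather than build SQL strings by hand.

```java
// Sketch of the format chain in Figure 1.1: a raw CSV record becomes a DTO,
// the Writer side turns it into SQL, and the Reader side renders JSON.
// All class, table, and field names are illustrative assumptions.
public class FormatChainSketch {
    // Stand-in for a DTO carrying one measurement.
    record RecordDto(String stationId, long timestamp, double value) {}

    // Data Collector step: raw CSV line "stationId,timestamp,value" -> DTO.
    static RecordDto fromCsv(String line) {
        String[] f = line.split(",");
        return new RecordDto(f[0], Long.parseLong(f[1]), Double.parseDouble(f[2]));
    }

    // Writer step: DTO -> SQL (a real Writer goes through the DAL and
    // prepared statements, never string concatenation).
    static String toSql(RecordDto r) {
        return String.format(
            "INSERT INTO measurement (station_id, ts, value) VALUES ('%s', %d, %s);",
            r.stationId(), r.timestamp(), r.value());
    }

    // Reader / Web Service step: DTO -> JSON for Data Consumers.
    static String toJson(RecordDto r) {
        return String.format(
            "{\"station\":\"%s\",\"timestamp\":%d,\"value\":%s}",
            r.stationId(), r.timestamp(), r.value());
    }
}
```

The same DTO instance flows through all three steps; only its serialization changes, which is exactly what the bottom row of Figure 1.1 depicts.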

The Elements of the Big Data Platform in Detail

As Figure 1.1 shows, the Big Data Platform is composed of a number of elements, described in the remainder of this section in the same order in which they appear in the picture.

Data Source
A Data Source is the origin of one or more datasets, which usually belong to a single domain. Data are usually picked up automatically by sensors and stored in some format, such as CSV.
A dataset is a collection of records that originate from the same Data Source. Within the Open Data Hub, the same Data Source may provide multiple datasets that include slightly different data, but there is at least one dataset per domain. The underlying data format of a dataset never changes.
Data Collectors
Data Collectors are a library of Java classes used to transform data coming from the Data Sources into a format that can be understood, used, and stored by the Big Data Platform. As a rule of thumb, each Data Collector is used for one Data Source or dataset and uses DTOs to transfer the data to the Big Data Platform. Data Collectors are usually created by extending the dc-interface in the bdp-core repository.
Data Transfer Objects (DTO)
Data Transfer Objects are used to translate the various formats used by the Data Sources into a common format that can be read by the Writer and exposed by the Reader (see below). DTOs are written in JSON and are composed of three entities: Station, Data Type, and Record.
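A minimal sketch of the three DTO entities named above may help; the field names here are assumptions chosen for illustration, not the actual bdp-core classes.

```java
// Illustrative sketch of the three DTO entities; fields are assumptions.
public class DtoSketch {
    // A measurement site registered in the Open Data Hub.
    record Station(String id, String name, double lat, double lon) {}

    // Describes what a measured value means and its unit of measurement.
    record DataType(String name, String unit) {}

    // One measured value, linking a Station and a DataType to a timestamp.
    record Record(String stationId, String typeName, long timestamp, double value) {}

    // DTOs travel as JSON; rendered by hand here to keep the sketch
    // dependency-free (a real implementation would use a JSON library).
    static String toJson(Record r) {
        return String.format(
            "{\"station\":\"%s\",\"type\":\"%s\",\"timestamp\":%d,\"value\":%s}",
            r.stationId(), r.typeName(), r.timestamp(), r.value());
    }
}
```

Note how a Record does not duplicate the Station or DataType details; it only references them, which keeps the JSON payloads small.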
Writer
With the Writer, we enter the core of the Big Data Platform. Its purpose is to receive DTOs from the Data Collectors and store them in the DB; it therefore implements all the methods needed to read the DTOs' JSON format and to write to the Database using SQL.
Data Abstraction Layer (DAL)
The Data Abstraction Layer is used by both the Writer and the Reader to access the Database and exchange DTOs, and relies on Java Hibernate. It contains classes that map the content of a DTO to the corresponding database tables.
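Conceptually, the DAL's job is a field-to-column mapping. The sketch below shows that idea with a plain class so it stays dependency-free; in the real bdp-core the mapping is expressed with Hibernate entity annotations, and the DTO, table, and column names here are assumptions.

```java
// Conceptual sketch of the DAL mapping: DTO fields -> table columns.
// The real DAL uses Hibernate; names here are illustrative assumptions.
import java.util.LinkedHashMap;
import java.util.Map;

public class DalSketch {
    // Stand-in DTO for one measurement.
    record MeasurementDto(String stationId, long timestamp, double value) {}

    // Maps a DTO to the column/value pairs of a hypothetical
    // "measurement" table, preserving column order.
    static Map<String, Object> toRow(MeasurementDto dto) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("station_id", dto.stationId());
        row.put("ts", dto.timestamp());
        row.put("value", dto.value());
        return row;
    }
}
```

Because both the Writer and the Reader go through this one mapping layer, the JSON-facing DTOs and the SQL-facing tables can evolve without the two components having to know each other's details.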
Database (DB)
The Database represents the persistence layer and contains all the data sent by the Writer. Its configuration requires that two users be defined: one with full permissions granted, used by the Writer, and one with read-only permissions, used by the Reader.
Reader
The Reader is the last component of the core. It uses the DAL to retrieve DTOs from the DB and transmits them to the Web Services.
Web Services
The Web Services, which extend the ws-interface in the bdp-core repository, receive data from the Reader and make them available to Data Consumers by exposing APIs and REST endpoints. They transform the DTOs they receive into JSON.
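As a rough illustration of that last step, the sketch below wires a JSON-producing method to a REST-style endpoint using only the JDK's built-in com.sun.net.httpserver. It is not how the actual ws-interface services are built; the endpoint path, method names, and payload shape are all assumptions.

```java
// Hypothetical sketch of a web service serving Reader output as JSON.
// Uses only the JDK's built-in HTTP server; all names are assumptions.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class WsSketch {
    // Turns the values handed over by the Reader into the JSON
    // document served to Data Consumers.
    static String renderJson(String station, double[] values) {
        StringBuilder sb = new StringBuilder("{\"station\":\"")
                .append(station).append("\",\"values\":[");
        for (int i = 0; i < values.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(values[i]);
        }
        return sb.append("]}").toString();
    }

    // Exposes renderJson at GET /data; call serve(8080) to try it locally.
    static HttpServer serve(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/data", exchange -> {
            byte[] body = renderJson("station-1", new double[]{21.5, 22.0})
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

A Data Consumer would then simply fetch `http://host:8080/data` and parse the returned JSON.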
Data Consumers
Data Consumers are (web) applications that use the JSON produced by the Web Services and manipulate it to produce output useful for the final user.

Also part of the architecture, but not pictured in the diagram, is the persistence.xml file, which contains the credentials and PostgreSQL configuration used by both the Reader and the Writer.

Development, Testing, and Production Environments


Information in this section is still provisional!

Figure 1.2 shows the various environments which compose the whole Open Data Hub development process.


Figure 1.2 Diagram showing the development, testing, and production environments in the Open Data Hub project.

The right-hand side shows the internal structure of development, while the left-hand side shows how external, potentially worldwide, collaborators can contribute to and interact with the Open Data Hub team.

Internally, two distinct and separate environments exist: testing and production. The former is updated daily, while the latter is updated only when the expected result (be it a new feature, a bug fix, or anything else) is ready to be published.

Both environments are updated via Continuous Integration using Jenkins, which monitors the git repositories and updates the environments accordingly.

External developers can push their own code to the git repositories (provided they have been granted permission to do so) and expect their work to be reviewed and tested by the Open Data Hub team.