Real Time Data Ingestion (DiP) – Apache Apex (co-dev opportunity)

Posted by Data Ingestion Platform

This work is based on the Xavient co-dev initiative, where your engineers can start working with our team to contribute and build your own platform for ingesting any kind of data in real time.

The previous blog, DiP (Storm Streaming), showed how to leverage the power of Apache Storm and Kafka for real-time data ingestion and visualization.

This blog is an extension of that work and focuses on using Apache Apex for real-time data ingestion.

All you need is a running Hadoop cluster with Apache Apex, Kafka, Hive, HBase and Zeppelin. You can deploy the application on top of your existing cluster and ingest any kind of structured or semi-structured data.

You can download the code base from GitHub.

 

Apache Apex vs Spark Streaming:

[Image: Apex vs Spark Streaming comparison table]

Apache Apex features:

A Directed Acyclic Graph (DAG) expresses the processing logic of an Apache Apex application. The DAG is made up of operators (its vertices) and streams (its edges): operators are the processing nodes of the graph, connected by streams of events called tuples.

Because Apache Apex is built on top of Apache YARN, it inherits built-in support for fault tolerance, scalability and operability. Apex is a true streaming engine in the sense that each incoming record is processed and sent to the next stage of processing as soon as it arrives; it also supports micro-batching in the style of Spark Streaming.
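
As a concrete illustration, here is a minimal sketch of how an Apex application expresses its DAG in code: operators are added as vertices and streams connect their ports as edges. It assumes the KafkaSinglePortStringInputOperator from the Apache Apex Malhar library and the hypothetical MessageClassifier operator sketched later in this post; the names and wiring are illustrative, not the actual DiP code.

```java
import org.apache.hadoop.conf.Configuration;

import com.datatorrent.api.DAG;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.annotation.ApplicationAnnotation;
import com.datatorrent.contrib.kafka.KafkaSinglePortStringInputOperator;

// Minimal sketch of an Apex application: operators are the vertices,
// streams are the edges of the DAG.
@ApplicationAnnotation(name = "DiPIngestion")
public class DiPApplication implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    // Vertex 1: Malhar's Kafka input operator reads messages from the
    // Kafka topic the front end publishes to.
    KafkaSinglePortStringInputOperator kafkaInput =
        dag.addOperator("kafkaInput", new KafkaSinglePortStringInputOperator());

    // Vertex 2: hypothetical classifier operator, sketched later in this post.
    MessageClassifier classifier =
        dag.addOperator("classifier", new MessageClassifier());

    // Edge: a stream of tuples connecting the two operators' ports.
    dag.addStream("rawMessages", kafkaInput.outputPort, classifier.input);
  }
}
```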

In the future, Apex will provide support for streaming machine learning algorithms.

Technology Stack:

The demo API has been tested with the following HDP 2.4 components:
– Apache Hadoop 2.7.1.2.4
– Apache Kafka 0.9.0.2.4
– Apache Apex 3.4.0
– Apache HBase 1.1.2.2.4
– Apache Hive 1.2.1.2.4
– Apache Zeppelin 0.6.0.2.4
– Apache Tomcat Server 8.0
– Apache Phoenix 4.4.0.2.4
– Apache Maven
– Java 1.7 or later

High Level Process Flow:

[Image: DiP high-level architecture]

  • Input to the application can be fed from a user interface that lets you either enter data manually or upload it in XML, JSON or CSV format for bulk processing
  • Ingested data is published to a Kafka broker, which streams it to the Kafka input operator
  • A custom Apex operator (the classifier operator) identifies the message type, extracts the message, and routes it to other operators for further processing or persistence, as sketched in the code after this list
  • Downstream operators receive the data and persist it to a storage layer such as NoSQL or HDFS
  • A Hive external table provides data storage through HDFS, and Phoenix provides a SQL interface to the HBase tables
  • Reporting and visualization of the data is done through Zeppelin
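
To make the classifier step concrete, here is a minimal sketch of such an operator, assuming string messages and simple prefix-based type detection. The class name (MessageClassifier), port names and routing rules are illustrative assumptions rather than the actual DiP implementation.

```java
import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.common.util.BaseOperator;

// Hypothetical classifier operator: inspects each incoming message, decides
// whether it looks like JSON, XML or CSV, and routes it to the matching
// output port for downstream persistence (HBase, HDFS/Hive, ...).
public class MessageClassifier extends BaseOperator
{
  public final transient DefaultOutputPort<String> json = new DefaultOutputPort<>();
  public final transient DefaultOutputPort<String> xml = new DefaultOutputPort<>();
  public final transient DefaultOutputPort<String> csv = new DefaultOutputPort<>();

  public final transient DefaultInputPort<String> input = new DefaultInputPort<String>()
  {
    @Override
    public void process(String message)
    {
      // process() is called once per tuple, as soon as the tuple arrives --
      // the per-record "true streaming" behavior described earlier.
      String trimmed = message.trim();
      if (trimmed.startsWith("{") || trimmed.startsWith("[")) {
        json.emit(trimmed);
      } else if (trimmed.startsWith("<")) {
        xml.emit(trimmed);
      } else {
        csv.emit(trimmed); // fall back to treating the message as CSV
      }
    }
  };
}
```

Each output port can then be wired to a different persistence operator in populateDAG, so all routing decisions stay in one place.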

 

DiP Front End:

[Image: DiP front-end user interface]

 

Application Package Archive – DAG:

The Directed Acyclic Graph (DAG) of the submitted application looks like this:

[Image: DAG of the submitted application package]

 

DiP Data Visualization:

Using Apache Zeppelin, data ingested into HBase can be viewed as reports and graphs simply by using the Phoenix interpreter, which provides a SQL-like interface to the HBase tables. These graphs can be embedded into other applications as iframes.
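
The same tables can also be queried outside Zeppelin through the Phoenix JDBC driver (the phoenix-client jar must be on the classpath). Below is a minimal Java sketch; the ZooKeeper quorum (zk-host), table name (DIP_MESSAGES) and columns are placeholder assumptions, not the actual DiP schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQueryExample
{
  public static void main(String[] args) throws Exception
  {
    // The Phoenix JDBC URL points at the HBase cluster's ZooKeeper quorum.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
         Statement stmt = conn.createStatement();
         // The same kind of SQL you would run in Zeppelin's Phoenix interpreter.
         ResultSet rs = stmt.executeQuery(
             "SELECT msg_type, COUNT(*) AS cnt FROM DIP_MESSAGES GROUP BY msg_type")) {
      while (rs.next()) {
        System.out.println(rs.getString("msg_type") + " -> " + rs.getLong("cnt"));
      }
    }
  }
}
```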

[Image: Zeppelin report]

[Image: Zeppelin report 2]

 

Technical team:

Neeraj Sabharwal

Mohiuddin Khan Inamdar

Puneet Singh

Gautam Marya

Sumit Chauhan
