
Oracle GoldenGate


Oracle GoldenGate is proprietary Oracle software for real-time data integration and replication that supports heterogeneous databases such as Oracle, DB2, SQL Server, Ingres, and MySQL; the source and target databases can even be of different types. It enables high-availability solutions, real-time data integration, transactional change data capture, data replication, and transformations, among other features.

GoldenGate Architecture

[Image: GoldenGate architecture diagram]

The following are the critical components of GoldenGate.
Extract
Extract is a mandatory GoldenGate process that runs on the source system and captures changes from the source database. Extract can be configured in one of the following ways:

Initial load: Initial load copies the data of one or more tables from the source database to the target database. It runs only once per table, loading the source data into the target table so that the two are in sync. After this, change synchronization should be configured to keep the source and target tables in sync.
Change synchronization: Because tables in the source database are continuously modified, the change-synchronization mode of Extract keeps the target tables in sync with the source, along with their DDL. It is typically configured after the source and target tables have been brought into sync through an initial load.

In initial load, Extract reads data directly from the tables of the source database, while in change-synchronization mode it reads from the transaction logs/archive logs/audit trail. A third-party capture module can also be used to extract transaction data from the source database.
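
As a minimal sketch, a change-synchronization Extract parameter file might look like the following (the group name, credentials, trail path, and table names are hypothetical):

```
-- ext1.prm (hypothetical Extract group)
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin        -- placeholder credentials
EXTTRAIL ./dirdat/lt                    -- write captured changes to a local trail
TABLE scott.emp;                        -- capture changes for these tables
TABLE scott.dept;
```

The group would then be created in GGSCI with something like ADD EXTRACT ext1, TRANLOG, BEGIN NOW, which tells Extract to capture from the transaction log starting now.
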
Manager
Manager is another mandatory GoldenGate process, and it is set up on both the source and target systems. Manager performs many functions, such as starting other GoldenGate processes, managing port numbers, managing trail files, and handling error management and reporting.
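
As an illustrative sketch, a Manager parameter file could be as simple as (the port range and retention settings are examples, not recommendations):

```
-- mgr.prm
PORT 7809                               -- default Manager listening port
DYNAMICPORTLIST 7810-7820               -- ports allocated to server/collector processes
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
```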

Data Pump
This is an optional GoldenGate (server) process on the source system that can be configured for better performance and manageability. If a data pump is not used, the Extract process performs this task itself. The data pump acts as a secondary Extract process: it reads records from the local trail files and delivers (pumps) them to the trail files on the target system through the Collector. Optionally, the data pump can also be configured to perform data filtering, transformation, and mapping before transferring the data to the target system.
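
A minimal data pump parameter file might look like this (host name, group name, and paths are hypothetical; PASSTHRU is appropriate only when no filtering or transformation is done):

```
-- pmp1.prm (hypothetical data pump group)
EXTRACT pmp1
PASSTHRU                                -- forward records unchanged, no mapping/filtering
RMTHOST targethost, MGRPORT 7809        -- target system and its Manager port
RMTTRAIL ./dirdat/rt                    -- remote trail written via the Collector
TABLE scott.*;
```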

Replicat
The Replicat process runs on the target system and is primarily responsible for applying the extracted data delivered to the target trail files. Replicat scans the remote trail files on the target system, generates the DDL and DML from them, and applies them to the target database.
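
As a sketch, a basic Replicat parameter file could look like this (group name, credentials, and mappings are hypothetical; ASSUMETARGETDEFS assumes identical source and target table structures):

```
-- rep1.prm (hypothetical Replicat group)
REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin        -- placeholder credentials
ASSUMETARGETDEFS                        -- source and target definitions are identical
DISCARDFILE ./dirrpt/rep1.dsc, APPEND   -- record rows that could not be applied
MAP scott.emp, TARGET scott.emp;
```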
There are two capture modes, corresponding to how Extract is configured on the source system:

CLASSIC: In classic capture mode, the Oracle GoldenGate Extract process captures data changes from the Oracle redo or archive log files on the source system or from shipped archive logs on a standby system.

[Image: Classic capture architecture]

INTEGRATED: In integrated capture mode, the Oracle GoldenGate Extract process interacts directly with a database log mining server to receive data changes in the form of logical change records (LCR).
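
For instance (group name and credentials hypothetical), an integrated Extract is registered with the database log mining server before being added:

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> REGISTER EXTRACT extint DATABASE
GGSCI> ADD EXTRACT extint, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/it, EXTRACT extint
```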

[Image: Integrated capture architecture]

Trail Files
Trail files are operating-system files that GoldenGate uses to store the records extracted from the source objects by the Extract process. Trail files can be created on the source system, the target system, and/or an intermediate system, depending on the GoldenGate replication setup. Trail files on the source system are called extract trails or local trails; those on the target system are called remote trails. By using trail files, GoldenGate minimizes the load on the source database: once the transaction/online/redo/archive logs have been read and loaded into trail files by the Extract process, all operations such as filtering, conversion, and mapping happen outside the source database. Trail files also make the extraction and replication processes independent of each other.
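
For example (group names, paths, and sizes hypothetical), local and remote trails are created in GGSCI like this:

```
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1, MEGABYTES 100   -- local trail on the source
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1, MEGABYTES 100   -- remote trail on the target
```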

Checkpoint
A checkpoint acts as the synchronization mechanism between the source and target systems in GoldenGate replication. Checkpoints store the current read and write positions of a GoldenGate process on disk to ensure data consistency and integrity. They help with data synchronization, avoid data redundancy, and enable recovery in the event of network or target failures. Checkpoints are maintained in checkpoint files; for Replicat, however, they can also be maintained in a user-defined table in the target database.
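
As a sketch (schema, table, and group names hypothetical), a Replicat checkpoint table is created and associated in GGSCI as follows:

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.ggs_checkpoint
```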

Ports
Ports 7809 (the default Manager port), 7819, and 7820 must be open on both the source and target systems.

CSN
In GoldenGate, the CSN (commit sequence number) is used to identify a transaction for the purpose of maintaining transactional consistency and data integrity. It is analogous to the SCN in Oracle: it uniquely identifies the point in time at which a transaction committed to the database.
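
As an illustration (group name and CSN value hypothetical), a Replicat can be started relative to a specific CSN to control which transactions are applied:

```
GGSCI> START REPLICAT rep1, ATCSN 6488359     -- begin with the transaction at this CSN
GGSCI> START REPLICAT rep1, AFTERCSN 6488359  -- skip this CSN, apply only later transactions
```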

How Lag was reduced from 40 min to near-real-time
We implemented unidirectional conditional replication using the GoldenGate classic method for a major client in the US. The replication business logic was based on filtering and data transformation, and several parameters, viz. INSERTMISSINGUPDATES, IGNOREDELETES, and REPERROR, were used for it. We faced some typical challenges during the GoldenGate implementation. Conditional replication was not working on version 11.2.1.0.10; after we upgraded GoldenGate to version 11.2.1.0.27, it started working.
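
A sketch of how these parameters combine in a Replicat parameter file (the table, column, and filter value are hypothetical, not the client's actual logic):

```
-- rep1.prm excerpt
INSERTMISSINGUPDATES                    -- convert an update on a missing row into an insert
IGNOREDELETES                           -- do not replicate delete operations
REPERROR (DEFAULT, DISCARD)             -- send rows that raise errors to the discard file
MAP scott.orders, TARGET scott.orders,
    WHERE (status = "OPEN");            -- conditional replication: only rows matching the filter
```
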
There were millions of records to be replicated every minute, and we observed a lag of 40-45 minutes on the target side during replication. We used the BATCHSQL parameter to increase the performance of Replicat: BATCHSQL causes Replicat to organize similar SQL statements into arrays and apply them at an accelerated rate. We gained some performance improvement from this, but the lag was still not within the accepted timeframe.
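
In the Replicat parameter file this is a single line; the option values below are illustrative, not tuned recommendations:

```
-- rep1.prm excerpt
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
```
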
We then explored the concept of multiple Replicats. The typical problem with multiple Replicats is that the data set has to be divided among them, which can cause consistency issues because transactions might not be applied in the same order in which they happened on the source; per the business logic, however, transactions had to be applied serially. We therefore used multiple Replicats together with the range function. Instead of grouping different tables into separate Replicats, we divided the data set into equal parts based on range: with three Replicats, the data of all replicated tables is divided into three equal parts, and each Replicat works on only one third of the data of all tables. This not only ensured that each transaction was applied in the same way as it happened on the source, but also reduced the lag drastically: we achieved near-real-time replication, taking less than 2 minutes to replicate 6+ million records.
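
A sketch of the range-based split using GoldenGate's @RANGE function (table names hypothetical); each of the three Replicat groups gets the same MAP with a different range number:

```
-- Replicat 1 of 3 (rep1.prm excerpt; the other two use @RANGE(2,3) and @RANGE(3,3))
MAP scott.orders, TARGET scott.orders, FILTER (@RANGE(1,3));
```

Because @RANGE assigns rows to a range deterministically, all operations on a given row always land in the same Replicat group.
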
Another issue that surfaced was that whenever data was inserted into a new partition of a table on the source side, the operation failed on the target side and caused GoldenGate to abend. Although interval partitioning was enabled on both sides, the issue persisted. We enabled DDL replication to fix this. As per the requirement, DROP and TRUNCATE operations were excluded from DDL replication to maintain historical data.
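
A sketch of the corresponding Extract parameter, assuming the DROP and TRUNCATE operation types can be excluded with OPTYPE clauses (the exact scope used for the client is not shown in this post):

```
-- ext1.prm excerpt
DDL INCLUDE MAPPED, EXCLUDE OPTYPE DROP, EXCLUDE OPTYPE TRUNCATE
```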

Credits:

Xavient Information Systems

Technical team:

Brijendra Kumar

Anmol Gupta

References:

https://docs.oracle.com/cd/E35209_01/doc.1121/e29397.pdf

