DisCopy


Sunday, 5 April 2026

Decoding the Databases: An Architectural Comparison of Oracle, Sybase ASE, and PostgreSQL!

An Architectural Showdown of Oracle, Postgres, and Sybase ASE:



Under the Hood: How Sybase ASE, Oracle, and PostgreSQL Handle Memory and Processes?

The diagram above maps the memory and process architectures of Oracle, PostgreSQL, and Sybase ASE. By laying out their core components side by side, we can easily grasp how each engine handles memory allocation, background processes, and physical storage.

1. Core Engine Architecture

Before mapping specific components, it is critical to understand how each engine handles client concurrency at the OS level:

  • PostgreSQL: Uses a Process-per-Connection model. Every client connection forks a heavy, dedicated OS process (postgres backend).
  • Oracle: Uses a Multi-Process model. In the default Dedicated Server mode, each client connection gets its own server process; the alternative Shared Server architecture multiplexes many connections across a pool of shared server processes via dispatchers.
  • Sybase ASE: Uses a Single-Process, Multi-Threaded model. The entire database engine (dataserver) typically runs as a single OS process, utilizing an internal thread scheduler (or Thread Pools in modern ASE) to manage thousands of connections with incredibly low OS-level overhead.
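A quick way to see PostgreSQL's process-per-connection model in practice is to list the client backends, each of which is a separate OS process (a minimal sketch against the standard pg_stat_activity view):

```sql
-- Each row with backend_type = 'client backend' corresponds to a
-- dedicated OS process forked by the postmaster for one session.
SELECT pid, backend_type, state, application_name
FROM pg_stat_activity
WHERE backend_type = 'client backend';
```

In Sybase ASE, by contrast, sp_who lists sessions that all live inside the single dataserver process.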

2. Memory Architecture

  • shared_buffers (PostgreSQL)
    • Function: The primary shared caching area used to hold relation pages (data blocks) fetched from persistent storage.
    • Oracle Equivalent: Database Buffer Cache (SGA).
    • Sybase Equivalent: Data Caches (Specifically, the Default Data Cache and user-defined Named Caches).
  • wal_buffers (PostgreSQL)
    • Function: A transient memory area that stages Write-Ahead Log (WAL) records before they are flushed to disk, guaranteeing ACID durability.
    • Oracle Equivalent: Redo Log Buffer (SGA).
    • Sybase Equivalent: User Log Cache (ULC). Sybase caches log records in the ULC for each user session before flushing them to the central syslogs table.
  • work_mem / maintenance_work_mem (PostgreSQL)
    • Function: Session-private memory dynamically allocated per backend process for transient query operations (e.g., hash joins, external sorts) and maintenance tasks.
    • Oracle Equivalent: Program Global Area (PGA) — specifically the SQL Work Areas.
    • Sybase Equivalent: Procedure Cache (for query plan compilation/execution) and Session Memory. (Note: Unlike Postgres which does heavy sorting in RAM via work_mem, Sybase aggressively leverages tempdb and temp caches for heavy sorting operations).
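These PostgreSQL memory parameters can be inspected directly from the catalog (a sketch using the built-in pg_settings view):

```sql
-- Current values of the memory parameters mapped above.
SELECT name, setting, unit, context
FROM pg_settings
WHERE name IN ('shared_buffers', 'wal_buffers',
               'work_mem', 'maintenance_work_mem');
```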

3. Background Process Architecture

  • Postmaster (PostgreSQL)
    • Function: The supervisory process that initializes shared memory, listens for incoming client connections, and forks dedicated backend processes for each session.
    • Oracle Equivalent: Oracle Net Listener combined with PMON instance initialization functions.
    • Sybase Equivalent: The Network Listener Thread operating internally within the core dataserver process.
  • WAL Writer / walwriter (PostgreSQL)
    • Function: Asynchronously flushes WAL buffer contents to physical WAL segments on disk to ensure transaction durability.
    • Oracle Equivalent: Log Writer (LGWR).
    • Sybase Equivalent: Log Writer Thread / ULC Flush mechanism (which flushes the User Log Cache to the transaction log).
  • Background Writer / bgwriter (PostgreSQL)
    • Function: Asynchronously writes modified (dirty) shared buffers to persistent storage. This "trickle" write behavior optimizes the I/O subsystem by minimizing the volume of block writes required during synchronous checkpoints.
    • Oracle Equivalent: Database Writer (DBWn).
    • Sybase Equivalent: The Housekeeper Wash Task. (Sybase brilliantly uses idle CPU cycles to run the Housekeeper thread, which "washes" dirty buffers to disk during quiet periods).
  • Checkpointer / checkpointer (PostgreSQL)
    • Function: Orchestrates the checkpoint operation by ensuring all dirty pages in shared_buffers are flushed to the data files, advancing the WAL sequence, and establishing a bounded crash recovery point.
    • Oracle Equivalent: Checkpoint Process (CKPT).
    • Sybase Equivalent: Checkpoint Task (chkpt).
  • Autovacuum Launcher (PostgreSQL)
    • Function: A supervisory daemon that forks worker processes to execute VACUUM commands. Because PostgreSQL's MVCC implementation writes row versions inline, this process is required to prune dead tuples and prevent transaction ID wraparound.
    • Oracle Equivalent: Conceptually handled by Undo Segments and SMON.
  • Sybase Equivalent: Housekeeper Garbage Collector (for row-level lock dead-row cleanup) and REORG utilities. (Note: Because Sybase traditionally updates records in-place rather than writing new row versions like PostgreSQL, "vacuuming" is an MVCC-specific concept. Sybase DBAs rely on REORG REBUILD or REORG COMPACT to reclaim fragmented page space, and UPDATE STATISTICS to refresh optimizer statistics).
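On the PostgreSQL side, most of the background processes mapped above are directly visible by backend type (a sketch; the backend_type column is available in PostgreSQL 10+):

```sql
-- List the supervisory and I/O background processes.
SELECT pid, backend_type
FROM pg_stat_activity
WHERE backend_type IN ('checkpointer', 'background writer',
                       'walwriter', 'autovacuum launcher');
```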

Re-platforming these Databases: The Transformation of Storage and Code Objects

For Database Architects transitioning from legacy monolithic engines, the PostgreSQL storage model represents a massive paradigm shift. Historically, Oracle and Sybase were designed in an era when operating systems and file systems were unreliable or slow, so they built highly complex, proprietary storage management layers to bypass the OS entirely. PostgreSQL, conversely, takes a modern, lean approach: it implicitly trusts and delegates to the OS file system. Because PostgreSQL relies on the OS for I/O, and hyperscale cloud platforms cap IOPS at the service level anyway, traditional tablespace micro-management is largely obsolete now.

Key points to consider for heterogeneous DB-engine migrations: not just the code but also the data structures differ among these engines, with Oracle generally considered the superset. (All three RDBMSs are ANSI SQL compliant; PostgreSQL tracks the most recent standard, SQL:2023, Oracle conforms to SQL:2016, and Sybase ASE's baseline compliance is generally tied to the older ANSI SQL-92 and entry-level SQL:1999 standards.)

Unlike Oracle, neither Sybase ASE nor PostgreSQL natively supports PL/SQL Packages or certain proprietary stateful features. Procedural logic in Sybase is strictly encapsulated within T-SQL Stored Procedures, whereas PostgreSQL utilizes a combination of User-Defined Functions (UDFs) and schema-based namespacing. Consequently, migrating from Oracle to either platform introduces high architectural friction and significant code refactoring overhead. Conversely, migrating from Microsoft SQL Server or Sybase ASE to PostgreSQL offers a much smoother transition, as their procedural paradigms and flatter object hierarchies map much more cleanly to PostgreSQL.

1)    Storage Objects Migration Compatibility:

Fortunately, the compatibility among these RDBMSs for storage objects is very high (> 95%). Data types and properties can be optimized based on the kind of data being stored in each column. PostgreSQL also provides additional data types, such as the Geometric Types point, line, lseg (line segment), box, polygon, and circle, and the Network Address Types inet (IPv4/IPv6 hosts), cidr (network blocks), and macaddr.
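As a small illustration of these PostgreSQL-specific types, here is a hypothetical table (the table and column names are invented for the example):

```sql
CREATE TABLE asset_inventory (
    id        SERIAL PRIMARY KEY,
    location  point,    -- geometric: (x, y)
    coverage  circle,   -- geometric: <(x, y), radius>
    host_addr inet,     -- IPv4/IPv6 host address
    subnet    cidr,     -- network block
    nic       macaddr   -- MAC address
);

INSERT INTO asset_inventory (location, coverage, host_addr, subnet, nic)
VALUES (point '(1,2)', circle '<(0,0),5>',
        '192.168.1.10', '192.168.1.0/24', '08:00:2b:01:02:03');
```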

The Oracle Storage Model: The Micro-Manager

Oracle's storage architecture abstracts physical storage through a deep, multi-layered hierarchy and is the most complex of the three.

  • The Hierarchy: Tablespace --> Datafiles --> Segments --> Extents --> Oracle Blocks.
  • Complexity: Extremely high. We need to pre-allocate Datafiles. We also need to manage Extent sizing (Uniform vs. Autoallocate). Oracle provides ASM (Automatic Storage Management) to manage raw disks, and bypass the OS entirely.

The Sybase (ASE) Storage Model: The Device Mapper

Sybase relies on manual mapping of logical objects to physical storage components.

  • The Hierarchy: Disk Devices (Raw Partitions or OS Files) --> Databases --> Segments --> Extents (8 Pages) --> Pages.
  • Complexity: Moderate to High. You must run DISK INIT to create physical devices, allocate databases to those devices, and use Segments to map specific tables or indexes to specific devices for I/O separation.

2)    Code Objects Migration Complexities:

Migrating code objects from Oracle to PostgreSQL or from Sybase ASE to PostgreSQL is complex, with compatibility ratios of roughly ~65% and ~75% respectively. Oracle is the harder source because of its PL/SQL Packages and session-state management; Sybase relies on standalone Stored Procedures, while PostgreSQL relies heavily on Functions (grouped by Schemas), so migrating code objects from Sybase is relatively easy, whereas Oracle migrations require heavy refactoring. Because MS SQL Server and Sybase share a flatter T-SQL lineage without package structures, migrating from those engines to PostgreSQL yields significantly higher compatibility and lower migration effort.

  • Oracle to Sybase/PostgreSQL (High Friction): Oracle relies heavily on PL/SQL Packages for code modularity and state management. Because Sybase and PostgreSQL lack a direct 'Package' equivalent (relying instead on independent Stored Procedures or Schema-grouped Functions), transitioning Oracle codebases requires severe refactoring and architectural redesign.
  • SQL Server & Sybase to PostgreSQL (Lower Friction): MS SQL Server and Sybase ASE share a T-SQL foundation and a flat procedural architecture. Transitioning from these platforms to PostgreSQL is highly compatible, as their standalone procedures map cleanly to PostgreSQL's native function and procedure models.

While PostgreSQL historically supported only Functions (which had to return a value), versions 11 and later fully support Stored Procedures via the CREATE PROCEDURE command. When migrating from Oracle, DBAs usually replicate Oracle "Packages" in PostgreSQL by creating a dedicated SCHEMA and placing all the related functions/procedures inside that schema to mimic the logical grouping of Oracle Packages!
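As a sketch of that pattern (the schema, routine, and table names below are hypothetical, and the employees table is assumed to exist):

```sql
-- Emulating an Oracle package "payroll_pkg" with a schema.
CREATE SCHEMA payroll_pkg;

CREATE FUNCTION payroll_pkg.calc_bonus(salary numeric)
RETURNS numeric
LANGUAGE sql
AS $$ SELECT salary * 0.10 $$;

CREATE PROCEDURE payroll_pkg.apply_raise(emp_id int, pct numeric)
LANGUAGE plpgsql
AS $$
BEGIN
    UPDATE employees
       SET salary = salary * (1 + pct / 100)
     WHERE id = emp_id;
END;
$$;

-- Callers qualify routines with the schema, much like PACKAGE.PROCEDURE in Oracle:
SELECT payroll_pkg.calc_bonus(50000);
CALL payroll_pkg.apply_raise(42, 5);
```

One thing Oracle packages offer that schemas do not is session-level package state (package variables); that part still needs refactoring, for example into temporary tables or custom configuration parameters.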


Thursday, 2 April 2026

PostgreSQL Logical Replication: Configuration and Recovery Protocols!

Deploying and Rescuing PostgreSQL Logical Replication: Handling Dropped Publications & Subscriptions and Fixing Replication Errors!

I’ve configured replication/high-availability on every RDBMS from Oracle (GoldenGate) to MS SQL and Sybase to MySQL. After all that trauma, setting up PostgreSQL replication is a walk in the park at sunrise. 😊

First things First:

Anytime logical replication breaks for any reason, we need to immediately check the replication slots on the primary. If replication is broken and we cannot fix it quickly, we must drop the slot manually to save the primary database from an out-of-disk outage, because an inactive slot forces the primary to retain WAL files indefinitely.

SQL to find orphaned slots (on the Primary Node):

SELECT slot_name, plugin, active, restart_lsn,
    pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS wal_lag_size
FROM pg_replication_slots;

If wal_lag_size is growing into 100s of gigabytes and active is false, drop the slot (SELECT pg_drop_replication_slot('slot_name');) to save the primary, and accept that we will have to do a full data re-sync (copy_data = true) later.

Quick glance at setting up logical replication:


1.     The Pre-requisite (Database Configuration requirements for logical replication)

Set the wal_level to logical in the Primary/Source Database.

On the Publisher (Primary Node), check the postgresql.conf file or run SHOW wal_level; to verify whether wal_level is set to logical (by default it is replica). Changing wal_level requires a server restart.
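For example (ALTER SYSTEM persists the change to postgresql.auto.conf; the server still has to be restarted for wal_level to take effect):

```sql
SHOW wal_level;                          -- 'replica' by default
ALTER SYSTEM SET wal_level = 'logical';  -- then restart the server
```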

2.     The Publisher (Primary Node)

Log into the source database where Primary/active data lives.

a.     Create the Table: Create a test table, or skip this step to set up replication for an existing table.

Sample SQL:

CREATE TABLE TEST
(
    id SERIAL PRIMARY KEY,
    description VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

(Important: Logical replication requires a Primary Key or a Replica Identity on the table).
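If the table had no primary key, a replica identity must be declared explicitly before UPDATE/DELETE changes can replicate (the index name below is hypothetical):

```sql
-- Option A: use an existing unique, non-partial index on NOT NULL columns.
ALTER TABLE test REPLICA IDENTITY USING INDEX test_id_uidx;

-- Option B: log the entire old row for every change (more WAL volume).
ALTER TABLE test REPLICA IDENTITY FULL;
```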

 

b.     Create the Publication: This notifies PostgreSQL to start tracking changes for this table.

Sample SQL:

CREATE PUBLICATION test_pub FOR TABLE TEST;

 

3.     The Subscriber (Secondary/Target Database)

Log into the target database where the data is to be replicated.

a.     Create the Table in the Secondary: Logical replication does not create the target table or replicate schema changes. We must create the target table with the exact source table DDL before subscribing.

Sample SQL:

CREATE TABLE TEST
(
    id SERIAL PRIMARY KEY,
    description VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

 

b.     Create the Subscription: The Create subscription command connects to the Primary/Publisher, materializes the initial data (Full Load), and establishes the continuous stream (CDC). We need to replace the connection string with the actual source database conn-info details.

Sample SQL:

CREATE SUBSCRIPTION test_sub
CONNECTION 'host=XXX.XX.XX.XX port=5432 dbname=source_db user=postgres password=XXXXXX'
PUBLICATION test_pub;

 

4.     Test, Validate and Verify the Replication: Let’s insert and perform some DMLs on the Primary/Source TEST table.

a.     Run a couple of Insert statements on Publisher:

Sample SQL:

INSERT INTO TEST (description) VALUES ('Hello from Kasi V Dogga!');
INSERT INTO TEST (description) VALUES ('Hope the Logical replication is active.');

 

b.     Check the data on the Subscriber:

Sample SQL:

SELECT * FROM TEST;

If the two tuples appear on the target database, that confirms we have successfully established logical replication for the table.
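Beyond spot-checking rows, the replication state can be verified from the catalogs on both sides (a sketch using the standard monitoring views):

```sql
-- On the Subscriber: the apply worker and the LSN it has received.
SELECT subname, received_lsn, latest_end_lsn
FROM pg_stat_subscription;

-- On the Publisher: the slot backing this subscription should be active.
SELECT slot_name, active, confirmed_flush_lsn
FROM pg_replication_slots;
```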

 

Fixing the replication issues:

When logical replication is disrupted, such as by an accidental drop of a publication or subscription, the two nodes drift out of sync and only the primary node can be relied upon for both OLTP (DML operations) and OLAP (read/query) workloads. Dropping a publication or subscription is a destructive administrative action, not just a "pause."

 

Here are the DBA's actions/options for fixing these scenarios. 

Scenario A: The Publication is Dropped (On the Primary Node)

If someone executes DROP PUBLICATION on the source database, the subscriber will immediately start throwing errors in the logs (e.g., ERROR: publication "my_table_pub" does not exist), and replication will halt. Unlike dropping a subscription, dropping a publication does not automatically drop the replication slot; the publisher still thinks a subscriber is out there, so it starts aggressively hoarding Write-Ahead Logs (WAL) on the primary node. If we do not fix this quickly, the primary server's disk will fill up to 100% and the database will crash.

The Fix: If we identify this issue quickly and the replication slot is still intact, we can seamlessly resume replication without having to recopy all GB/TB of data.

  1. Recreate the Publication (On Primary Node): We need to recreate the publication exactly as it was created before.

Sample SQL:

CREATE PUBLICATION test_pub FOR TABLE test;

  2. Refresh the Subscription (On Secondary/Subscriber Node): Refresh the subscriber to re-establish replication against the newly created publication and resume pulling from the exact LSN (Log Sequence Number) where it left off.

Sample SQL:

ALTER SUBSCRIPTION test_sub REFRESH PUBLICATION;

This will make sure the subscriber will reconnect to the existing replication slot and instantly drain the hoarded WAL files. Replication is restored and data will be in sync.

 

Scenario B: The Subscription is Dropped (On the Secondary/Subscriber Node)

As mentioned earlier, dropping the subscription is a destructive action that usually drops the replication slot on the primary and discards the WAL history. So, we cannot simply resume/refresh the replication.

The Fix:

  1. Verify the Slot is dropped (On Primary/Publisher Node): Ensure the slot was actually dropped to prevent disk bloat.

Sample SQL:

SELECT slot_name, active FROM pg_replication_slots;

-- If the old slot is still there and active=f, drop it:

-- SELECT pg_drop_replication_slot('slot_name');

  2. Re-establish Replication (On Secondary/Subscriber Node): Truncate the target table if it still holds stale data, then recreate the subscription and let the initial sync (copy_data = true) re-materialize the entire table.

Sample SQL:

CREATE SUBSCRIPTION test_sub
CONNECTION 'CONNINFO'
PUBLICATION test_pub
WITH (copy_data = true);

 

Scenario C: Transaction Failed in Subscriber/Secondary Node

A duplicate key error during PostgreSQL replication typically occurs in logical replication when the subscriber tries to apply an INSERT/UPDATE that violates a PRIMARY KEY or UNIQUE constraint.

Error:

ERROR: duplicate key value violates unique constraint "pk_test"
DETAIL: Key (id)=(521) already exists. 

Common causes:

  • Data already exists on subscriber but not in sync with publisher
  • Manual changes done on subscriber (not recommended)
  • Replication restarted after inconsistency
  • Dropped/recreated publication or subscription
  • Missing initial data sync 

 

Fix Options (Based on the failure scenario and data inconsistency)

Option 1: Delete conflicting row on Subscriber (Quick Fix)

               DELETE FROM test_table WHERE id = 521;

  Then restart replication:

               ALTER SUBSCRIPTION test_sub ENABLE;

 Option 2: Skip the conflicting transaction (Supported in PostgreSQL 15+)

               ALTER SUBSCRIPTION test_sub SKIP (lsn = '0/21D3F508');  -- use the finish LSN reported in the subscriber's error log

 Option 3: Truncate and Resync Table

ALTER SUBSCRIPTION test_sub DISABLE;
TRUNCATE TABLE test_table;
ALTER SUBSCRIPTION test_sub ENABLE;

(Note: re-enabling only resumes the change stream from where it stopped; it does not re-copy rows that are already missing. To force a full re-copy of the table, drop and re-add it in the publication, then refresh the subscription with copy_data = true.)
 

Option 4: Refresh Subscription (Best Option) – this option re-syncs metadata + data

               ALTER SUBSCRIPTION test_sub REFRESH PUBLICATION;

 Option 5: Recreate Subscription (Full Reset)

DROP SUBSCRIPTION test_sub;
CREATE SUBSCRIPTION test_sub
CONNECTION '...'
PUBLICATION test_pub
WITH (copy_data = true);
 

Scenario             | Recommended Action
Few duplicate errors | Delete conflicting rows and restart the replication
Frequent errors      | Disable the subscription and re-enable with a re-sync of the affected tables
Major inconsistency  | Drop and recreate the subscription with copy_data = true

Tuesday, 31 March 2026

PostgreSQL hot_standby_feedback impact on Primary node!

Have you ever encountered a scenario where a long-running transaction on a PostgreSQL standby node caused performance degradation or table bloat on the primary node?

It seems strange: a secondary node is supposed to be read-only, so querying it shouldn't impact the primary at all, right? It is a common misconception that asynchronous replication protects the primary from everything happening on the secondary. While async mode protects your primary's write speed, it does not protect your primary's disk space and vacuum processes.

However, in PostgreSQL's architecture, a long-running transaction (even a plain SELECT) on a secondary node can severely impact or even bring down the primary. This happens primarily through three mechanisms designed to keep the databases synchronized and preserve integrity.


Here is exactly why this strange thing happens:

The hot_standby_feedback Trap (Bloating on Primary Node):

If the primary updates a row (which, under MVCC, writes a new row version and leaves the old one dead) or deletes a row, VACUUM will eventually clean up the dead version. But what if a query on the secondary is currently reading that exact row version? If the primary vacuums it away, the secondary query is canceled with a recovery-conflict error (e.g., "canceling statement due to conflict with recovery"). To prevent such cancellations, PostgreSQL provides hot_standby_feedback.

  • What happens if we enable hot_standby_feedback: The secondary continually reports back to the primary: "I have a transaction open with xmin XXX; do not vacuum any row versions that transaction might still need."
  • What is the impact on the primary: If a long report runs on the secondary (typically 1-2 hours), the primary is forbidden from cleaning up the dead tuples those snapshots pin, across the entire database, for those 1-2 hours. Primary tables and indexes accumulate huge bloat, disk I/O spikes, and overall performance degrades heavily.
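This pinning effect is directly observable on the primary: the standby's reported horizon shows up as backend_xmin in pg_stat_replication (non-null only when hot_standby_feedback is on):

```sql
-- On the Primary: which xmin each standby is asking us to preserve.
SELECT application_name, state, backend_xmin
FROM pg_stat_replication;
```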

Replication Slots & Disk Space Exhaustion:

If the secondary node is running a massive transaction, it is consuming heavy CPU, Memory, and Disk I/O.

  • What happens: The secondary becomes so starved for resources that the startup process (which replays the Write-Ahead Logs from the primary) slows down to a grinding halt or stops entirely.
  • The Impact on Primary: If you are using physical replication slots, the primary is forced to hold onto all WAL files until the secondary confirms it has replayed them. If the secondary is bogged down by a large query and stops replaying, WAL files pile up in the primary's pg_wal directory until it runs out of disk space. When the primary hits 100% disk usage, the database crashes.

Synchronous Replication Blocking (synchronous_commit):

This is the most disruptive case. If we have configured our cluster for high availability using synchronous replication (where synchronous_commit is set to on, remote_write, or remote_apply), the primary must wait for the secondary to acknowledge transactions.

  • What happens in the Secondary: A huge query on the secondary hogs the disk I/O or locks resources.
  • The Impact on Primary: Write transactions (INSERT, UPDATE, DELETE) on the primary will hang. The primary might successfully execute the write locally, but the COMMIT will sit in a waiting state until the sluggish secondary processes the WAL and acknowledges it back to Primary. This causes application connections to pile up on the primary, eventually exhausting max_connections and the database crashes.

How to Mitigate This?

In an OLTP production environment, the ultimate goal is to make sure Primary database is always up, running and available. To protect the primary from secondary-node abuse, we have to establish the cited guardrails:

  • Tune max_standby_streaming_delay: Set this to a reasonable limit (e.g., 30s or 1min). If a query on the replica blocks replication for longer than this, PostgreSQL forcefully cancels the conflicting query on the secondary node. The user's query fails, but the primary stays alive.
  • Use statement_timeout on the Replica: Enforce a strict time limit for read-only queries on the secondary so long running queries/reports are killed automatically.
  • Turn off hot_standby_feedback: If we can tolerate occasional query cancellations on the replica (recovery-conflict errors), turning hot_standby_feedback off ensures the primary's VACUUM is never blocked.
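The guardrails above can be sketched as follows (the values and the reporting_user role are illustrative; all three parameters can be changed without a server restart):

```sql
-- On the Replica:
ALTER SYSTEM SET max_standby_streaming_delay = '30s';
ALTER SYSTEM SET hot_standby_feedback = off;
ALTER ROLE reporting_user SET statement_timeout = '5min';
SELECT pg_reload_conf();
```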