DisCopy


Wednesday, 9 October 2024

PostgreSQL Performance Tuning and Optimization

Availability > Functionality > Performance! 

Performance tuning is a continuous process in any database environment. Many organizations spend more time and resources on tuning the database than on the original development because of data growth and usage on top of a poor database design; 80% of performance issues can be avoided with a proper database design. In addition, developing SQL code is very different from developing front-end application code and middle-tier components, so SQL code written by front-end or middle-tier experts can often be optimized further, depending on the transparency of the code. Finally, the requirement and necessity for optimizing databases grows with the volume of operations: the application that worked great with 5-10 users and 100-200 GB no longer lives up to expectations at terabytes of data and thousands of users (scalability). Yes, as we are all aware, scalability is the root cause.

A common place where most people like to start tuning databases is the server/database configuration. Generally, developers who are unhappy with application performance will demand adding resources like more CPU and memory to the server and keep reconfiguring the dynamic configuration parameters. But extending the memory, adding more CPUs, or repeatedly changing configuration parameters will only help optimize performance up to a certain point. In other words, if you keep adding memory and do not tune the application or the SQL code in any other way, you will reach the point where additional memory or reconfiguration produces marginal or no performance improvement.

It is also important to realize that improvement in one area often means compromising others. If you can optimize 90% of the critical queries by slowing down the other 10%, it might be well worth your time. Sometimes you can improve the response time of online transactions at the expense of reduced concurrency or throughput. Therefore it is important to determine the application's performance requirements. For example, if we improve the performance of SELECTs (perhaps by adding additional indexes), we obviously compromise DML statements/transactions.

It is common to spend more time identifying the cause of a problem than actually troubleshooting and fixing it. If all other areas of the application are working properly and you are sure the problem is in the database code, then you need to investigate your code modules and decide which one is causing problems. Many times an improvement in just one stored procedure or trigger fixes most of the issues, as its effect cascades by reducing locking/blocking.

The optimizer of the PostgreSQL database engine takes a query and finds the best way to execute it. The optimization is done based on the statistics and the available indexes for a table or view, and the chosen plan stays in effect until the statistics are updated or the query changes. So vacuuming (and analyzing) frequently and correctly is crucial for the optimizer to choose an optimal query plan.

Carefully considered indexes, built on top of a good database design, are the foundation of a high-performance PostgreSQL configuration. However, adding indexes without proper analysis can reduce the overall performance of your system. Insert, update, and delete operations can take longer when a large number of indexes need to be updated.

 

What to observe - high level (we have many more ... :). Here are the top 5:

  1. Server error logs
  2. OS error logs
  3. Configuration files
  4. Workload pattern
  5. Peak-hour and off-peak-hour processes

 

What to set/configure - high level (we have many ... :). Here are the top 5:

1.   shared_buffers

The most important parameter: it sets the amount of memory dedicated to caching frequently accessed data blocks (its effectiveness is reflected in the cache hit ratio). This parameter allows PostgreSQL to read data from memory instead of the storage disk, which can speed up query execution significantly.

Recommended value: 15%-25% of total RAM
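To see how well the cache is being used, here is a quick sketch of the cache hit ratio computed from the standard pg_stat_database view:

SELECT sum(blks_hit) * 100.0 / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;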

2.   work_mem

Defines the amount of memory used for internal sort operations and hash tables before writing to disk. Insufficient memory allocation can lead to slower query performance, as sort operations are used for ORDER BY, DISTINCT, and merge join operations, while hash tables are used in hash joins and hash-based aggregation.

Recommended value: 25% of total RAM divided by max_connections

3.   effective_cache_size

Informs the optimizer about how much of the data is likely cached by the kernel/OS. With higher values, index scans are more likely to be chosen; with low values, sequential scans are favored. Setting this value too low can cause the query planner to avoid using certain indexes.

Recommended value: 50% of total RAM

4.   wal_buffers

The wal_buffers setting controls the amount of shared memory used for Write-Ahead Log (WAL) data that has not yet been written to disk. The default setting (-1) sizes it automatically, up to 16 MB, but a higher value can improve performance on a busy server.

Recommended value: 16MB or higher

5.   effective_io_concurrency

Defines the number of concurrent read and write operations that can be handled by the underlying storage disk. The allowed range is 1 to 1000, or zero to disable asynchronous I/O requests.

Recommended value: HDD - 2, SSD - 200, SAN - 300
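As an illustration, here is a minimal, hedged sketch of applying these recommendations on a hypothetical server with 16 GB of RAM and max_connections = 100 (ALTER SYSTEM persists the values to postgresql.auto.conf; the values are illustrative, not prescriptive):

-- Hypothetical 16 GB RAM server with max_connections = 100:
ALTER SYSTEM SET shared_buffers = '4GB';          -- ~25% of total RAM
ALTER SYSTEM SET work_mem = '40MB';               -- 25% of RAM / max_connections
ALTER SYSTEM SET effective_cache_size = '8GB';    -- ~50% of total RAM
ALTER SYSTEM SET wal_buffers = '16MB';
ALTER SYSTEM SET effective_io_concurrency = 200;  -- SSD storage
SELECT pg_reload_conf();  -- note: shared_buffers and wal_buffers still require a restart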

 

Query tuning, object optimization, maintenance windows for VACUUM and ANALYZE, monitoring and reacting to blocked processes/locks, and analyzing and fine-tuning long-running and slow queries (including index optimization, access paths, and the right join types) will be discussed in the next blog post.

-HTH :)

Monday, 7 October 2024

What to choose between PostgreSQL and MySQL

What to choose between PostgreSQL and MySQL (Best Opensource Cloud RDBMS offerings)



What to choose? If two products are nearly identical in quality and functionality, we will probably choose the free one, as the primary factor would be the cost; free is very hard to beat!  😇
But what if both are free :)? Then we have to consider all parameters and facilities in terms of availability, performance, functionality, support, updates, and additional features of both systems.

When we need to migrate and modernize an on-prem legacy brown-field database environment, the key factors to consider are the compatibility of the source and proposed target database engines and the code conversion complexity. Most RDBMS are ~95% compatible for storage objects (tables, views and indexes), but code object compatibility varies anywhere from 60% to 80%, and the complexity depends on how much customization or procedural code is stored in stored procedures, functions and triggers. There are multiple schema conversion tools, including AWS SCT, Azure DMA, GCP DMS, ispirer, migVisor, StarM, DMAP and the most popular and free Ora2PG, to assess the source database system and choose the target database engine.

This blog post is more about choosing between PostgreSQL and MySQL for a green-field database system to be developed/designed with no tag of license costs :)

We are living at a new scale, i.e. hyperscale, and in this cloud world the open-source databases PostgreSQL and MySQL stand out as the two most popular choices; both are supported by DBaaS systems at every hyperscaler today, but PostgreSQL has an edge over MySQL. Take Amazon Aurora, the premier PaaS/DBaaS offering from Amazon: it supports both of these databases, whereas GCP AlloyDB is a PostgreSQL-only variant. Although both databases offer robust features and share many similarities, deep inside the capabilities they possess noteworthy differences in features and functionality, and each has an edge over the other for specific workloads.

Both PostgreSQL and MySQL are widely used open-source databases that power and suit a variety of real-time applications. MySQL is recognized as the world's most popular RDBMS; it was created by a Swedish company, MySQL AB, founded by Swedes David Axmark and Allan Larsson and Finn Michael "Monty" Widenius, with original development by Widenius and Axmark beginning in 1994. On the other side, PostgreSQL is often described as the world's most advanced object-relational database management system (ORDBMS), and the implementation of POSTGRES began in 1986, almost 8 years before MySQL.

 

Let's review and compare the key factors and functionalities of these TWO most popular open-source RDBMS services (key differences: PostgreSQL vs MySQL).

MySQL and PostgreSQL are two of the most widely used and widely offered open-source relational database management systems. MySQL is known for its speed and ease of use, making it ideal for web applications and read-heavy workloads. PostgreSQL offers advanced features, new data types and extensions, making it suitable for complex queries and transactions. Below are some of the key differences:

•  ACID compliance (Winner: PostgreSQL)

Atomicity, consistency, isolation, and durability (ACID) are database properties that ensure a database remains in a consistent state even after system failures.

MySQL offers ACID compliance only when you use it with an ACID-capable storage engine such as InnoDB (or NDB Cluster). PostgreSQL, by the way, is fully ACID compliant in all variants.

•  Concurrency control (Winner: PostgreSQL)

Multiversion concurrency control (MVCC) is an advanced database feature that keeps multiple versions of a table's records so the data can be safely read and updated in parallel. With MVCC, multiple users can read (SELECT) and modify (DML) the same data simultaneously without locking the table, while preserving data integrity.

In MySQL, MVCC support varies by storage engine: it is fully supported with the InnoDB storage engine, but not with MyISAM. On the other side, PostgreSQL supports MVCC in all variants/configurations.

•  Indexes (Winner: PostgreSQL)

Indexes are database objects that can be created for a table to give direct access to specific data rows and thus improve performance. Indexes store the values of the key(s) that were named when the index was created, plus logical pointers to the data.

MySQL supports B-tree and R-tree indexing, whereas PostgreSQL supports multiple index types, including B-tree, hash, GiST, SP-GiST, GIN and BRIN, as well as expression indexes and partial indexes, to fine-tune your database performance.
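For instance, here is a minimal sketch (with hypothetical table and column names) of two index flavours MySQL lacks, a partial index and an expression index:

CREATE INDEX idx_orders_open ON orders (created_at) WHERE status = 'open';  -- partial index
CREATE INDEX idx_cust_email_lower ON customers (lower(email));              -- expression index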

•  Data types (Winner: PostgreSQL)

MySQL is a relational database that provides the various data types of a typical RDBMS to cater to regular business needs, but PostgreSQL is an object-relational database that supports storing data as objects with properties, much as in programming languages like Java. PostgreSQL supports all MySQL data types plus additional types like geometric, enumerated, network address, arrays, ranges, XML, hstore, and composite to facilitate optimal storage for new data entities, making it a clear winner.
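A minimal sketch (hypothetical table and columns) of a few of these PostgreSQL-only data types in action:

CREATE TABLE event (
    id        serial PRIMARY KEY,
    tags      text[],      -- array type
    duration  tsrange,     -- range (timestamp) type
    client_ip inet         -- network address type
);

INSERT INTO event (tags, duration, client_ip)
VALUES (ARRAY['db','perf'], '[2024-10-09 10:00, 2024-10-09 11:00)', '192.168.1.10');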

•  Views (Winner: PostgreSQL)

A view is a subset of one or more tables, i.e. an alternative way of looking at the data in one or more tables, used to simplify joins or to enforce security by restricting access to the base tables.

MySQL supports regular views, but PostgreSQL also offers advanced view options like materialized views, which improve database performance for queries that repeatedly access the same set of data.
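For example, a materialized view sketch (the sales table and its columns are hypothetical):

CREATE MATERIALIZED VIEW daily_sales AS
    SELECT order_date, sum(amount) AS total
    FROM sales
    GROUP BY order_date;

REFRESH MATERIALIZED VIEW daily_sales;  -- re-runs the stored query to pick up new data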

•  Stored procedures (Winner: PostgreSQL)

Stored procedures are named collections of SQL statements and control-of-flow language. We can create stored procedures for commonly used operations and to improve performance.

MySQL and PostgreSQL both support stored procedures, but the versatility of PostgreSQL allows you to call stored procedures written in languages other than SQL.
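As a hedged sketch, here is a PL/pgSQL procedure (PostgreSQL 11 or later; all names are hypothetical):

CREATE PROCEDURE archive_old_orders(cutoff date)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO orders_archive SELECT * FROM orders WHERE order_date < cutoff;
    DELETE FROM orders WHERE order_date < cutoff;
END;
$$;

CALL archive_old_orders('2024-01-01');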

•  Triggers (Winner: PostgreSQL)

A trigger is a stored procedure that runs automatically when a user attempts a specified data modification statement on a specified table to enforce integrity constraints.

MySQL supports both AFTER and BEFORE triggers for DML statements, i.e. the associated procedure runs automatically before or after the user modifies the data. PostgreSQL supports these too, and in addition supports INSTEAD OF triggers (on views), so we can run complex SQL statements using functions.
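A minimal sketch of an INSTEAD OF trigger on a view (all names are hypothetical; EXECUTE FUNCTION requires PostgreSQL 11 or later):

CREATE VIEW active_users AS
    SELECT id, name FROM users WHERE active;

CREATE FUNCTION active_users_insert() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Redirect the INSERT on the view to the base table:
    INSERT INTO users (id, name, active) VALUES (NEW.id, NEW.name, true);
    RETURN NEW;
END;
$$;

CREATE TRIGGER active_users_ins
    INSTEAD OF INSERT ON active_users
    FOR EACH ROW EXECUTE FUNCTION active_users_insert();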

•  Ease of Use (Winner: MySQL)

MySQL is relatively easy to install and configure when compared with PostgreSQL.

 

PostgreSQL vs MySQL – what to choose?

Based on the above classification of functionality, we need to choose the right RDBMS. The following factors also play key roles in choosing the right RDBMS.

1.    Workload type

More SELECTs/reads                –             MySQL

More DMLs/inserts                 –             PostgreSQL

2.    Application scope

PostgreSQL is better suited for enterprise-level applications with frequent write operations and complex queries.

However, MySQL is the best fit for internal applications with fewer users and workloads with more reads and infrequent data updates.

3.    Database development experience

MySQL is much simpler and easier to start with or learn, hence it's more suitable for beginners. MySQL also needs less time, as it's simple to set up a MySQL database environment.

PostgreSQL, on the other hand, is a bit more complex than MySQL for beginners, as it requires more experience to set up and configure the database environment.

 

Final word:

✓  If we need to build a relatively small database system with more reads than writes, and to maintain/manage/administer the database with less experienced manpower, then MySQL is the best bet.

✓  If we need to build a complex database system with frequent DMLs, i.e. typically an OLTP system with a workload of generic reads (not OLAP-style reports) but frequent writes, and with moderately experienced DBAs, then PostgreSQL is the enterprise-level RDBMS. Oh no, ORDBMS!

Monday, 27 June 2022

PostgreSQL VACUUM!

VACUUM

is the garbage collector of the PostgreSQL database: it discards obsolete (updated) / deleted records of tables and materialized views, and optionally analyzes the statistics of a database/object.

If a table doesn’t get vacuumed, it will get bloated, which wastes disk space and also hampers the performance.

 

PostgreSQL's VACUUM command is a maintenance command that needs to be run periodically to perform the following vital tasks:

1.      To recover or reuse disk space occupied by updated or deleted rows.

2.      To analyze/update data statistics used by the PostgreSQL query planner.

3.      To protect against loss of very old data due to transaction ID wraparound.

 

In normal PostgreSQL operation, tuples/records/rows that are deleted or obsoleted by an update are not physically removed from their table until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables as VACUUM reclaims storage occupied by dead tuples.

Normally we don’t need to take care of all that, because the autovacuum daemon does that with some limitations based on the configuration.

The frequency and scope of the VACUUM operations performed for each of the above reasons will vary depending on the needs of each database environment. Tables with heavy updates and deletes need frequent vacuuming to discard obsolete and deleted records.

 

  • VACUUM, by default (i.e. without a table_and_columns list specified), processes every table and materialized view in the current database that the current user has permission to vacuum. With a list, VACUUM processes only the specified table(s).
  • VACUUM ANALYZE performs a VACUUM and then an ANALYZE for each selected table. This is a handy combination for routine maintenance scripts. The ANALYZE updates the statistics used by the planner to determine the most efficient way to execute a query, so the optimizer chooses the right plan, including joins and indexes.

  • VACUUM (without FULL) simply reclaims space and makes it available for re-use. This command can operate in parallel with normal reading and writing of the table, as an exclusive lock is not obtained on the table. However, the extra space is not returned to the operating system immediately; it is just kept available for re-use within the same table. VACUUM can also leverage multiple CPUs to process indexes; this feature is known as parallel vacuum. We can use the PARALLEL option, and specifying zero parallel workers disables the parallelism.

  • VACUUM FULL rewrites the entire contents of the table into a new disk file with no extra space, allowing unused space to be returned to the operating system. This form is much slower and requires an ACCESS EXCLUSIVE lock on each table while it is being processed. 

BTW, a “full” vacuum, which can reclaim more space, takes much longer and exclusively locks the table, impacting the performance and availability of that specific table. This method also requires extra disk space, since it writes a new copy of the table and doesn't release the old copy until the operation is complete. Usually it should only be used when a significant amount of space needs to be reclaimed from within the table, for example after a huge purge activity.

 

  • VACUUM FREEZE selects aggressive “freezing” of tuples.

Specifying FREEZE with VACUUM is useful against transaction ID wraparound and is equivalent to performing VACUUM with the vacuum_freeze_min_age and vacuum_freeze_table_age parameters set to zero.

 Aggressive freezing is always performed when the table is rewritten, so this option is redundant when FULL is specified.
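To illustrate the forms discussed above (the table name is hypothetical; the PARALLEL option requires PostgreSQL 13 or later):

VACUUM (VERBOSE, ANALYZE) sales;   -- reclaim space and refresh planner statistics
VACUUM (PARALLEL 4) sales;         -- process indexes with up to 4 parallel workers
VACUUM (FULL) sales;               -- rewrite the table; needs an ACCESS EXCLUSIVE lock
VACUUM (FREEZE) sales;             -- aggressively freeze tuples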

 

Equivalent Commands in other RDBMS:

  • REORG and UPDATE STATISTICS in Sybase
  • REORG and GATHER STATS in Oracle

 

Sunday, 26 June 2022

PostgreSQL Index types and use cases!

Indexes are database objects that can be created for a table to speed access to specific data rows by validating the existence of the data and pointing to the specific pages. Indexes store the values of the key(s) that were named when the index was created, and logical pointers to the data pages.

Although indexes speed data retrieval, they also can slow down data modifications (Each index creates extra work every time you insert, delete, or update a row) since most DML changes to the data also require updating the indexes.

Optimal indexing demands:

·       The workload on the table i.e. READs vs DMLs

·       An understanding of the behavior of queries that access unindexed heap tables, and tables with indexes

·       An understanding of the mix of queries that run on your server

·       An understanding of the PostgreSQL Query optimizer

Using indexes speeds the optimizer's access to data and reduces the amount of information read and processed from base tables. Whenever possible, the optimizer attempts index-only retrieval to satisfy a query: the database server uses only the data in the indexes and does not need to access rows in the table. The optimizer automatically chooses the indexes it determines will lead to the best performance.

Index Types

PostgreSQL provides several types of index: B-tree, Hash, GiST, SP-GiST, GIN and BRIN.

1. B-Tree

2. Hash

3. GiST

4. SP-GiST

5. GIN

6. BRIN

Each index type uses a different algorithm that is best suited to different types of queries. By default, the CREATE INDEX command creates B-tree indexes, which fit the most common situations. The other index types are selected by the keyword USING followed by the index type name.

For example, to create an index:

CREATE INDEX index_name ON table_name USING GIN (column_name);

 

Hash Indexes

Hash indexes store a 32-bit hash code derived from the value of the indexed column. Hence, such indexes can only handle simple equality comparisons. The query planner will consider using a hash index whenever an indexed column is involved in a comparison using the equal (=) operator.

To create a hash index, you use the CREATE INDEX statement with the HASH index type in the USING clause as follows:

Syntax:

CREATE INDEX index_name ON table_name USING HASH (indexed_column);

GIN indexes

GIN stands for Generalized Inverted Indexes. It is commonly referred to as GIN.

GIN indexes are most useful when you have multiple values stored in a single column, for example, hstore, array, jsonb, and range types.
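For example, a hedged sketch of a GIN index on a jsonb column (the names are hypothetical):

CREATE INDEX idx_docs_payload ON docs USING GIN (payload);

-- Speeds up containment queries such as:
SELECT * FROM docs WHERE payload @> '{"status": "open"}';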

BRIN Indexes

BRIN stands for Block Range Indexes. BRIN is much smaller and less costly to maintain in comparison with a B-tree index.

BRIN allows the use of an index on a very large table that would previously be impractical using B-tree without horizontal partitioning. BRIN is often used on a column that has a linear sort order, for example, the created date column of the sales order table.
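A minimal sketch (hypothetical table and column) of such a BRIN index:

CREATE INDEX idx_sales_order_created ON sales_order USING BRIN (created_date);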

GiST Indexes

GiST stands for Generalized Search Tree. GiST indexes allow the building of general balanced tree structures. GiST indexes are useful for indexing geometric data types and full-text search.

SP-GiST Indexes

SP-GiST stands for space-partitioned GiST. SP-GiST supports partitioned search trees that facilitate the development of a wide range of different non-balanced data structures. SP-GiST indexes are most useful for data that has a natural clustering element to it and is also not an equally balanced tree, for example, GIS, multimedia, phone routing, and IP routing.

 

Implicit Indexes

Implicit indexes are indexes that are automatically created by the database server when an object is created. Indexes are automatically created for primary key constraints and unique constraints.
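For example (hypothetical table), both constraints below implicitly create unique B-tree indexes:

CREATE TABLE account (
    id    bigint PRIMARY KEY,  -- implicitly creates index "account_pkey"
    email text UNIQUE          -- implicitly creates index "account_email_key"
);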

 

Thursday, 23 June 2022

Interesting Behaviour of PostgreSQL character data types!

Every RDBMS supports multiple data types namely numeric, monetary, character, date(time), binary, Boolean and blobs etc. PostgreSQL supports special datatypes like Geometric, Network-Address, UUID, XML, JSON, Text Search types etc.

Of these, character data types typically make up the majority of columns (often around 70%). Character data types are strings of characters; upper- and lower-case alphabetic characters are accepted literally. There are three kinds of character data types, as cited:

1.       fixed-length, blank-padded - char

2.       variable-length with limit - varchar

3.       variable unlimited length - text

The storage requirement for a character datatype for a short string (up to 126 bytes) is 1 byte plus the actual string, which includes the space padding in the case of character. Longer strings have 4 bytes of overhead instead of 1 byte. Long strings are compressed by the system automatically, so the physical requirement on disk might be less. Very long values are also stored in background tables so that they do not interfere with rapid access to shorter column values.

BTW, In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. If you desire to store long strings with no specific upper limit, use text or character varying without a length specifier, rather than making up an arbitrary length limit.)

Most importantly, the interesting behaviour in PostgreSQL is how it handles these character data types.

There is no performance difference among these three data types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used instead.
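A quick, hypothetical demonstration of the blank-padding behaviour:

CREATE TABLE char_demo (c char(5), v varchar(5), t text);
INSERT INTO char_demo VALUES ('ab', 'ab', 'ab');

SELECT octet_length(c) AS c_bytes,  -- 5: char is blank-padded to the declared length
       octet_length(v) AS v_bytes,  -- 2: varchar stores only what was entered
       octet_length(t) AS t_bytes   -- 2: text stores only what was entered
FROM char_demo;

-- INSERT INTO char_demo VALUES ('abcdef', 'abcdef', 'abcdef');
-- fails for c and v: the value is too long for the declared length of 5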

 

Source: postgresql.org