DisCopy


Saturday 19 September 2015

SAP HANA for Sybase DBAs

Transform yourself into a better career while keeping your core skills alive..



Sybase ASE, which stands for Adaptive Server Enterprise, was Sybase's enterprise-class database for transactional applications. It was revolutionary with its architecture of one server hosting multiple databases, its ease of access, and its robust database management, delivering optimal performance at minimal TCO. Sybase ASE became widely popular in the finance industry, but was only able to run SAP applications starting with ASE version 15.7, released in September 2011. In April 2014, SAP released ASE 16 (dropping the Sybase name), and SAP is planning for ASE to be the preferred database for running transactional applications in SAP's Real-Time Data Platform, a collection of data management tools that SAP is integrating to provide very fast performance.

SAP (Systems, Applications & Products in Data Processing) brought a revolution in data management with HANA (the SAP HANA® platform can increase analysis speed by more than 10,000x). HANA can handle both COLUMN-STORE data (Sybase ASIQ was the pioneer here; SAP IQ software holds the Guinness World Record for loading, storing, and analyzing Big Data at 34.4 terabytes per hour) and ROW-STORE data (Sybase ASE is one of the top RDBMSs). It also combines an in-memory database (SAP HANA is an in-memory database, a combination of hardware and software built to process massive real-time data using in-memory computing) with persistent, disk-based storage, handling data dynamically and effectively. While SAP HANA can support transaction and analytic processing on the same data, not every business application benefits enough from up-to-the-moment analysis and ad hoc queries to justify the expense of running it on HANA. SAP plans for ASE to be a less expensive alternative to HANA for such cases. At the same time, SAP plans to make data in ASE easily accessible to HANA and vice versa, easing integration for companies with a mixed landscape.

SAP is including a license for Sybase ASE with SAP ERP on HANA, indicating that SAP has plans for these two databases to work together. SAP is also planning to make ASE a transitional database between a standard disk-based database and SAP HANA. SAP is working toward making migration from ASE to HANA very smooth using the Sybase Replication Server. For those planning to use HANA as a database in the future, SAP recommends moving to ASE in the short term to make for an easy transition to HANA in the long term.


SAP HANA Architecture


The SAP HANA database is developed in C++ and runs on SUSE Linux Enterprise Server. It consists of multiple servers, the most important of which is the Index Server: the SAP HANA database comprises the Index Server, Name Server, Statistics Server, Preprocessor Server, and XS Engine. I'm providing very high-level information here to make you comfortable and ready to reach a new level with SAP HANA. Start exploring and learning the database of the future..




Index Server: (like the DATA SERVER in ASE)

  • Index server is the main SAP HANA database component
  • It contains the actual data stores and the engines for processing the data.
  • The index server processes incoming SQL or MDX statements in the context of  authenticated sessions and transactions.



Persistence Layer: 
The database persistence layer is responsible for durability and atomicity of transactions. It ensures that the database can be restored to the most recent committed state after a restart and that transactions are either completely executed or completely undone. 

Preprocessor Server: 
The index server uses the preprocessor server for analyzing text data and extracting the information on which the text search capabilities are based. 

Name Server: 
The name server owns the information about the topology of SAP HANA system. In a distributed system, the name server knows where the components are running and which data is located on which server. 

Statistics Server: 
The statistics server collects information about status, performance, and resource consumption from the other servers in the system. The statistics server also provides a history of measurement data for further analysis. 

Session and Transaction Manager: 
The Transaction manager coordinates database transactions, and keeps track of running and closed transactions. When a transaction is committed or rolled back, the transaction manager informs the involved storage engines about this event so they can execute necessary actions. 

XS Engine: 
XS Engine is an optional component. Using the XS Engine, clients can connect to the SAP HANA database to fetch data via HTTP. 

Source : SAP website and various SAP GURUs (Thanks much to ALL)

Tuesday 17 February 2015

SAP ASE Edge Edition

SAP ASE Edge Edition for Small and Medium Businesses


Good NEWS for Sybase-passionate folks. This might be the sweetest announcement from SAP, cooling the sweat of anyone thinking the Sybase era is over. After dropping the Sybase name from its product names last year, SAP is quietly reshaping its database platform offerings for the better.
Sybase Inc. had two ASE products: Sybase ASE Enterprise Edition (EE) and Sybase ASE Small Business Edition (SBE). The Sybase ASE EE product was, and still is, the flagship and most commonly known edition of Sybase, with a high price tag. To offer a Sybase ASE database server to the SMB market, Sybase created ASE SBE, limited to 2 CPU sockets and 8 engines on a physical server, at a very low price.
The server virtualization revolution has dramatically changed the server landscape, and database servers made their transition to the new world with PaaS, IaaS, and the cloud. The majority of newly deployed database servers are virtualized, either on premise or in the cloud. The Sybase ASE EE 15.7 database server enabled Sybase clients to move to virtual servers with the introduction of the threaded kernel model. SAP ASE EE 16 in particular continued the optimization for virtual servers and offers one of the most advanced database servers on the market, along with the well-known SAP brand and support.
SAP has improved upon SAP ASE Small Business Edition (ASE SBE) with the release of SAP ASE Edge Edition, providing small and medium-sized businesses with the same enterprise-level features found in SAP ASE Enterprise Edition. SAP ASE Edge Edition can run on physical or virtual machines with 4 cores or fewer. Machines are not limited to 2 chips as they were for SAP ASE SBE. This gives small businesses the ability to leverage the benefits of virtualization on today's more powerful, multi-core platforms.
SAP ASE Edge Edition replaces SAP ASE SBE and includes many options that were not available to SAP ASE SBE customers.  Every license includes:
·         Security and Directory Services, which provides SSL, LDAP authentication, row-level security, and more for the highest levels of security and protection from unauthorized access
·         Encryption of sensitive information, such as credit card numbers or SSNs, that can only be decrypted by authorized users
·         Intelligent Partitioning of data based on its content, which improves performance, shortens maintenance times, and simplifies operations for aging data
·         Compression, which can lower storage costs and improve I/O performance
·         Warm standby replication of ASE Edge data to protect businesses from data loss and preserve accessibility in the event of a system failure
SAP ASE Edge Edition provides small and medium sized businesses with enterprise-grade features for enhanced performance, advanced security, and data availability in the event of hardware failure.  It can be purchased through SAP partners at a low price point that will fit the budget of small businesses.
The following matrix compares SAP ASE Edge Edition with Sybase ASE SBE. Pay close attention to the options that are now included, and compare this to other vendors' standard offerings.
Parameter                          ASE Edge Edition    ASE SBE
Max Engines                        No Limit            8
Max Cores OS can use               4                   No Limit
Max CPU Chips                      No Limit            2
Max Concurrent User Connections    No Limit            256

Hope this excites your zeal and gets you to consider database management systems in virtual environments. Refer to scn.sap for more info.


Tuesday 3 February 2015

A Dozen New Features that Boost ASE in Version 15.7 ESD #2

What’s NEW in 15.7++
In Version 15.7 ESD #2
1.    Deferred Table Creation
create table...with deferred_allocation lets you defer page allocation for a table. Deferred tables help applications that create numerous tables but use only a small number of them. Tables are called "deferred" until Adaptive Server allocates their pages.
System tables include entries for deferred tables. These entries allow you to create objects associated with deferred tables, such as views, procedures, and triggers.
(It’s like the create database for load option but here the table is online and accessible)
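Putting the syntax above together, a minimal sketch (the table and column names are illustrative, not from the original post):

```sql
-- Hypothetical table: pages are not allocated until first use
create table orders_archive (
    id   int          not null,
    note varchar(255) null
) with deferred_allocation
```

Once rows are inserted, Adaptive Server allocates the pages and the table is no longer deferred.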

2.    Online Utilities
Adaptive Server versions 15.7 ESD #1 and later include an online parameter for reorg rebuild that lets you reorganize data and perform maintenance on tables without blocking users' access to the data.
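As a hedged sketch of the feature above (the table name is illustrative; verify the exact reorg rebuild syntax for your version in the ASE reference):

```sql
-- Reorganize the table while users keep reading and writing it
reorg rebuild orders_archive with online
```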

3.    Merging and Splitting Partitions
Over time, a partition’s data distribution may become skewed, or the manner in which the data was originally partitioned may not suit current business requirements. Use alter table to merge, split, or move partitions to redistribute the data and revive the performance benefits of using partitions.
For example:
·         Splitting partitions – a company divides data into four partitions according to regions —North, South, East and West— so customer representatives have fast and efficient access to their regions’ customers, independent of other regions. If sales increase in the Southern region and the customer base has expanded significantly, frequent queries involving partition scans and maintenance operations may cause the South partition to be slow and inefficient, losing out on the benefits of partitioning the customer data. In this situation, splitting the data in the South partition into two partitions, South-East and South-West, may revive performance without affecting the data in other partitions.
·         Merging partitions – a company’s sales data is partitioned into the four yearly quarters—Q1, Q2, Q3, and Q4. At the end of the year, the company merges the data for the year and archives it. Merging partitions that represent a closed financial year is efficient because sales’ data for a past year is accessed infrequently, and the older data is most likely to be read but not updated.
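The two scenarios above might look roughly like this in Transact-SQL. The exact clauses depend on the partition type (range, list, hash), so treat this purely as an illustration and consult the alter table reference for your version:

```sql
-- Illustrative only: split the South list partition into two
alter table customers split partition South
    into (partition South_East values ('SE'),
          partition South_West values ('SW'))

-- Illustrative only: merge the four quarterly partitions into one
alter table sales merge partition Q1, Q2, Q3, Q4 into Y2014
```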

4.    Maximum Size of Query in the Statement Cache
Adaptive Server versions 15.7 ESD #2 and later allow you to store very large SQL statements. You can save individual statements of up to 2MB (for a 64-bit machine) in the statement cache.
Versions of Adaptive Server earlier than 15.7 ESD #2 had a 16K limit for individual statements stored in the statement cache, even if statement cache size was configured with a larger size.
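The statement cache itself is sized with the statement cache size configuration parameter. A hedged example (the value shown is arbitrary; the unit is server memory pages, so size it to your workload):

```sql
-- Reserve memory for cached statements (unit: memory pages)
sp_configure 'statement cache size', 5000
```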
  
5.    Fast-Logged Bulk Copy
Adaptive Server versions 15.7 ESD #2 and later allow you to fully log bcp in fast mode, which provides faster data throughput and full data recovery. Earlier versions logged only page allocations.
Use the set logbulkcopy {on | off } command to configure fast-logged bcp for the session. You may include the set logbulkcopy {on | off } with the --initstring 'Transact-SQL_command' parameter, which sends Transact-SQL commands to Adaptive Server before transferring the data. For example, to enable logging when you transfer the titles.txt data into the pubs2..titles table, enter:
bcp pubs2..titles in titles.txt --initstring 'set logbulkcopy on'
You must enable select into/bulkcopy/pllsort on the database before issuing fast-logged bcp; otherwise, bcp uses slow mode.
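The prerequisite above can be set with sp_dboption, shown here for the pubs2 database used in the earlier example (a sketch; run it from master and checkpoint the database afterwards):

```sql
use master
go
-- Enable the database option that fast-logged bcp requires
sp_dboption pubs2, 'select into/bulkcopy/pllsort', true
go
use pubs2
go
checkpoint
go
```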

6.    Concurrent dump database and dump transaction Commands
Adaptive Server versions 15.7 ESD #2 and later allow a dump transaction command to run concurrently with a dump database command, reducing the risk of losing database updates for a longer period than that established by the dump policy.

7.    Hash-Based Update Statistics
Adaptive Server versions 15.7 ESD #2 and later allow you to gather hash-based statistics on minor index attributes and unindexed columns instead of using sort-based statistics, significantly reducing elapsed time and resource usage. Using hash-based statistics improves performance by reducing the number of required scans, and avoiding disk-based sorting.
Hash-based statistics allow greater flexibility than sort-based statistics:
·         Running hash-based statistics should require less time, increasing the amount you can accomplish during a maintenance window.
·         Because hash-based statistics require less procedure cache, you may be able to run update statistics on a data-only-locked table outside a maintenance window, since the Adaptive Server tempdb buffer cache (which typically uses the default data cache) is typically much larger than the procedure cache, reducing the impact of update statistics.
·         Hash-based statistics do not generally require large tempdb disk allocations. If you previously increased the size of tempdb to accommodate large sorts from update statistics, you may be able to redeploy this space.
·         update [index | all] statistics with hashing may run faster than update [index | all] statistics with sampling. However, an exception may be update statistics table_name(col_name).
·         update statistics table_name (col_name1), (col_name2) . . . with hashing allows you to collect statistics on several columns with a single scan instead of several scans.
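For example, using the pubs2 sample table titles, the single-scan form from the last bullet might look like this (a sketch based on the syntax quoted above):

```sql
-- Collect statistics on two unindexed columns in a single scan
update statistics titles (price), (total_sales) with hashing
```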

8.    Enhancements to dump and load
Adaptive Server 15.7 ESD #2 includes enhancements to the dump and load commands, which make it easier for you to back up and restore your databases.
The enhancements include:
·         The dump configuration command allows you to back up the Adaptive Server configuration file, the dump history file, and the cluster configuration file.
·         Dump configurations define options to create a database dump. Backup Server then uses the configuration to perform a database dump. You can use:
o    The dump configuration to create, modify, or list dump configurations, then use dump database or dump transaction with the configuration.
o    The enforce dump configuration configuration parameter to enable dump operations to use a dump configuration.
o    The configuration group "dump configuration," which represents user-created dump configurations.
·         Dump history:
o    Preserve the history of dump database and dump transaction commands in a dump history file that Adaptive Server can later use to restore databases, up to a specified point in time.
o    Read the dump history file and regenerate the load sequence of SQL statements necessary to restore the database.
o    Use sp_dump_history to purge dump history records.
o    Use the dump history update configuration parameter to disable default updates to the dump history file at the end of every dump operation.
o    Use the dump history filename configuration parameter to specify the name of the dump history file.
·         Dump header – New options to the dump with listonly command:
o    create_sql – lists the sequence of disk init, sp_cacheconfig, create database, and alter database commands required to create a target database with the same layout as the source database.
o    load_sql – uses the dump history file to generate a list of load database and load transaction commands required to repopulate the database to a specified point in time.
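A hedged sketch of the dump-configuration workflow described above; the procedure parameters and directory here are assumptions for illustration, so verify them against the Backup Server documentation:

```sql
-- Illustrative: define a named dump configuration...
sp_config_dump @config_name = 'nightly', @stripe_dir = '/backups'
go
-- ...then dump the database using that configuration
dump database pubs2 using config = 'nightly'
go
```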

9.    alter table drop column without datacopy
Adaptive Server versions 15.7 ESD #2 and later add the no datacopy parameter to the alter table ... drop column command, which allows you to drop columns from a table without performing a data copy, reducing the amount of time required for alter table ... drop column to run.
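A minimal sketch (the table and column names are illustrative):

```sql
-- Drop the column without rewriting the table's data rows
alter table orders_archive drop note with no datacopy
```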


10. User-Defined Optimization Goal
Adaptive Server versions 15.7 ESD #2 and later allow you to create user-defined optimization goals.
User-defined optimization goals allow you to:
·         Create a new optimizer goal
·         Define set of active criteria
·         Activate the goal at the server, session, procedure, and query level
·         Dynamically change the goal content, without disconnecting and reconnecting the client session
Once you create the user-defined optimization goals, you can invoke them at the server level or for a user session.
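Activating a goal at the session or server level looks like the following. These examples use the built-in goals allrows_oltp and allrows_mix; the exact procedure for defining a custom goal is version-specific, so take it from the ASE reference rather than from this sketch:

```sql
-- Session level: use the OLTP-oriented goal for this connection
set plan optgoal allrows_oltp
go
-- Server level: change the default goal for all sessions
sp_configure 'optimization goal', 0, 'allrows_mix'
go
```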

11. Shared Query Plans
Adaptive Server versions 15.7 ESD #2 and later allow you to share query plans, which are cloned from primary query plans, avoiding the need for Adaptive Server to create or recompile query plans that are identical to existing plans.
You should see a performance improvement as Adaptive Server shares query plans instead of reusing or recompiling them. You may see a slight change to procedure cache memory usage as primary query plans are pinned in the cache while Adaptive Server uses their shared query plans.

12. In-Row Large Object Compression
Adaptive Server versions 15.7 ESD #2 and later support in-row large object (LOB) compression.


In Version 15.7

Compressing Data in Adaptive Server


Adaptive Server version 15.7 introduces data compression, which lets you use less storage space for the same amount of data, reduce cache memory consumption, and improve performance because of lower I/O demands.
You can compress large object (LOB) and regular data.

After you create a compressed table or partition, Adaptive Server compresses any subsequently inserted or updated data (existing data is not compressed retroactively). If Adaptive Server cannot efficiently compress the inserted data, the original row is retained. If newly inserted or updated LOB data occupies space that is smaller than or equal to a single data page, Adaptive Server does not compress this data.

You need not uncompress data to run queries against it. You can insert, update, and delete compressed data; running select or readtext statements on the compressed column returns decompressed rows. Because there is less data for Adaptive Server to search, there are fewer I/Os, improving the efficiency of data storage.
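A minimal sketch of creating a compressed table (the names are illustrative; ASE 15.7 also supports page-level compression):

```sql
-- Row-compressed table: newly inserted rows are stored compressed
create table invoices (
    id     int           not null,
    detail varchar(1000) null
) with compression = row
```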


                                                                                                          * source: sap.com