This post describes the new features introduced with Sybase Adaptive Server Enterprise version 15.0.
1. Partition support
Partitioning is particularly useful in managing large tables and indexes by dividing them into smaller, more manageable pieces. Partitions, like a large-scale index, provide faster and easier access to data.
Partitions are database objects and can be managed independently. You can, for example, load data into an individual partition; creating an index, however, cannot be done at the partition level. Yet partitions are transparent to the end user, who can select, insert, and delete data using the same commands whether the table is partitioned or not.
Adaptive Server 15.0 supports horizontal partitioning, in which a selection of table rows can be distributed among partitions on different disk devices. Individual table or index rows are assigned to a partition according to a semantic or round-robin partitioning strategy. Semantic partitioning strategies use the data values in specified key columns in each row to determine the partition assignment of that row. The round-robin partitioning strategy assigns rows randomly, without reference to data values.
Partitioning strategies are:
· Hash partitioning (semantic) – a system-supplied hash function determines the partition assignment for each row.
· List partitioning (semantic) – values in key columns are compared with sets of user-supplied values specific to each partition. Exact matches determine the partition assignment.
· Range partitioning (semantic) – values in key columns are compared with a user-supplied set of upper and lower bounds associated with each partition. Key column values falling within the stated bounds determine the partition assignment.
· Round-robin partitioning – rows are assigned randomly to partitions in a round-robin manner so that each partition contains a more or less equal number of rows. This is the default strategy.
You can:
· Create partitions when you create a table or index using the create table and create index commands.
· Alter a table’s partitioning strategy using the alter table command.
· Add a partition to an existing table with add partition.
· Use partitioning to expedite the loading of large amounts of table data, even when the table eventually will be used as an unpartitioned table. (The sketch below illustrates these commands.)
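As a rough sketch of these commands (the sales table, column names, and partition bounds are hypothetical; see the create table and alter table reference pages for the full clauses):
-- range-partition a table on order_date
create table sales
(store_id int, order_date datetime, amount money)
partition by range (order_date)
(q1 values <= ('3/31/2006'),
q2 values <= ('6/30/2006'),
q3 values <= ('9/30/2006'),
q4 values <= ('12/31/2006'))
go
-- add a partition for the next quarter
alter table sales add partition (q5 values <= ('6/30/2007'))
go
-- repartition the table with a 4-way hash strategy instead
alter table sales partition by hash (store_id) (p1, p2, p3, p4)
go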
2. Row-locked system catalogs
Adaptive Server version 15.0 converts most system catalogs to a datarows locking scheme. These system catalogs continue to use the allpages locking scheme:
· Materialized tables such as syslocks and sysprocesses. These tables are generated at run time, so their locking schemes are irrelevant for concurrency.
· sysmessages and sysusermessages, which are read-only tables
· Auditing tables in sybsecurity, which are write-once and read many times.
Adaptive Server’s internal upgrade process converts the system table locking schemes during an installation, upgrade, or load upgrade.
3. Query Processor
The Adaptive Server version 15.0 query processor is self-tuning, requiring fewer interventions than earlier versions. This version of Adaptive Server has less reliance on worktables for materialization between steps since the engine supports data flow between steps. However, more worktables could be used in cases where Adaptive Server determines that hash and merge operations are effective.
New features include support for:
· Both vertical and horizontal parallelism for query processing
· Improved index selection, especially for joins with OR clauses and joins and search arguments (SARGs) with mismatched but compatible datatypes
· More efficient algorithms
· Improved costing, using join histograms for joins with data skews in joining columns
· Improved query plan selection that enhances performance through:
o New index union and index intersection strategies for queries with and/or predicates on different indexes
o On-the-fly grouping and ordering using in-memory sorting and hashing for queries with group by and order by clauses
o Cost-based pruning and timeout mechanisms that use permutation search strategies for large, multi-way joins, and for star and snowflake schema joins
· Improved problem diagnosis and resolution using:
o Searchable XML format trace outputs
o Diagnostic output from new set commands
· Joins involving a large number of tables
· Data and index partitioning, which are especially beneficial for very large data sets
Partitioning is the basic building block for parallelism.
Adaptive Server release 15.0 provides round-robin partitioning, which is equivalent to the 12.5 style of partitioning. During the upgrade to Adaptive Server release 15.0, existing unpartitioned tables are automatically converted to 1-way round-robin partitioned tables.
4. Large identifiers
There are new limits for the length of object names or identifiers: 255 bytes for regular identifiers, and 253 bytes for delimited identifiers. The new limit applies to most user-defined identifiers including table name, column name, index name and so on. Due to the expanded limits, some system tables (catalogs) and built-in functions have been expanded.
For variables, the “@” counts as 1 byte, so the name of the variable itself can be up to 254 bytes.
5. Computed columns
This section describes an enhancement that provides easier data manipulation and faster data access by allowing you to create computed columns, computed column indexes, and function-based indexes.
· Computed columns – defined by an expression, whether from regular columns in the same row, functions, arithmetic operators, or path names.
· Indexes on computed columns, or computed column indexes – indexes that contain one or more computed columns as index keys.
· Function-based indexes – indexes that contain one or more expressions as index keys.
· Deterministic property – a property assuring that an expression always returns the same results from a specified set of inputs.
Computed columns and function-based indexes similarly allow you to use an expression or a function as the basis for a more complex function.
Differences between computed columns and function-based indexes
Computed columns and function-based indexes differ in some respects:
· A computed column provides both shorthand for an expression and indexability, while a function-based index provides no shorthand; it allows you to index the expression directly.
· A computed column can be either deterministic or nondeterministic, but a function-based index must be deterministic. “Deterministic” means that if the input values in an expression are the same, the return values must also be the same.
Differences between materialized and not materialized computed columns
· Computed columns can be materialized or not materialized. Columns that are materialized are preevaluated and stored in the table when base columns are inserted or updated. The values associated with them are stored in both the data row and the index row. Any subsequent access to a materialized column does not require reevaluation; its preevaluated result is accessed. Once a column is materialized, each access to it returns the same value.
· Columns that are not materialized are also called virtual columns; virtual columns become materialized when they are accessed. If a column is virtual, or not materialized, its result value must be evaluated each time the column is accessed. This means that if the virtual computed column is based on, or calls, a nondeterministic expression, it may return different values each time you access it. You may also encounter run-time exceptions, such as domain errors, when you access virtual computed columns.
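A minimal sketch of the syntax, using a hypothetical parts table (the as keyword and the materialized clause follow the create table reference; the names are illustrative):
create table parts
(part_no int,
list_price money,
discount money,
-- materialized: preevaluated and stored when base columns are inserted or updated
net_price as list_price - discount materialized,
-- virtual: reevaluated each time the column is accessed
markup as list_price * 1.10 not materialized)
go
-- index on a computed column
create index net_price_idx on parts (net_price)
go
-- function-based index: the (deterministic) expression itself is the index key
create index price_expr_idx on parts (list_price - discount)
go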
6. Scrollable cursors
Adaptive Server Enterprise allows both scrollable and nonscrollable cursors, which can be either semi-sensitive or insensitive.
“Scrollable” means that you can scroll through the cursor result set by fetching any, or many, rows, rather than one row at a time; you can also scan the result set repeatedly. You must use Transact-SQL or JDBC to declare a scrollable cursor, and you must have the query engine provided in Adaptive Server 15.0 or later. A scrollable cursor allows you to set the position of the cursor anywhere in the cursor result set for as long as the cursor is open, by specifying the option first, last, absolute, next, prior, or relative in a fetch statement.
To fetch the last row in a result set, enter:
fetch last [from] <cursor_name>
Or, to select a specific row in the result set, in this case the 500th row, enter:
fetch absolute 500 [from] <cursor_name>
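These fetch forms assume a cursor that was declared as scrollable; here is a minimal sketch, assuming the pubs2 sample database:
-- declare an insensitive, scrollable cursor over authors
declare au_scroll insensitive scroll cursor for
select au_lname, au_fname from pubs2..authors
go
open au_scroll
fetch last au_scroll -- position on the last row
fetch absolute 5 au_scroll -- jump straight to the fifth row
fetch prior au_scroll -- step back one row
close au_scroll
deallocate cursor au_scroll
go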
“Insensitive” or “semi-sensitive” refers to the extent to which data changes from outside the cursor are visible to the cursor. A cursor can be semi-sensitive but not scrollable.
All scrollable cursors are read-only. All update cursors are nonscrollable.
7. Unitext support
Adaptive Server version 15.0 includes the variable-length unitext datatype, which can hold up to 1,073,741,823 Unicode characters (2,147,483,646 bytes). You can use unitext anywhere you use the text datatype, with the same semantics. unitext columns are stored in UTF-16 encoding, regardless of the Adaptive Server default character set.
The benefits of unitext include:
· Large Unicode character data. Together with unichar and univarchar datatypes, Adaptive Server provides complete Unicode datatype support, which is best for incremental multilingual applications.
· unitext stores data in UTF-16, the native Unicode encoding for Windows and Java environments.
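Declaring a unitext column looks exactly like declaring a text column; a minimal sketch with hypothetical names:
create table documents
(doc_id int,
body unitext null) -- stored in UTF-16 regardless of the server default character set
go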
8. bigint support
Adaptive Server version 15.0 includes the exact numeric datatype bigint.
This is the range of numbers allowed by the bigint datatype:
Ranges for bigint datatype
Datatype | Range of signed datatype
bigint | Whole numbers between -2^63 and 2^63 - 1 (from -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807, inclusive)
9. Unsigned int support
These unsigned integer datatypes allow you to extend the range of the positive numbers for the existing integer types without increasing the required storage size. That is, the signed versions of these datatypes extend in both the negative and the positive direction (for example, from -32,768 to +32,767), while the unsigned versions extend only in the positive direction. The table below describes the range of the signed and unsigned versions of these datatypes.
Ranges for signed and unsigned datatypes
Datatype | Range of signed datatypes | Range of unsigned datatypes
bigint | Whole numbers between -2^63 and 2^63 - 1 (-9,223,372,036,854,775,808 and 9,223,372,036,854,775,807), inclusive | Whole numbers between 0 and 18,446,744,073,709,551,615
int | Whole numbers between -2^31 and 2^31 - 1 (-2,147,483,648 and 2,147,483,647), inclusive | Whole numbers between 0 and 4,294,967,295
smallint | Whole numbers between -2^15 and 2^15 - 1 (-32,768 and 32,767), inclusive | Whole numbers between 0 and 65,535
10. Integer identity
Adaptive Server release 15.0 allows you to use the following datatypes as identity values:
· bigint
· int
· numeric
· smallint
· tinyint
· unsigned bigint
· unsigned int
· unsigned smallint
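For instance, a minimal sketch of an identity column on one of the new types (the table and column names are hypothetical):
create table orders
(order_id unsigned bigint identity, -- identity values can now use any of the types above
customer varchar(30),
amount money)
go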
11. Adaptive Server Plug-in enhancements
Adaptive Server release 15.0 adds the following to the Adaptive Server Plug-in to enhance its efficiency and convenience:
· An enterprise view that includes Server Discovery (which enables you to find available servers on the system) and automatic server status.
· The ability to update servers, administrate remote servers, and manage server logs.
· SQL Preview and Job Scheduler integration.
· A graphical query plan viewer.
· The ability to integrate external tools.
12. User-defined web services
In addition to the Web methods provided by the Adaptive Server Web Services Engine, Web Services enables you to create Web services and execute SQL commands in Adaptive Server Enterprise using either a Web browser or a SOAP client. These user-defined Web services use existing security and auditing control inherent in Adaptive Server Enterprise.
You can create a user-defined Web service with the create service command, which enables you to specify the SQL to be executed, create a first-class object for which permissions can be controlled with the grant command, and control whether the service can be invoked with a Web browser or a SOAP client. The ASE Web Services Engine automatically generates WSDL for user-defined Web services.
13. Very large storage support
Adaptive Server 15.0 greatly extends the allowable number of disk devices and the allowable number of 2K blocks for each device.
In pre-15.0 releases of Adaptive Server, a virtual page is described internally in a 32-bit integer: the first byte holds the device number (vdevno) and the succeeding three bytes describe the page offset within the device in units of 2K bytes (the virtual page number). This architecture limits the number of devices to 256 and the size of each device to 32 gigabytes—for a maximum storage limit of 8 terabytes in the entire server.
With Adaptive Server 15.0, the device number and the page offset are stored in separate 32-bit integers. The new architecture allows you to create up to 2,147,483,647 disk devices, each of which can be as large as 2,147,483,648 2K blocks or 4 terabytes.
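To check the arithmetic behind these limits: a 3-byte page offset addresses 2^24 pages of 2K each, or 32 gigabytes per device, and 256 devices x 32 gigabytes gives the old 8-terabyte ceiling; with a full 32-bit page offset, a device can span 2^31 x 2K = 4 terabytes.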
14. Automatic update statistics
Instead of manually running update statistics at a certain time, you can set update statistics to run automatically at the time that best suits your site, and avoid running it at times that hamper your system. The best time to run update statistics is based on feedback from the datachange function, which also helps ensure that you do not run update statistics unnecessarily. Job Scheduler templates, described below, let you determine the objects, schedules, priority, and datachange thresholds that trigger update statistics, ensuring that critical resources are used only when the query processor can generate more efficient plans.
Because it is a resource-intensive task, the decision to run update statistics should be based on a specific set of criteria. Some of the key parameters that can help you determine a good time to run update statistics are:
· How much have the data characteristics changed since you last ran update statistics? This is known as the “datachange” parameter.
· Are there sufficient resources available to run update statistics? These include resources such as idle CPU cycles, and making sure that critical online activity does not occur while update statistics is running.
Datachange is a key metric that measures the amount of data altered since you last ran update statistics, and is tracked by the datachange function. Using this metric and the criteria for resource availability, you can automate the process of running update statistics. Job Scheduler provides the mechanism to run update statistics automatically, and includes a set of customizable templates that determine when to run it. These inputs include all parameters to update statistics, the datachange threshold values, and the time to run update statistics. Job Scheduler runs update statistics at low priority so that it does not affect critical jobs running concurrently.
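As a rough sketch of the datachange-driven approach (mytable and the 50 percent threshold are hypothetical; datachange takes an object name, a partition name, and a column name, where null means all):
-- percentage of mytable modified since statistics were last updated
select datachange("mytable", null, null)
go
-- update statistics only when enough data has changed
if datachange("mytable", null, null) > 50
begin
update statistics mytable
end
go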
15. Query processing metrics (qp metrics)
Query processing (QP) metrics identify and compare empirical metric values in query execution. When a query is executed, it is associated with a set of defined metrics that are the basis for comparison in QP metrics.
The metrics captured include:
· CPU execution time – the time, in milliseconds, it takes to execute the query.
· Elapsed time – the difference in milliseconds between the time the command started and the current time, as taken from the operating system clock.
· Logical IO (LIO) reads – the number of logical IO reads.
· Physical IO (PIO) reads – the number of physical IO reads.
· Count – the number of times a query is executed.
· Abort count – the number of times a query is aborted by the resource governor due to a resource limit being exceeded.
Each metric has three values: minimum, maximum, and average. Count and abort count, which are single totals, are the exceptions.
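Capture is controlled per session through the set command, and captured values can be inspected in the sysquerymetrics view; a minimal sketch, assuming the pubs2 titles table:
set metrics_capture on -- begin capturing QP metrics for this session
go
select title_id from titles where price > 20 -- run the workload of interest
go
sp_metrics 'flush' -- flush aggregated metrics from memory to the system catalogs
go
select * from sysquerymetrics -- inspect the min, max, and avg values per query
go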
16. Encrypted Columns
Adaptive Server version 15.0.2 includes these enhancements to encrypted columns:
· Protects data from administrator – you can protect keys and encrypted columns with your own password to ensure privacy of data against the power of the DBO or System Administrator.
· Maintains application transparency using key copies protected by login passwords. That is, you can create key copies and assign them to individual users. Users can encrypt their key copies using their login passwords. Once a key copy is associated with a login password, users do not have to supply the key encryption password when they access data encrypted with the key.
· Provides for key recovery – You can recover access to a key after losing a password. The key owner sets up a recovery key copy, which can later be used to reencrypt the key after losing the password.
· Returns a default value for users without decrypt permission – you can create or alter a table to allow select statements to return specified default values for users who do not have decrypt permission. This allows you to run existing applications and reports without generating a permission error, while keeping private data secure against unauthorized users. Reports generated by unauthorized users do not reveal the encrypted data.
· Restricts automatic decrypt permissions – when the restricted decrypt permission configuration parameter is enabled, the System Security Officer explicitly grants decrypt permission, restricting access to data. When restricted decrypt permission is enabled:
o Table owners are not implicitly granted decrypt permission. The schema owner does not have automatic and implicit access to user data, even in systems that rely on the system encryption password to access the keys.
o Only users with the sso_role can grant decrypt permission. with grant option is supported for decrypt permission.
o Implicit access through ownership chains across view and tables or procedures and tables is restricted.
· Adds datatypes – you can encrypt these additional datatypes: date, time, datetime, smalldatetime, money, smallmoney, bigint, unsigned bigint, bit, unichar, and univarchar.
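A minimal sketch of the DDL, assuming encrypted columns are licensed and enabled; the key name, password, and table are hypothetical:
-- a key protected by an explicit password rather than the system encryption password
create encryption key ssn_key with passwd 'T0pSecret1'
go
create table employee
(name varchar(30),
-- users without decrypt permission see the decrypt_default value instead of an error
ssn char(11) encrypt with ssn_key decrypt_default 'xxx-xx-xxxx')
go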
17. Archive Database Access
Archive database access allows a database administrator to validate or selectively recover data from a database dump (an “archive”) by making the dump appear as if it were a traditional read-only database; this type of database is called an “archive database.” Unlike a traditional database, an archive database uses the actual database dump as its main disk storage device, with a minimum amount of traditional storage to represent new or modified pages that result from the recovery of the database dump. Because a database dump already contains the images of many (if not most) of the database pages, an archive database can be loaded without having to use Backup Server to transfer pages from the archive to traditional database storage. Consequently, the load is significantly faster than for a traditional database.
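A minimal sketch of the workflow, with hypothetical device, database, and dump-file names:
-- the scratch database holds pages modified while recovering the dump
create archive database prod_archive
on archive_dev = '100M'
with scratch_database = scratchdb
go
load database prod_archive from '/dumps/proddb.dmp'
go
online database prod_archive
go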
18. Statistical aggregate functions
Aggregate functions summarize data over a group of rows from the database. The groups are formed using the group by clause of the select statement.
Simple aggregate functions, such as sum, avg, max, min, count_big, and count are allowed only in the select list and in the having and order by clauses as well as the compute clause of a select statement. These functions summarize data over a group of rows from the database.
Adaptive Server Enterprise now supports statistical aggregate functions, which permit statistical analysis of numeric data. These functions include stddev, stddev_samp, stddev_pop, variance, var_samp, and var_pop.
These functions, including stddev and variance, are true aggregate functions in that they can compute values for a group of rows as determined by the query’s group by clause. As with other basic aggregate functions such as max or min, their computation ignores null values in the input. Also, regardless of the domain of the expression being analyzed, all variance and standard deviation computation uses IEEE double-precision floating-point standard.
If the input to any variance or standard deviation function is the empty set, then each function returns as its result a null value. If the input to any variance or standard deviation function is a single value, then each function returns 0 as its result.
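For example, assuming the pubs2 titles table, a minimal sketch:
-- population standard deviation and sample variance of price, per book type
select type,
stddev_pop(price) as price_sd,
var_samp(price) as price_var
from titles
group by type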
19. bcp performance optimization
In earlier versions, Adaptive Server used fast bcp when sp_dboption 'select into/bulkcopy/pllsort' was enabled and the table had no indexes or triggers. In all other cases, Adaptive Server used slow bcp. If a table had indexes or triggers, you had to drop them before using fast bcp and re-create them after loading the data with bcp.
Adaptive Server version 15.0.2 allows you to use fast bcp to copy data into tables with nonclustered indexes or triggers, improving Adaptive Server’s performance for inserting huge volumes of data. With this optimization, the use of slow bcp is now required only when:
· sp_dboption ‘select into/bulkcopy/pllsort’ is off.
· sp_dboption ‘select into/bulkcopy/pllsort’ is on, but the table uses the allpages locking scheme and has a clustered index.
· sp_dboption ‘select into/bulkcopy/pllsort’ is on, but the table has a unique nonclustered index.
If the ignore_dup_key option is enabled on the unique index, performing fast bcp can put the table and index in an inconsistent state if rows with duplicate keys are inserted. To avoid the inconsistency, Adaptive Server performs slow bcp.
· If the table has nonclustered indexes or triggers, and the table is marked for replication or the database is used as a warm standby.
Because fast bcp does not log inserts, if Adaptive Server uses fast bcp, the rows bcp copies cannot be recovered on the replication site if there is a problem. Adaptive Server uses slow bcp in these situations to maintain compatibility with applications that were written to use the old behavior.
In all other cases, Adaptive Server uses fast bcp. These are the situations in which Adaptive Server version 15.0.2 uses fast bcp but earlier versions of Adaptive Server used slow bcp (in all cases sp_dboption ‘select into/bulkcopy/pllsort’ is enabled and the table does not have a clustered index):
· When the table has a non-unique, nonclustered index. Adaptive Server logs the index updates and the page allocations only. It does not log inserts into the table.
· When a table has triggers. However, bcp does not fire any triggers in the target table.
· When a table uses the datarows or datapages locking scheme and has a clustered index.
If the table includes nonclustered indexes or triggers, but sp_dboption 'select into/bulkcopy/pllsort' is not enabled, Adaptive Server uses slow bcp and prints this warning message to indicate that you can improve the performance of bcp by enabling the option: Performing slow bcp on table '%s'. To enable fast bcp please turn on 'select into/bulkcopy' option on the database '%s'.
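To enable the option, a minimal sketch from isql (pubs2 is a placeholder database):
use master
go
sp_dboption pubs2, 'select into/bulkcopy/pllsort', true
go
use pubs2
go
checkpoint -- activate the changed option
go
From the operating system, the load itself would then be something like bcp pubs2..titles in titles.dat -c -Usa -Sserver, with the data file and login as placeholders.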
bcp optimization is performed by Adaptive Server and does not require that you use Open Client version 15.0 or later.
20. Separate user log cache for a session’s tempdb
Previous versions of Adaptive Server flushed the user log cache (ULC) as user sessions switched between transactions in the user databases and tempdb. However, flushing the ULC causes logical IOs in syslogs and disk IOs, degrading performance. Adaptive Server version 15.0.2 includes a separate ULC for the session’s temporary database, so multi-database transactions that include a single user database and the session’s temporary database do not require ULC flushes when the users switch between the databases, or if all of the following conditions are met:
· Adaptive Server is currently committing the transaction.
· All the log records are in the ULC.
· There are no post-commit log records.
Adaptive Server version 15.0.2 adds the configuration option session tempdb log cache size, which allows you to configure the size of the ULC, helping to determine how often it needs flushing.
Summary information
Default value | The logical page size
Range of values | The logical page size up to 2,147,483,647
Status | Static
Display level | Comprehensive
Required role | System Administrator
Configuration group |
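A minimal sketch of setting the option (the value is illustrative and in bytes; because the parameter is static, a server restart is required):
sp_configure 'session tempdb log cache size', 32768
go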
21. Optimizations to improve throughput of tempdb transactions
Earlier versions of Adaptive Server flushed data pages and single log records (SLRs) even for tempdb and other databases that do not require crash recovery; Adaptive Server 15.0.2 avoids this work. SLRs are log records that force a flush of the user log cache (ULC) to syslogs immediately after the record is logged. SLRs are created for OAM modifications, and Adaptive Server creates log records affecting allocation pages in a mixed log and data database as SLRs.
· For regular databases, a ULC containing SLRs is flushed immediately to avoid undetected deadlocks caused during buffer “pinning”. Avoiding a ULC flush for SLRs reduces log semaphore contention, improving performance.
A ULC flush avoids the deadlock caused by buffer pinning. Because Adaptive Server does not pin the buffers for databases that do not need recovery, it avoids this deadlock and does not have to flush the ULC for SLRs.
· For databases that require recovery, Adaptive Server flushes dirty pages to disk during the checkpoint. This ensures that, if Adaptive Server crashes, all the committed data is saved to disk. However, for databases which do not require recovery, Adaptive Server supports a runtime rollback, but not a crash recovery. This allows it to avoid flushing dirty data pages at a checkpoint and improves performance.
· Adaptive Server does not support write ahead logging on databases that do not require recovery. Write-ahead logging guarantees that data for committed transactions can be recovered by “redoing” the log (reperforming the transactions listed in the log), and “undoing” the changes done by aborted or rolled back transactions to maintain database consistency. Write-ahead logging is implemented by the “buffer pinning” mechanism. Since Adaptive Server does not ensure write-ahead logging on databases not needing recovery, it does not pin buffers for these databases, so it can skip flushing the log when it commits a transaction.
22. Row-level Locking for System Tables
Versions of Adaptive Server earlier than 15.0.2 used exclusive table locks on system tables while executing data definition language (DDL) and utility commands. The set of system tables Adaptive Server locked depended on the type of DDL operation you executed. If another DDL command running concurrently tried to take a conflicting exclusive table lock on the same system table, it had to wait to acquire the lock, so these DDL operations executed serially. This methodology impeded performance in temporary databases, where DDL activity, and consequently catalog contention, is very high. This limited Adaptive Server throughput for applications using temporary tables.
Adaptive Server version 15.0.2 uses row-level locking to resolve these issues:
· System-table contention, which caused a bottleneck for many DDLs and utilities.
· tempdb contention. Because the system tables are locked at the row level, Adaptive Server 15.0.2 eliminates tempdb contention.
· Shared or exclusive table-level locks while executing DDLs and utilities. Earlier versions converted most system tables to data-only locking (DOL), but still created shared or exclusive table-level locks while executing DDLs and utilities. Using row-level locks for system tables eliminates this contention.
Adaptive Server sets intent locks on catalogs only, which removes potential contention (an intent lock indicates that page-level or row-level locks are currently held on a table).
· DDLs and utilities blocking each other. Adaptive Server 15.0.2 allows DDLs and utilities to run in parallel.
Earlier versions of Adaptive Server used table locks to achieve system catalog synchronization. Adaptive Server 15.0.2 uses intent locks for table-level synchronization and row locks for row-level synchronization. Earlier releases locked the entire system catalog while performing operations on the object, so a single lock request was made; Adaptive Server version 15.0.2 instead requests locks for all applicable rows when there are multiple rows corresponding to an object in a system catalog.
This change means that Adaptive Server 15.0.2 requests more locks to perform the same operation than earlier releases, and increases the number of lock resources the system needs. Consequently, you may need to change the number of locks configuration option after you upgrade Adaptive Server.
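For example, a minimal sketch of raising the limit (the value is illustrative; size it to your workload):
sp_configure 'number of locks', 100000
go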
23. User defined SQL functions
You can include these in a scalar function:
· declare statements to define data variables and cursors that are local to the function.
· Assigned values to objects local to the function (for example, assigning values to scalar variables local to the function with select or set commands).
· Cursor operations that reference local cursors that are declared, opened, closed, and deallocated in the function.
· Control-of-flow statements.
· set options (only valid in the scope of the function).
Adaptive Server does not allow fetch statements in a scalar function that return data to the client. You cannot include:
· select or fetch statements that return data to the client.
· insert, update, or delete statements.
· Utility commands, such as dbcc, dump and load commands.
· print statements.
· Statements that reference rand, rand2, getdate, or newid.
You can include select or fetch statements that assign values only to local variables.
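A minimal sketch of a scalar function that follows these rules; the name and logic are hypothetical:
create function fahrenheit (@celsius float)
returns float
as
begin
-- declare and assign a local variable; nothing here returns data to the client
declare @result float
select @result = (@celsius * 9.0 / 5.0) + 32.0
return @result
end
go
-- scalar functions are invoked with an owner qualifier
select dbo.fahrenheit(100.0)
go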
24. Changes to sp_sysmon
Adaptive Server version 15.0.1 changes the default behavior of sp_sysmon. In Adaptive Server version 15.0.1 and later, sp_sysmon does not clear the monitor counters; you no longer have to specify the noclear option to prevent sp_sysmon from clearing them. This is the default behavior.
If you need to clear the monitor counters, use sp_sysmon with the clear option. For compatibility reasons, Adaptive Server accepts the noclear option as a valid parameter, but it does not affect the behavior of sp_sysmon.
However, if you run sp_sysmon using the begin_sample and end_sample options to begin and end the sample period, sp_sysmon always clears the monitor counters. Adaptive Server issues an error message if you run sp_sysmon with begin_sample or end_sample and the noclear option.
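For example, a minimal sketch of the sampling options (per the note above, begin_sample and end_sample always clear the counters):
sp_sysmon begin_sample
go
-- run the workload you want to measure
sp_sysmon end_sample
go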
25. Dump and load across platforms with different endian architecture
Adaptive Server allows you to dump and load a database between big endian and little endian architectures, in either direction.
Restrictions
· dump transaction and load transaction are not allowed across platforms.
· dump database and load database to or from a remote backupserver are not supported across platforms.
· You cannot load a password-protected dump file across platforms.
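A minimal sketch of a cross-platform move, with hypothetical file paths; after a cross-platform load, run sp_post_xpload to check and rebuild indexes:
-- on the source platform
dump database pubs2 to '/dumps/pubs2.dmp'
go
-- on the target platform with the opposite endian architecture
load database pubs2 from '/dumps/pubs2.dmp'
go
online database pubs2
go
use pubs2
go
sp_post_xpload -- verify and rebuild indexes after the cross-platform load
go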
Enjoy these performance-boosting new features in ASE 15.x.