SQL Server Input/Output (I/O) Configuration with PeopleSoft


Ensure that the storage system used for the database server is configured for optimal performance. Incorrect configuration of the I/O system can severely degrade the performance of your system. Sizing and placement of your application database data files, log files, and the tempdb system database play a major role in dictating overall performance.

PeopleSoft applications involve online transaction processing (OLTP), which mainly results in random data access, as well as batch processing that often results in sequential access. Consider how much random data access and sequential access your applications will be making when selecting the disk I/O configuration.

RAID Type Recommendations
A common debate when discussing RAID options is the relative performance of RAID 5 versus RAID 10. RAID 10 will outperform a RAID 5 set built from the same number of drives, for the following reasons:

• Write performance for RAID 10 is superior. A write operation on RAID 5 requires four physical I/O operations (read the data block, read the parity block, write the data block, write the parity block), whereas RAID 10 requires only two (one write to each side of the mirror).
• Read performance of RAID 10 is enhanced in most implementations by balancing read requests across the two drives participating in the mirror.

RAID 0 is unsuitable for use in a SQL Server environment because the loss of a single drive results in the loss of data. Even tempdb should not be placed on RAID 0 in a production environment, because the loss of one drive in the RAID 0 set would cause an outage of the entire SQL Server instance. A possible use of RAID 0 is as a temporary location for disk backups before they are written to tape or to another location.

RAID 1 is appropriate for objects such as the SQL Server binaries, the master database, and the msdb database. The I/O requirements for these objects are minimal, so they gain little from striping, but they must be fault tolerant to keep the instance continuously available.

Note. Isolate the transaction log from all other I/O activity; no other files should exist on the drives that contain the log file. This ensures that, with the exception of transaction log backups and the occasional rollback, nothing disturbs the sequential nature of transaction log activity.
RAID 10 affords the best overall performance, making it the preferred choice for all database files.

RAID level recommendations for PeopleSoft applications:

• SQL Server binaries, master database, and msdb database: RAID 1.
• Transaction log files (isolated on their own drives): RAID 10.
• Database data files and tempdb: RAID 10.
Typical I/O Performance Recommended Range
For PeopleSoft applications with high performance requirements, the recommended range for SQL Server data and log files in milliseconds (ms) per read and milliseconds per write is as follows:

SQL Server data files:
• Up to 10 ms is good.
• 10 to 20 ms is acceptable.
• 20 to 30 ms is slow.
• Above 30 ms adversely affects the performance of the system and is usually not acceptable, especially for high-throughput deployments.

SQL Server transaction log files:
• Up to 10 ms is good.
• 10 to 20 ms is acceptable.
• Above 20 ms, transactions stall waiting on log writes, which typically means that the CPU resources of the database server cannot be fully utilized.

Files, Filegroups, and Object Placement Strategies
The complex pattern of I/O activity and the large number of tables and indexes in the PeopleSoft database make attempting strategic placement of objects and object types (that is, tables and indexes) a difficult task. A better strategy is to spread a single user-defined filegroup across as many physical drives as possible, which puts the entire storage system to work completing as many I/O requests as possible in parallel. It is recommended that you create a user-defined filegroup and then create secondary data files in it. Mark the user-defined filegroup as default and place the PeopleSoft objects in it. For increased manageability, it is also recommended that you create multiple files in the user-defined filegroup.
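The filegroup strategy above can be sketched in T-SQL. This is a sketch only: the database name (PSFT), filegroup name (PSDATA), file names, drive letters, and sizes are placeholder assumptions; substitute values appropriate for your environment.

```sql
-- Create a user-defined filegroup for the PeopleSoft objects.
ALTER DATABASE PSFT ADD FILEGROUP PSDATA;

-- Spread several secondary data files across the striped RAID set.
ALTER DATABASE PSFT
    ADD FILE (NAME = psdata1, FILENAME = 'E:\MSSQL\Data\psdata1.ndf',
              SIZE = 4096MB, FILEGROWTH = 512MB)
    TO FILEGROUP PSDATA;
ALTER DATABASE PSFT
    ADD FILE (NAME = psdata2, FILENAME = 'F:\MSSQL\Data\psdata2.ndf',
              SIZE = 4096MB, FILEGROWTH = 512MB)
    TO FILEGROUP PSDATA;

-- Mark the filegroup as default so new PeopleSoft objects land in it.
ALTER DATABASE PSFT MODIFY FILEGROUP PSDATA DEFAULT;
```

Marking the filegroup as default means objects created without an explicit ON clause are placed in it automatically, which matches how PeopleSoft-generated DDL typically behaves.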

A possible exception to this strategy is the isolation of temporary tables (described later in this document). Temporary tables store intermediate results during batch processing; therefore, isolating temporary tables from the rest of the PeopleSoft database objects has the potential to reduce the interference that batch processing can have with OLTP processing.

Note. PeopleSoft applications do allow you to assign tables and indexes to specific filegroups. To do so, update PeopleTools tables PSDDLMODEL and PSDDLDEFPARMS. When used, each table and index script is generated with its specific filegroup included.

Tempdb Placement and Tuning
The tempdb system database is a global resource that is available to all users connected to the instance of SQL Server. It holds the following:

• Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary stored procedures, table variables, and cursors.
• Internal objects that are created by the SQL Server Database Engine, for example, work tables to store intermediate results for spools or sorting.
• Row versions that are generated by data modification transactions in a database that uses read-committed snapshot isolation or snapshot isolation.
• Row versions that are generated by data modification transactions for features, such as online index operations.

In current versions of SQL Server, tempdb requires more disk space than in earlier versions. Configuring tempdb for best performance is critical because features such as the read-committed snapshot isolation level and online index operations place additional stress on tempdb.

It is recommended that tempdb be isolated from other database activity on its own RAID set of physical disks. It is especially important to use RAID 10 for tempdb. To move tempdb to its own set of RAID disks, use the ALTER DATABASE statement with the MODIFY FILE clause to specify a new location for the tempdb data file and log file.

You may also want to increase the SIZE clause to a larger value, such as 100 MB, and increase the FILEGROWTH clause to 50 MB. To preallocate space for all tempdb files, set the file size to a value large enough to accommodate the typical workload in the environment; this prevents tempdb from expanding too frequently, which can affect performance. Leave autogrow enabled, but rely on it only to cover unplanned exceptions.
Pre-sizing tempdb to a sufficiently large value is strongly recommended.
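The relocation and pre-sizing steps above can be sketched as follows. The drive letter and paths are placeholder assumptions for your environment; tempdev and templog are the default logical names of the tempdb data and log files. The new file locations take effect the next time the instance is restarted.

```sql
-- Move tempdb's data and log files to a dedicated RAID 10 set
-- and pre-size them with a fixed growth increment.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\MSSQL\Data\tempdb.mdf',
                 SIZE = 100MB, FILEGROWTH = 50MB);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'T:\MSSQL\Log\templog.ldf',
                 SIZE = 100MB, FILEGROWTH = 50MB);
```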

When the READ_COMMITTED_SNAPSHOT database option is ON, logical copies are maintained for all data modifications performed in the database. Every time a row is modified by a transaction, the Database Engine stores a version of the previously committed image of the row in tempdb. Therefore, tempdb must have enough capacity to store the row versions in addition to the other objects it holds.

Set the file growth increment to a reasonable size to prevent the tempdb database files from growing by too small a value. If the growth increment is too small relative to the amount of data being written to tempdb, tempdb must expand constantly, which affects performance. See the following general guidelines for setting the FILEGROWTH increment for tempdb files.

• tempdb file size 0 to 100 MB: FILEGROWTH increment of 10 MB.
• tempdb file size 100 to 200 MB: FILEGROWTH increment of 20 MB.
• tempdb file size 200 MB or more: FILEGROWTH increment of 10%.

You may have to adjust these values based on the speed of the I/O subsystem on which the tempdb files are located. To avoid potential latch time-outs, limit each autogrow operation to approximately two minutes of I/O. For example, if the I/O subsystem can initialize a file at 50 MB per second, set the FILEGROWTH increment to a maximum of 6 GB (50 MB/s × 120 s), regardless of the tempdb file size. The instant file initialization feature can improve the performance of autogrow operations.

Note. Monitor for and avoid automatic file growth, because it impacts performance. Every time SQL Server starts, tempdb is re-created at its configured default size; although tempdb can grow, growing it consumes resources. To reduce this overhead, consider permanently increasing the default size of tempdb after carefully monitoring its growth.

Also, consider adding multiple data files to tempdb rather than maintaining just one. This can reduce contention on tempdb. To add multiple data files, use the ALTER DATABASE statement with the ADD FILE clause.

Using multiple files reduces tempdb storage contention and yields significantly better scalability. As a general guideline, create one data file for each CPU processor core on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary. Note that a dual-core CPU is considered to be two CPUs for this purpose.
Make each data file the same size; this allows for optimal proportional-fill performance.
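The multiple-file guideline above can be sketched as follows. This assumes, purely for illustration, a four-core server with no affinity mask, so tempdb gets four data files in total: the existing primary file (tempdev) plus three additional files. Paths and sizes are placeholders.

```sql
-- Add data files so tempdb has one file per CPU core (4 here).
-- Every file is the same size, so proportional fill stays balanced.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\MSSQL\Data\tempdb2.ndf',
              SIZE = 100MB, FILEGROWTH = 50MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\MSSQL\Data\tempdb3.ndf',
              SIZE = 100MB, FILEGROWTH = 50MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\MSSQL\Data\tempdb4.ndf',
              SIZE = 100MB, FILEGROWTH = 50MB);
```

If the existing primary file differs in size from the new files, resize it with MODIFY FILE so that all files match.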

Data and Log File Sizing
For the PeopleSoft installation, it is critical that you set sizes for the database data and log files appropriately. Ensure that the data and log files always have enough capacity to allow data modifications to happen seamlessly without causing a physical file expansion (autogrow). In other words, the data and log files should be pre-grown to a sufficiently large size.

It is recommended that you enable autogrow for the data and log files; however, it is meant as a fallback mechanism for the event that file expansion is required. For large databases, enable autogrow by size (MB) rather than by percent, so that growth increments remain predictable as the files get large.
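The autogrow-by-size recommendation can be sketched as follows; the database name (PSFT), logical file names, and increments are placeholder assumptions for your environment.

```sql
-- Grow by a fixed size (MB) rather than by percent, so each growth
-- event stays a predictable, bounded amount of I/O.
ALTER DATABASE PSFT
    MODIFY FILE (NAME = psdata1, FILEGROWTH = 512MB);
ALTER DATABASE PSFT
    MODIFY FILE (NAME = pslog, FILEGROWTH = 256MB);
```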