Oracle Automatic Storage Management (ASM)


Automatic Storage Management (ASM) provides a vertical integration of the file system and the volume manager that is specifically built for the Oracle database files. ASM can provide management for single SMP machines, or across multiple nodes of a cluster for Oracle Real Application Clusters (RAC) support. A single ASM instance can provide support for many Oracle databases simultaneously.

ASM distributes I/O load across all available resources to optimize performance while removing the need for manual I/O tuning. ASM helps DBAs to manage a dynamic database environment by allowing them to increase the database size without having to shut down the database to adjust the storage allocation.

ASM can maintain redundant copies of data to provide fault tolerance, or it can be built on top of vendor-supplied reliable storage mechanisms. ASM may use virtual raw volumes provided in a storage area network (SAN) environment or zero-padded files on a network attached storage (NAS) filer in addition to using local raw devices. Data management is done by selecting the desired reliability and performance characteristics for classes of data rather than through per-file human interaction. ASM saves DBAs’ time by automating manual storage tasks, thereby increasing their ability to manage larger databases, and more of them, with greater efficiency.

ASM divides files into allocation units (AUs) and spreads the AUs for each file evenly across all the disks. ASM uses an index technique to track the placement of each AU. When your storage capacity changes, ASM does not restripe all of the data, but moves an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced load across the disks. This is done while the database is up.

You can increase the speed of a rebalance operation, or lower it to reduce the impact on the I/O subsystem. ASM provides mirroring protection without the need to purchase a third-party Logical Volume Manager. One unique advantage of ASM is that the mirroring is applied on a file basis, rather than on a volume basis. Therefore, the same disk group can contain a combination of files protected by mirroring, along with those that are not protected at all.

ASM supports data files, log files, control files, archive log files, temp files, SPFILEs, RMAN backup sets, and other Oracle database file types. ASM supports Real Application Clusters and eliminates the need for a Cluster Logical Volume Manager or a Cluster File System.

General Architecture
To use ASM, you must start a special instance, called an ASM instance, before you start your database instance. ASM instances do not mount databases; instead, they manage the metadata needed to make ASM files available to ordinary database instances. Both ASM instances and database instances have access to a common set of disks called disk groups. Database instances access the contents of ASM files directly, communicating with an ASM instance only to get information about the layout of these files.

An ASM instance starts several background processes specific to ASM. One process, called RBAL, coordinates rebalance activity for disk groups. Other processes perform the actual rebalance AU movements; there can be many of these at a time, and they are called ARB0, ARB1, and so forth. The GMON (Group Monitor) process maintains the partner and status tables and disk membership for disk groups. An ASM instance also has some of the same background processes as a database instance, including SMON, PMON, LGWR, DBWR, and CKPT.

Each database instance using ASM has two extra background processes called ASMB and RBAL. RBAL performs global opens of the disks in the disk groups. At database instance startup, ASMB connects as a foreground process to the ASM instance. Communication between the database and the ASM instance is performed via this bridge. This includes physical file changes such as data file creation and deletion. Over this connection, periodic messages are exchanged to update statistics and to verify that both instances are healthy.

Creating an ASM Instance
You create an ASM instance by running the Database Configuration Assistant (DBCA). On the first page, select the Configure Automatic Storage Management option, and then follow the steps. The ASM instance is created and started for you. Then you are guided through the process of defining disk groups for the instance.

As part of the ASM instance creation process, the DBCA automatically creates an entry in the oratab file. This entry is used for discovery purposes. On the Windows platform, where a services mechanism is used, the DBCA automatically creates an Oracle Service and the appropriate registry entry to facilitate the discovery of ASM instances. In addition, you are prompted to run the localconfig script that configures Cluster Synchronization Services to manage the ASM instance. When an ASM instance is configured, the DBCA creates an ASM instance parameter file and an ASM instance password file.
If you create an ASM-enabled database, the DBCA determines whether an ASM instance already exists on your host. If ASM instance discovery returns an empty list, the DBCA creates a new ASM instance.

Create ASM Manually

Create ASM Through DBCA

ASM Instance Initialization Parameters
An ASM instance is controlled by a parameter file in the same way as a regular database instance. Parameters commonly set in this file include the following (a sample parameter file is sketched after the note below):

          INSTANCE_TYPE should be set to ASM for ASM instances. This is the only parameter that must be defined.
          DB_UNIQUE_NAME specifies the service provider name for which this ASM instance manages disk groups.
          ASM_POWER_LIMIT controls the speed for a rebalance operation. Values range from 1 through 11, with 11 being the fastest. If omitted, this value defaults to 1.
          ASM_DISKSTRING is an operating system–dependent value used by ASM to limit the set of disks considered for discovery.
          ASM_DISKGROUPS is the list of names of disk groups to be mounted by an ASM instance at startup, or when the ALTER DISKGROUP ALL MOUNT command is used.

Note: Automatic memory management is enabled by default on ASM instances, even when the MEMORY_TARGET parameter is not explicitly set. This is the only parameter that you need to set for complete ASM memory management. Oracle Corporation strongly recommends that you use automatic memory management for ASM.
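
For reference, a minimal ASM instance parameter file might look like the following sketch. The disk string, disk group names, and memory size shown here are illustrative assumptions, not values prescribed by this lesson:

INSTANCE_TYPE   = ASM
ASM_POWER_LIMIT = 1                  # default rebalance speed
ASM_DISKSTRING  = '/dev/rdsk/*'      # limit discovery to these devices (assumed path)
ASM_DISKGROUPS  = DATA, FRA          # disk groups to mount at startup
MEMORY_TARGET   = 256M               # automatic memory management for the ASM instance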

Starting Up an ASM Instance
ASM instances are started similarly to database instances except that the initialization parameter file contains the entry INSTANCE_TYPE=ASM. When this parameter is set to the value ASM, it informs the Oracle executable that an ASM instance is starting, not a database instance. Also, the ORACLE_SID variable must be set to the ASM instance name. When the ASM instance starts up, the mount stage attempts to mount the disk groups specified by the ASM_DISKGROUPS initialization parameter rather than mounting a database, as is done with non-ASM instances.

Other STARTUP clauses have interpretations for ASM instances that are comparable to those for database instances. OPEN is invalid for an ASM instance. NOMOUNT starts up the ASM instance without mounting any disk groups.
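
As a sketch of a typical startup, assuming ORACLE_SID is already set to the ASM instance name (conventionally +ASM) and OS authentication is in place:

SQL> CONNECT / AS SYSASM
SQL> -- STARTUP mounts the disk groups listed in ASM_DISKGROUPS rather than opening a database
SQL> STARTUP
SQL> -- Verify that the disk groups were mounted
SQL> SELECT name, state FROM V$ASM_DISKGROUP;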

SYSASM Role
The SYSASM role is specifically intended for performing ASM administration tasks. Using the SYSASM role instead of the SYSDBA role improves security by separating ASM administration from database administration.

In Oracle Database 11g Release 1, the OS group for SYSASM and SYSDBA is the same, and the default installation group for SYSASM is dba. In a future release, separate groups will have to be created, and SYSDBA users will be restricted in ASM instances. You can also use the combination of CREATE USER and GRANT SYSASM SQL statements from an ASM instance to create a new SYSASM user. This is possible as long as the name of the user is an existing OS username. These commands update the password file of each ASM instance, and do not need the instance to be up and running. Similarly, you can revoke the SYSASM role from a user using the REVOKE command, and you can drop a user from the password file using the DROP USER command. The V$PWFILE_USERS view includes a new column called SYSASM that indicates whether the user can connect with SYSASM privileges (TRUE) or not (FALSE).
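
The following sketch illustrates the statements mentioned above; the user name asm_admin is an illustrative assumption (and, per the restriction above, should match an existing OS user name):

SQL> CREATE USER asm_admin IDENTIFIED BY a_password;
SQL> GRANT SYSASM TO asm_admin;
SQL> -- Check the password file
SQL> SELECT username, sysasm FROM V$PWFILE_USERS;
SQL> -- Later, if required
SQL> REVOKE SYSASM FROM asm_admin;
SQL> DROP USER asm_admin;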

Note: In Oracle Database 11g Release 1, if you log in to an ASM instance as SYSDBA, warnings are written in the corresponding alert.log file.

Accessing an ASM Instance
ASM instances do not have a data dictionary, so the only way to connect to one is by using OS authentication, that is, SYSASM, SYSDBA, or SYSOPER. To connect remotely, a password file must be used. Users who connect to the ASM instance with the SYSASM or SYSDBA privileges have administrative access to all disk groups in the system. The SYSOPER privilege is supported in ASM instances and limits the set of allowable SQL commands to the minimum required for basic operation of an already configured system.

The following commands are available to SYSOPER users:
          STARTUP/SHUTDOWN
          ALTER DISKGROUP MOUNT/DISMOUNT
          ALTER DISKGROUP ONLINE/OFFLINE DISK
          ALTER DISKGROUP REBALANCE
          ALTER DISKGROUP CHECK
          SELECT all V$ASM_* views
           
All other commands, such as CREATE DISKGROUP, ADD/DROP/RESIZE DISK, and so on, require the SYSASM or SYSDBA privilege and are not allowed with the SYSOPER privilege.
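
For example, a SYSOPER session limited to routine operations might look like the following sketch (dgroupA is an illustrative disk group name):

SQL> CONNECT / AS SYSOPER
SQL> ALTER DISKGROUP ALL MOUNT;
SQL> ALTER DISKGROUP dgroupA CHECK;
SQL> SELECT name, state FROM V$ASM_DISKGROUP;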

Using Enterprise Manager to Manage ASM Users
Enterprise Manager allows you to manage the users who access the ASM instance through remote connection (using password file authentication). These users are reserved exclusively for the ASM instance.
You have this functionality only when you are connected as the SYSASM user. It is hidden if you connect as SYSDBA or SYSOPER users.

          When you click the Create button, the Create User page is displayed.
          When you click the Edit button, the Edit User page is displayed.
          When you click the Delete button, you can delete previously created users.

Note: Oracle Database 11g adds the SYSASM role to the ASM instance login page.

Shutting Down an ASM Instance
When you attempt to shut down an ASM instance in the NORMAL, IMMEDIATE, or TRANSACTIONAL modes, it succeeds only if there are no database instances connected to the ASM instance. If there is at least one connected instance, you receive the following error: 
ORA-15097: cannot SHUTDOWN ASM instance with connected RDBMS instance
If you perform a SHUTDOWN ABORT on the ASM instance, it shuts down, and it will require recovery at the time of the next startup. Any connected database instances will also eventually shut down, reporting the following error:
ORA-15064: communication failure with ASM instance
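
A typical orderly shutdown therefore closes the dependent database instances first and then the ASM instance, as in this sketch:

SQL> -- From each database instance that uses this ASM instance
SQL> SHUTDOWN IMMEDIATE
SQL> -- Then, connected to the ASM instance AS SYSASM
SQL> SHUTDOWN IMMEDIATE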

In a single–ASM instance configuration, if the ASM instance fails while disk groups are open for update, then after the ASM instance reinitializes, it reads the disk group’s log and recovers all transient changes. With multiple ASM instances sharing disk groups, if one ASM instance fails, another ASM instance automatically recovers transient ASM metadata changes caused by the failed instance. The failure of a database instance does not affect ASM instances. The ASM instance should be started automatically whenever the host is rebooted. The ASM instance is expected to use the automatic startup mechanism supported by the underlying operating system. Note that file system failure usually crashes a node.

ASM Disk Groups
A disk group is a collection of disks managed as a logical unit. Storage is added and removed from disk groups in units of ASM disks. Every ASM disk has an ASM disk name, which is a name common to all nodes in a cluster. The ASM disk name abstraction is required because different hosts can use different names to refer to the same disk.

An ASM file can begin with 1 MB extents; as the file size increases, the extent size also increases to 8 MB and then 64 MB at predefined numbers of extents. Therefore, the size of the extent map defining a file can be smaller by a factor of 8 or 64, depending on the size of the file. The initial extent size is equal to the AU size, and it increases by factors of 8 and 64 at predefined thresholds. This is automatic for newly created files after the COMPATIBLE.ASM and COMPATIBLE.RDBMS parameters have been advanced to 11.1.

ASM always spreads files evenly across all the disks in a disk group. This is called coarse striping. That way, ASM eliminates the need for manual disk tuning. However, disks in a disk group should have similar size and performance characteristics to obtain optimal I/O. For most installations, there is only a small number of disk groups—for instance, one disk group for a work area, and one for a recovery area. For files, such as log files, that require low latency, ASM provides fine-grained (128 KB) striping. Fine striping stripes each AU.
Fine striping breaks up medium-sized I/O operations into multiple smaller I/O operations that execute in parallel. Even as the number of files and disks increases, you have to manage only a constant number of disk groups. From a database perspective, disk groups can be specified as the default location for files created in the database.

Note: Each disk group is self-describing, containing its own file directory and disk directory.
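
A simple disk group creation is sketched below; the group name and device paths are illustrative assumptions, and EXTERNAL REDUNDANCY assumes the storage array already provides protection:

SQL> CREATE DISKGROUP dgroupA EXTERNAL REDUNDANCY DISK '/devices/diskA1', '/devices/diskA2';
SQL> -- From the database instance, make the disk group the default location for new files
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+dgroupA';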

Failure Group
A failure group is a set of disks, inside one particular disk group, sharing a common resource whose failure needs to be tolerated. An example of a failure group is a string of SCSI disks connected to a common SCSI controller. A failure of the controller leads to all the disks on its SCSI bus becoming unavailable, although each of the individual disks is still functional.

What constitutes a failure group is site specific. It is largely based upon failure modes that a site is willing to tolerate. By default, ASM assigns each disk to its own failure group. When creating a disk group or adding a disk to a disk group, administrators may specify their own grouping of disks into failure groups. After failure groups are identified, ASM can optimize file layout to reduce the unavailability of data due to the failure of a shared resource.
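
For example, a normal-redundancy disk group whose failure groups reflect two controllers might be created as in the following sketch (controller and device names are illustrative assumptions):

SQL> CREATE DISKGROUP dgroupB NORMAL REDUNDANCY
       FAILGROUP controller1 DISK '/devices/c1disk1', '/devices/c1disk2'
       FAILGROUP controller2 DISK '/devices/c2disk1', '/devices/c2disk2';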

Disk Group Mirroring
ASM has three disk group types that support different types of mirroring:

          External redundancy: Does not provide mirroring. Use an external-redundancy disk group if you use hardware mirroring or if you can tolerate data loss as the result of a disk failure. Failure groups are not used with these types of disk groups.
          Normal redundancy: Supports two-way mirroring
          High redundancy: Provides triple mirroring

ASM does not mirror disks; rather, it mirrors AUs. As a result, you need only spare capacity in your disk group rather than a dedicated hot-spare disk. When a disk fails, ASM automatically reconstructs the contents of the failed disk on the surviving disks in the disk group by reading the mirrored contents from the surviving disks. This spreads the I/O hit from a disk failure across several disks.

When ASM allocates a primary AU of a file to one disk in a disk group, it allocates a mirror copy of that AU to another disk in the disk group. Primary AUs on a given disk can have their mirror copies on one of several partner disks in the disk group. ASM ensures that a primary AU and its mirror copy never reside in the same failure group. If you define failure groups for your disk group, ASM can tolerate the simultaneous failure of multiple disks in a single failure group.

Disk Group Dynamic Rebalancing

          With ASM, the rebalance process is very easy and happens without any intervention from the DBA or system administrator. ASM automatically rebalances a disk group whenever disks are added or dropped. However, rebalancing is delayed when disks are dropped because of errors.
          By using index techniques to spread AUs on the available disks, ASM does not need to restripe all of the data, but instead needs to only move an amount of data proportional to the amount of storage added or removed to evenly redistribute the files and maintain a balanced I/O load across the disks in a disk group.
          With the I/O balanced whenever files are allocated and whenever the storage configuration changes, the DBA never needs to search for hot spots in a disk group and manually move data to restore a balanced I/O load. However, because the database needs to resync cached ASM metadata, rebalances will have less impact if done in quiet periods.
          It is more efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids unnecessary movement of data. With this technique, it is easy to achieve online migration of your data: all you need to do is add the new disks and drop the old ones in one operation, as shown in the sketch after this list.
          You can control how much load the rebalance operation places on the system by setting the ASM_POWER_LIMIT parameter. Its range of values is 1 through 11. The lower the number, the lighter the load and the longer the rebalance takes; a higher setting places more load on the system but finishes sooner.
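
A sketch of such a single-operation migration follows; the device paths, the ASM disk name being dropped, and the power level are illustrative assumptions:

SQL> ALTER DISKGROUP dgroupA
       ADD DISK '/devices/diskA5', '/devices/diskA6'
       DROP DISK DGROUPA_0001
       REBALANCE POWER 5;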

Managing Disk Groups
The main goal of an ASM instance is to manage disk groups and protect their data. ASM instances also communicate file layout to database instances. In this way, database instances can directly access files stored in disk groups.

There are several disk group administrative commands. They all require the SYSASM or SYSDBA privilege and must be issued from an ASM instance.
You can create new disk groups. You can also alter existing disk groups to add new disks, remove existing ones, and perform many other operations. Finally, you can drop disk groups that are no longer needed.
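
For example, removing a disk group that is no longer needed might look like the following sketch; the disk group name is illustrative, and the group must be mounted by the local ASM instance (and by no other instance) when it is dropped:

SQL> DROP DISKGROUP dgroupC INCLUDING CONTENTS;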

ASM Disk Group Compatibility
Two kinds of compatibility apply to ASM disk groups: one dealing with the persistent data structures that describe a disk group, and one dealing with the capabilities of the clients (the consumers of disk groups). These attributes are called ASM compatibility and RDBMS compatibility, respectively. The compatibility of each disk group is independently controllable. This is required to enable heterogeneous environments with disk groups from both Oracle Database 10g and Oracle Database 11g. These two compatibility settings are attributes of each ASM disk group:

          RDBMS compatibility refers to the minimum compatible version of the RDBMS instance that would allow the instance to mount the disk group. This compatibility dictates the format of messages that are exchanged between the ASM and database (RDBMS) instances. An ASM instance has the capability to support different RDBMS clients running at different compatibility settings. The database compatible version setting of each instance must be greater than or equal to the RDBMS compatibility of all disk groups used by that database. Database instances are typically run from a different Oracle home than the ASM instance. This implies that the database instance may be running a different software version than the ASM instance. When a database instance first connects to an ASM instance, it negotiates the highest version that they both can support.

The compatibility parameter setting of the database, software version of the database, and the RDBMS compatibility setting of a disk group determine whether a database instance can mount a given disk group.

          ASM compatibility refers to the persistent compatibility setting controlling the format of data structures for ASM metadata on disk. The ASM compatibility level of a disk group must always be greater than or equal to the RDBMS compatibility level of the same disk group. ASM compatibility is concerned only with the format of the ASM metadata. The format of the file contents is up to the database instance. For example, the ASM compatibility of a disk group can be set to 11.0 while its RDBMS compatibility could be 10.1. This implies that the disk group can be managed only by ASM software whose software version is 11.0 or higher, whereas any database client whose software version is higher than or equal to 10.1 can use that disk group.

The compatibility of a disk group needs to be advanced only when there is a change to either persistent disk structures or protocol messaging. However, advancing disk group compatibility is an irreversible operation. You can set the disk group compatibility by using either the CREATE DISKGROUP or ALTER DISKGROUP commands.

Note: In addition to the disk group compatibilities, the compatible parameter (database compatible version) determines the features that are enabled; it applies to the database or ASM instance depending on the instance_type parameter. For example, setting it to 10.1 precludes the use of any features introduced in Oracle Database 11g (disk online/offline, variable extents, and so on).
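
For example, the compatibility attributes of an existing disk group might be raised as in the following sketch (the disk group name is illustrative; remember that advancing compatibility is irreversible):

SQL> ALTER DISKGROUP dgroupA SET ATTRIBUTE 'compatible.asm' = '11.1';
SQL> ALTER DISKGROUP dgroupA SET ATTRIBUTE 'compatible.rdbms' = '11.1';
SQL> -- Review the current settings
SQL> SELECT name, value FROM V$ASM_ATTRIBUTE WHERE name LIKE 'compatible%';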

ASM Disk Group Attributes
Whenever you create or alter an ASM disk group, you can set its attributes by using the new ATTRIBUTE clause of the CREATE DISKGROUP and ALTER DISKGROUP commands (see the sketch after the note below):

          ASM enables the use of different allocation unit (AU) sizes that you specify when you create a disk group. The AU can be 1, 2, 4, 8, 16, 32, or 64 MB in size.
          RDBMS compatibility
          ASM compatibility
          You can specify the DISK_REPAIR_TIME in units of minute (M), hour (H), or day (D). If you omit the unit, then the default is H. If you omit this attribute, then the default is 3.6H. You can override this attribute with an ALTER DISKGROUP ... DISK OFFLINE statement.
          You can also specify the redundancy attribute of the specified template.
          You can also specify the striping attribute of the specified template.

Note: For each defined disk group, you can look at all defined attributes through the V$ASM_ATTRIBUTE fixed view.
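
A sketch combining several of these attributes follows; the disk group name, device paths, and chosen values are illustrative assumptions. Note that DISK_REPAIR_TIME is set here with ALTER DISKGROUP and requires the disk group compatibility to have been advanced to 11.1:

SQL> CREATE DISKGROUP dgroupD NORMAL REDUNDANCY
       DISK '/devices/diskD1', '/devices/diskD2'
       ATTRIBUTE 'au_size' = '4M',
                 'compatible.asm' = '11.1',
                 'compatible.rdbms' = '11.1';
SQL> ALTER DISKGROUP dgroupD SET ATTRIBUTE 'disk_repair_time' = '7.5H';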

Using Enterprise Manager to Edit Disk Group Attributes
Enterprise Manager provides a simple way to store and retrieve environment settings related to disk groups.
You can set the compatible attributes from both the create disk group page and the edit disk group advanced attributes page. The disk_repair_time attribute is added to the edit disk group advanced attributes page only.

Note: For pre-11g ASM instances, the default ASM compatibility and client (RDBMS) compatibility are 10.1. For 11g ASM instances, the default ASM compatibility is 10.1 and the default database compatibility is 10.1.

Miscellaneous ALTER Commands
The following statement rebalances the DGROUPB disk group, if necessary:

ALTER DISKGROUP dgroupB REBALANCE POWER 5;

This command is generally not necessary because it is automatically done as disks are added, dropped, or resized. However, it is useful if you want to use the POWER clause to override the default speed defined by the initialization parameter ASM_POWER_LIMIT. You can change the power level of an ongoing rebalance operation by reentering the command with a new level. A power level of zero causes rebalancing to halt until the command is either implicitly or explicitly reinvoked. The following statement dismounts DGROUPA:

ALTER DISKGROUP dgroupA DISMOUNT;

The MOUNT and DISMOUNT options allow you to make one or more disk groups available or unavailable to the database instances. The ability to manually dismount and mount disk groups is useful in a clustered ASM environment supporting a single instance, when that instance is failed over to a different node.
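
For instance, remounting the disk group and checking on any rebalance activity might look like this sketch:

SQL> ALTER DISKGROUP dgroupA MOUNT;
SQL> -- V$ASM_OPERATION returns rows only while a rebalance (or similar operation) is running
SQL> SELECT operation, state, power, est_minutes FROM V$ASM_OPERATION;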

ASMCMD Utility
ASMCMD is a command-line utility that you can use to view and manipulate files and directories within ASM disk groups. ASMCMD can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more. ASMCMD works with ASM files, directories, and aliases.
Every file created in ASM gets a system-generated file name, otherwise known as a fully qualified file name. This is the same as a complete path name in a local file system. As in other file systems, an ASM directory is a container for files, and an ASM directory can be part of a tree structure of other directories. The fully qualified file name represents a hierarchy of directories in which the plus sign (+) represents the root directory.
ASMCMD> ls -l +DGROUP1/ORCL/DATAFILE

You can create your own directories as subdirectories of the system-generated directories by using the ASMCMD mkdir command:
ASMCMD> mkdir +dgroup1/sample/mydir

          ASMCMD can perform ASM metadata backup and restore functionality. This provides the ability to re-create a preexisting ASM disk group with the exact same template and alias directory structure.
          The lsdsk command lists ASM disk information. This command can run in two modes: connected and non-connected. In connected mode, ASMCMD uses the V$ and GV$ views to retrieve disk information. In non-connected mode, ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the discovery set. The connected mode is always attempted first (see the sketch after this list).
          Bad block repair is a new feature that runs automatically on normal- or high-redundancy disk groups. When a normal read from an ASM disk group fails with an I/O error, ASM attempts to repair the block by reading the mirror copy and writing it back to the failing location, relocating the block if that location cannot produce a good read. This process happens automatically only on blocks that are actually read, and some blocks and extents in an ASM disk group are seldom read; secondary extents are a prime example. The ASMCMD repair command is designed to trigger a read on such extents so that any resulting I/O failure starts the automatic block repair process. For example, if the storage array returns an error on a physical block, you can use the ASMCMD repair interface to initiate a read of that block and trigger the repair.
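
As a sketch of the two lsdsk modes, the first command below runs in connected mode (reading the V$ and GV$ views) and the second forces the non-connected mode that scans disk headers. The -I flag is an assumption about this release's syntax, so verify it with help lsdsk:

ASMCMD> lsdsk
ASMCMD> lsdsk -I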

ASM Scalability and Performance

          ASM Variable Size Extents is an automated feature that enables ASM to support larger file sizes while improving memory usage efficiency. An ASM file begins with an extent equal to one AU. As the file size increases, the extent size also increases to 8 AU and then to 64 AU at a predefined number of extents. The size of the extent map that defines a file can therefore be smaller by a factor of 8 or 64, depending on the file size. The initial extent size is equal to the allocation unit size, and it increases by factors of 8 and 64 at predefined thresholds.
          Fewer extent pointers are needed to describe the file and less memory is required to manage the extent maps in the shared pool, which would have been prohibitive in large file configurations. Extent size can vary both across files and within files.
          Variable size extents also enable you to deploy Oracle databases using ASM storage that are several hundred terabytes, or even several petabytes, in size. The management of variable size extents is completely automated and does not require manual administration.
          However, external fragmentation may occur when a large number of non-contiguous small data extents have been allocated and freed, and no additional contiguous large extents are available. A defragmentation operation is integrated as part of the rebalance operation. So, as a DBA, you always have the possibility to defragment your disk group by executing a rebalance operation.

ASM imposes the following limits:
          63 disk groups in a storage system
          10,000 ASM disks in a storage system
          4 petabyte maximum storage for each ASM disk
          40 exabyte maximum storage for each storage system
          1 million files for each disk group
          Maximum file sizes depending on the redundancy type of the disk groups used: 140 PB for external redundancy (a value currently greater than the possible database file size), 42 PB for normal redundancy, and 15 PB for high redundancy