SYSASM Role
When Automatic Storage Management (ASM) was introduced with Oracle Database 10g, it somewhat blurred the line between DBAs and sysadmins in terms of storage allocation functions. ASM instances were managed by DBAs, just like regular DBA work that requires connecting with the SYSDBA role. But over time the roles became clearer and a rudimentary division of labor emerged. As a result, some ASM operations returned to sysadmins. In some cases, a separate class of "ASM admins" evolved that performed only ASM administration, not that of the database.
However, the emergence of this new role created a conflict: the SYSDBA role was needed to manage the ASM instance but many DBAs of production databases running on the same server didn't feel comfortable sharing that role.
In Oracle Database 11g, this conflict is gone. There is a new role called SYSASM available to manage the ASM instance only. It's like the SYSDBA role for the ASM instance. Here is how you connect to the ASM instance:
$ sqlplus / as sysasm

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Sep 28 20:37:39 2007

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options

SQL>
Note the connect clause "as sysasm". In Oracle Database 11g Release 1, the SYSASM role is also granted to the operating system group privileged for SYSDBA ("dba" in most cases). In other words, a user belonging to the dba group in Unix can also connect as SYSASM. (This arrangement will change in a later release; the SYSDBA and SYSASM privileges will be separated into distinct OS groups.)
Once connected as SYS to the ASM instance, you can update the SYS password, which is stored in the password file:
SQL> alter user sys identified by oracle
  2  /

User altered.
Although this is an ASM instance with no database, you can still create users:
SQL> create user asmoper identified by dumboper
  2  /

User created.
Now you can grant the SYSASM role to the user:
SQL> grant sysasm to asmoper;

Grant succeeded.
After this grant, the user asmoper can perform all ASM management functions rather than relying on the user SYS. The user can connect with the clause "as sysasm", similar to the "as sysdba" clause in regular databases.
$ sqlplus asmoper/dumboper as sysasm
This feature enforces the much-needed separation of ASM and DBA responsibilities.
Variable Extent Sizes
The smallest granular element in ASM storage is an allocation unit (AU), very similar in concept to an Oracle database block. When you create database segments like tables and indexes, the minimum unit allocated is not a block but an extent, which comprises multiple blocks. You can change the extent sizes of segments.
In ASM, there is a very similar concept: when files are created in an ASM diskgroup, the smallest addressable unit is an extent, not an AU. In Oracle Database 10g, AUs and extents were effectively interchangeable; one extent was exactly one AU.
10g-compatible diskgroups require memory from the shared pool for every extent. For large databases, this in turn requires a large amount of memory. So if the AU size is 1MB (the default), a 1TB database will need more than one million extents to be managed in the shared pool.
In Oracle Database 11g, extent sizes are no longer always equal to the AU size. The extent size starts at 1MB when the file is created; after the file reaches a certain threshold, the extent size increases to 4MB, then to 16MB, and finally, past another threshold, extents are 64MB. You don't have to worry about these sizes; the ASM instance automatically allocates the appropriate extent size. Since fewer, larger extents can accommodate the same amount of data, the total number of extents tracked in the shared pool drops significantly, improving performance considerably.
Variable extent sizes may introduce some fragmentation when files are expanded and shrunk extensively. If defragmentation is required, ASM addresses that automatically.
Varying AU Sizes
As I have described, the default size of an AU is 1MB. For many databases this may be enough but consider a large database more than 10TB in size. The objects will probably be greater than 1MB so you may want to make the AU size bigger to reduce the number of AUs. In Oracle Database 10g, you set an underscore parameter to change the AU size. However, that affects all the diskgroups that are created after that point and also requires an ASM instance recycle to set the parameter.
In Oracle Database 11g, this task is easily accomplished by setting a diskgroup attribute, au_size, during diskgroup creation, as shown below:
create diskgroup dg6 external redundancy disk '/dev/raw/raw13' attribute 'au_size' = '2M'
The AU_SIZE should be one of 1M, 2M, 4M, 8M, 16M, 32M, 64M (M stands for MB). You can also set the value as an absolute value in bytes:
attribute 'au_size' = '2097152'
After the diskgroup is created, you can check the AU size through the following query:
select name, allocation_unit_size
from v$asm_diskgroup
/

NAME    ALLOCATION_UNIT_SIZE
------- --------------------
DG1                  1048576
DG3                  1048576
DG6                  2097152
DG5                  1048576
DG4                  1048576
DG2                  1048576
Note the AU sizes of the various diskgroups. You can now create each diskgroup with an AU size appropriate to the application it will serve.
Diskgroup Attributes
ASM is a storage platform for Oracle databases from 10g onward. So an 11g ASM instance can serve databases from 10g Release 1, 10g Release 2, and 11g Release 1 (and beyond). As long as the ASM version is the same as or higher than the RDBMS version, it's possible to create a database on that ASM instance. So how does ASM communicate with an RDBMS instance of a different version? Simple: ASM transforms the messages to suit the RDBMS version.
By default, an ASM instance can support 10g databases. But what if you will only ever place 11g databases on that ASM instance? The message transformation to bridge the version difference is then unnecessary. What if there were a way to tell the ASM instance that the only databases it needs to support are 11g Release 1? That would eliminate or at least reduce the message transformations. In Oracle Database 11g, that is possible using the ASM Compatibility and RDBMS Compatibility diskgroup attributes.
First, let's check the current attributes of the diskgroup:
SQL> select compatibility, database_compatibility
  2  from v$asm_diskgroup
  3  where name = 'DG1'
  4  /

COMPATIBILITY          DATABASE_COMPATIBILITY
---------------------- ----------------------
10.1.0.0.0             10.1.0.0.0
As you can see, the ASM Compatibility (shown by COMPATIBILITY) is set to 10.1.0.0.0, which means the on-disk ASM structures of this diskgroup are kept at the 10.1 level; thus the diskgroup can accommodate RDBMS structures of any supported version. The other column, DATABASE_COMPATIBILITY, shows the RDBMS compatibility set to 10.1, which means the ASM diskgroup DG1 can be used by any RDBMS from version 10.1 onward.
Since you want to create only 11g ASM and RDBMS structures, there is no need to have 10g elements. To set the ASM Compatibility attribute of this diskgroup to 11.1, you issue the following statement (in the ASM instance):
SQL> alter diskgroup dg1 set attribute 'compatible.asm'='11.1';
Now, if you check the attributes of the diskgroup:
COMPATIBILITY          DATABASE_COMPATIBILITY
---------------------- ----------------------
11.1.0.0.0             10.1.0.0.0
The ASM Compatibility is set to 11.1; but RDBMS Compatibility is still set to 10.1. To change that to 11.1 as well, use:
SQL> alter diskgroup dg1 set attribute 'compatible.rdbms'='11.1';
Note one important point: the compatibility is set per diskgroup, not for the entire ASM instance. This lets you use a single ASM instance yet cater to all database versions; depending on which version uses a diskgroup, you can set its attributes appropriately and reduce inter-version communication.
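If you prefer to see every attribute of a diskgroup in one place, the V$ASM_ATTRIBUTE view (new in 11g) can be queried from the ASM instance. A minimal sketch, keeping in mind that attributes are displayed only once compatible.asm is set to 11.1 or higher:

SQL> select name, value
  2  from v$asm_attribute
  3  where group_number = (select group_number
  4                        from v$asm_diskgroup
  5                        where name = 'DG1');  -- DG1 from the example above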
Preferred Mirror Read
In an Oracle RAC database there are multiple nodes, each with its own ASM instance, all accessing the same diskgroups. If you use normal mirroring in an ASM diskgroup, the way the disks are accessed may not be what you assume.
Suppose you have a diskgroup called DG2 with two failgroups (DG2_0000 and DG2_0001), each with a separate disk, as shown in the figure below:
When something is written to the diskgroup DG2, the extents are written in a round robin manner: the first one goes to DG2_0000 with a copy going to DG2_0001, the second extent to DG2_0001 with a copy on DG2_0000, the third one back to DG2_0000 with copy on DG2_0001, and so on. In this manner ASM maintains the copy of one disk on the other.
But when the extents are read, they are always read from the primary failgroup (DG2_0000, in this case); not from the secondary (DG2_0001). The secondary is read only when the primary is not available.
This works well in most cases but sometimes may not be desirable. In Oracle Database 11g, you can configure a node to read from a specific failgroup. For instance, in the above example, if you want instance 1 to read from failgroup DG2_0000 and instance 2 to read from DG2_0001, you can set the preferred read failgroups for those diskgroups. The following command, executed on instance 1, makes the failgroups DG2_0000 and DG3_0000 (in diskgroups DG2 and DG3 respectively) the preferred read failgroups on that instance:
SQL> alter system set asm_preferred_read_failure_groups = 'DG2.DG2_0000','DG3.DG3_0000'
Similarly, on the other instance, you can issue the following to make the other failgroups the preferred disks:
SQL> alter system set asm_preferred_read_failure_groups = 'DG2.DG2_0001','DG3.DG3_0001'
After the statements are executed, when some session from instance 1 wants to read from the diskgroup DG2, the disk DG2_0000 will be read. If the disk is not available, the other disk, DG2_0001, is read. Similarly when a session connected to the instance 2 reads the data, the disk DG2_0001 is read.
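To verify which disks an instance is actually treating as preferred, you can check the PREFERRED_READ column of V$ASM_DISK from that instance. A small sketch, assuming group number 2 corresponds to diskgroup DG2 in this example:

SQL> select failgroup, path, preferred_read
  2  from v$asm_disk
  3  where group_number = 2;  -- assumed group number of DG2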
If you want to check how the different disks of the diskgroups are utilized, you can refer to a new dictionary view, V$ASM_DISK_IOSTAT, which simulates the IOSTAT utility found in UNIX systems:
select instname, dbname, group_number, failgroup, disk_number, reads, writes from v$asm_disk_iostat order by 1,2,3,4,5,6 /
Here is sample output:
INSTNAM DBNAME GROUP_NUMBER FAILGROUP DISK_NUMBER READS WRITES
------- -------- ------------ ---------- ----------- ---------- ----------
PRONE31 PRONE3 2 DG2_0000 0 4450 910
PRONE32 PRONE3 2 DG2_0001 1 2256 910
PRONE31 PRONE3 3 DG3_0000 0 300 29
PRONE32 PRONE3 3 DG3_0001 1 560 29
This output shows that instances PRONE31 and PRONE32 have the preferred failgroups DG2_0000 and DG2_0001 respectively. Note the WRITES column; the values are identical at 910 because writes go uniformly to both disks. Now note the READS column: 4450 and 2256 for instances PRONE31 and PRONE32 respectively. Why? Because instance PRONE31 issued more reads, and those reads came from its preferred failgroup, DG2_0000. As for diskgroup DG3, instance PRONE32 issued more reads, which came from its preferred failgroup (DG3_0001), and thus that disk shows more reads.
Preferred reads are especially helpful in "stretch" clusters (clusters with a large geographic distance between nodes). They speed up reads by letting each instance read from specific, typically local, disks.
Drop Diskgroup Force
What happens when a disk is no longer present (or destroyed beyond repair)? You want to drop the diskgroup completely and recreate it or add the disks of the diskgroup to other diskgroups. The diskgroup is not mounted yet. Since one of the disks is missing, you can't even mount it. To drop the diskgroup you have to mount it but you can't mount it because the disk is missing—a perfect "catch-22" situation. What should you do?
In Oracle Database 10g you can use a workaround by erasing the header of the disk using the dd command:
$ dd if=/dev/zero of=/dev/raw/raw13 bs=1024 count=4
This puts zeroes in the header of the disk /dev/raw/raw13, erasing all information. While this is effective, it erases the information in the disk header completely, including the record that the disk used to be part of a diskgroup.
In Oracle Database 11g, you don't need to resort to this workaround. All you have to do is issue the drop command with a force option:
SQL> drop diskgroup dg7 force including contents;
This drops the diskgroup even if it cannot be mounted. The available disks then show up as FORMER; that is, they used to be part of some diskgroup. (Note: You have to use the "including contents" clause.)
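A quick way to confirm that afterwards is to look at the header status of the disks in V$ASM_DISK; a sketch:

SQL> select path, header_status
  2  from v$asm_disk
  3  where header_status = 'FORMER';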
Metadata Backup and Restore
Many people think of ASM as a database with its own storage. This is not the case at all: ASM does not store data; the database does. The ASM instance, however, maintains metadata such as the diskgroup names, the disks in them, the directories, and so on. This metadata is stored in the disk headers.
Suppose all the disks crash and the header information disappears. What do you do? Of course, you have taken the backup of the database using RMAN and you can restore it. But you can restore it only after you have created all the diskgroups and directories. Hopefully you kept a record of all that. (Right?) Even if you did, this process takes time.
What if you had a backup? In Oracle Database 11g, you can back up the metadata of the ASM instance through the ASM command-line tool (ASMCMD), using the command md_backup.
$ asmcmd -p
ASMCMD [+] > md_backup
It creates a file named ambr_backup_intermediate_file. Here is a portion of that file from the top:
@diskgroup_set = (
{
'DISKSINFO' => {
'DG1_0000' => {
'DG1_0000' => {
'TOTAL_MB' => '103',
'FAILGROUP' => 'DG1_0000',
'NAME' => 'DG1_0000',
'DGNAME' => 'DG1',
'PATH' => '/dev/raw/raw5'
}
}
},
'DGINFO' => {
'DGTORESTORE' => 0,
'DGCOMPAT' => '10.1.0.0.0',
'DGNAME' => 'DG1',
'DGDBCOMPAT' => '10.1.0.0.0',
'DGTYPE' => 'EXTERN',
'DGAUSZ' => '1048576'
},
'ALIASINFO' => {},
'TEMPLATEINFO' => {
'6' => {
'DGNAME' => 'DG1',
'STRIPE' => 'COARSE',
'TEMPNAME' => 'ASM_STALE',
'REDUNDANCY' => 'UNPROT',
'SYSTEM' => 'Y'
... and more ...
It records all the diskgroups, disks, directories, disk attributes, and so on. By default, this file records the metadata of all the diskgroups. If you want to back up only a specific diskgroup, you can use the -g option. In addition, you can use the -b option to write to a specifically named file.
ASMCMD [+] > md_backup -g dg1 -b prolin3_asm.backup
This backs up the metadata of the diskgroup DG1 to a file named prolin3_asm.backup instead of the default ambr_backup_intermediate_file. The target file must not already exist, so remove any old copy before generating the backup.
Now let's see how the restore works. There are several types of restore. The simplest case is restoring a dropped diskgroup along with its directories. First create a directory on the diskgroup:
ASMCMD [+] > cd DG7
ASMCMD [+DG7] > mkdir TEST
ASMCMD [+DG7] > ls
TEST/
The diskgroup has a directory called TEST. Now, back up the diskgroup:
ASMCMD [+] > md_backup -g dg7 -b dg7.backup
Once backed up, suppose you drop the diskgroup to simulate an accidental drop:
SQL> drop diskgroup dg7;

Diskgroup dropped.
Now the diskgroup DG7 is gone from the ASM instance and you want to restore it from the earlier backup. You can reinstate it using the md_restore command:
$ asmcmd md_restore -b dg7.backup -t full

Current Diskgroup being restored: DG7
Diskgroup DG7 created!
System template TEMPFILE modified!
System template FLASHBACK modified!
System template ARCHIVELOG modified!
System template BACKUPSET modified!
System template XTRANSPORT modified!
System template DATAGUARDCONFIG modified!
System template CONTROLFILE modified!
System template AUTOBACKUP modified!
System template DUMPSET modified!
System template ONLINELOG modified!
System template PARAMETERFILE modified!
System template ASM_STALE modified!
System template CHANGETRACKING modified!
System template DATAFILE modified!
Directory +DG7/TEST re-created!
Look at the output: it creates the diskgroup as well as the templates and the directory. If there was any data in the diskgroup, it is of course lost; md_backup backs up not the data but the metadata of the ASM instance. The data, presumably, is backed up by RMAN. After the diskgroup is created, along with all the directories, you can restore the RMAN backup into this diskgroup.
Another option, -f, allows you to place the commands in a script file, rather than execute them:
ASMCMD [+] > md_restore -b dg7.backup -t full -f cr_dg7.sql
It creates a SQL script named cr_dg7.sql that creates the diskgroup and all the other objects. You can run it manually in the ASM instance. Here is how the file looks:
create diskgroup DG7 EXTERNAL redundancy disk '/dev/raw/raw14' name DG7_0000 size 100M ;
alter diskgroup /*ASMCMD AMBR*/DG7 alter template TEMPFILE attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template FLASHBACK attributes (UNPROTECTED FINE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template ARCHIVELOG attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template BACKUPSET attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template XTRANSPORT attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template DATAGUARDCONFIG attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template CONTROLFILE attributes (UNPROTECTED FINE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template AUTOBACKUP attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template DUMPSET attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template ONLINELOG attributes (UNPROTECTED FINE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template PARAMETERFILE attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template ASM_STALE attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template CHANGETRACKING attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR*/DG7 alter template DATAFILE attributes (UNPROTECTED COARSE);
alter diskgroup /*ASMCMD AMBR */ DG7 add directory '+DG7/TEST';
One of the most useful applications of this feature is to document the metadata of the ASM instance. You can take backups at regular intervals, or after some major change, such as after adding a diskgroup, adding/removing disks or creating directories.
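Here is a minimal sketch of a script you could schedule (via cron or any scheduler) to take such regular backups. The environment settings and directory names are hypothetical; adjust them to your ASM home and preferred backup location.

#!/bin/sh
# Hypothetical nightly ASM metadata backup; SID, home, and paths are assumptions.
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/oracle/product/11.1.0/asm
export PATH=$ORACLE_HOME/bin:$PATH

BACKUP_DIR=/home/oracle/asm_metadata
BACKUP_FILE=$BACKUP_DIR/asm_meta_$(date +%Y%m%d).backup

# md_backup will not overwrite an existing file, so clear any leftover first
mkdir -p $BACKUP_DIR
rm -f $BACKUP_FILE
asmcmd md_backup -b $BACKUP_FILE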
Checking Disks
One of the biggest complaints from ASM users accustomed to a traditional volume manager has been the inability to check many things from the command line. The ASM command-line tool (ASMCMD) closes this gap to a large extent. In Oracle Database 11g, some additional commands at the ASMCMD prompt make it extremely easy to manage the ASM instance. One such example is the metadata backup you saw earlier. Another notable addition is the command to check the disks managed by the instance: lsdsk.
ASMCMD> lsdsk
Path
/dev/raw/raw10
/dev/raw/raw11
/dev/raw/raw13
... snipped ...
Without any flag, the command merely lists all the disks available to the instance. There are several flags that modify the output. The most common is -k as shown below:
ASMCMD> lsdsk -k
Total_MB Free_MB OS_MB Name Failgroup Library Label UDID Product Redund Path
103 41 103 DG4_0000 DG4_0000 System UNKNOWN /dev/raw/raw10
103 41 103 DG5_0000 DG5_0000 System UNKNOWN /dev/raw/raw11
... snipped ...
Another flag, -s, shows various I/O related statistics of the disks:
ASMCMD> lsdsk -s
Reads Write Read_Errs Write_Errs Read_time Write_Time Bytes_Read Bytes_Written Path
207485 207916 0 0 245.820323 159.634398 851251200 /dev/raw/raw10
207481 207912 0 0 229.996931 144.73954 851234816 /dev/raw/raw11
To quickly check the status of the disks, use the -p flag:
ASMCMD> lsdsk -p
Group_Num Disk_Num Incarn Mount_Stat Header_Stat Mode_Stat State Path
4 0 3915926174 CACHED MEMBER ONLINE NORMAL /dev/raw/raw10
5 0 3915926175 CACHED MEMBER ONLINE NORMAL /dev/raw/raw11
6 0 3915926193 CACHED MEMBER ONLINE NORMAL /dev/raw/raw13
Finally, the -t flag shows repair related information (described later in this installment):
ASMCMD> lsdsk -t
Create_Date  Mount_Date  Repair_Timer  Path
27-SEP-07    28-SEP-07   0             /dev/raw/raw10
27-SEP-07    28-SEP-07   0             /dev/raw/raw11
28-SEP-07    28-SEP-07   0             /dev/raw/raw13
So far, these ASMCMD commands have extracted their values from the various V$ views in the ASM instance. But the metadata is also stored on the disks themselves, so if the instance is not available there should be a way to extract that information from the disks. In Oracle Database 11g, the lsdsk command has a flag "-I" (uppercase "i", not lowercase "L") that reads the information from the disk headers instead of the V$ views. Here is an example of combining it with the -k flag to extract the information from the disk headers.
ASMCMD> lsdsk -Ik
Total_MB  Name      Failgroup  Path
     103  DG4_0000  DG4_0000   /dev/raw/raw10
     103  DG5_0000  DG5_0000   /dev/raw/raw11
     102  DG6_0000  DG6_0000   /dev/raw/raw13
To list disks of a specific diskgroup, say, DG1, you can use the -d flag as shown below:
ASMCMD> lsdsk -t -d dg1
Create_Date  Mount_Date  Repair_Timer  Path
28-SEP-07    28-SEP-07   0             /dev/raw/raw5
You can also give a pattern for the disks:
ASMCMD> lsdsk -t /dev/raw/raw1*
Create_Date  Mount_Date  Repair_Timer  Path
27-SEP-07    28-SEP-07   0             /dev/raw/raw10
27-SEP-07    28-SEP-07   0             /dev/raw/raw11
28-SEP-07    28-SEP-07   0             /dev/raw/raw13
28-SEP-07    05-OCT-07   0             /dev/raw/raw14
This shows only the disks that match the pattern. Finally, no need to remember these options; the help command shows all the options:
ASMCMD> help lsdsk
lsdsk [-ksptcgHI] [-d <diskgroup_name>] [pattern]
The lsdsk command brings more advanced, volume-manager-like functionality to the ASM world.
Restricted Mount
Suppose you added a disk to a diskgroup. ASM immediately starts the rebalance operation. This operation is online, so ASM must coordinate with the RDBMS instances on the blocks being accessed and changed, through a complex system of locks. In a RAC database this overhead is magnified, since the locks must be managed not just within a database but across multiple instances.
What if you are adding the disks to a diskgroup no one is using? If ASM could somehow know that, it could eliminate the locking mechanism and make the process faster.
In Oracle Database 11g, a new way of mounting a diskgroup makes this possible. The diskgroup can be mounted with a RESTRICT clause as shown below:
alter diskgroup dg7 mount restricted;
When the diskgroup is mounted this way, the ASM instance knows it has exclusive access to the underlying disks and minimizes the locking overhead. This, in turn, makes disk operations such as rebalance faster.
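Remember that a diskgroup mounted in restricted mode cannot be used by database instances, so treat it as a maintenance window. Once the disk additions and rebalance are complete, dismount the diskgroup and mount it normally; a sketch:

SQL> alter diskgroup dg7 dismount;
SQL> alter diskgroup dg7 mount;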
Fast Failure Repair
Consider a diskgroup DG2 with two failgroups, each with a single disk. When a certain area of one of the disks is damaged, it's not fatal for the diskgroup. Since the extents are mirrored, the damaged extents are read from the other, intact disk and the operation goes through. But what happens to the damaged portion of the disk?
In Oracle Database 10g, the damaged disk is taken offline, and either the same disk or a replacement must be presented to the diskgroup. When the new disk is added, it must be completely cloned from the surviving disk to be usable as a mirror. But if only a few blocks are damaged, copying the contents of a 34GB disk (for instance) is not efficient.
Therefore, in Oracle Database 11g, the damaged portions of the disks are repaired instead of copying the whole disk. This feature uses a new attribute of diskgroups, disk_repair_time, which specifies how long the ASM instance should tolerate a disk with errors before dropping it from the diskgroup. Here is how you set the attribute of the diskgroup DG2 to two hours:
SQL> alter diskgroup dg2 set attribute 'disk_repair_time'='2H';
Suppose DG2 has two disks, DISK1 and DISK2, and a few blocks on DISK2 suddenly go bad. Since the repair time is two hours, the ASM instance will not drop the disk immediately but rather will wait. If you fix the issue on DISK2 and make it online once again, then those bad blocks will be repaired from the surviving disk.
Let's see that with an example. Suppose the diskgroup DG2 has two failgroups. First, check the diskgroup configuration:
ASMCMD [+dg2] > lsdg dg2
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED NORMAL N 512 4096 1048576 206 78 0 39 0 DG2/
You can also confirm that from the du command:
ASMCMD [+dg2] > du
Used_MB  Mirror_used_MB
     11              22
The du command confirms that the diskgroup holds 22MB including the mirror copies, of which 11MB is actual data. Now check the disks for the group dg2:
ASMCMD [+dg2] > lsdsk -d dg2
Path
/dev/raw/raw6
/dev/raw/raw7
You can confirm that, and also get the names of the disks, with:
SQL> select name, path
  2  from v$asm_disk
  3  where group_number = 2
  4  /

NAME     PATH
-------- -----------------------------
DG2_0000 /dev/raw/raw7
DG2_0001 /dev/raw/raw6
This shows the two disks of the diskgroup and their names. Now, to simulate a block corruption, write some zeroes into the middle of one of the devices:
$ dd if=/dev/zero of=/dev/raw/raw7 bs=1024 seek=10 count=1
This makes one of the disks of the diskgroup corrupt. Now force a check of the diskgroup using the new command ALTER DISKGROUP ... CHECK.
SQL> alter diskgroup dg2 check
If you check the alert log of the ASM instance, you will notice, among several other lines, the following:
...
NOTE: starting check of diskgroup DG2
WARNING: cache read a corrupted block gn=2 fn=3 indblk=1 from disk 0
...
NOTE: cache successfully reads gn 2 fn 3 indblk 1 count 15 from one mirror side
kfdp_checkDsk(): 89
...
NOTE: cache initiating offline of disk 0 group 2
WARNING: initiating offline of disk 0.3915926170 (DG2_0000) with mask 0x7e
...
WARNING: Disk (DG2_0000) will be dropped in: (7200) secs on ASM inst: (1)
...
The last line says it all: the disk you just corrupted will be dropped from the diskgroup in 7200 seconds, which corresponds to the two hours you set earlier as the repair timer. The message is repeated in the alert log as the timer counts down:
...
WARNING: Disk (DG2_0000) will be dropped in: (5550) secs on ASM inst: (1)
GMON SlaveB: Deferred DG Ops completed.
Sat Oct 06 00:25:52 2007
WARNING: Disk (DG2_0000) will be dropped in: (5366) secs on ASM inst: (1)
GMON SlaveB: Deferred DG Ops completed.
Sat Oct 06 00:28:55 2007
WARNING: Disk (DG2_0000) will be dropped in: (5183) secs on ASM inst: (1)
GMON SlaveB: Deferred DG Ops completed.
Sat Oct 06 00:31:59 2007
WARNING: Disk (DG2_0000) will be dropped in: (5000) secs on ASM inst: (1)
GMON SlaveB: Deferred DG Ops completed.
...
Finally, the countdown will reach 0 and the disk will be dropped, unless you fix the issue and the disk goes through fast failure repair. If you know the disk is not repairable and should be dropped sooner, you can expedite its demise by issuing:
SQL> alter diskgroup dg2 offline disks in failgroup dg2_0000 drop after 1m
This drops the disks of failgroup dg2_0000 after one minute, allowing you to either take them out physically or add a different disk to the diskgroup in their place. To drop the disks forcibly:
SQL> alter diskgroup dg2 drop disks in failgroup dg2_0000 force;
When the disk failure is resolved, you can initiate the fast repair by issuing:
SQL> alter diskgroup dg2 online disks in failgroup dg2_0000;
This starts the process of syncing the damaged and changed blocks on the disks of failgroup DG2_0000 from the surviving failgroup. Since only the affected blocks are copied rather than the entire disk, this dramatically reduces the time it takes to resynchronize the disks after a small failure.
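At any point during this process you can check the disk's mode status and the remaining repair timer from a SQL prompt instead of the alert log; a sketch, assuming group number 2 is DG2:

SQL> select name, failgroup, mode_status, repair_timer
  2  from v$asm_disk
  3  where group_number = 2;  -- assumed group number of DG2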
Raw Devices Option Gone in Release 2
Back in the pre-ASM days, we used to create the database files (or "cooked" files) on good old filesystems. While that provided plenty of flexibility – one could move, resize, rename and do a whole lot of other things with the files – it was not optimal for performance. In contrast, raw devices (aka raw files) offered plenty of performance by bypassing the filesystem cache and synchronous I/O, two key performance killers. But raw devices lacked the flexibility of cooked files.
ASM, which is essentially a volume manager specifically designed for the Oracle Database, gives us the best of both worlds: the flexibility of files and the performance of raw devices. Nevertheless, some folks continued to create raw files as they offered some benefits such as direct access to database files via commands like dd, which were sometimes useful for backups.
In Oracle Database 11g Release 2, the raw device option is completely gone; you cannot create a datafile on a raw device anymore - rather, you must use either a cooked file or an ASM file. This may cause some consternation at sites where the dd command is still used as a backup method. Well, if that applies to you, you should change over immediately to Oracle RMAN. There is no better approach to Oracle database backup than RMAN, and there is no excuse not to use it – whether for cooked filesystems, ASM diskgroups, or, for that matter, raw devices.
ASM has evolved into something quite revolutionary: a cooked filesystem on top of ASM devices. Yes, you read that right, a cooked filesystem. What's more, since ASM is cluster-aware, the filesystem is also clustered. You will learn more about this exciting new feature later in this article.
Same Home for CRS and ASM (Release 2 Only)
In previous versions, there were at least two different homes: one for the clusterware and one for the database. The database home contained the executables and code path for ASM; ASM was part of the database, not the clusterware. In 11g Release 2, there is a new concept called Grid Infrastructure, which combines the clusterware and ASM into a single home. You now have separate downloads and installations for the different parts of the cluster-database stack. (Grid Infrastructure is required for ASM and other optional components such as Clusterware, Oracle Restart, etc., and the Database home is required for, well, the database.)
Even if you don't use RAC (and therefore no clusterware), a single-instance 11g Release 2 database can also use a feature called Oracle Restart, which monitors the database processes and brings them up if one dies. The Grid Infrastructure enables the Restart feature; even if you decide not to use it, you still need a separate Oracle Grid Infrastructure home for ASM alone.
Now, when you install Oracle for the first time, you have to install the Grid Infrastructure first to create the ASM instance, discover the ASM disks, and create the diskgroups. If you intend to install a database on that server, you have to download and install the Oracle RDBMS software (separate from the Grid Infrastructure). If all you want is to create a clustered ASM Filesystem (explained later) and not a database, you can stop with the Grid Infrastructure home right here. If you want to create a database as well, you have to download and install the Database software. When a patch becomes available, you have to apply it in the corresponding home.
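Once the Grid Infrastructure home is in place, you can confirm that ASM is managed from it with srvctl, run from the grid home; a sketch (the node name in the output is illustrative):

# run from the Grid Infrastructure home's bin directory
$ srvctl status asm
ASM is running on oradba1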
Separation of ASM and the RDBMS is quite logical. Fundamentally, ASM is an infrastructure component, not part of the database. In Oracle 11.2, ASM has gained more functionality – the ability to host cooked filesystems (explained later in this installment) being one example. This means ASM can exist without a database. In addition, as an infrastructure component, it may have a different set of administrators, not DBAs. Separating the RDBMS and ASM addresses all these new requirements.
Replacement of SYSDBA by SYSASM (Release 2 Only)
Continuing from the above discussion, you can see that ASM now has its own place in the infrastructure stack, apart from the database. Therefore it is logical for it to have a different set of administrators. The responsibility of administration may have been given to DBAs for historical reasons, but that is not necessary. A logical choice could be the system administrators of the server, since ASM is now more of an infrastructure component than an application. To address these needs, Release 2 builds on the SYSASM role, which is analogous to SYSDBA in the database world.
To smooth the transition to 11g, ASM administrative commands used to be supported via the SYSDBA role as well. Not anymore: in Release 2, you must connect with the SYSASM role to manipulate ASM components. The following would have worked in Release 1:
$ sqlplus / as sysdba

SQL> alter diskgroup data dismount;
You can still connect as SYSDBA in Release 2, but the alter command above will throw an error:
SQL> alter diskgroup data dismount;
alter diskgroup data dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15260: permission denied on ASM disk group
You have to connect as
$ sqlplus / as sysasm
Thus:
- If you are upgrading from 10g to 11g Release 2, you must create an OS group for the SYSASM privilege and specify that group during the installation process.
- If you are upgrading from 11g Release 1, you are most likely already using such a group, but you may have assigned it to the user "oracle", the same account used for the RDBMS install as well. Here is how the oracle user's groups are assigned:

$ id
uid=600(oracle) gid=502(oinstall) groups=501(dba),502(oinstall),503(gridadmin),504(asmadmin)

In that case, there is nothing else to be done for the group.
- Decide whether you want the same Unix userid to be able to manage both the database and ASM. You may want to choose a different user to manage the ASM instance for security reasons. Even if the same group of people will manage both, separate userids allow you to separate the privileges should you choose to do so in the future. In that case, simply create a new userid, e.g. "asmdba", and make it part of the "asmadmin" group.
- Regardless of your decisions above, you have to modify any scripts that manage ASM components with "connect / as sysdba" and replace that with "connect / as sysasm". The "/ as sysdba" connection will not work for ASM administration anymore.
- If you use the ASMCMD command line to manage ASM components, you don't have to make a change; but if you have created a different userid, e.g. "asmdba", as part of the "asmadmin" group, then make sure ASMCMD is invoked from that userid.
Renaming a Diskgroup (Release 2 Only)
Suppose you are trying to refresh a development database from the data in production, and you can afford to take downtime in production. What is the best way to do that? The quickest way is to mount the disks from production on the development server and copy the data over using something like direct-path insert. All you have to do is mount the physical devices on the development server and change the asm_diskstring parameter to include the new disks from the production environment. For instance, suppose the existing development disks are '/dev/sda*' and the production disks show up as '/dev/sdb*'; the ASM disk string would be changed to include both:
asm_diskstring = '/dev/sda*','/dev/sdb*'
This will bring up the new disks /dev/sdb* and the diskgroups from production. However, there could be a problem. What if you had a diskgroup called +DG2 in both development and production? When you bring in the production disks, the diskgroup mount will fail since the diskgroup +DG2 will already be there. You have to drop it before you can read the production one. Well, as you can see, the option of dropping a diskgroup will mean losing all data in development – definitely not a good idea and most likely not acceptable. So, what could you do?
In Release 2, there is new functionality to rename a diskgroup. In the above example, you would rename the DG2 diskgroup in development to ORIG_DG2 prior to bringing in the disks from production. The "renamedg" command does it.
Let’s first check the diskgroups in development.
SQL> select name from v$asm_diskgroup;

NAME
------------------------------
DATA
DG2
We will rename the DG2 diskgroup using the renamedg command, which runs in two phases. Here is how you issue the first phase. The verbose option is optional but lets you see the details of the operation.
oracle@oradba1 ~# renamedg dgname=dg2 newdgname=orig_dg2 verbose=true phase=one
Parsing parameters..
Parameters in effect:
Old DG name : DG2
New DG name : ORIG_DG2
Phases :
Phase 1
Discovery str : (null)
Check : FALSE
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=dg2 newdgname=orig_dg2 verbose=true phase=one
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk UFS:/dev/raw/raw5 with disk number:0 and timestamp (32937942 404239360)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk UFS:/dev/raw/raw5 with disk number:0 and timestamp (32937942 404239360)
Checking if the diskgroup is mounted
Checking disk number:0
Checking if diskgroup is used by CSS
Generating configuration file..
Completed phase 1
Terminating kgfd context 0xb7f07050
Then go on to Phase 2.
oracle@oradba1 ~# renamedg dgname=dg2 newdgname=orig_dg2 verbose=true phase=two
Parsing parameters..
Parameters in effect:
Old DG name : DG2
New DG name : ORIG_DG2
Phases : Phase 2
Discovery str : (null)
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=dg2 newdgname=orig_dg2 verbose=true phase=two
Executing phase 2
Looking for /dev/raw/raw5
Modifying the header
Completed phase 2
Terminating kgfd context 0xb7f4c050
At this point if you check the ASM Diskgroups, you will see there is no DG2 diskgroup anymore. Instead there is a new group called ORIG_DG2.
SQL> select name from v$asm_diskgroup;
NAME
------------------------------
DATA
ORIG_DG2
Note the name of the diskgroup; it has been changed to ORIG_DG2 from DG2. By the way, in the above example, we called the renamedg command twice. In Phase 1, a configuration file is created, and in Phase 2 that file is used to actually modify the diskgroup name. However, the command doesn’t have to be called twice. Both phases can be called in one step as well. The call in that case will be:
$ renamedg dgname=dg2 newdgname=orig_dg2 verbose=true phase=both
One interesting question may come up here: what happens to the datafiles on diskgroup DG2? Will they be automatically renamed to reflect the new name ORIG_DG2? The answer is no. The datafiles must be manually renamed using the ALTER DATABASE RENAME FILE statement.
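For instance, a sketch with a hypothetical datafile, executed against the database instance (not the ASM instance) while the database is mounted but not open, or with the relevant tablespace offline; the file paths are illustrative:

SQL> alter database rename file
  2    '+DG2/d112d1/datafile/users.259.720399549'      -- hypothetical old path
  3  to
  4    '+ORIG_DG2/d112d1/datafile/users.259.720399549'; -- same file under the new diskgroup name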
ASM File System and ASM Dynamic Volume Manager (Release 2 Only)
Suppose you have a set of machines running an application and want a cluster filesystem. Note the operative words – cluster and filesystems. The cooked filesystems by default are not visible across multiple servers. In a cluster filesystem, the cluster has to make sure a file is written by only one server at a time; otherwise each server might write to the file independently without realizing that other servers in the cluster are also present, corrupting the file. If you need a filesystem visible across all the servers of the cluster, you have a few options:
- Invest in a cluster filesystem, which is almost always expensive, particularly the ones suitable for enterprise-class use
- Mount the filesystem over NFS, which is usually slow
In 11g Release 2, there is another option – a filesystem on an ASM diskgroup – called ASM Cluster File System (ACFS). Since ASM is cluster-enabled by default, the filesystem is also cluster-enabled. And this is a cooked filesystem, just like /usr or /home, only clustered.
A cooked filesystem is built on a physical volume. 11g Release 2 allows you to create a dynamic volume, i.e. a volume presented to the underlying Operating System dynamically without a real or physical device under it.
For instance, consider a typical Linux system. Using the mount command you can find out about the filesystems and the underlying devices.
$ mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
Consider the filesystem /boot. The above output clearly shows that this filesystem is built on the device /dev/sda1 (the "s" means it's probably a SCSI disk) and is of type ext3 (a typical filesystem type in Linux). In the case of ACFS, the underlying device is not a physical device but one that is presented to the OS by a special software stack called ASM Dynamic Volume Manager (ADVM). ADVM translates the system calls to the underlying devices. Here is a schematic representation of the ACFS components:
Here we see an ASM Diskgroup called ACFSDG1, which is built on three disks: Disk1, Disk2 and Disk3. These disks are the actual physical devices. The ASM Volumes are created on this diskgroup. In this example two volumes, acfsvol1 and acfsvol2, have been created. These volumes are presented to the OS as regular volumes. Once the volumes are in place, the filesystems are created and mounted on these volumes. In this example two filesystems, /acfsdir1 and /acfsdir2, have been created on acfsvol1 and acfsvol2 respectively. When the users create and access files on these filesystems, the ADVM component translates those calls to their corresponding locations on the actual devices. The volumes can be expanded inside the diskgroup which makes the filesystems expand as well.
Now let's see how we can create these filesystems. We have to start with a volume, and to create a volume we first need a diskgroup with some empty space. You can create multiple volumes on the same diskgroup. Let's start with a fresh diskgroup for simplicity (note, there is no need to create a new diskgroup; you can create a volume on any existing diskgroup with free space).
SQL> create diskgroup dg1 external redundancy disk '/dev/sdb5','/dev/sdb3';

Diskgroup created.
The compatibility of the diskgroup has to be at least 11.2, so you must set the compatibility attributes:
SQL> alter diskgroup dg1 set attribute 'compatible.asm' = '11.2.0.0';

Diskgroup altered.

SQL> alter diskgroup dg1 set attribute 'compatible.rdbms' = '11.2.0.0';

Diskgroup altered.

SQL> alter diskgroup dg1 set attribute 'compatible.advm' = '11.2.0.0';

Diskgroup altered.
Now add the volume. In this example we will create a volume called ACFSVOL1 of 256MB.
SQL> alter diskgroup dg1 add volume acfsvol1 size 256M;

Diskgroup altered.
After the volume is created, you can check the presence of the volumes in the diskgroup DG1, as shown below:
select v.volume_device, v.volume_name
from v$asm_volume v, v$asm_diskgroup g
where v.group_number = g.group_number
and g.name = 'DG1'
/

VOLUME_DEVICE             VOLUME_NAM
------------------------- ----------
/dev/asm/acfsvol1-106     ACFSVOL1
Once the volume is successfully created, format it for the filesystem using the normal mkfs command in Linux. This command is not specific to Oracle; it's a generic Linux command. The only special part is the "-t" option, which specifies the filesystem type as ACFS.
[oracle@oradba2 ~]$ /sbin/mkfs -t acfs /dev/asm/acfsvol1-106
mkfs.acfs: version         = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/acfsvol1-106
mkfs.acfs: volume size     = 268435456
mkfs.acfs: Format complete.
Now we can mount the filesystem. Suppose you want it to appear as /home/oracle/acfsdir1; you will need to create that directory first:
[oracle@oradba2 ~]$ mkdir -p /home/oracle/acfsdir1
Then mount the newly created volume on this directory, using the generic Linux "mount" command. This must be issued by the root user.
[root@oradba2 ~]# mount -t acfs /dev/asm/acfsvol1-106 /home/oracle/acfsdir1
To confirm that the filesystem is actually mounted, you can use the "mount" command:
[oracle@oradba2 ~]$ mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/asm/acfsvol1-273 on /acfsdir1 type acfs (rw)
/dev/asm/acfsvol2a-273 on /acfsdir2 type acfs (rw)
/dev/asm/acfsvol1-106 on /home/oracle/acfsdir1 type acfs (rw)
This filesystem is ready to be used for regular files. Let's start by creating a simple text file on it to verify that it works.
[oracle@oradba2 acfsdir1]$ echo "Original File" > myfile.txt
Check for the existence of the file:
[oracle@oradba2 acfsdir1]$ ls -l
total 68
drwx------ 2 root   root     65536 May 29 23:04 lost+found
-rw-r--r-- 1 oracle oinstall   127 May 30 22:15 myfile.txt
The filesystem will not be mounted automatically the next time the system is rebooted. To make that happen, you must register this mountpoint with the Oracle Grid Infrastructure stack.
[oracle@oradba2 ~]$ /sbin/acfsutil registry -a /dev/asm/acfsvol1-106 /home/oracle/acfsdir1
acfsutil registry: mount point /home/oracle/acfsdir1 successfully added to Oracle Registry
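You can list the registry contents at any time to confirm the registration; a sketch (the exact output format varies by version):

[oracle@oradba2 ~]$ /sbin/acfsutil registry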
You can also use ASMCMD command line option for managing the ASM Volumes and Filesystems. Here are some examples:
To create a volume called ACFDVOL2 of 256MB:
ASMCMD [+] > volcreate -G dg1 -s 256M acfdvol2
To check the information on the volume:
ASMCMD [+] > volinfo -a
Diskgroup Name: DG1

         Volume Name: ACFDVOL2
         Volume Device: /dev/asm/acfdvol2-106
         State: ENABLED
         Size (MB): 256
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage:
         Mountpath:

         Volume Name: ACFSVOL1
         Volume Device: /dev/asm/acfsvol1-106
         State: ENABLED
         Size (MB): 256
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /home/oracle/acfsdir1
A very important command is volstat, which shows the performance statistics of the volumes. For a specific diskgroup, it lists all the volumes along with statistics such as how many read and write calls were made, how many bytes were read and written, how much time was spent in each type of operation, and how many errors were encountered.
ASMCMD [+] > volstat -G dg1
DISKGROUP NUMBER / NAME: 2 / DG1
---------------------------------------
VOLUME_NAME  READS  BYTES_READ  READ_TIME  READ_ERRS  WRITES  BYTES_WRITTEN  WRITE_TIME  WRITE_ERRS
----------------------------------------------------------------------------------------------------
ACFDVOL2         0           0          0          0       0              0           0           0
ACFSVOL1       150      205824        284          0     454        1611776         649           0
There are several volume-management-related commands available in ASMCMD. Rather than explain each and every command, I want to show you the most common ones and refer you to the manual for the rest. Here is a summary of the commands; a short resize example follows the table.
volcreate: To create a volume
voldelete: To delete a volume
voldisable: To disable a volume
volenable: To enable a volume
volinfo: To get the information on a volume and the filesystem that has mounted it
volresize: To change the size of a volume
volset: To set a usage attribute of a volume, which is not actually used by ADVM but may be helpful later for an administrator
volstat: To show the usage statistics of a volume, e.g. rate of input/output
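For example, here is a sketch of growing the ACFDVOL2 volume created earlier to 512MB with volresize (the new size is arbitrary). For a volume that already hosts a mounted ACFS filesystem, such as ACFSVOL1, the resize is better done through acfsutil size so the filesystem grows along with the volume; both commands are illustrative:

ASMCMD [+] > volresize -G dg1 -s 512M acfdvol2
[oracle@oradba2 ~]$ /sbin/acfsutil size 512M /home/oracle/acfsdir1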
Snapshots (Release 2 Only)
One of the very useful features of ACFS is the "snapshot" feature. It creates pointers to the files on an ACFS filesystem; when files are changed, the original versions of the changed blocks are preserved in the snapshot. This gives you something like an "undo tablespace" for files: a file can be recreated as of a specific time using these blocks. Here is how a snapshot called "backup1" is created on the /home/oracle/acfsdir1 filesystem.
[oracle@oradba2 acfsdir1]$ /sbin/acfsutil snap create backup1 /home/oracle/acfsdir1
acfsutil snap create: Snapshot operation is complete.
If you examine the directory after the snapshot is created, specifically in a hidden directory called “.ACFS”, you will see a directory called “backup1” buried deep within the directory structure. This is where the changed blocks are stored.
[oracle@oradba2 acfsdir1]$ pwd
/home/oracle/acfsdir1
[oracle@oradba2 acfsdir1]$ cd .ACFS
[oracle@oradba2 .ACFS]$ ls
repl  snaps
[oracle@oradba2 .ACFS]$ cd snaps
[oracle@oradba2 snaps]$ ls
backup1
Let’s see what happens when you delete a file on the filesystem acfsdir1. Remember, there is a snapshot on the filesystem.
[oracle@oradba2 acfsdir1]$ rm myfile.txt
Now, if you examine the “backup1” directory:
[oracle@oradba2 snaps]$ cd backup1
[oracle@oradba2 backup1]$ ls
lost+found  myfile.txt
[oracle@oradba2 backup1]$ cat myfile.txt
This is a file
The file is still available there and can be restored from that location, if needed. Here are some important points about Snapshots:
- The snapshots are created on the same filesystem, so a separate, specialized location is not required.
- The snapshots occupy almost no space initially. When files are changed, only the changed blocks are stored in the snapshot; of course, when a file is completely removed, all of its blocks are stored.
- The snapshots allow a "consistent" backup, i.e. a backup as of a specific time. So if the files are constantly changing, you can create a snapshot called, say, "snap1", which will have pointers to all the files as of that time.
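When a snapshot has served its purpose, it can be removed with the snap delete option; a sketch using the snapshot and mountpoint created above:

[oracle@oradba2 ~]$ /sbin/acfsutil snap delete backup1 /home/oracle/acfsdir1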
Did you notice some interesting facts about ACFS? Let's examine some of them:
- The filesystem is /home/oracle/acfsdir1, mounted at a point well below "/"; an ACFS filesystem does not have to sit directly under the root of the directory tree.
- The filesystem creation (mkfs) command was issued by the "oracle" user, not root. This is important because ACFS can be managed by non-root users as well.
- The mount command is the only command that needed root permissions.
- The mountpoint was not placed in the /etc/fstab file, unlike other typical mountpoints. Why? Because the mounting is automated by the Grid Infrastructure stack, if the filesystem is registered.
These characteristics make ACFS special. Not only is the administration easy, you just got a cluster filesystem without any additional software. This filesystem inherits the benefits of the ASM diskgroup – performance and scalability. You can add disks to the underlying diskgroup and make more room for the expansion of the filesystem, online.
Where can you use this functionality? Here are some example usages:
- You need a cluster filesystem to store image files to be accessed by any of the nodes of the RAC cluster.
- You need to set up a common location for trace and log files for middleware tools. In this case even a database is not needed; only the Grid Infrastructure is.
- You do not want to invest in a Media Management Library for RMAN to read files from an ASM diskgroup. In that case, just create a common ACFS filesystem for all backup and archived log locations and use normal tape commands to read from it.
- You have a RAC database for a data warehouse and you want a common location for the data files used by ETL jobs.
The possibilities are endless, limited by your imagination. In terms of usability index, this feature scores a 10.
Access Control (Release 2 Only)
ASM is a volume manager, but it also stores files related to the database: datafiles, archived logs, controlfiles, backup files, and export dumps. Of course, using ACFS (explained previously) you can store even more types of files; but for now let's concentrate on the files stored in ASM alone. Who is allowed to access them? The files are owned by the user "oracle", or whoever owns the Oracle software on the server. What if you wanted to provide fine-grained access control similar to the Unix user+group+others permissions? In earlier versions that was not possible.
In 11g Release 2, creating a fine-grained access structure is quite possible. Let's see how it works with an example.
Suppose you have two unix users – appuser1 and appuser2 - and they should have some specific types of privileges on one diskgroup, ORIG_DG2. First, let’s create the OS users and groups:
[root@oradba1 ~]# groupadd appgroup1
[root@oradba1 ~]# groupadd appgroup2
[root@oradba1 ~]# useradd appuser1
[root@oradba1 ~]# useradd appuser2
[root@oradba1 ~]# usermod -G appgroup1 appuser1
[root@oradba1 ~]# usermod -G appgroup2 appuser2
[root@oradba1 ~]# passwd appuser1
Changing password for user appuser1.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@oradba1 ~]# passwd appuser2
Changing password for user appuser2.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully
Now that the unix level user creation is complete, let’s prepare the diskgroup ORIG_DG2 for this access control:
alter diskgroup ORIG_DG2 set attribute 'ACCESS_CONTROL.ENABLED' = 'TRUE'
/
The Unix users we created earlier can now be placed under access control. To do so, we have to register the users for that diskgroup. You can do this using the ASMCMD command-line tool.
ASMCMD> mkusr orig_dg2 appuser1
ASMCMD> mkusr orig_dg2 appuser2
You can confirm that these users were created:
ASMCMD> lsusr
DG_Name   User_Num  OS_ID  OS_Name
ORIG_DG2  1         502    appuser1
ORIG_DG2  2         503    appuser2
If you look carefully, you'll see the OS_ID values shown above are the same as the Unix user IDs.
Now, create the groups for those users, again under the purview of the diskgroups.
ASMCMD> mkgrp orig_dg2 appgroup1 appuser1
ASMCMD> mkgrp orig_dg2 appgroup2 appuser2
Confirm the groups were created:
ASMCMD> lsgrp
DG_Name   Grp_Name   Owner
ORIG_DG2  appgroup1  oracle
ORIG_DG2  appgroup2  oracle
With the users and their groups set up properly, you can now turn your attention to changing the permission levels of the file. Let’s take an example file - ORIG_DG2/D112D1/DATAFILE/TS2.256.720399549 - and set it to 640 (read+write by user, read by group and none for others):
ASMCMD [+] > chmod 640 +ORIG_DG2/D112D1/DATAFILE/TS2.256.720399549
Confirm that the permissions were properly changed using the ls command with --permission option (note, there are two dashes before “permission”):
ASMCMD [+] > ls --permission +ORIG_DG2/D112D1/DATAFILE/TS2.256.720399549
User      Group      Permission  Name
appuser1  appgroup1  rw-r-----   TS2.256.720399549
Now this file is "owned" by appuser1, belonging to appgroup1. The other users of appgroup1 can read this file but can't modify it. Using this mechanism, you can establish fine-grained access control for ASM files.
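In case you are wondering how appuser1 became the owner in the first place: the owner and group of an ASM file can be set with the ASMCMD chown command, run by a suitably privileged (SYSASM) user. A sketch using the same file:

ASMCMD [+] > chown appuser1:appgroup1 +ORIG_DG2/D112D1/DATAFILE/TS2.256.720399549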
Suppose you want to set a specific permission level for all files in a diskgroup without issuing a permission-altering chmod command every time. You can do so using a diskgroup attribute called a "mask":
alter diskgroup DG1 set attribute 'ACCESS_CONTROL.UMASK' = '026' /
This resembles the Unix umask and follows the same positional convention: the first digit is for the owner, the second for the group, and the third for others. But beware: unlike a Unix permission setting, this is a mask, so the digits specify what is taken away rather than what is granted. A digit of 0 masks nothing, leaving read-write; 2 masks out write, leaving read-only; and 6 masks out both read and write, leaving no access. So a umask of 026 means the files created in that diskgroup get full (read-write) permission for the owner, read-only for the group, and no access for others.
Optimal Placement of ASM (Release 2 Only)
Anyone familiar with disk technology knows that data transfer rates vary by location on the disk: the periphery of the platter is fastest and performance gradually drops toward the center. Naturally, with that knowledge you may want to place files intelligently. The most accessed tablespaces benefit from being placed at or near the edge of the disks, while the least used files (or less I/O-sensitive ones, such as archived logs) can be placed near the center of the disks.
How can you do that? If you have a sophisticated SAN, you can create LUNs on those specific disk locations and use these LUNs to create diskgroups. What if you have JBODs - can you still do it? And, this may not be a one-time activity; files may change behavior making them less or more accessed. So you may have to make some adjustments over time.
In 11g Release 2, you can do all of this with a new placement attribute that allows two values: "hot" and "cold". The former puts the data near the edges of the disks while the latter places it near the center. One way to use it is via templates. Let's define a template – hot_files – for the most accessed files.
alter diskgroup data add template hot_files attributes (hot)
/
Diskgroup altered.
Once the template is created, you can use it to create the datafiles using that template:
create tablespace hot_ts datafile '+DATA(hot_files)/hot_ts_01.dbf' size 1M /
Note the (hot_files) clause, which lets ASM know that this file should have the hot attribute, and should be placed as near to the edge of the disk as possible.
What if the datafile already exists? You can still move the file to a hot region by issuing the following SQL:
SQL> alter diskgroup data
  2  modify file '+data/D112D1/datafile/ABCD_DATA.272.697114011'
  3  attributes (hot)
  4  /
Diskgroup altered.
This operation does not actually move the file to the hot region right away. It simply records that new extents of the file should go into the hot region; the existing extents are relocated there during a rebalance operation. What if you want the existing extents to move to the hot region right now? In that case you can force a manual rebalance by executing:
SQL> alter diskgroup data rebalance power 11;
Diskgroup altered.
Bear in mind that a manual rebalance can take a very long time depending on how much data needs to be rebalanced. Although it is an online operation, the rebalance consumes CPU cycles and puts some degree of stress on the system. In the previous example we used power 11, which runs the rebalance as aggressively as possible; if you want to limit the resource consumption (thereby extending the time to completion), use a smaller power number.
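You can monitor the progress of the rebalance from the ASM instance using the V$ASM_OPERATION view. A minimal sketch (the figures shown are purely illustrative):

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes
  2  from v$asm_operation
  3  /

GROUP_NUMBER  OPERA  STAT  POWER  SOFAR  EST_WORK  EST_MINUTES
------------  -----  ----  -----  -----  --------  -----------
           1  REBAL  RUN      11   4291     29817            3

When the view returns no rows, the rebalance has finished.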
ASMCMD Added Functionality (Release 2 Only)
Oracle Database 10g Release 2 included a new tool called ASMCMD for managing the ASM instance with commands rather than SQL. This was particularly welcome to sysadmins not familiar with the SQL language; and even for those comfortable with the SQL interface, the command line made it easier to build tools and scripts. The tool has gained more and more functionality in subsequent releases.
In 11g Release 2, several more capabilities have been added to make ASMCMD more complete. Some have been explained above, e.g. the volume management commands such as volstat and volcreate. Let's look at a few more of the added features:
Template Management
There are now several commands to manage templates, so you don't have to rely on SQL for them. The new command mktmpl allows you to create a template; here we create one called hot_coarse_mirr that uses coarse striping, mirrored redundancy, and places primary extents in the hot region:
ASMCMD> mktmpl -G data --striping coarse --redundancy mirror --primary hot hot_coarse_mirr
To find out about a specific template, use the lstmpl command:
ASMCMD> lstmpl -l hot_coarse_mirr
Group_Name  Group_Num  Name             Stripe  Sys  Redund  PriReg  MirrReg
DATA        1          HOT_COARSE_MIRR  COARSE  N    MIRROR  HOT     COLD
If you want to change the attributes of a template, use the chtmpl command:
ASMCMD> chtmpl -G data --striping fine hot_coarse_mirr
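To confirm the change, you can run lstmpl -l again; the Stripe column should now read FINE (illustrative output):

ASMCMD> lstmpl -l hot_coarse_mirr
Group_Name  Group_Num  Name             Stripe  Sys  Redund  PriReg  MirrReg
DATA        1          HOT_COARSE_MIRR  FINE    N    MIRROR  HOT     COLD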
Finally to drop a template, use the rmtmpl command:
ASMCMD> rmtmpl -G data hot_coarse_mirr
The lsattr command is also quite useful. You can use it to examine the attributes of a diskgroup:
ASMCMD> lsattr -l -G orig_dg2
Name                     Value
access_control.enabled   true
access_control.umask     066
au_size                  1048576
cell.smart_scan_capable  FALSE
compatible.asm           11.2.0.0.0
compatible.rdbms         11.2
disk_repair_time         3.6h
sector_size              512
But perhaps the most useful is the iostat command. You can use it to display important metrics on a diskgroup's I/O characteristics, such as read and write rates.
ASMCMD> iostat --region -t -G orig_dg2 5
Group_Name Dsk_Name Reads Writes Cold_Reads Cold_Writes Hot_Reads Hot_Writes Read_Time Write_Time
ORIG_DG2 DG2_0000 290816 35979264 65536 11558912 0 0 .815435 40.712632
Group_Name Dsk_Name Reads Writes Cold_Reads Cold_Writes Hot_Reads Hot_Writes Read_Time Write_Time
ORIG_DG2 DG2_0000 0.00 819.20 65536 11558912 0 0 0.00 0.00
… output truncated …
Did you notice the Cold_Reads and Hot_Reads columns? These are particularly useful when you are using hot and cold data placement. If you have created two files on the same diskgroup, one hot and the other cold, how do you know the read and write rates for each? The overall statistics for the diskgroup will not be very helpful. To see the rates per region (hot or cold), you can rely on iostat. This helps immensely in deciding whether the hot and cold placements of the files behave as you anticipated.
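If you prefer the SQL route, the per-region counters reported by iostat are, as I understand it, drawn from the V$ASM_DISK_IOSTAT view, so a query along these lines returns the same kind of information (a sketch; the hot/cold columns are assumed from the 11.2 version of the view, and the output is omitted):

SQL> select group_number, disk_number,
  2         hot_reads, hot_writes, cold_reads, cold_writes
  3  from v$asm_disk_iostat
  4  /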
I have shown just a few examples of these very helpful commands. There are several more for managing the ASM instance, diskgroups, and so on. Together, the additions make ASMCMD pretty much complete for managing the ASM infrastructure.
ASM Configuration Assistant (Release 2 Only)
How do you create and manage the ASM instances and other components? ASMCMD is one option. Oracle Enterprise Manager (either standalone or Grid Control) is another. In this release of Oracle, there is yet another tool: ASMCA (ASM Configuration Assistant). It allows you to create the ASM instance as well as manage it within a GUI front end. You can invoke the tool by issuing the command asmca. Here is the initial screen.
The first tab shows the diskgroup information. The subsequent tabs show volumes (if defined) and ACFS filesystems on those volumes (if defined). Both volumes and ACFS have been explained earlier in this article.
If you right-click on the disk name, you will get a pop-up menu with additional activities such as adding disks and managing templates, as shown in the figure. To add a template, click on Manage Templates and enter the template details on the next screen, shown below.
Here I have created a template called HOT_UNPROT_COARSE (the name is partially hidden in the field), with unprotected redundancy, coarse striping, and HOT as the extent placement. Click Save to save the template.
ASM Configuration Assistant can also be used to create volumes and ACFS mountpoints, instead of using ASMCMD or the SQL approach shown earlier in this article. The GUI interface of ASMCA makes the process highly intuitive. The screenshots are not shown here in the interest of space.
Reference: http://www.oracle.com/technetwork/articles/sql/11g-asm-083478.html
If you spot any errors, corrections are welcome.
Email: czmcj@163.com