ODA, ACFS and ASM Dilemma
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago


As you know, on ODA you have two options for storage: ACFS or ASM. If you choose ACFS, you can create databases of every version, from 11g to 18c (as of this writing). But if you choose ASM, 11g is not supported.
So, ASM or ACFS? If you choose ACFS, the diskgroup where ACFS runs is sliced and you get one mount point per database. If you have, for example, a system with more than 30 databases, managing all the ACFS mount points can become complicated. ASM is the simpler and easier solution to sustain, besides being more homogeneous with other database environments (Exadata, RAC, etc.).
If you choose ASM, you can't use 11g or avoid ACFS mount points for those databases, but with a simple workaround you can run 11g databases and still use ASM for everything else. Take an example where just 3 or 4 databases run on 11g and the other 30 databases in the environment are on 12c/18c. To achieve that, the option in this case is a "manual" ACFS mount point, as I will explain.

Simple ACFS

First, create the folder where you will mount the ACFS. In this example, /u01/app/oracle/oradata:

 

[root@oak1 ~]# mkdir -p /u01/app/oracle/oradata

[root@oak1 ~]# chown oracle:oinstall /u01/app/oracle/oradata

[root@oak1 ~]#

#### Node 2

[root@oak2 ~]# mkdir -p /u01/app/oracle/oradata

[root@oak2 ~]# chown oracle:oinstall /u01/app/oracle/oradata

[root@oak2 ~]#

After that, connect to ASM and create the volume that will hold the ACFS filesystem:

[grid@oak1 ~]$ asmcmd

ASMCMD> volcreate -G DATA -s 5T DATA11G

ASMCMD>

ASMCMD>

ASMCMD> volinfo -G DATA DATA11G

Diskgroup Name: DATA

         Volume Name: DATA11G

         Volume Device: /dev/asm/data11g-463

         State: ENABLED

         Size (MB): 5242880

         Resize Unit (MB): 512

         Redundancy: MIRROR

         Stripe Columns: 8

         Stripe Width (K): 1024

         Usage:

         Mountpath:

ASMCMD>

ASMCMD>

ASMCMD> exit

[grid@oak1 ~]$
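A quick sanity check on the volinfo output: the size is reported in MB, and 5 TB should equal 5 × 1024 × 1024 MB:

```shell
# volinfo shows "Size (MB): 5242880"; confirm this matches the 5 TB we asked for.
echo $((5 * 1024 * 1024))   # prints 5242880
```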

The size is 5 TB and the volume, named DATA11G, lives in the DATA diskgroup. Now, format the filesystem:

[grid@oak1 ~]$ /sbin/mkfs -t acfs /dev/asm/data11g-463

mkfs.acfs: version                   = 18.0.0.0.0

mkfs.acfs: on-disk version           = 46.0

mkfs.acfs: volume                    = /dev/asm/data11g-463

mkfs.acfs: volume size               = 5497558138880  (   5.00 TB )

mkfs.acfs: Format complete.

[grid@oak1 ~]$

After formatting the filesystem, the next step is to register it in GI/CRS:

[root@oak1 ~]# export ORACLE_HOME=/u01/app/18.0.0.0/grid

[root@oak1 ~]# $ORACLE_HOME/bin/srvctl add filesystem -d /dev/asm/data11g-463 -g DATA -v DATA11G -m /u01/app/oracle/oradata -user oracle,grid -fstype ACFS

[root@oak1 ~]#

[root@oak1 ~]# $ORACLE_HOME/bin/crsctl stat res ora.data.data11g.acfs -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.data.data11g.acfs

               OFFLINE OFFLINE      oak1                     STABLE

               OFFLINE OFFLINE      oak2                     STABLE

--------------------------------------------------------------------------------

[root@oak1 ~]#

To finish, start the filesystem resource in GI/CRS:

[root@oak1 ~]# $ORACLE_HOME/bin/srvctl start filesystem -d /dev/asm/data11g-463

[root@oak1 ~]#

[root@oak1 ~]# acfsutil registry -l

Device : /dev/asm/commonstore-463 : Mount Point : /opt/oracle/dcs/commonstore : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : COMMONSTORE : Accelerator Volumes :

Device : /dev/asm/data11g-463 : Mount Point : /u01/app/oracle/oradata : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : DATA11G : Accelerator Volumes :

[root@oak1 ~]#

[root@oak1 ~]# $ORACLE_HOME/bin/crsctl stat res ora.data.data11g.acfs -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.data.data11g.acfs

               ONLINE  ONLINE       oak1                     mounted on /u01/app/

                                                             oracle/oradata,STABL

                                                             E

               ONLINE  ONLINE       oak2                     mounted on /u01/app/

                                                             oracle/oradata,STABL

                                                             E

--------------------------------------------------------------------------------

[root@oak1 ~]#

After the mount, verify that the ownership is still oracle:oinstall; if not, change it.
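That check can be scripted; this is just a sketch (the `fix_owner` helper and its arguments are hypothetical names, not ODA tooling), to be run as root:

```shell
#!/bin/sh
# Hypothetical helper: reset a mount point's ownership if the ACFS mount
# changed it. On the ODA you would call:
#   fix_owner /u01/app/oracle/oradata oracle:oinstall
fix_owner() {
    mp="$1"      # mount point to check
    want="$2"    # expected owner:group, e.g. oracle:oinstall
    owner=$(stat -c '%U:%G' "$mp")
    if [ "$owner" != "$want" ]; then
        chown "$want" "$mp"
    fi
}
```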
If you want to test, run DBCA with your template. Take care with the file-location parameters; here is an example of a silent execution:

[oracle@oak1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1

[oracle@oak1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH

[oracle@oak1 ~]$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName YOURTEMPLATEFILE.dbt -gdbName O11GTEST -adminManaged -sid O11GTEST -sysPassword welcome1 -systemPassword welcome1 -characterSet WE8ISO8859P15 -emConfiguration NONE -datafileDestination /u01/app/oracle/oradata -recoveryAreaDestination /u01/app/oracle/oradata -nodelist oak1,oak2 -sampleSchema false -RACOneNode -RACOneNodeServiceName TEST11 -initParams "db_create_file_dest=/u01/app/oracle/oradata,db_create_online_log_dest_1=/u01/app/oracle/oradata,db_recovery_file_dest=/u01/app/oracle/oradata"

Creating and starting Oracle instance

1% complete

3% complete

Creating database files

96% complete

99% complete

100% complete

Look at the log file “/u01/app/oracle/cfgtoollogs/dbca/O11GTEST/O11GTEST1.log” for further details.

[oracle@oak1 ~]$

[oracle@oak1 ~]$

[oracle@oak1 ~]$ $ORACLE_HOME/bin/srvctl config database -d O11GTEST

Database unique name: O11GTEST

Database name: O11GTEST

Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1

Oracle user: oracle

Spfile: /u01/app/oracle/oradata/O11GTEST/spfileO11GTEST.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: O11GTEST

Database instances:

Disk Groups:

Mount point paths: /u01/app/oracle/oradata

Services: TEST11

Type: RACOneNode

Online relocation timeout: 30

Instance name prefix: O11GTEST

Candidate servers: oak1,oak2

Database is administrator managed

[oracle@oak1 ~]$

[oracle@oak1 ~]$

 

Note above that GI/CRS registered this database as dependent on the mount point. After a reboot, it will wait for the ACFS to be in the mounted state before starting the database. This is normal ACFS behavior.
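You can also inspect that dependency directly in the resource profile. A sketch, assuming the standard ora.<db_unique_name>.db resource naming (ora.o11gtest.db here), which is not shown in the transcripts above:

```shell
# Print only the dependency attributes of the database resource; the ACFS
# resource (ora.data.data11g.acfs) should appear in START_DEPENDENCIES.
export ORACLE_HOME=/u01/app/18.0.0.0/grid
$ORACLE_HOME/bin/crsctl stat res ora.o11gtest.db -p | grep -E '^(START|STOP)_DEPENDENCIES'
```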
 
11G, ACFS and ASM
Following these steps, you have just one ACFS volume mounted on /u01/app/oracle/oradata (or wherever you mount it) that you can use for 11g database creation. Be aware that with this approach you can't use the odacli create-database command, or the other odacli database commands, for 11g (and you may have no MOS support). But you can use the normal dbca commands to create your databases, and all the GI commands and integrations will work normally; you can even run temporary 11g databases on your ODA. On the other hand, you gain all the ASM flexibility and simplicity to manage and sustain every other version.
 
 Disclaimer: "The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifications were removed."

 

