ODA, JSON and Flash
Category: Engineer System | Author: Fernando Simon (Board Member) | Date: 6 years ago

Back in March, I reimaged an X5-2 HA ODA and noticed a strange behavior during diskgroup creation that I could not reproduce at the time (reproducing it would have meant reimaging again). Basically, the FLASH diskgroup was not created.
Then in May I reimaged another ODA with the same patch/image version (18.3.0.0, Patch 27604623) and was able to verify it again. In both cases I created the appliance with the CLI "odacli create-appliance" and a JSON file, because the network uses VLANs (which cannot be configured through the web interface), and both appliances are identical (X5-2 HA with SSDs for RECO and FLASH).
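For reference, the provisioning call itself is just odacli pointing at the JSON file (a minimal sketch; the file path here is hypothetical):

[root@ODA1 ~]# /opt/oracle/dcs/bin/odacli create-appliance -r /tmp/oda-x5-2ha-vlan.json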
To reimage, I followed the steps in the docs for this version and used the ISO for the bare metal procedure. If you look at the storage options in the docs (check here), there is not a single reference to the FLASH diskgroup, or to any need to specify it. The same goes for the readme/reference JSON files in the folder "/opt/oracle/dcs/sample"; the file "sample-oda-ha-json-readme.txt" says:

grid:
    diskGroup: (ODA HA contains DATA, RECO and REDO Diskgroups)
        diskgroupName: DATA|RECO|REDO
        redundancy: Normal|High
            If the system has less than 5 NVMe storage devices, redundancy is Normal.
            If the system has 5 or more NVMes, then Normal or High are supported.
        diskPercentage: Percentage of NVMe drive capacity used for this particular diskgroup.

 

And the example that comes with the image (sample-oda-ha.json):

  "grid" : {
    "diskGroup" : [ {
      "diskGroupName" : "DATA",
      "redundancy" : "",
      "diskPercentage" : 80
    }, {
      "diskGroupName" : "RECO",
      "redundancy" : "",
      "diskPercentage" : 20
    }, {
      "diskGroupName" : "REDO",
      "diskPercentage" : 100,
      "redundancy" : "HIGH"
    } ],

 
As you can see, there is no reference to FLASH, nor any hint that it can be used (the documented options are diskgroupName: DATA|RECO|REDO). Based on that (and assuming that FLASH would be created automatically, since the X5-2 has SSD disks dedicated to it), I wrote my JSON file with:

  "grid" : {
    "diskGroup" : [ {
      "diskGroupName" : "DATA",
      "redundancy" : "NORMAL",
      "disks" : null,
      "diskPercentage" : 90
    }, {
      "diskGroupName" : "RECO",
      "redundancy" : "NORMAL",
      "disks" : null,
      "diskPercentage" : 10
    }, {
      "diskGroupName" : "REDO",
      "redundancy" : "HIGH",
      "disks" : null,
      "diskPercentage" : null
    } ],

 

And after the create-appliance command I got only these diskgroups in ASM:

ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304  108019712  108000740          6751232        50624754              0             Y  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304   11997184   11995456           749824         5622816              0             N  RECO/
MOUNTED  HIGH    N         512             512   4096  4194304     762880     749908           190720          186396              0             N  REDO/
ASMCMD>

 

And the describe-system reports:

[root@ODA1 ~]# /opt/oracle/dcs/bin/odacli describe-system

Appliance Information
----------------------------------------------------------------
                     ID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
               Platform: X5-2-HA
        Data Disk Count: 24
         CPU Core Count: 6
                Created: March 26, 2019 11:56:24 AM CET

System Information
----------------------------------------------------------------
                   Name: ODA
            Domain Name: XYZ.XYZ
              Time Zone: Europe/Luxembourg
             DB Edition: EE
            DNS Servers: 1.1.1.1 2.2.2.2
            NTP Servers: 3.3.3.3

Disk Group Information
----------------------------------------------------------------
DG Name                   Redundancy                Percentage
------------------------- ------------------------- ------------
Data                      Normal                    90
Reco                      Normal                    10
Redo                      High                      100

[root@ODA1 ~]#

 

After that, the only option was to create the FLASH diskgroup manually:

SQL> create diskgroup FLASH normal redundancy
  2     DISK 'AFD:SSD_E0_S16_1320409344P1' NAME SSD_E0_S16_1320409344P1
  3     ,  'AFD:SSD_E0_S17_1320408816P1' NAME SSD_E0_S17_1320408816P1
  4     ,  'AFD:SSD_E0_S18_1320404784P1' NAME SSD_E0_S18_1320404784P1
  5     ,  'AFD:SSD_E0_S19_1320406740P1' NAME SSD_E0_S19_1320406740P1
  6     attribute  'COMPATIBLE.ASM' = '18.0.0.0.0'
  7     , 'COMPATIBLE.rdbms' = '12.1.0.2'
  8     , 'compatible.advm' = '18.0.0.0'
  9     , 'au_size'='4M'
 10  ;

Diskgroup created.

 

SQL> select NAME, SECTOR_SIZE, ALLOCATION_UNIT_SIZE, COMPATIBILITY, DATABASE_COMPATIBILITY from v$asm_diskgroup;

NAME                           SECTOR_SIZE ALLOCATION_UNIT_SIZE COMPATIBILITY                                                DATABASE_COMPATIBILITY
------------------------------ ----------- -------------------- ------------------------------------------------------------ ------------------------------------------------------------
DATA                                   512              4194304 18.0.0.0.0                                                   12.1.0.2.0
RECO                                   512              4194304 18.0.0.0.0                                                   12.1.0.2.0
REDO                                   512              4194304 18.0.0.0.0                                                   12.1.0.2.0
FLASH                                  512              4194304 18.0.0.0.0                                                   12.1.0.2.0

SQL>
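As a side note, before running the create diskgroup above you can confirm which AFD devices are still free by querying v$asm_disk for unused disks (a minimal sketch; the paths and statuses depend on your hardware):

SQL> select path, header_status from v$asm_disk where header_status in ('CANDIDATE', 'PROVISIONED');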

Reimaging again, or cleaning up everything, is sometimes not an option. Changing the JSON file to modify the parameter "dbOnFlashStorage" is not the right fix either, since it will fail again, as you can see in the release notes for version 12.2.1.3.0, which say in the workaround: "Provide information for DATA, REDO, RECO, and FLASH disk groups at the time of provisioning. If you have provisioned your environment without creating the FLASH disk group, then create the FLASH disk group manually. This issue is tracked with Oracle bug 27721310."
 
Second try
For the second reimage I specified my JSON file as:

  "grid" : {
    "diskGroup" : [ {
      "diskGroupName" : "DATA",
      "redundancy" : "NORMAL",
      "disks" : null,
      "diskPercentage" : 90
    }, {
      "diskGroupName" : "RECO",
      "redundancy" : "NORMAL",
      "disks" : null,
      "diskPercentage" : 10
    }, {
      "diskGroupName" : "REDO",
      "redundancy" : "HIGH",
      "disks" : null,
      "diskPercentage" : null
    }, {
      "diskGroupName" : "FLASH",
      "redundancy" : "NORMAL",
      "disks" : null,
      "diskPercentage" : null
    } ],

 

And describe-system now says:

[root@OAK1 ~]# /opt/oracle/dcs/bin/odacli describe-system

Appliance Information
----------------------------------------------------------------
                     ID: XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
               Platform: X5-2-HA
        Data Disk Count: 24
         CPU Core Count: 36
                Created: May 22, 2019 11:03:36 AM CEST

System Information
----------------------------------------------------------------
                   Name: OAK
            Domain Name: XYZ.XYZ
              Time Zone: Europe/Luxembourg
             DB Edition: EE
            DNS Servers: 1.1.1.1 2.2.2.2
            NTP Servers: 3.3.3.3

Disk Group Information
----------------------------------------------------------------
DG Name                   Redundancy                Percentage
------------------------- ------------------------- ------------
Data                      Normal                    90
Reco                      Normal                    10
Redo                      High                      100
Flash                     Normal                    100

[root@OAK1 ~]#

And I can confirm that everything is as expected in ASM:

ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304  108019712  108007188          6751232        50627978              0             Y  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304    1525760    1518672           381440          568616              0             N  FLASH/
MOUNTED  NORMAL  N         512             512   4096  4194304   11997184   11995736           749824         5622956              0             N  RECO/
MOUNTED  HIGH    N         512             512   4096  4194304     762880     749908           190720          186396              0             N  REDO/
ASMCMD>
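If you also want to double-check which SSDs ended up in the new diskgroup, lsdsk can list them per diskgroup (a small sketch; the device names will match the AFD labels used above):

ASMCMD> lsdsk -G FLASH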

If you compare the job report for the new create-appliance execution:

Post cluster OAKD configuration          May 22, 2019 11:53:45 AM CEST       May 22, 2019 11:59:11 AM CEST       Success
Disk group 'RECO' creation               May 22, 2019 11:59:20 AM CEST       May 22, 2019 11:59:38 AM CEST       Success
Disk group 'REDO' creation               May 22, 2019 11:59:38 AM CEST       May 22, 2019 11:59:47 AM CEST       Success
Disk group 'FLASH' creation              May 22, 2019 11:59:47 AM CEST       May 22, 2019 11:59:58 AM CEST       Success
Volume 'commonstore' creation            May 22, 2019 11:59:58 AM CEST       May 22, 2019 12:00:48 PM CEST       Success

It is different from the previous one:

Post cluster OAKD configuration          March 26, 2019 1:47:06 PM CET       March 26, 2019 1:52:32 PM CET       Success
Disk group 'RECO' creation               March 26, 2019 1:52:41 PM CET       March 26, 2019 1:52:57 PM CET       Success
Disk group 'REDO' creation               March 26, 2019 1:52:57 PM CET       March 26, 2019 1:53:07 PM CET       Success
Volume 'commonstore' creation            March 26, 2019 1:53:07 PM CET       March 26, 2019 1:53:53 PM CET       Success
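Both listings come from the provisioning job details, which you can pull at any time with odacli (the job ID below is hypothetical; check yours with list-jobs):

[root@OAK1 ~]# /opt/oracle/dcs/bin/odacli list-jobs
[root@OAK1 ~]# /opt/oracle/dcs/bin/odacli describe-job -i aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee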

 

Know your environment and your target
So, be careful during your ODA deployment: even when the doc says something, check it again to avoid errors. It is important to know your environment and to be aware of what is expected as a result. In this case, even though the docs say nothing about specifying the FLASH diskgroup, it is needed, and if you forget it, no FLASH for you.
 

 

Disclaimer: "The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifications were removed."

 

 

Fernando Simon – http://www.fernandosimon.com/blog

ODA, ACFS and ASM Dilemma
Category: Engineer System | Author: Fernando Simon (Board Member) | Date: 6 years ago

As you know, on the ODA you have two options for database storage: ACFS or ASM. If you choose ACFS, you can create databases of any version, from 11g to 18c (as of this writing). But if you choose ASM, 11g is not supported.
So, ASM or ACFS? If you choose ACFS, the diskgroup where ACFS runs is sliced into volumes and you get one mount point for each database. If, for example, you have a system with more than 30 databases, managing all those ACFS mount points can get complicated. ASM is the simpler and easier solution to sustain, besides being more homogeneous with other database environments (Exadata, RAC, etc.).
If you choose ASM you cannot run 11g databases, but a simple approach lets you host a few 11g databases and still use ASM for everything else. Take an example where just 3 or 4 databases run on 11g and the other 30 databases in the environment are on 12c/18c. The option, in this case, is a "manual" ACFS mount point, which I will explain.

Simple ACFS

First, create the folder where you will mount the ACFS filesystem, on both nodes. In this example, /u01/app/oracle/oradata:

 

[root@oak1 ~]# mkdir -p /u01/app/oracle/oradata
[root@oak1 ~]# chown oracle:oinstall /u01/app/oracle/oradata
[root@oak1 ~]#
####Node 2
[root@oak2 ~]# mkdir -p /u01/app/oracle/oradata
[root@oak2 ~]# chown oracle:oinstall /u01/app/oracle/oradata
[root@oak2 ~]#

After that, connect to ASM and create the volume that will store the ACFS filesystem:

[grid@oak1 ~]$ asmcmd
ASMCMD> volcreate -G DATA -s 5T DATA11G
ASMCMD>
ASMCMD> volinfo -G DATA DATA11G
Diskgroup Name: DATA

         Volume Name: DATA11G
         Volume Device: /dev/asm/data11g-463
         State: ENABLED
         Size (MB): 5242880
         Resize Unit (MB): 512
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

ASMCMD> exit
[grid@oak1 ~]$

The size is 5 TB and the volume, named DATA11G, was created in the DATA diskgroup. Now, format the filesystem:

[grid@oak1 ~]$ /sbin/mkfs -t acfs /dev/asm/data11g-463
mkfs.acfs: version                   = 18.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/data11g-463
mkfs.acfs: volume size               = 5497558138880  (   5.00 TB )
mkfs.acfs: Format complete.
[grid@oak1 ~]$

After formatting the filesystem, the next step is to register it in GI/CRS:

[root@oak1 ~]# export ORACLE_HOME=/u01/app/18.0.0.0/grid
[root@oak1 ~]# $ORACLE_HOME/bin/srvctl add filesystem -d /dev/asm/data11g-463 -g DATA -v DATA11G -m /u01/app/oracle/oradata -user oracle,grid -fstype ACFS
[root@oak1 ~]#
[root@oak1 ~]# $ORACLE_HOME/bin/crsctl stat res ora.data.data11g.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.data11g.acfs
               OFFLINE OFFLINE      oak1                     STABLE
               OFFLINE OFFLINE      oak2                     STABLE
--------------------------------------------------------------------------------
[root@oak1 ~]#

To finish, start the filesystem resource in GI/CRS:

[root@oak1 ~]# $ORACLE_HOME/bin/srvctl start filesystem -d /dev/asm/data11g-463
[root@oak1 ~]#
[root@oak1 ~]# acfsutil registry -l
Device : /dev/asm/commonstore-463 : Mount Point : /opt/oracle/dcs/commonstore : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : COMMONSTORE : Accelerator Volumes :
Device : /dev/asm/data11g-463 : Mount Point : /u01/app/oracle/oradata : Options : none : Nodes : all : Disk Group: DATA : Primary Volume : DATA11G : Accelerator Volumes :
[root@oak1 ~]#
[root@oak1 ~]# $ORACLE_HOME/bin/crsctl stat res ora.data.data11g.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.data11g.acfs
               ONLINE  ONLINE       oak1                     mounted on /u01/app/
                                                             oracle/oradata,STABL
                                                             E
               ONLINE  ONLINE       oak2                     mounted on /u01/app/
                                                             oracle/oradata,STABL
                                                             E
--------------------------------------------------------------------------------
[root@oak1 ~]#
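A nice side effect of this setup is that the filesystem can be resized online later if the 5 TB turns out to be too small (a sketch; run it as root and adjust the delta to your needs):

[root@oak1 ~]# acfsutil size +512G /u01/app/oracle/oradata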

After the mount, verify that the ownership is still oracle:oinstall; if not, change it.
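Something as simple as this does the check and the fix (run it on the node where the filesystem is mounted):

[root@oak1 ~]# ls -ld /u01/app/oracle/oradata
[root@oak1 ~]# chown oracle:oinstall /u01/app/oracle/oradata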
If you want to test it, run DBCA with your template. Take care with the file location parameters; here is one example of a silent execution:

[oracle@oak1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
[oracle@oak1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@oak1 ~]$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName YOURTEMPLATEFILE.dbt -gdbName O11GTEST -adminManaged -sid O11GTEST -sysPassword welcome1 -systemPassword welcome1 -characterSet WE8ISO8859P15 -emConfiguration NONE -datafileDestination /u01/app/oracle/oradata -recoveryAreaDestination /u01/app/oracle/oradata -nodelist oak1,oak2 -sampleSchema false -RACOneNode -RACOneNodeServiceName TEST11 -initParams "db_create_file_dest=/u01/app/oracle/oradata,db_create_online_log_dest_1=/u01/app/oracle/oradata,db_recovery_file_dest=/u01/app/oracle/oradata"
Creating and starting Oracle instance
1% complete
3% complete
Creating database files
96% complete
99% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/O11GTEST/O11GTEST1.log" for further details.
[oracle@oak1 ~]$

[oracle@oak1 ~]$

[oracle@oak1 ~]$ $ORACLE_HOME/bin/srvctl config database -d O11GTEST
Database unique name: O11GTEST
Database name: O11GTEST
Oracle home: /u01/app/oracle/product/11.2.0.4/dbhome_1
Oracle user: oracle
Spfile: /u01/app/oracle/oradata/O11GTEST/spfileO11GTEST.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: O11GTEST
Database instances:
Disk Groups:
Mount point paths: /u01/app/oracle/oradata
Services: TEST11
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: O11GTEST
Candidate servers: oak1,oak2
Database is administrator managed
[oracle@oak1 ~]$

 

Note above that GI/CRS registered this database as dependent on the mount point. In the case of a reboot, it will wait for the ACFS filesystem to be mounted before starting the database. Normal ACFS behavior.
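You can double-check that dependency in the full resource profile (a sketch; the resource name follows the usual ora.<db_unique_name>.db pattern, so it would be ora.o11gtest.db here):

[root@oak1 ~]# $ORACLE_HOME/bin/crsctl stat res ora.o11gtest.db -f | grep -i DEPENDENCIES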
 
11G, ACFS and ASM
Following these steps, you have just one ACFS volume, mounted over /u01/app/oracle/oradata (or wherever you mount it), that can be used for 11g database creation. Be careful: going this way for 11g databases, you cannot use the odacli create-database command (or the other database-related odacli commands), and you may not have MOS support. But you can use the normal dbca commands to create your databases, all the GI commands and integrations will work normally, and you can even run temporary 11g databases on your ODA. On the other hand, you keep all the ASM flexibility and simplicity to manage and sustain every other version.
 
Disclaimer: "The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifications were removed."

 

