Observer, Where?
Category: Database Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Observer, Where?

Some months ago I hit an error with Oracle Data Guard, and now I finally had time to review it and write this article. Just to be clear from the beginning, the discussion here is not about the error itself, but about the circumstances that generated it.
The environment described here follows the most common Oracle best practices for DG: one dedicated server for each role (primary database, physical standby database, and observer). The primary and standby reside in different datacenters in different cities, there is a dedicated network between the sites, the protection mode is Maximum Availability, and fast-start failover is enabled (with a 30-second threshold). The version here is 12.2, but the behavior is the same for 19c. So, nothing unusual in the environment: a basic DG configuration that tries to follow the best practices.
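Just to illustrate that kind of setup (a generic sketch, not the exact commands from this environment; you connect with DGMGRL to the primary first, and names are placeholders), the broker configuration looks something like:

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 30;
DGMGRL> ENABLE FAST_START FAILOVER;

And on the observer machine, after connecting to the configuration:

DGMGRL> START OBSERVER;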
But one day the application servers were running, the primary Linux DB server was running, and yet the database itself was down. Looking for the cause, I found this in the broker log:

 

 

03/10/2019 12:48:05

LGWR: FSFO SetState(st=2 "UNSYNC", fl=0x2 "WAIT", ob=0x0, tgt=0, v=0)

LGWR: FSFO SetState("UNSYNC", 0x2) operation requires an ack

        Primary database will shutdown within 30 seconds if permission

         is not granted from Observer or FSFO target standby to proceed

LGWR:   current in-memory FSFO state flags=0x40001, version=46

03/10/2019 12:48:19

A Fast-Start Failover target switch is necessary because the primary cannot reach the Fast-Start Failover target standby database

A target switch was not attempted because the observer has not pinging primary recently.

FSFP network call timeout. Killing process FSFP.

03/10/2019 12:48:36

Notifying Oracle Clusterware to disable services and monitoring because primary will be shutdown

Primary has heard from neither observer nor target standby

        within FastStartFailoverThreshold seconds. It is

        likely an automatic failover has already occurred.

        The primary is shutting down.

LGWR: FSFO SetState(st=8 "FO PENDING", fl=0x0 "", ob=0x0, tgt=0, v=0)

LGWR: Shutdown primary instance 1 now because the primary has been isolated




And in the alert log:

2019-03-10T12:48:35.860142+01:00

Starting background process FSFP

2019-03-10T12:48:35.870802+01:00

FSFP started with pid=30, OS id=6055

2019-03-10T12:48:36.875828+01:00

Primary has heard from neither observer nor target standby within FastStartFailoverThreshold seconds.

It is likely an automatic failover has already occurred. Primary is shutting down.

2019-03-10T12:48:36.876434+01:00

Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_lgwr_2472.trc:

ORA-16830: primary isolated from fast-start failover partners longer than FastStartFailoverThreshold seconds: shutting down

LGWR (ospid: 2472): terminating the instance due to error 16830

 

But this is (and was) not a DG problem; DG did what it was designed to do. The primary lost communication with the standby and the observer, and after the fast-start failover threshold FSFP killed the primary, because the primary could not know whether it had been evicted and wanted to avoid a split brain (or something similar). It worked as designed!
As I wrote before, the main question here is not the ORA-XXXX error, but the circumstances. In this case, by design decision (and probably based on the docs), the observer was placed in the same site as the standby. But, because of one failure in the standby datacenter (just in the enclosure that runs the blades for the Oracle stack; the applications continued to run), the entire database became unavailable. One outage in the standby datacenter shut down the primary database, even with DG running in Maximum Availability mode.

 

 

As you can imagine, because of the design decision of “where to put the observer”, everything was down. If the observer had been running in the primary datacenter, nothing should have happened. And this is the point of this post: “Where do you put the observer?”, “Primary site? Standby site?”, “Why?”, “What did you base this decision on?”. It appears to be a simple question to answer, but there are a lot of pros and cons, and there is not much information about it.
If you search the Oracle docs, they recommend putting the observer in a third datacenter, isolated from the others, or in the same network as the application, or in the standby datacenter. See https://www.oracle.com/technetwork/database/availability/maa-roletransitionbp-2621582.pdf (page 9). But where are the pros and cons for the decision? And how many clients actually have a third datacenter? And if you google where to put the observer, 99% of the results spread the same information (which goes against the docs): “primary site”. But again, “Why?”.

 

 

Observer in primary site:
  • Pros: Protects against most failures (primary DB crash, logical DB crash, for example); network issues have low impact on the observer (usually the same LAN as the primary).
  • Cons: Does not protect (does not switch) in case of a whole primary datacenter failure.
  • When to use: When you want to avoid “false positive” switches in case of network problems towards the standby datacenter, or when you do not fully trust the standby datacenter. Or because all transactions are important (I will explain later).

Observer in standby site:

  • Pros: Protects against a whole primary datacenter failure.
  • Cons: Heavy dependency on the network between sites (multiple paths are needed); can suffer “false positive” switches, since the standby site decides even when the primary is running correctly (similar to what happened here). When the database being up is more important than the transactions, maybe Maximum Availability is not suitable (I will explain later).
  • When to use: When you want to protect the database and your system and network infrastructure between the datacenters are reliable.
Observer in third datacenter:
  • Pros: Covers all failure scenarios.
  • Cons: Very heavily dependent on a good network to avoid “false positives”, otherwise it will suffer from fast-start failover being disabled. It will be more expensive.
  • When to use: When you want protection for most of the possible scenarios.
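Whatever the placement, it is easy to check where the observer is currently running and how fast-start failover is set up. A quick check with DGMGRL (just a sketch; in 12.2 and later the second command shows the observer host, the threshold and the failover targets):

DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;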
Below is one idea of a good design for high availability for databases and applications. It comes directly from the Oracle docs about “Recovering from Unscheduled Outages” – https://docs.oracle.com/database/121/HABPT/E40019-02.pdf

[Image: high availability architecture diagram from the document referenced above]

Above I talked about transactions and Maximum Availability, which is deeply related here. Remember that the primary database shut down only after the fast-start failover threshold? This means that the primary database accepted transactions during 30 seconds before shutting down. If you put the observer in the standby site, you can suffer data loss (of course, a failover, by design, implies that) because the standby side decides everything.
 
If you do that, maybe you need to operate in Maximum Protection to have zero data loss. This is clearer, or more critical, if your database receives connections from applications that are not in the same datacenter and can connect to both sites at the same time. In Maximum Protection you avoid the primary committing data in moments of possible failure, but you put some overhead on every transaction by operating in SYNC mode (https://www.oracle.com/technetwork/database/availability/sync-2437177.pdf). So, you need to decide whether the database running is more important than the transactions.
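For reference, switching to Maximum Protection is also done through the broker. A minimal sketch, assuming a standby here called 'standby_db' (a placeholder) that must ship redo in SYNC mode:

DGMGRL> EDIT DATABASE 'standby_db' SET PROPERTY LogXptMode = 'SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxProtection;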
 
Of course, every strategy has pros and cons. Observer in the primary site is easier, and it may allow you to use a less strict protection mode while still keeping good transaction protection, but it does not handle a full datacenter failure. Observer in the standby site can protect from datacenter failure, but you may need to handle more “false positive” switches of the primary DB, or even a complete primary shutdown (as related here) in case of a standby datacenter failure. Add the fact that maybe your application needs to allow some data loss or, if not, you may need to operate in Maximum Protection. Observer in a third site can be the best option for data protection, but it will be more expensive and heavily dependent on the network and on operations.
 
As you can see, there is no easy design. Even a simple choice, like the observer location, leaves plenty of decisions to be made. With Oracle 12.2 and beyond (including 19c) you have better options to handle it (I will go over this in the next post, about multiple observers), but there is still no 100% solution that covers all possible scenarios.
Martin Bracher, Dr. Martin Wunderli and Torsten Rosenwald covered this well in the past. You can check their presentations here:
 
https://www.doag.org/formes/pubfiles/303232/2008-K-IT-Bracher-Dataguard_Observer_ohne_Rechenzentrum.pdf
https://www.trivadis.com/sites/default/files/downloads/fsfo_understood_decus07.pdf
https://www.doag.org/formes/pubfiles/218046/FSFO.pdf
I recommend reading the docs too:
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/data-guard-broker.pdf
https://www.oracle.com/technetwork/database/availability/maa-roletransitionbp-2621582.pdf
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sbydb/data-guard-concepts-and-administration.pdf
https://docs.oracle.com/database/121/HABPT/E40019-02.pdf
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


Exadata X8, Second look
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Exadata X8, Second look

Every year Oracle releases a new version of Exadata with plenty of new things. We have the natural evolution of hardware (more memory, more CPU…) and sometimes new features on the software side. The point of this post is not the HW and SW improvements, but something hidden in the small print of the new X8.
In the Exadata X8 datasheet you can read about the new Extended (XT) Storage Server, and for me this was the best thing in the new release. With the new XT you can have, in the same appliance (and for your database usage), dedicated storage that you can use for some good and cool things.
Think about it: today, if you have a huge partitioned table in your database, you store in the same place (sharing the same disks) the partition that you use in almost every transaction and the partition that you use once a month. The same can occur if you have a DSS database mixing OLTP and DW loads, or if you mix DEV/TST/PRD databases in the same appliance.
And on Exadata you “waste” features like flash, Smart Scan, storage server memory and, most importantly, money (because the storage server needs to be licensed) for this “not used data” too. Until the X8 there was nothing you could do: the options to offload this data outside Exadata are complicated, require intervention on the database side, and on top of that you need to sustain different hardware.
So, why is it more “Efficient”, “Simple”, “Secure”, “Scalable” and “Compatible”? Because in the same appliance you can have, transparently from/for the database side and using the same DB features (like TDE, for example), two tiers of storage. You can create one additional diskgroup on the XT storage servers and move your archived data there. Simple as that. Another option, if you use OVM, is to dedicate the XT servers to a DEV/TST VM.
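Just to give an idea of how transparent it is on the database side, a rough sketch with made-up names (it assumes grid disks were already created on the XT cells and an ASM diskgroup called DATAXT was built on top of them):

-- Tablespace for cold data, placed on the diskgroup backed by the XT cells
SQL> CREATE TABLESPACE ARCH_DATA DATAFILE '+DATAXT' SIZE 100G AUTOEXTEND ON NEXT 10G;

-- Move an old partition of a hypothetical SALES table to the cheap storage
SQL> ALTER TABLE SALES MOVE PARTITION SALES_2015 TABLESPACE ARCH_DATA UPDATE INDEXES;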
If you read the data sheet, you see a quick remark that the “storage software is optional”. But this is something that needs clarification. In the System Overview docs for version 19.2 (page 284, or here) you can see two interesting things:
As you can see, you still have the Exadata software installed on the XT server, but all the SQL offload features (like Smart Scan; about storage indexes I am not sure) are disabled. By the way, regarding the Exadata software, there is no other option: it is required to be there to share the disks through iDB, because basically this is not an ODA with a “JBOD/NtoN/X” connection between database servers and disks. But, as said, it is stripped of the offload features AND you don’t need to pay the expensive storage license to use it.
About the hardware itself, it is the same X8-2L model as the HC, with just one processor, less memory, no flash and 14 TB disks. And you can’t upgrade the memory. The XT servers are only for “Flexible Configurations”, with at least two XT servers in the config.
So, with the XT server Oracle added new devices to Exadata that increase the available options for database usage, while at the same time it is completely transparent and cheaper. You don’t need to configure fancy things to offload your data outside Exadata, you use the same SQL to read the data, and you continue to have a good response time. Maybe you can use ASM Flex disk groups with XT too, to have more flexibility. And if you use OVM you can dedicate these XT servers to DEV/TST VMs as well. On the maintenance side, you have the same procedure to upgrade the storage software stack.
The bad part of XT, and I believe this will happen, is that some customers will buy an appliance with a lot of XT servers and will complain that they “bought Exadata and it is slow”. So, if you are a sales person, please inform the customer correctly about the intended usage of XT servers.
In the end, if you think about it, Exadata has been in the market since 2010 with SQL offloading/Smart Scan/storage indexes and there is nothing in the market that beats it. Of course there is hardware that is more powerful than Exadata. You can have a fancy new all-flash storage, but if you do a full table scan over a 1 TB table you still load that amount from storage into database memory; with Exadata the story is different. Remember that Exadata is not just hardware, it is software too.
 

 

I wrote an article about this new option for Exadata X8:
http://www.fernandosimon.com/blog/exadata-x8-second-look/
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


ODA REIMAGE
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

ODA REIMAGE

The idea of reimaging an ODA is to refresh the environment without the need to jump from version to version to reach the latest one, or even to rescue the system from an OS failure/crash. The process to do a reimage is described in the official documentation, but unfortunately it can be very tricky because the information (the order and the steps) is not 100% clear. The idea here is to show you how to reimage using version 18 (18.3 in this example), which was the latest available.

 

In summary, the process is executed in this order:
  • ILOM: Boot the ISO
  • Prepare to create the appliance
  • Upload GI and DB base version to the repository
  • Linux: Create the appliance
  • Firmware and patch
  • Create Oracle Homes and the databases
  • Finish and clean the install
The environment in this scenario is:
  • ODA HA: ODA HA version, with two nodes and SSD disks available.
  • Reimage with version 18: 18.3 at the moment the reimage was done, but you can use any 18.x version.
  • Databases: 11, 12 and 18 as Oracle Home and available versions.
Before you even start the reimage, there are two preliminary steps. The first is to verify the hardware for failures. Disks, memory, CPU, motherboard: whatever error exists needs to be cleared before the start. If not, you may hit errors during the process. To check for HW errors you can log in to the ILOM in the browser and look for incidents/errors (this varies from version to version of the ODA, but usually the errors are visible on the first page of the ILOM), or you can log in to the ILOM console and execute the command “show faulty”. Whichever way you choose, check both nodes. If you have access to the system, execute smartctl -a /dev/DISK, query for errors and look at the column “Total uncorrected errors”. Another option is to check whether a disk is in a warning state using the command smartctl -q errorsonly -H -l xerror /dev/DISK; if there is a problem, the report will be similar to “SMART Health Status: WARNING: ascq=0x97 [asc=b, ascq=97]”. For any errors, open an SR and wait for the maintenance from the Oracle HW team. A quick loop over all disks is shown below.
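A minimal sketch of such a check (run as root on both nodes; adjust the device pattern for your ODA model):

# Print only the disks that report SMART problems
for d in /dev/sd[a-z] /dev/sd[a-z][a-z]; do
    [ -b "$d" ] || continue          # skip if the glob did not match a real block device
    echo "== $d =="
    smartctl -q errorsonly -H -l xerror "$d"
done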

 

The second step is to save/back up every config that you need. A backup of the /etc folder is a good start. Another hint is to execute “ifconfig” and “route” to keep an output of the IPs that you have and of the routing table (the VLAN list is important too, if used). For the Oracle files, back up tnsnames.ora, listener.ora, and sqlnet.ora for every OH or GI home that you have. These files are just examples; it depends on each system and its needs. Adjust if you need something additional.
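A minimal sketch of that backup (the destination /tmp/pre_reimage and the paths are just examples; remember to copy the result off the ODA, since the reimage wipes the local disks):

# Run on both nodes before booting the ISO
mkdir -p /tmp/pre_reimage
cp -a /etc /tmp/pre_reimage/etc                  # OS configuration
ifconfig -a > /tmp/pre_reimage/ifconfig.txt      # current IPs/VLAN interfaces
route -n    > /tmp/pre_reimage/route.txt         # routing table
# Oracle network files; repeat for every OH/GI home that you have
cp $ORACLE_HOME/network/admin/*.ora /tmp/pre_reimage/
tar czf /tmp/pre_reimage.tar.gz -C /tmp pre_reimage   # then scp this file somewhere safe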

 

ILOM

 

The reimage process starts by mounting the ISO in the ILOM and booting both nodes from it. After the boot, the installation runs automatically, and at the end you have a Linux system ready to create the appliance.
Before starting the reimage, one explanation: since the machine was already in use, the disks need to be cleaned up. The first option is to clean up before the reimage, using the software from the currently running Linux/ODA/OAK. The other is to run the cleanup after the reimage finishes and before creating the appliance (the option that I will use here).

 

The steps to reimage from the ILOM are done in this order:
  • Log in to the ILOM of both nodes using the browser.
  • Navigate to the “Host Management” menu and choose the “Host Control” submenu.
    1. Select the option “CDROM” in the list and then click Save.
  • Navigate to the “Remote Control” menu and choose the “Redirection” submenu.
    1. Click on “Launch Remote Control Console”.
    2. You may get a confirmation to run Java (depending on how Java is configured on your machine) or be asked to accept the certificate. Always click yes/agree/continue.
  • In the newly opened window select the menu “KVMS” and then the submenu “Storage”.
  • In “Storage Devices” do:
    1. Click on “Add”.
    2. Select the ISO file and then click “Select”.
    3. Verify that the ISO appears in the “Path” and that the “Device Type” is “ISO Image” in the Storage Devices menu. Select the line with the ISO and then click “Connect”.
    4. After you click “Connect”, the information marked in red appears on the screen and the button now reads “Disconnect”. You just need to click “OK”.
    5. DO NOT CLOSE the console.
  • Go back to the browser, click the menu “Host Management” and then the submenu “Power Control”.
    1. Select the option “Power Cycle” and then click “Save”.
    2. You may get a question asking whether you are sure about the reboot. Just click OK.
  • You will see in the console:
    1. The reboot
    2. The process booting the ISO
    3. The installation of the new ODA image
In the end, you have the Linux console.

 

You can see all the steps above in the image gallery below; each image is linked to the corresponding step.

 

 

 
Remember to do this on both nodes; you can do both at the same time. The installation time depends mostly on the network, because it involves uploading the ISO from your machine to the ODA. The post-script part of the installation tends to take a long time.
Please do not close the remote console, because this kills the ISO mounting device and the installation will crash. There is no problem if the browser login times out; the installation will continue.
One important thing about this part: do not update the ILOM SP or any other HW firmware now. This will be done in the next steps.
Recently Oracle changed the ODA base page and removed the links to the ISO files, but patch 27604623 contains the Oracle Database Appliance Bare Metal ISO Image to download. Just to be clear, I am reimaging for bare metal (not OVM).

 

Prepare to create the appliance

After the “reimage” you need to configure and create the appliance, but first we need to execute some steps. Now, to connect to the Linux console we have two options: the same Java console opened to do the reimage, or an ssh to the ILOM IP and starting the console (start /SP/console). I will use the latter because it allows working (copying and pasting commands) more smoothly. Example (root/welcome1 is the default user/password for a reimaged ODA):

 

Oracle(R) Integrated Lights Out Manager




Version 3.2.9.23 r116695




Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.




Warning: HTTPS certificate is set to factory default.




Hostname: odak1-ilom




-> start /SP/console

Are you sure you want to start /SP/console (y/n)? y




Serial console started.  To stop, type ESC (







Oracle Linux Server release 6.10

Kernel 4.1.12-124.18.6.el6uek.x86_64 on an x86_64




test0 login: root

[root@test0 ~]#


Cleanup

As mentioned before, we need to clean the ODA disk headers. I start by checking the headers with the command “cleanup.pl -checkHeader”:

 

[root@test0 ~]# /opt/oracle/oak/onecmd/cleanup.pl -checkHeader

Command may take few minutes...

OS Disk               Disk Type    OAK Header   ASM Header(p1)       ASM Header(p2)

 /dev/sda                HDD        Found           Found                Found

 /dev/sdb                HDD        Found           Found                Found

 /dev/sdaa               HDD        Found           Found                Found

 /dev/sdab               HDD        Found           Found                Found

 /dev/sdac               HDD        Found           Found                Found

 /dev/sdad               HDD        Found           Found                Found

 /dev/sdae               HDD        Found           Found                Found

 /dev/sdaf               HDD        Found           Found                Found

 /dev/sdag               HDD        Found           Found                Found

 /dev/sdah               HDD        Found          Erased               Erased

 /dev/sdai               HDD        Found           Found                Found

 /dev/sdaj               HDD        Found           Found                Found

 /dev/sdak               HDD        Found           Found                Found

 /dev/sdal               HDD        Found           Found                Found

 /dev/sdam               HDD        Found           Found                Found

 /dev/sdan               HDD        Found           Found                Found

 /dev/sdao               SSD        Found           Found       p2 not created

 /dev/sdap               SSD        Found           Found       p2 not created

 /dev/sdaq               SSD        Found           Found       p2 not created

 /dev/sdar               SSD        Found           Found       p2 not created

 /dev/sdas               SSD        Found           Found       p2 not created

 /dev/sdat               SSD        Found           Found       p2 not created

 /dev/sdau               SSD        Found           Found       p2 not created

 /dev/sdav               SSD        Found           Found       p2 not created

[root@test0 ~]#


Notice above that some ASM headers were found. To clean them we execute “/opt/oracle/oak/onecmd/cleanup.pl” with the option “-erasedata”. The cleanup script does more than just erase the disk headers: it cleans all the configuration (hostname, IPs, users), and because of this it needs to be executed on both nodes of an HA environment. This equalizes both nodes with the same config and avoids an error when you create the appliance. Example:

 

[root@test0 ~]# /opt/oracle/oak/onecmd/cleanup.pl -erasedata

INFO: *******************************************************************

INFO: ** Starting process to cleanup provisioned host oak1             **

INFO: *******************************************************************

WARNING: Secure Erase is an irrecoverable process. All data on the disk

WARNING: will be erased, and cannot be recovered by any means. On X3-2,

WARNING: X4-2, and X5-2 HA, the secure erase process can take more than

WARNING: 10 hours. If you need this data, then take a complete backup

WARNING: before proceeding.

Do you want to continue (yes/no) : yes

INFO:

....

....

INFO: Executing </opt/oracle/oak/bin/odaeraser.py -a -f -v >

Start erasing disks on the system

On some platforms, this will take several hours to finish, please wait

Do you want to continue (yes|no) [yes] ? yes

Number of disks are processing: 0




Disk     Vendor     Model                    Erase method         Status       Time(seconds)

e0_pd_00 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_01 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_02 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_03 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_04 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_05 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_06 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_07 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_08 HGST       H7280A520SUN8.0T         SCSI crypto erase    success               0

e0_pd_09 HGST       H7280A520SUN8.0T         SCSI cryp[  777.976350] reboot: Restarting system

...

...

After the node reboots you can check the disk headers again (note that the cleanup changes the hostname to “oak”):

 

[root@oak2 ~]# /opt/oracle/oak/onecmd/cleanup.pl -checkHeader

Command may take few minutes...

OS Disk               Disk Type    OAK Header   ASM Header(p1)       ASM Header(p2)

 /dev/sda                HDD       Erased          Erased              UnKnown

 /dev/sdb                HDD       Erased          Erased              UnKnown

 /dev/sdaa               HDD       Erased          Erased              UnKnown

 /dev/sdab               HDD       Erased          Erased              UnKnown

 /dev/sdac               HDD       Erased          Erased              UnKnown

 /dev/sdad               HDD       Erased          Erased              UnKnown

 /dev/sdae               HDD       Erased          Erased              UnKnown

 /dev/sdaf               HDD       Erased          Erased              UnKnown

 /dev/sdag               HDD       Erased          Erased              UnKnown

 /dev/sdah               HDD       Erased          Erased              UnKnown

 /dev/sdai               HDD       Erased          Erased              UnKnown

 /dev/sdaj               HDD       Erased          Erased              UnKnown

 /dev/sdak               HDD       Erased          Erased              UnKnown

 /dev/sdal               HDD       Erased          Erased              UnKnown

 /dev/sdam               HDD       Erased          Erased              UnKnown

 /dev/sdan               HDD       Erased          Erased              UnKnown

 /dev/sdao               SSD       Erased          Erased              UnKnown

 /dev/sdap               SSD       Erased          Erased              UnKnown

 /dev/sdaq               SSD       Erased          Erased              UnKnown

 /dev/sdar               SSD       Erased          Erased              UnKnown

 /dev/sdas               SSD       Erased          Erased              UnKnown

 /dev/sdat               SSD       Erased          Erased              UnKnown

 /dev/sdau               SSD       Erased          Erased              UnKnown

 /dev/sdav               SSD       Erased          Erased              UnKnown



First network

After the cleanup we can create the basic network. To do that (it needs to be done on both nodes), just execute the script “/opt/oracle/dcs/bin/odacli” with the parameter “configure-firstnet”. Look:

 

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli configure-firstnet

Select the Interface to configure the network on (bond0 bond1) [bond0]:

Configure DHCP on bond0 (yes/no) [no]:

INFO: You have chosen Static configuration

Use VLAN on bond0 (yes/no) [no]:yes

Configure VLAN on bond0, input VLAN ID [2 - 4094] 2999

INFO: using network interface bond0.2999

Enter the IP address to configure : 200.200.67.100

Enter the Netmask address to configure : 255.255.255.0

Enter the Gateway address to configure[200.200.67.1] :

INFO: Restarting the network

Shutting down interface bond0:  [  OK  ]

Shutting down interface bond1:  [  OK  ]

Shutting down interface ib0:  [  OK  ]

Shutting down interface ib1:  [  OK  ]

Shutting down interface ibbond0:  [  OK  ]

Shutting down loopback interface:  [  OK  ]

Bringing up loopback interface:  [  OK  ]

Bringing up interface bond0:  [  OK  ]

Bringing up interface bond1:  [  OK  ]

Bringing up interface ibbond0:  Determining if ip address 192.168.16.24 is already in use for device ibbond0...

[  OK  ]

Bringing up interface bond0.2999:  Determining if ip address 200.200.67.100 is already in use for device bond0.2999...

[  OK  ]

INFO: Restarting the DCS agent

initdcsagent stop/waiting

initdcsagent start/running, process 23750

[root@oak1 ~]#

And in the second node:

 

[root@oak2 ~]# /opt/oracle/dcs/bin/odacli configure-firstnet

Select the Interface to configure the network on (bond0 bond1) [bond0]:

Configure DHCP on bond0 (yes/no) [no]:

INFO: You have chosen Static configuration

Use VLAN on bond0 (yes/no) [no]:yes

Configure VLAN on bond0, input VLAN ID [2 - 4094] 2999

INFO: using network interface bond0.2999

Enter the IP address to configure : 200.200.67.101

Enter the Netmask address to configure : 255.255.255.0

Enter the Gateway address to configure[200.200.67.1] :

INFO: Restarting the network

Shutting down interface bond0:  [  OK  ]

Shutting down interface bond1:  [  OK  ]

Shutting down interface ib0:  [  OK  ]

Shutting down interface ib1:  [  OK  ]

Shutting down interface ibbond0:  [  OK  ]

Shutting down loopback interface:  [  OK  ]

Bringing up loopback interface:  [  OK  ]

Bringing up interface bond0:  [  OK  ]

Bringing up interface bond1:  [  OK  ]

Bringing up interface ibbond0:  Determining if ip address 192.168.16.25 is already in use for device ibbond0...

[  OK  ]

Bringing up interface bond0.2999:  Determining if ip address 200.200.67.101 is already in use for device bond0.2999...

[  OK  ]

INFO: Restarting the DCS agent

initdcsagent stop/waiting

initdcsagent start/running, process 10775

[root@oak2 ~]#



The most important thing is to define the VLAN correctly (if used, as in this case). You can rerun the command if needed. After that, you can exit the ILOM console and ssh directly to the ODA node.

Upload GI and DB base version to the repository

After you configure the first network, you need to upload the files that fill the internal ODA repository and allow the “create appliance” step to deploy ASM and the base Oracle Home. There are several steps, but you execute them on just one node.
Again, Oracle recently changed the ODA base page (https://support.oracle.com/epmos/faces/DocContentDisplay?id=888888.1) and removed the links to the clone files for the OH and GI. But the patch numbers are 27604593 for GI and 27604558 for RDBMS. Both are for the bare metal image; if you are using a VM installation they are different (but you can find them with a patch search inside Oracle MOS).
The first step is to upload the files “p27604558_183000_Linux-x86-64.zip” and “p27604593_183000_Linux-x86-64.zip” to the first node and unzip them. Remember that the file names will change if you reimage using another version. Here I used “/tmp”:

 

[root@oak1 ~]# mkdir /tmp/deploy

[root@oak1 ~]# cd /tmp/deploy

[root@oak1 deploy]# ###### EXECUTE THE SCP OR COPY FILES TO THE FOLDER########

[root@oak1 deploy]# unzip -qa p27604558_183000_Linux-x86-64.zip

[root@oak1 deploy]# unzip -qa p27604593_183000_Linux-x86-64.zip

replace README.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: A

[root@oak1 deploy]#

[root@oak1 deploy]#

[root@oak1 deploy]# ls -l

total 20559868

-rw-r--r-- 1 root root 4322872013 Dec 18 08:24 odacli-dcs-18.3.0.0.0-180905-DB-18.0.0.0.zip

-rwxr-xr-x 1 root root 6193475789 Dec 18 08:03 odacli-dcs-18.3.0.0.0-181205-GI-18.3.0.0.zip

-rw-r--r-- 1 root root 4322873299 May 22 08:37 p27604558_183000_Linux-x86-64.zip

-rw-r--r-- 1 root root 6193477129 May 22 08:35 p27604593_183000_Linux-x86-64.zip

-rw-r--r-- 1 root root       2044 Dec 18 08:05 README.txt

[root@oak1 deploy]#


After that we can upload these files to the internal ODA repository:

 

[root@oak1 deploy]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/deploy/odacli-dcs-18.3.0.0.0-181205-GI-18.3.0.0.zip,/tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-18.0.0.0.zip

{

  "jobId" : "b66269aa-e007-4304-8074-7ba7e9c8c5c2",

  "status" : "Created",

  "message" : "/tmp/deploy/odacli-dcs-18.3.0.0.0-181205-GI-18.3.0.0.zip,/tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-18.0.0.0.zip",

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 08:51:14 AM UTC",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "May 22, 2019 08:51:14 AM UTC"

}

[root@oak1 deploy]#


Note that a job is created, and you can check its progress with “/opt/oracle/dcs/bin/odacli describe-job -i”, passing the jobId as parameter:

 

[root@oak1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i b66269aa-e007-4304-8074-7ba7e9c8c5c2





Job details

----------------------------------------------------------------

                     ID:  b66269aa-e007-4304-8074-7ba7e9c8c5c2

            Description:  Repository Update

                 Status:  Running

                Created:  May 22, 2019 8:51:14 AM UTC

                Message:  /tmp/deploy/odacli-dcs-18.3.0.0.0-181205-GI-18.3.0.0.zip,/tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-18.0.0.0.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Check AvailableSpace                     May 22, 2019 8:51:14 AM UTC         May 22, 2019 8:51:14 AM UTC         Success

Setting up ssh equivalance               May 22, 2019 8:51:15 AM UTC         May 22, 2019 8:51:15 AM UTC         Success

Copy BundleFile                          May 22, 2019 8:51:15 AM UTC         May 22, 2019 8:51:15 AM UTC         Running




[root@oak1 deploy]#

[root@oak1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i b66269aa-e007-4304-8074-7ba7e9c8c5c2




Job details

----------------------------------------------------------------

                     ID:  b66269aa-e007-4304-8074-7ba7e9c8c5c2

            Description:  Repository Update

                 Status:  Success

                Created:  May 22, 2019 8:51:14 AM UTC

                Message:  /tmp/deploy/odacli-dcs-18.3.0.0.0-181205-GI-18.3.0.0.zip,/tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-18.0.0.0.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Check AvailableSpace                     May 22, 2019 8:51:14 AM UTC         May 22, 2019 8:51:14 AM UTC         Success

Setting up ssh equivalance               May 22, 2019 8:51:15 AM UTC         May 22, 2019 8:51:15 AM UTC         Success

Copy BundleFile                          May 22, 2019 8:51:15 AM UTC         May 22, 2019 8:52:21 AM UTC         Success

Validating CopiedFile                    May 22, 2019 8:52:24 AM UTC         May 22, 2019 8:52:48 AM UTC         Success

Unzip bundle                             May 22, 2019 8:52:48 AM UTC         May 22, 2019 8:54:17 AM UTC         Success

Unzip bundle                             May 22, 2019 8:54:17 AM UTC         May 22, 2019 8:55:47 AM UTC         Success

Delete PatchBundles                      May 22, 2019 8:55:47 AM UTC         May 22, 2019 8:55:48 AM UTC         Success

Removing ssh keys                        May 22, 2019 8:55:48 AM UTC         May 22, 2019 8:55:49 AM UTC         Success




[root@oak1 deploy]#


Create appliance

After you clean the disk headers, prepare the first network and upload the files to the repository, you can create the appliance. This means configuring the operating system, groups, users, folders, GI, ASM, and the Oracle database. There are two ways to create it: one is the web interface (which replaced the old Java app that created the XML file), and the other is the CLI. Unfortunately, at the time I wrote this, if you use VLANs you can’t create the appliance with the web interface, only with the CLI.
The CLI uses the script “/opt/oracle/dcs/bin/odacli create-appliance”, which receives a JSON file as a parameter. There is not much information about the parameters of this JSON file, just the readme with some examples and one web page with other examples. But in both there is no really good explanation of each option; look:
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.3/cmtxd/create-appliance-using-json-file.html#GUID-42250FD2-EA91-4457-9ED7-CA3A2A863B40

 

Fortunately, most of the options are self-explanatory; the other parts I will explain below. First, the full JSON file that I used:

 

[root@oak1 ~]# cat odas.json

{

  "instance" : {

    "name" : "odas",

    "instanceBaseName" : "odas",

    "systemPassword" : null,

    "dbEdition" : "EE",

    "timeZone" : "Europe/Luxembourg",

    "ntpServers" : [ "200.200.13.125" ],

    "dnsServers" : [ "200.200.1.32", "200.200.1.33" ],

    "domainName" : "xxxx.xxxx.xxx",

    "isRoleSeparated" : true,

    "osUserGroup" : {

      "groups" : [ {

        "groupId" : 1001,

        "groupName" : "oinstall",

        "groupRole" : "oinstall"

      }, {

        "groupId" : 1003,

        "groupName" : "dbaoper",

        "groupRole" : "dbaoper"

      }, {

        "groupId" : 1002,

        "groupName" : "dba",

        "groupRole" : "dba"

      }, {

        "groupId" : 1006,

        "groupName" : "asmadmin",

        "groupRole" : "asmadmin"

      }, {

        "groupId" : 1005,

        "groupName" : "asmoper",

        "groupRole" : "asmoper"

      }, {

        "groupId" : 1004,

        "groupName" : "asmdba",

        "groupRole" : "asmdba"

      } ],

      "users" : [ {

        "userId" : 1001,

        "userName" : "oracle",

        "userRole" : "oracleUser"

      }, {

        "userId" : 1000,

        "userName" : "grid",

        "userRole" : "gridUser"

      } ]

    },

    "objectStoreCredentials" : null

  },

  "nodes" : [ {

    "nodeNumber" : "0",

    "nodeName" : "odas1",

    "localDisk" : null,

    "network" : [ {

      "name" : null,

      "nicName" : "bond0.2999",

      "vlanId" : null,

      "ipAddress" : "200.200.67.100",

      "subNetMask" : "255.255.255.0",

      "gateway" : "200.200.67.1",

      "interfaceType" : null,

      "networkType" : [ "Public" ],

      "isDefaultNetwork" : true

    } ],

    "ilom" : null

  }, {

    "nodeNumber" : "1",

    "nodeName" : "odas2",

    "localDisk" : null,

    "network" : [ {

      "name" : null,

      "nicName" : "bond0.2999",

      "vlanId" : null,

      "ipAddress" : "200.200.67.101",

      "subNetMask" : "255.255.255.0",

      "gateway" : "200.200.67.1",

      "interfaceType" : null,

      "networkType" : [ "Public" ],

      "isDefaultNetwork" : true

    } ],

    "ilom" : null

  } ],

  "grid" : {

    "diskGroup" : [ {

      "diskGroupName" : "DATA",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 90

    }, {

      "diskGroupName" : "RECO",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 10

    }, {

      "diskGroupName" : "REDO",

      "redundancy" : "HIGH",

      "disks" : null,

      "diskPercentage" : null

    }, {

      "diskGroupName" : "FLASH",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : null

    } ],

    "scan" : {

      "scanName" : "odas-scan",

      "ipAddresses" : [ "200.200.67.104", "200.200.67.105" ]

    },

    "vip" : [ {

      "nodeNumber" : "0",

      "vipName" : "odas1-vip",

      "ipAddress" : "200.200.67.102"

    }, {

      "nodeNumber" : "1",

      "vipName" : "odas2-vip",

      "ipAddress" : "200.200.67.103"

    } ],

    "language" : "en",

    "enableAFD" : "TRUE",

    "gridVersion" : null

  },

  "database" : {

    "dbName" : "mfod",

    "databaseUniqueName" : "mfod_ora",

    "dbVersion" : "18.3.0.0.180717",

    "dbHomeId" : null,

    "instanceOnly" : false,

    "registerOnly" : null,

    "isCdb" : true,

    "pdBName" : "pdb1",

    "pdbAdminuserName" : "pdbuser",

    "enableTDE" : false,

    "adminPassword" : "WWeeeeee99##",

    "dbType" : "RAC",

    "dbTargetNodeNumber" : null,

    "dbClass" : "OLTP",

    "dbShape" : "odb1",

    "dbStorage" : "ASM",

    "dbRedundancy" : null,

    "dbCharacterSet" : {

      "characterSet" : "AL32UTF8",

      "nlsCharacterset" : "AL16UTF16",

      "dbTerritory" : "AMERICA",

      "dbLanguage" : "AMERICAN"

    },

    "dbConsoleEnable" : false,

    "backupConfigId" : null,

    "backupConfigName" : null,

    "dbOnFlashStorage" : true,

    "dbEdition" : "EE",

    "rmanBkupPassword" : null,

    "dbDomainName" : null,

    "enableFlashCache" : null,

    "level0BackupDay" : null

  },

  "asr" : null

}

[root@oak1 ~]#


The file starts with a basic config where you define the name and the instanceBaseName, which set some basic internal identification parameters, plus things like the time zone.

The first thing that needs attention is isRoleSeparated and osUserGroup. If you set the first one to false, you don’t need to specify the grid users; otherwise, you need to specify them.

In the nodes option you specify the hostnames, IPs, and gateway. Be careful with the nicName: it has to be the same name you used when you configured the first network a few steps above. The ilom parameter, since I am not changing it now (and I think that in a reimage scenario you would not change it at this moment), remains null.

The grid parameter defines the disk division. You can choose arbitrary values for diskPercentage, and for disks you can select which ones go to each diskgroup if needed. For disks, leave them undefined and ODA will divide them automatically and accordingly. The key point here is that, even though by default the FLASH and REDO diskgroups are created with default values (note that I did not even define a disk percentage for them), if you forget to specify them they are not created and the disks will not be used. I already wrote about a problem in this particular area and you can read it here: http://www.fernandosimon.com/blog/oda-json-and-flash/

The database parameter defines the information for the database creation. You have the basics like the database name, unique name, CDB (or not), and PDB name. The trickiest ones here are dbVersion (which needs to match what you uploaded to the repository), adminPassword (which will be used for the root password after you create the appliance), dbShape (where you choose one of the available ODA shapes), and dbStorage (where you choose between ASM and ACFS; 11g only supports ACFS). Again, for ASM vs. ACFS I already wrote about that here: http://www.fernandosimon.com/blog/oda-acfs-and-asm-dilemma/

Important hint: if you want to create a database in a version other than 18c, you need to upload the corresponding files/clone images to the repository beforehand. An example is shown below.
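For example, something like this (a sketch; the file name is only illustrative for a 12.1 clone, check MOS note 888888.1 for the exact patch and file name of the version you need):

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/deploy/odacli-dcs-18.3.0.0.0-<date>-DB-12.1.0.2.zip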

To create it, just execute:

 

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli create-appliance -r odas.json

{

  "instance" : {

    "name" : "odas",

    "instanceBaseName" : "odas",

    "systemPassword" : null,

    "dbEdition" : "EE",

    "timeZone" : "Europe/Luxembourg",

    "ntpServers" : [ "200.200.13.125" ],

    "dnsServers" : [ "200.200.1.32", "200.200.1.33" ],

    "domainName" : "xxxx.xxxx.xxx",

    "isRoleSeparated" : true,

    "osUserGroup" : {

      "groups" : [ {

        "groupId" : 1001,

        "groupName" : "oinstall",

        "groupRole" : "oinstall"

      }, {

        "groupId" : 1003,

        "groupName" : "dbaoper",

        "groupRole" : "dbaoper"

      }, {

        "groupId" : 1002,

        "groupName" : "dba",

        "groupRole" : "dba"

      }, {

        "groupId" : 1006,

        "groupName" : "asmadmin",

        "groupRole" : "asmadmin"

      }, {

        "groupId" : 1005,

        "groupName" : "asmoper",

        "groupRole" : "asmoper"

      }, {

        "groupId" : 1004,

        "groupName" : "asmdba",

        "groupRole" : "asmdba"

      } ],

      "users" : [ {

        "userId" : 1001,

        "userName" : "oracle",

        "userRole" : "oracleUser"

      }, {

        "userId" : 1000,

        "userName" : "grid",

        "userRole" : "gridUser"

      } ]

    },

    "objectStoreCredentials" : null

  },

  "nodes" : [ {

    "nodeNumber" : "0",

    "nodeName" : "odas1",

    "localDisk" : null,

    "network" : [ {

      "name" : null,

      "nicName" : "bond0.2999",

      "vlanId" : null,

      "ipAddress" : "200.200.67.100",

      "subNetMask" : "255.255.255.0",

      "gateway" : "200.200.67.1",

      "interfaceType" : null,

      "networkType" : [ "Public" ],

      "isDefaultNetwork" : true

    } ],

    "ilom" : null

  }, {

    "nodeNumber" : "1",

    "nodeName" : "odas2",

    "localDisk" : null,

    "network" : [ {

      "name" : null,

      "nicName" : "bond0.2999",

      "vlanId" : null,

      "ipAddress" : "200.200.67.101",

      "subNetMask" : "255.255.255.0",

      "gateway" : "200.200.67.1",

      "interfaceType" : null,

      "networkType" : [ "Public" ],

      "isDefaultNetwork" : true

    } ],

    "ilom" : null

  } ],

  "grid" : {

    "diskGroup" : [ {

      "diskGroupName" : "DATA",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 90

    }, {

      "diskGroupName" : "RECO",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 10

    }, {

      "diskGroupName" : "REDO",

      "redundancy" : "HIGH",

      "disks" : null,

      "diskPercentage" : null

    }, {

      "diskGroupName" : "FLASH",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : null

    } ],

    "scan" : {

      "scanName" : "odas-scan",

      "ipAddresses" : [ "200.200.67.104", "200.200.67.105" ]

    },

    "vip" : [ {

      "nodeNumber" : "0",

      "vipName" : "odas1-vip",

      "ipAddress" : "200.200.67.102"

    }, {

      "nodeNumber" : "1",

      "vipName" : "odas2-vip",

      "ipAddress" : "200.200.67.103"

    } ],

    "language" : "en",

    "enableAFD" : "TRUE",

    "gridVersion" : null

  },

  "database" : {

    "dbName" : "mfod",

    "databaseUniqueName" : "mfod_ora",

    "dbVersion" : "18.3.0.0.180717",

    "dbHomeId" : null,

    "instanceOnly" : false,

    "registerOnly" : null,

    "isCdb" : true,

    "pdBName" : "pdb1",

    "pdbAdminuserName" : "pdbuser",

    "enableTDE" : false,

    "adminPassword" : "WWeeeeee99##",

    "dbType" : "RAC",

    "dbTargetNodeNumber" : null,

    "dbClass" : "OLTP",

    "dbShape" : "odb1",

    "dbStorage" : "ASM",

    "dbRedundancy" : null,

    "dbCharacterSet" : {

      "characterSet" : "AL32UTF8",

      "nlsCharacterset" : "AL16UTF16",

      "dbTerritory" : "AMERICA",

      "dbLanguage" : "AMERICAN"

    },

    "dbConsoleEnable" : false,

    "backupConfigId" : null,

    "backupConfigName" : null,

    "dbOnFlashStorage" : true,

    "dbEdition" : "EE",

    "rmanBkupPassword" : null,

    "dbDomainName" : null,

    "enableFlashCache" : null,

    "level0BackupDay" : null

  },

  "asr" : null

}

{

  "jobId" : "f07961fa-4192-489b-9a55-53165a012376",

  "status" : "Created",

  "message" : null,

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 09:03:36 AM UTC",

  "resourceList" : [ ],

  "description" : "Provisioning service creation",

  "updatedTime" : "May 22, 2019 09:03:36 AM UTC"

}

[root@oak1 ~]#


Note that you receive a jobId if everything is fine. To follow the progress, just describe the job:

 

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli describe-job -i f07961fa-4192-489b-9a55-53165a012376




Job details

----------------------------------------------------------------

                     ID:  f07961fa-4192-489b-9a55-53165a012376

            Description:  Provisioning service creation

                 Status:  Running

                Created:  May 22, 2019 11:03:36 AM CEST

                Message:




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

networks updation                        May 22, 2019 11:03:37 AM CEST       May 22, 2019 11:03:42 AM CEST       Success

updating network                         May 22, 2019 11:03:37 AM CEST       May 22, 2019 11:03:42 AM CEST       Success

Setting up Vlan                          May 22, 2019 11:03:38 AM CEST       May 22, 2019 11:03:42 AM CEST       Success

networks updation                        May 22, 2019 11:03:42 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

updating network                         May 22, 2019 11:03:42 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

Setting up Vlan                          May 22, 2019 11:03:43 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'asmdba'creation            May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'asmoper'creation           May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'asmadmin'creation          May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'dba'creation               May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'dbaoper'creation           May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'oinstall'creation          May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS user 'grid'creation                   May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:48 AM CEST       Success

OS user 'oracle'creation                 May 22, 2019 11:03:48 AM CEST       May 22, 2019 11:03:48 AM CEST       Success

SSH equivalance setup                    May 22, 2019 11:03:48 AM CEST       May 22, 2019 11:03:48 AM CEST       Success

Grid home creation                       May 22, 2019 11:03:49 AM CEST       May 22, 2019 11:03:49 AM CEST       Running

Creating GI home directories             May 22, 2019 11:03:49 AM CEST       May 22, 2019 11:03:49 AM CEST       Success

Cloning Gi home                          May 22, 2019 11:03:49 AM CEST       May 22, 2019 11:03:49 AM CEST       Running




[root@oak1 ~]#



And after some time:
 
[root@oak1 ~]# /opt/oracle/dcs/bin/odacli describe-job -i f07961fa-4192-489b-9a55-53165a012376




Job details

----------------------------------------------------------------

                     ID:  f07961fa-4192-489b-9a55-53165a012376

            Description:  Provisioning service creation

                 Status:  Success

                Created:  May 22, 2019 11:03:36 AM CEST

                Message:




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

networks updation                        May 22, 2019 11:03:37 AM CEST       May 22, 2019 11:03:42 AM CEST       Success

updating network                         May 22, 2019 11:03:37 AM CEST       May 22, 2019 11:03:42 AM CEST       Success

Setting up Vlan                          May 22, 2019 11:03:38 AM CEST       May 22, 2019 11:03:42 AM CEST       Success

networks updation                        May 22, 2019 11:03:42 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

updating network                         May 22, 2019 11:03:42 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

Setting up Vlan                          May 22, 2019 11:03:43 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'asmdba'creation            May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'asmoper'creation           May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'asmadmin'creation          May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'dba'creation               May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'dbaoper'creation           May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS usergroup 'oinstall'creation          May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:47 AM CEST       Success

OS user 'grid'creation                   May 22, 2019 11:03:47 AM CEST       May 22, 2019 11:03:48 AM CEST       Success

OS user 'oracle'creation                 May 22, 2019 11:03:48 AM CEST       May 22, 2019 11:03:48 AM CEST       Success

SSH equivalance setup                    May 22, 2019 11:03:48 AM CEST       May 22, 2019 11:03:48 AM CEST       Success

Grid home creation                       May 22, 2019 11:03:49 AM CEST       May 22, 2019 11:08:00 AM CEST       Success

Creating GI home directories             May 22, 2019 11:03:49 AM CEST       May 22, 2019 11:03:49 AM CEST       Success

Cloning Gi home                          May 22, 2019 11:03:49 AM CEST       May 22, 2019 11:06:36 AM CEST       Success

Cloning Gi home                          May 22, 2019 11:06:36 AM CEST       May 22, 2019 11:07:58 AM CEST       Success

Updating GiHome version                  May 22, 2019 11:07:58 AM CEST       May 22, 2019 11:08:00 AM CEST       Success

Updating GiHome version                  May 22, 2019 11:07:58 AM CEST       May 22, 2019 11:08:00 AM CEST       Success

Storage discovery                        May 22, 2019 11:08:00 AM CEST       May 22, 2019 11:19:03 AM CEST       Success

Grid stack creation                      May 22, 2019 11:19:03 AM CEST       May 22, 2019 11:53:45 AM CEST       Success

Configuring GI                           May 22, 2019 11:19:03 AM CEST       May 22, 2019 11:20:08 AM CEST       Success

Running GI root scripts                  May 22, 2019 11:20:08 AM CEST       May 22, 2019 11:32:18 AM CEST       Success

Running GI config assistants             May 22, 2019 11:37:22 AM CEST       May 22, 2019 11:40:04 AM CEST       Success

Setting AUDIT SYSLOG LEVEL               May 22, 2019 11:50:06 AM CEST       May 22, 2019 11:50:06 AM CEST       Success

Post cluster OAKD configuration          May 22, 2019 11:53:45 AM CEST       May 22, 2019 11:59:11 AM CEST       Success

Disk group 'RECO'creation                May 22, 2019 11:59:20 AM CEST       May 22, 2019 11:59:38 AM CEST       Success

Disk group 'REDO'creation                May 22, 2019 11:59:38 AM CEST       May 22, 2019 11:59:47 AM CEST       Success

Disk group 'FLASH'creation               May 22, 2019 11:59:47 AM CEST       May 22, 2019 11:59:58 AM CEST       Success

Volume 'commonstore'creation             May 22, 2019 11:59:58 AM CEST       May 22, 2019 12:00:48 PM CEST       Success

ACFS File system 'DATA'creation          May 22, 2019 12:00:48 PM CEST       May 22, 2019 12:01:13 PM CEST       Success

Database home creation                   May 22, 2019 12:01:13 PM CEST       May 22, 2019 12:05:28 PM CEST       Success

Validating dbHome available space        May 22, 2019 12:01:13 PM CEST       May 22, 2019 12:01:13 PM CEST       Success

Validating dbHome available space        May 22, 2019 12:01:13 PM CEST       May 22, 2019 12:01:13 PM CEST       Success

Creating DbHome Directory                May 22, 2019 12:01:13 PM CEST       May 22, 2019 12:01:13 PM CEST       Success

Extract DB clones                        May 22, 2019 12:01:14 PM CEST       May 22, 2019 12:04:12 PM CEST       Success

Clone Db home                            May 22, 2019 12:04:12 PM CEST       May 22, 2019 12:05:11 PM CEST       Success

Enable DB options                        May 22, 2019 12:05:11 PM CEST       May 22, 2019 12:05:27 PM CEST       Success

Run Root DB scripts                      May 22, 2019 12:05:27 PM CEST       May 22, 2019 12:05:27 PM CEST       Success

Provisioning service creation            May 22, 2019 12:05:28 PM CEST       May 22, 2019 12:23:12 PM CEST       Success

Database Creation                        May 22, 2019 12:05:28 PM CEST       May 22, 2019 12:20:27 PM CEST       Success

Change permission for xdb wallet files   May 22, 2019 12:20:27 PM CEST       May 22, 2019 12:20:27 PM CEST       Success

Add Startup Trigger to Open all PDBS     May 22, 2019 12:20:27 PM CEST       May 22, 2019 12:20:28 PM CEST       Success

SqlPatch upgrade                         May 22, 2019 12:21:53 PM CEST       May 22, 2019 12:23:08 PM CEST       Success

Running dbms_stats init_package          May 22, 2019 12:23:08 PM CEST       May 22, 2019 12:23:11 PM CEST       Success

updating the Database version            May 22, 2019 12:23:11 PM CEST       May 22, 2019 12:23:12 PM CEST       Success

users tablespace creation                May 22, 2019 12:23:12 PM CEST       May 22, 2019 12:23:15 PM CEST       Success

Install TFA                              May 22, 2019 12:23:16 PM CEST       May 22, 2019 12:28:08 PM CEST       Success




[root@oak1 ~]#


You can also follow the log if you want to see much more detailed (a lot more) information. To do that, just go to the directory /opt/oracle/dcs/log/ and check the file dcs-agent.log. Everything will be there, and it is really important to check it if some step fails.
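As a minimal sketch (the job ID is only a placeholder for the one returned by your own create-appliance call), you could follow the deployment from a second terminal with something like:

# stream the detailed agent log
tail -f /opt/oracle/dcs/log/dcs-agent.log

# or check the job status from the odacli side
/opt/oracle/dcs/bin/odacli list-jobs
/opt/oracle/dcs/bin/odacli describe-job -i <job-id-returned-by-create-appliance>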
After the end of the job, you can check the details of the appliance:

 

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli describe-system




Appliance Information

----------------------------------------------------------------

                     ID: 6ce235cf-effc-4748-9d0f-ac246e9ee819

               Platform: X5-2-HA

        Data Disk Count: 24

         CPU Core Count: 36

                Created: May 22, 2019 11:03:36 AM CEST




System Information

----------------------------------------------------------------

                   Name: odas

            Domain Name: xxxx.xxxx.xxx

              Time Zone: Europe/Luxembourg

             DB Edition: EE

            DNS Servers: 200.200.1.32 200.200.1.33

            NTP Servers: 200.200.13.125




Disk Group Information

----------------------------------------------------------------

DG Name                   Redundancy                Percentage

------------------------- ------------------------- ------------

Data                      Normal                    90

Reco                      Normal                    10

Redo                      High                      100

Flash                     Normal                    100




[root@oak1 ~]#

[root@oak1 ~]# odaadmcli show diskgroup

DiskGroups

----------

DATA

FLASH

RECO

REDO

[root@oak1 ~]#

[root@oak1 ~]# su - grid

[grid@odas1 ~]$ asmcmd

ASMCMD> lsdg

State    Type    Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  NORMAL  N         512             512   4096  4194304  108019712  108007188          6751232        50627978              0             Y  DATA/

MOUNTED  NORMAL  N         512             512   4096  4194304    1525760    1518672           381440          568616              0             N  FLASH/

MOUNTED  NORMAL  N         512             512   4096  4194304   11997184   11995736           749824         5622956              0             N  RECO/

MOUNTED  HIGH    N         512             512   4096  4194304     762880     749908           190720          186396              0             N  REDO/

ASMCMD> exit

[grid@odas1 ~]$

[grid@odas1 ~]$ logout

[root@oak1 ~]#

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli list-dbstorages




ID                                       Type   DBUnique Name        Status

---------------------------------------- ------ -------------------- ----------

d26bf34f-9379-4872-839b-ae0af6926aae     Asm    mfod_ora             Configured

[root@oak1 ~]#

[root@oak1 ~]# /opt/oracle/dcs/bin/odacli describe-dbstorage -i d26bf34f-9379-4872-839b-ae0af6926aae

Database Storage details

----------------------------------------------------------------

                     ID: d26bf34f-9379-4872-839b-ae0af6926aae

                DB Name: mfod

          DBUnique Name: mfod_ora

         DB Resource ID: 938b616a-99ed-4466-8d76-820d3489517d

           Storage Type: ASM

                   DATA:

                         Location: +FLASH/mfod_ora

                       Used Space: 3.21GB

                       Free Space: 741.53GB

                   REDO:

                         Location: +REDO/mfod_ora

                       Used Space: 4.06GB

                       Free Space: 244.11GB

                   RECO:

                         Location: +RECO/mfod_ora

                       Used Space: 334.0MB

                       Free Space: 5.72TB

                  State: ResourceState(status=Configured)

                Created: May 22, 2019 11:03:36 AM CEST

            UpdatedTime: May 22, 2019 12:01:13 PM CEST

[root@oak1 ~]#


After that, reboot both nodes to have a fresh start with all the configuration up and running:

 

[root@oak1 ~]# reboot




Broadcast message from root@odas1

        (/dev/ttyS0) at 12:37 ...




The system is going down for reboot NOW!

[root@oak1 ~]#

       

Oracle Linux Server release 6.10

Kernel 4.1.12-124.18.6.el6uek.x86_64 on an x86_64




odas1 login: root

Password:

Last login: Wed May 22 11:10:13 from odap1.xxxx.xxxx.xxx

[root@odas1 ~]#


As an example, you can add VLANs to the ODA if needed (after the reboot):

 

[root@odas1 ~]# odaadmcli show vlan

        NAME                     ID    INTERFACE   CONFIG_TYPE IP_ADDRESS      NETMASK         GATEWAY         NODENUM

        Public-network           2999  bond0       Public      200.200.67.100   255.255.255.0   200.200.67.1     0

        Public-network           2999  bond0       Public      200.200.67.101   255.255.255.0   200.200.67.1     1

[root@odas1 ~]#

[root@odas1 ~]# cat /proc/net/vlan/config

VLAN Dev name    | VLAN ID

Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD

bond0.2999     | 2999  | bond0

[root@odas1 ~]#

[root@odas1 ~]# odaadmcli create vlan VLAN_3001 -vlanid 3001 -if bond0 -node 0 -setuptype database -ip 200.200.131.58 -netmask 255.255.255.0 -gateway 200.200.131.1




Created Vlan : VLAN_3001

[root@odas1 ~]# odaadmcli create vlan VLAN_3010 -vlanid 3010 -if bond0 -node 0 -setuptype database -ip 200.190.3.58 -netmask 255.255.255.0 -gateway 200.190.3.1




Created Vlan : VLAN_3010

[root@odas1 ~]# odaadmcli create vlan VLAN_3030 -vlanid 3030 -if bond0 -node 0 -setuptype management -ip 200.200.192.156 -netmask 255.255.255.0 -gateway 200.200.192.1




Created Vlan : VLAN_3030

[root@odas1 ~]# odaadmcli create vlan VLAN_3002 -vlanid 3002 -if bond1 -node 0 -setuptype backup -ip 200.200.68.128 -netmask 255.255.255.0 -gateway 200.200.68.1




Created Vlan : VLAN_3002

[root@odas1 ~]# odaadmcli create vlan VLAN_3001 -vlanid 3001 -if bond0 -node 1 -setuptype database -ip 200.200.131.59 -netmask 255.255.255.0 -gateway 200.200.131.1




Created Vlan : VLAN_3001

[root@odas1 ~]# odaadmcli create vlan VLAN_3010 -vlanid 3010 -if bond0 -node 1 -setuptype database -ip 200.190.3.59 -netmask 255.255.255.0 -gateway 200.190.3.1




Created Vlan : VLAN_3010

[root@odas1 ~]# odaadmcli create vlan VLAN_3030 -vlanid 3030 -if bond0 -node 1 -setuptype management -ip 200.200.192.157 -netmask 255.255.255.0 -gateway 200.200.192.1




Created Vlan : VLAN_3030

[root@odas1 ~]# odaadmcli create vlan VLAN_3002 -vlanid 3002 -if bond1 -node 1 -setuptype backup -ip 200.200.68.129 -netmask 255.255.255.0 -gateway 200.200.68.1




Created Vlan : VLAN_3002

[root@odas1 ~]#

[root@odas1 ~]#

[root@odas1 ~]# odaadmcli show vlan

        NAME                     ID    INTERFACE   CONFIG_TYPE IP_ADDRESS      NETMASK         GATEWAY         NODENUM

        Public-network           2999  bond0       Public      200.200.67.100   255.255.255.0   200.200.67.1     0

        Public-network           2999  bond0       Public      200.200.67.101   255.255.255.0   200.200.67.1     1

        VLAN_3002                3002  bond1       backup      200.200.68.128   255.255.255.0   200.200.68.1     0

        VLAN_3002                3002  bond1       backup      200.200.68.129   255.255.255.0   200.200.68.1     1

        VLAN_3001                3001  bond0       database    200.200.131.58   255.255.255.0   200.200.131.1    0

        VLAN_3001                3001  bond0       database    200.200.131.59   255.255.255.0   200.200.131.1    1

        VLAN_3010                3010  bond0       database    200.190.3.58     255.255.255.0   200.190.3.1      0

        VLAN_3010                3010  bond0       database    200.190.3.59     255.255.255.0   200.190.3.1      1

        VLAN_3030                3030  bond0       management  200.200.192.156  255.255.255.0   200.200.192.1    0

        VLAN_3030                3030  bond0       management  200.200.192.157  255.255.255.0   200.200.192.1    1

[root@odas1 ~]#


Firmware and patch

As described in the documentation, the reimage process does not update the firmware, so we need to do this after creating the appliance. This point is a bit controversial, because there is no clear information about when to do it; you can run it before or after creating the appliance. Here I did it after the create, but if you are on a really old version and have a lot of old firmware running, maybe you should run it before the create to avoid issues.
The funny part is that the firmware is shipped inside the default patch files. Again, you need to search the patch search page, because Oracle removed these links from the 888888.1 page (look at the patch number in the example below). So the procedure to update the firmware is the same as applying a patch on the ODA, and you need to download the same patch version. As usual, the first step is to upload the files:

 

[root@odas1 ~]# cd /tmp/deploy/

[root@odas1 deploy]#

[root@odas1 deploy]# unzip -qa p28864490_183000_Linux-x86-64_1of3.zip

[root@odas1 deploy]# unzip -qa p28864490_183000_Linux-x86-64_2of3.zip

[root@odas1 deploy]# unzip -qa p28864490_183000_Linux-x86-64_3of3.zip

[root@odas1 deploy]#

[root@odas1 deploy]# rm -rf p*zip

[root@odas1 deploy]#

[root@odas1 deploy]# ls -l

total 15441928

-rw-r--r-- 1 root root 2767978208 Dec 13 14:50 oda-sm-18.3.0.0.0-181205-server1of3.zip

-rwxr-xr-x 1 root root 6835610213 Dec 13 15:08 oda-sm-18.3.0.0.0-181205-server2of3.zip

-rwxr-xr-x 1 root root 6193475789 Dec 13 15:22 oda-sm-18.3.0.0.0-181205-server3of3.zip

-rw-r--r-- 1 root root       1155 Dec 13 15:22 README.txt

[root@odas1 deploy]#

[root@odas1 deploy]#


If you want to check what you are running, just execute odacli describe-component (look at the ILOM version as an example):

 

[root@odas1 ~]# odacli describe-component

System Version

---------------

18.3.0.0.0




System node Name

---------------

odas1




Local System Version

---------------

18.3.0.0.0




Component                                Installed Version    Available Version

---------------------------------------- -------------------- --------------------

OAK                                       18.3.0.0.0            up-to-date




GI                                        18.3.0.0.180717       up-to-date




DB                                        18.3.0.0.180717       up-to-date




DCSAGENT                                  18.3.0.0.0            up-to-date




ILOM                                      3.2.9.23.r116695      4.0.2.26.b.r125868




BIOS                                      30110000              30130500




OS                                        6.10                  up-to-date




FIRMWARECONTROLLER {

[ c0 ]                                    4.650.00-7176         up-to-date

[ c1,c2 ]                                 13.00.00.00           up-to-date

}




FIRMWAREEXPANDER                          0018                  up-to-date




FIRMWAREDISK {

[ c0d0,c0d1 ]                             A7E0                  up-to-date

[ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,     PAG1                  up-to-date

c1d7,c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,

c1d14,c1d15,c2d0,c2d1,c2d2,c2d3,c2d4,

c2d5,c2d6,c2d7,c2d8,c2d9,c2d10,c2d11,

c2d12,c2d13,c2d14,c2d15 ]

[ c1d16,c1d17,c1d18,c1d19,c1d20,c1d21,    A29A                  up-to-date

c1d22,c1d23,c2d16,c2d17,c2d18,c2d19,

c2d20,c2d21,c2d22,c2d23 ]

}




System node Name

---------------

odas2




Local System Version

---------------

18.3.0.0.0




Component                                Installed Version    Available Version

---------------------------------------- -------------------- --------------------

OAK                                       18.3.0.0.0            up-to-date




GI                                        18.3.0.0.180717       up-to-date




DB                                        18.3.0.0.180717       up-to-date




DCSAGENT                                  18.3.0.0.0            up-to-date




ILOM                                      3.2.9.23.r116695      4.0.2.26.b.r125868




BIOS                                      30110000              30130500




OS                                        6.10                  up-to-date




FIRMWARECONTROLLER {

[ c0 ]                                    4.650.00-7176         up-to-date

[ c1,c2 ]                                 13.00.00.00           up-to-date

}




FIRMWAREEXPANDER                          0018                  up-to-date




FIRMWAREDISK {

[ c0d0,c0d1 ]                             A7E0                  up-to-date

[ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,     PAG1                  up-to-date

c1d7,c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,

c1d14,c1d15,c2d0,c2d1,c2d2,c2d3,c2d4,

c2d5,c2d6,c2d7,c2d8,c2d9,c2d10,c2d11,

c2d12,c2d13,c2d14,c2d15 ]

[ c1d16,c1d17,c1d18,c1d19,c1d20,c1d21,    A29A                  up-to-date

c1d22,c1d23,c2d16,c2d17,c2d18,c2d19,

c2d20,c2d21,c2d22,c2d23 ]

}




[root@odas1 ~]#

 

 

After that you can upload the files into the repository (look at the path to the files):

 

[root@odas1 deploy]# odacli update-repository -f /tmp/deploy/oda-sm-18.3.0.0.0-181205-server1of3.zip,/tmp/deploy/oda-sm-18.3.0.0.0-181205-server2of3.zip,/tmp/deploy/oda-sm-18.3.0.0.0-181205-server3of3.zip

{

  "jobId" : "cdfd2e72-777b-4ccb-a670-17b88c7cc102",

  "status" : "Created",

  "message" : "/tmp/deploy/oda-sm-18.3.0.0.0-181205-server1of3.zip,/tmp/deploy/oda-sm-18.3.0.0.0-181205-server2of3.zip,/tmp/deploy/oda-sm-18.3.0.0.0-181205-server3of3.zip",

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 15:01:03 PM CEST",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "May 22, 2019 15:01:03 PM CEST"

}

[root@odas1 deploy]#

[root@odas1 deploy]#

[root@odas1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i cdfd2e72-777b-4ccb-a670-17b88c7cc102




Job details

----------------------------------------------------------------

                     ID:  cdfd2e72-777b-4ccb-a670-17b88c7cc102

            Description:  Repository Update

                 Status:  Success

                Created:  May 22, 2019 3:01:03 PM CEST

                Message:  /tmp/deploy/oda-sm-18.3.0.0.0-181205-server1of3.zip,/tmp/deploy/oda-sm-18.3.0.0.0-181205-server2of3.zip,/tmp/deploy/oda-sm-18.3.0.0.0-181205-server3of3.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Check AvailableSpace                     May 22, 2019 3:01:03 PM CEST        May 22, 2019 3:01:03 PM CEST        Success

Setting up ssh equivalance               May 22, 2019 3:01:03 PM CEST        May 22, 2019 3:01:03 PM CEST        Success

Copy BundleFile                          May 22, 2019 3:01:03 PM CEST        May 22, 2019 3:02:47 PM CEST        Success

Validating CopiedFile                    May 22, 2019 3:02:49 PM CEST        May 22, 2019 3:03:23 PM CEST        Success

Unzip bundle                             May 22, 2019 3:03:23 PM CEST        May 22, 2019 3:05:43 PM CEST        Success

Unzip bundle                             May 22, 2019 3:05:43 PM CEST        May 22, 2019 3:08:08 PM CEST        Success

Delete PatchBundles                      May 22, 2019 3:08:08 PM CEST        May 22, 2019 3:08:11 PM CEST        Success

Removing ssh keys                        May 22, 2019 3:08:11 PM CEST        May 22, 2019 3:08:26 PM CEST        Success




[root@odas1 deploy]#

[root@odas1 deploy]#

 

After the upload succeeds, you first update the storage using the odacli update-storage command to the desired version (the version is in the filename, or check what you downloaded from MOS):

 

[root@odas1 ~]# odacli update-storage -v 18.3.0.0.0 --rolling

{

  "jobId" : "18d1e5f8-7c31-4ce7-8241-f7b2ed5673cd",

  "status" : "Created",

  "message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 15:25:24 PM CEST",

  "resourceList" : [ ],

  "description" : "Storage Firmware Patching",

  "updatedTime" : "May 22, 2019 15:25:24 PM CEST"

}

[root@odas1 ~]#

[root@odas1 ~]# /opt/oracle/dcs/bin/odacli describe-job -i 18d1e5f8-7c31-4ce7-8241-f7b2ed5673cd




Job details

----------------------------------------------------------------

                     ID:  18d1e5f8-7c31-4ce7-8241-f7b2ed5673cd

            Description:  Storage Firmware Patching

                 Status:  Success

                Created:  May 22, 2019 3:25:24 PM CEST

                Message:




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Applying Firmware Disk Patches           May 22, 2019 3:25:24 PM CEST        May 22, 2019 3:25:45 PM CEST        Success

Applying Firmware Controller Patches     May 22, 2019 3:25:45 PM CEST        May 22, 2019 3:25:58 PM CEST        Success




[root@odas1 ~]#
After that you can update the server (running odacli update-server). Here, since you are "updating" to the same version that you are already running, the yum-related steps have nothing new to apply and effectively only the firmware part does real work:

 

[root@odas1 ~]# odacli update-server -v 18.3.0.0.0

{

  "jobId" : "8bc7cc15-a543-4e26-ade4-8dd578ea67ba",

  "status" : "Created",

  "message" : "Success of Server Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 15:29:15 PM CEST",

  "resourceList" : [ ],

  "description" : "Server Patching",

  "updatedTime" : "May 22, 2019 15:29:15 PM CEST"

}

[root@odas1 ~]#

[root@odas1 ~]# /opt/oracle/dcs/bin/odacli describe-job -i 8bc7cc15-a543-4e26-ade4-8dd578ea67ba




Job details

----------------------------------------------------------------

                     ID:  8bc7cc15-a543-4e26-ade4-8dd578ea67ba

            Description:  Server Patching

                 Status:  Running

                Created:  May 22, 2019 3:29:15 PM CEST

                Message:




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Patch location validation                May 22, 2019 3:29:16 PM CEST        May 22, 2019 3:29:16 PM CEST        Success

Patch location validation                May 22, 2019 3:29:16 PM CEST        May 22, 2019 3:29:16 PM CEST        Success

dcs-controller upgrade                   May 22, 2019 3:29:19 PM CEST        May 22, 2019 3:29:19 PM CEST        Success

dcs-controller upgrade                   May 22, 2019 3:29:19 PM CEST        May 22, 2019 3:29:20 PM CEST        Success

Patch location validation                May 22, 2019 3:29:20 PM CEST        May 22, 2019 3:29:21 PM CEST        Success

Patch location validation                May 22, 2019 3:29:20 PM CEST        May 22, 2019 3:29:21 PM CEST        Success

dcs-cli upgrade                          May 22, 2019 3:29:21 PM CEST        May 22, 2019 3:29:21 PM CEST        Success

dcs-cli upgrade                          May 22, 2019 3:29:21 PM CEST        May 22, 2019 3:29:21 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:29 PM CEST        May 22, 2019 3:29:36 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:36 PM CEST        May 22, 2019 3:29:36 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:36 PM CEST        May 22, 2019 3:29:36 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:36 PM CEST        May 22, 2019 3:29:36 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:36 PM CEST        May 22, 2019 3:29:36 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:37 PM CEST        May 22, 2019 3:29:37 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:29:37 PM CEST        May 22, 2019 3:29:37 PM CEST        Success

Updating YumPluginVersionLock rpm        May 22, 2019 3:29:37 PM CEST        May 22, 2019 3:29:37 PM CEST        Success

Applying OS Patches                      May 22, 2019 3:29:37 PM CEST        May 22, 2019 3:31:38 PM CEST        Success

Creating repositories using yum          May 22, 2019 3:31:38 PM CEST        May 22, 2019 3:31:39 PM CEST        Success

Applying HMP Patches                     May 22, 2019 3:31:39 PM CEST        May 22, 2019 3:31:39 PM CEST        Success

Patch location validation                May 22, 2019 3:31:40 PM CEST        May 22, 2019 3:31:40 PM CEST        Success

Patch location validation                May 22, 2019 3:31:40 PM CEST        May 22, 2019 3:31:40 PM CEST        Success

oda-hw-mgmt upgrade                      May 22, 2019 3:31:40 PM CEST        May 22, 2019 3:31:41 PM CEST        Success

oda-hw-mgmt upgrade                      May 22, 2019 3:31:41 PM CEST        May 22, 2019 3:31:41 PM CEST        Success

OSS Patching                             May 22, 2019 3:31:41 PM CEST        May 22, 2019 3:31:43 PM CEST        Success

Applying Firmware Disk Patches           May 22, 2019 3:32:02 PM CEST        May 22, 2019 3:32:15 PM CEST        Success

Applying Firmware Expander Patches       May 22, 2019 3:32:27 PM CEST        May 22, 2019 3:32:34 PM CEST        Success

Applying Firmware Controller Patches     May 22, 2019 3:32:47 PM CEST        May 22, 2019 3:32:55 PM CEST        Success

Checking Ilom patch Version              May 22, 2019 3:32:56 PM CEST        May 22, 2019 3:32:59 PM CEST        Success

Checking Ilom patch Version              May 22, 2019 3:32:59 PM CEST        May 22, 2019 3:33:01 PM CEST        Success

Patch location validation                May 22, 2019 3:33:01 PM CEST        May 22, 2019 3:33:02 PM CEST        Success

Patch location validation                May 22, 2019 3:33:01 PM CEST        May 22, 2019 3:33:02 PM CEST        Success

Save password in Wallet                  May 22, 2019 3:33:03 PM CEST        May 22, 2019 3:33:04 PM CEST        Success

Apply Ilom patch                         May 22, 2019 3:33:04 PM CEST        May 22, 2019 3:45:07 PM CEST        Success

Apply Ilom patch                         May 22, 2019 3:45:07 PM CEST        May 22, 2019 3:56:45 PM CEST        Success

Copying Flash Bios to Temp location      May 22, 2019 3:56:45 PM CEST        May 22, 2019 3:56:45 PM CEST        Success

Copying Flash Bios to Temp location      May 22, 2019 3:56:45 PM CEST        May 22, 2019 3:56:45 PM CEST        Success

Starting the clusterware                 May 22, 2019 3:58:41 PM CEST        May 22, 2019 4:00:09 PM CEST        Success

clusterware patch verification           May 22, 2019 4:00:09 PM CEST        May 22, 2019 4:00:10 PM CEST        Success

clusterware patch verification           May 22, 2019 4:00:09 PM CEST        May 22, 2019 4:00:10 PM CEST        Success

Patch location validation                May 22, 2019 4:00:10 PM CEST        May 22, 2019 4:00:10 PM CEST        Success

Patch location validation                May 22, 2019 4:00:10 PM CEST        May 22, 2019 4:00:10 PM CEST        Success

Opatch updation                          May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:11 PM CEST        Success

Opatch updation                          May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:11 PM CEST        Success

Patch conflict check                     May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:11 PM CEST        Success

Patch conflict check                     May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:11 PM CEST        Success

clusterware upgrade                      May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:11 PM CEST        Success

clusterware upgrade                      May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:11 PM CEST        Success

Updating GiHome version                  May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:12 PM CEST        Success

Updating GiHome version                  May 22, 2019 4:00:11 PM CEST        May 22, 2019 4:00:12 PM CEST        Success

Update System version                    May 22, 2019 4:00:33 PM CEST        May 22, 2019 4:00:33 PM CEST        Success

Update System version                    May 22, 2019 4:00:33 PM CEST        May 22, 2019 4:00:33 PM CEST        Success

preRebootNode Actions                    May 22, 2019 4:00:33 PM CEST        May 22, 2019 4:00:33 PM CEST        Running




[root@odas1 ~]#

 

 

Afterwards, reboot both nodes!
As an example, check that the ILOM was updated:

 

[root@odas1 ~]# odacli describe-component

System Version

---------------

18.3.0.0.0




System node Name

---------------

odas1




Local System Version

---------------

18.3.0.0.0




Component                                Installed Version    Available Version

---------------------------------------- -------------------- --------------------

OAK                                       18.3.0.0.0            up-to-date




GI                                        18.3.0.0.180717       up-to-date




DB                                        18.3.0.0.180717       up-to-date




DCSAGENT                                  18.3.0.0.0            up-to-date




ILOM                                      4.0.2.26.b.r125868    up-to-date




BIOS                                      30130500              up-to-date




OS                                        6.10                  up-to-date




FIRMWARECONTROLLER {

[ c0 ]                                    4.650.00-7176         up-to-date

[ c1,c2 ]                                 13.00.00.00           up-to-date

}




FIRMWAREEXPANDER                          0018                  up-to-date




FIRMWAREDISK {

[ c0d0,c0d1 ]                             A7E0                  up-to-date

[ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,     PAG1                  up-to-date

c1d7,c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,

c1d14,c1d15,c2d0,c2d1,c2d2,c2d3,c2d4,

c2d5,c2d6,c2d7,c2d8,c2d9,c2d10,c2d11,

c2d12,c2d13,c2d14,c2d15 ]

[ c1d16,c1d17,c1d18,c1d19,c1d20,c1d21,    A29A                  up-to-date

c1d22,c1d23,c2d16,c2d17,c2d18,c2d19,

c2d20,c2d21,c2d22,c2d23 ]

}




System node Name

---------------

odas2




Local System Version

---------------

18.3.0.0.0




Component                                Installed Version    Available Version

---------------------------------------- -------------------- --------------------

OAK                                       18.3.0.0.0            up-to-date




GI                                        18.3.0.0.180717       up-to-date




DB                                        18.3.0.0.180717       up-to-date




DCSAGENT                                  18.3.0.0.0            up-to-date




ILOM                                      4.0.2.26.b.r125868    up-to-date




BIOS                                      30130500              up-to-date




OS                                        6.10                  up-to-date




FIRMWARECONTROLLER {

[ c0 ]                                    4.650.00-7176         up-to-date

[ c1,c2 ]                                 13.00.00.00           up-to-date

}




FIRMWAREEXPANDER                          0018                  up-to-date




FIRMWAREDISK {

[ c0d0,c0d1 ]                             A7E0                  up-to-date

[ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,     PAG1                  up-to-date

c1d7,c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,

c1d14,c1d15,c2d0,c2d1,c2d2,c2d3,c2d4,

c2d5,c2d6,c2d7,c2d8,c2d9,c2d10,c2d11,

c2d12,c2d13,c2d14,c2d15 ]

[ c1d16,c1d17,c1d18,c1d19,c1d20,c1d21,    A29A                  up-to-date

c1d22,c1d23,c2d16,c2d17,c2d18,c2d19,

c2d20,c2d21,c2d22,c2d23 ]

}







[root@odas1 ~]#

[root@odas1 ~]#
 

Create Oracle Homes and the databases

After updating all the firmware, you can upload the clone files for the Oracle Home versions that you want and create them. The procedure is the same as before: upload the files somewhere on ODA node 1 (and only node 1), then update the repository and create the homes.
Upload the files and update the repository. Note below that I ran one update for each version:

 

[root@odas1 ~]# cd /tmp/deploy/

[root@odas1 deploy]#

[root@odas1 deploy]# unzip -qa p23494992_183000_Linux-x86-64.zip

[root@odas1 deploy]# rm p23494992_183000_Linux-x86-64.zip

rm: remove regular file `p23494992_183000_Linux-x86-64.zip'? y

[root@odas1 deploy]#

[root@odas1 deploy]# unzip -qa p23494997_183000_Linux-x86-64.zip

replace README.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: A

[root@odas1 deploy]# rm p23494997_183000_Linux-x86-64.zip

rm: remove regular file `p23494997_183000_Linux-x86-64.zip'? y

[root@odas1 deploy]#

[root@odas1 deploy]# odacli update-repository -f /tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-12.1.0.2.zip

{

  "jobId" : "b88c4922-1107-41f5-b7cd-e1648093051b",

  "status" : "Created",

  "message" : "/tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-12.1.0.2.zip",

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 16:34:28 PM CEST",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "May 22, 2019 16:34:28 PM CEST"

}

[root@odas1 deploy]#

[root@odas1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i b88c4922-1107-41f5-b7cd-e1648093051b




Job details

----------------------------------------------------------------

                     ID:  b88c4922-1107-41f5-b7cd-e1648093051b

            Description:  Repository Update

                 Status:  Success

                Created:  May 22, 2019 4:34:28 PM CEST

                Message:  /tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-12.1.0.2.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Check AvailableSpace                     May 22, 2019 4:34:28 PM CEST        May 22, 2019 4:34:28 PM CEST        Success

Setting up ssh equivalance               May 22, 2019 4:34:28 PM CEST        May 22, 2019 4:34:29 PM CEST        Success

Copy BundleFile                          May 22, 2019 4:34:29 PM CEST        May 22, 2019 4:34:54 PM CEST        Success

Validating CopiedFile                    May 22, 2019 4:34:54 PM CEST        May 22, 2019 4:35:07 PM CEST        Success

Unzip bundle                             May 22, 2019 4:35:07 PM CEST        May 22, 2019 4:35:42 PM CEST        Success

Unzip bundle                             May 22, 2019 4:35:46 PM CEST        May 22, 2019 4:36:23 PM CEST        Success

Delete PatchBundles                      May 22, 2019 4:36:24 PM CEST        May 22, 2019 4:36:25 PM CEST        Success

Removing ssh keys                        May 22, 2019 4:36:25 PM CEST        May 22, 2019 4:36:47 PM CEST        Success




[root@odas1 deploy]#

[root@odas1 deploy]# odacli update-repository -f /tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-11.2.0.4.zip

{

  "jobId" : "6698aa6d-1a44-4b1c-bda1-1e691a41a133",

  "status" : "Created",

  "message" : "/tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-11.2.0.4.zip",

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 16:38:53 PM CEST",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "May 22, 2019 16:38:53 PM CEST"

}

[root@odas1 deploy]#

[root@odas1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i 6698aa6d-1a44-4b1c-bda1-1e691a41a133




Job details

----------------------------------------------------------------

                     ID:  6698aa6d-1a44-4b1c-bda1-1e691a41a133

            Description:  Repository Update

                 Status:  Success

                Created:  May 22, 2019 4:38:53 PM CEST

                Message:  /tmp/deploy/odacli-dcs-18.3.0.0.0-180905-DB-11.2.0.4.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Check AvailableSpace                     May 22, 2019 4:38:53 PM CEST        May 22, 2019 4:38:53 PM CEST        Success

Setting up ssh equivalance               May 22, 2019 4:38:53 PM CEST        May 22, 2019 4:38:53 PM CEST        Success

Copy BundleFile                          May 22, 2019 4:38:54 PM CEST        May 22, 2019 4:39:04 PM CEST        Success

Validating CopiedFile                    May 22, 2019 4:39:06 PM CEST        May 22, 2019 4:39:14 PM CEST        Success

Unzip bundle                             May 22, 2019 4:39:14 PM CEST        May 22, 2019 4:39:32 PM CEST        Success

Unzip bundle                             May 22, 2019 4:39:32 PM CEST        May 22, 2019 4:39:54 PM CEST        Success

Delete PatchBundles                      May 22, 2019 4:39:54 PM CEST        May 22, 2019 4:39:55 PM CEST        Success

Removing ssh keys                        May 22, 2019 4:39:55 PM CEST        May 22, 2019 4:39:55 PM CEST        Success




[root@odas1 deploy]#

[root@odas1 deploy]#

After that, you can create the Oracle Home for each version using the odacli create-dbhome command. First, 12c:

 

[root@odas1 deploy]#

[root@odas1 deploy]# odacli create-dbhome -v 12.1.0.2.180717




Job details

----------------------------------------------------------------

                     ID:  5ddcdd49-2d1a-404d-b4b8-7ab79c5f9707

            Description:  Database Home OraDB12102_home1 creation with version :12.1.0.2.180717

                 Status:  Created

                Created:  May 22, 2019 4:42:58 PM CEST

                Message:  Create Database Home




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------




[root@odas1 deploy]#

[root@odas1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i 5ddcdd49-2d1a-404d-b4b8-7ab79c5f9707




Job details

----------------------------------------------------------------

                     ID:  5ddcdd49-2d1a-404d-b4b8-7ab79c5f9707

            Description:  Database Home OraDB12102_home1 creation with version :12.1.0.2.180717

                 Status:  Success

                Created:  May 22, 2019 4:42:58 PM CEST

                Message:  Create Database Home




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Setting up ssh equivalance               May 22, 2019 4:42:59 PM CEST        May 22, 2019 4:42:59 PM CEST        Success

Validating dbHome available space        May 22, 2019 4:42:59 PM CEST        May 22, 2019 4:42:59 PM CEST        Success

Validating dbHome available space        May 22, 2019 4:42:59 PM CEST        May 22, 2019 4:42:59 PM CEST        Success

Creating DbHome Directory                May 22, 2019 4:42:59 PM CEST        May 22, 2019 4:42:59 PM CEST        Success

Extract DB clones                        May 22, 2019 4:42:59 PM CEST        May 22, 2019 4:46:22 PM CEST        Success

Clone Db home                            May 22, 2019 4:46:22 PM CEST        May 22, 2019 4:47:32 PM CEST        Success

Enable DB options                        May 22, 2019 4:47:32 PM CEST        May 22, 2019 4:47:41 PM CEST        Success

Run Root DB scripts                      May 22, 2019 4:47:41 PM CEST        May 22, 2019 4:47:42 PM CEST        Success

Removing ssh keys                        May 22, 2019 4:47:48 PM CEST        May 22, 2019 4:47:48 PM CEST        Success




[root@odas1 deploy]#

[root@odas1 deploy]#
And 11g:

 

[root@odas1 deploy]# odacli create-dbhome -v 11.2.0.4.180717




Job details

----------------------------------------------------------------

                     ID:  d4512833-3198-4384-b0d5-3588e3ec8cdb

            Description:  Database Home OraDB11204_home1 creation with version :11.2.0.4.180717

                 Status:  Created

                Created:  May 22, 2019 4:50:06 PM CEST

                Message:  Create Database Home




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------




[root@odas1 deploy]#

[root@odas1 deploy]# /opt/oracle/dcs/bin/odacli describe-job -i d4512833-3198-4384-b0d5-3588e3ec8cdb




Job details

----------------------------------------------------------------

                     ID:  d4512833-3198-4384-b0d5-3588e3ec8cdb

            Description:  Database Home OraDB11204_home1 creation with version :11.2.0.4.180717

                 Status:  Success

                Created:  May 22, 2019 4:50:06 PM CEST

                Message:  Create Database Home




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Setting up ssh equivalance               May 22, 2019 4:50:07 PM CEST        May 22, 2019 4:50:07 PM CEST        Success

Validating dbHome available space        May 22, 2019 4:50:07 PM CEST        May 22, 2019 4:50:07 PM CEST        Success

Validating dbHome available space        May 22, 2019 4:50:07 PM CEST        May 22, 2019 4:50:07 PM CEST        Success

Creating DbHome Directory                May 22, 2019 4:50:07 PM CEST        May 22, 2019 4:50:07 PM CEST        Success

Extract DB clones                        May 22, 2019 4:50:07 PM CEST        May 22, 2019 4:51:59 PM CEST        Success

Clone Db home                            May 22, 2019 4:52:00 PM CEST        May 22, 2019 4:53:06 PM CEST        Success

Enable DB options                        May 22, 2019 4:53:06 PM CEST        May 22, 2019 4:53:11 PM CEST        Success

Run Root DB scripts                      May 22, 2019 4:53:11 PM CEST        May 22, 2019 4:53:11 PM CEST        Success

Removing ssh keys                        May 22, 2019 4:53:16 PM CEST        May 22, 2019 4:53:17 PM CEST        Success




[root@odas1 deploy]#

[root@odas1 deploy]#


And after that you can use odacli list-dbhomes to see the available homes for database creation:

 

ID                                       Name                 DB Version                               Home Location                                 Status

---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------

886021e5-9bbf-4a14-9b50-398ddd00bfd0     OraDB18000_home1     18.3.0.0.180717                          /u01/app/oracle/product/18.0.0.0/dbhome_1     Configured

d791001f-b812-49fd-94f5-b11c5f580a30     OraDB12102_home1     12.1.0.2.180717                          /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured

d162ff30-dc0d-4c4b-810e-c8f513dc9f62     OraDB11204_home1     11.2.0.4.180717                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured




[root@odas1 deploy]#


As an example, if you want to delete the database that was created during the appliance creation, just execute odacli delete-database:

 

[root@odas1 ~]# odacli delete-database --dbName mfod

{

  "jobId" : "69adb079-e5e4-4d64-8b20-4eee73c65491",

  "status" : "Running",

  "message" : null,

  "reports" : [ ],

  "createTimestamp" : "May 22, 2019 16:17:27 PM CEST",

  "resourceList" : [ ],

  "description" : "Database service deletion with db name: mfod with id : 938b616a-99ed-4466-8d76-820d3489517d",

  "updatedTime" : "May 22, 2019 16:17:27 PM CEST"

}

[root@odas1 ~]#



Finish and clean the install

After all the steps listed so far, you have your ODA reimaged, updated, and with 18c, 12c, and 11g versions available for database creation. To finish, you can update some files as needed. Examples of important files are /etc/fstab, /etc/sysctl.conf, /etc/sysconfig/network-scripts/route* and /etc/sysconfig/network-scripts/rules* (the last two related to routing tables for VLANs, if needed), and you can add listeners for the VLANs that you created (if needed, of course), as in the sketch below.
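Just as an illustration (not taken from this deployment), here is a hypothetical sketch of a static route file for one of the VLANs created earlier and of registering an extra listener on that network with srvctl; the interface name, network number, listener name and port are assumptions that you must adapt to your environment:

# /etc/sysconfig/network-scripts/route-bond0.3001 (ip-command format)
200.200.131.0/24 via 200.200.131.1 dev bond0.3001

# as the grid user, register the VLAN network in the clusterware and add a listener on it
srvctl add network -netnum 2 -subnet 200.200.131.0/255.255.255.0/bond0.3001
srvctl add listener -listener LISTENER_VLAN3001 -netnum 2 -endpoints "TCP:1522"
srvctl start listener -listener LISTENER_VLAN3001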
To finish, reboot both nodes and change the root password.
 
References that can be consulted to help you understand the reimage and deployment procedure:
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.3/cmtxl/oracle-database-appliance-x7-2-deployment-and-users-guide.pdf         
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.3/cmtxd/oracle-database-appliance-x5-2-x4-2-x3-2-deployment-and-users-guide.pdf
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.3/cmtxl/odacli-comparison-chart.html#GUID-567E862F-F4A4-4E1E-A485-BD1945BEC673
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.3/cmtxl/postinstallation-tasks-oda.html#GUID-A256DFBD-9335-41D2-9BD5-A1BA6AAC4C1E
https://docs.oracle.com/cd/E80799_01/doc.121/e80521/GUID-4EAD207D-41FF-4FD7-8804-DE1B2EB7BE90.htm#CMTXG-GUID-E0CC0E23-A339-45D8-AFA2-09732DC6BC18
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.3/cmtxl/reimaging-oda.html#GUID-6DEDCA82-94B7-4155-9FBD-3B1FDC232FCC
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.5/cmtxl/create-appliance-using-json-file.html#GUID-42250FD2-EA91-4457-9ED7-CA3A2A863B40
https://docs.oracle.com/cd/E75550_01/doc.121/e77147/GUID-F99D3337-E913-405D-BE3F-85C31103A26A.htm#CMTAR-GUID-CF1FECF0-722F-4B43-82AA-525EA169B3E8
 

RMAN, Allocate channel, CDB, and CLOSE, bug
Category: Database Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

RMAN, Allocate channel, CDB, and CLOSE, bug

ALLOCATE CHANNEL in RMAN is used in various scenarios; most of the time it is useful when you use tape as the device type or when you need some specific format. The way to do the allocation has not changed in a long time, but when you run the database in container mode you can hit a bug that makes your channel unusable. I will show you the bug and how to avoid it with a simple trick.
This bug hits every version since 12c. I discovered it last year while testing some scenarios, but I was only able to test and post about it recently. It occurs only for CDB databases, and there is just one one-off patch published, for 12.2. But there is a workaround that is more useful and works for every version.
The most interesting part is that everything we have done until now when allocating channels will not work here. If you search all the documentation available for ALLOCATE CHANNEL, from 9i through 19c, the first thing you do after opening the run{} block is to allocate the channel. This is the default and recommended approach in the docs:

 

 

https://docs.oracle.com/en/database/oracle/oracle-database/19/rcmrf/ALLOCATE-CHANNEL.html#GUID-9320BFF7-0728-4B3D-85B9-2184557ECDCE,
 
https://docs.oracle.com/en/database/oracle/oracle-database/18/rcmrf/ALLOCATE-CHANNEL.html#GUID-9320BFF7-0728-4B3D-85B9-2184557ECDCE

 

https://docs.oracle.com/cd/A97630_01/server.920/a96566/rcmconc1.htm
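Just for reference, the conventional pattern described in those docs looks like the generic sketch below (the channel name and the backup command are only illustrative):

run {
  ALLOCATE CHANNEL CH1 DEVICE TYPE DISK;
  BACKUP DATABASE;
  RELEASE CHANNEL CH1;
}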
My environment for this post is an 18c database (with the latest update from April 2019) and just one PDB:

 

 

[oracle@orcloel7 ~]$ sqlplus / as sysdba




SQL*Plus: Release 18.0.0.0.0 - Production on Sun Jun 30 23:24:23 2019

Version 18.6.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Version 18.6.0.0.0




SQL> show pdbs




    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 ORCL18P                        READ WRITE NO

SQL> exit

Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Version 18.6.0.0.0

[oracle@orcloel7 ~]$

And in RMAN I show the schema and the backups of one datafile (#12), just to keep the output smaller.

RMAN> report schema;




Report of database schema for database with db_unique_name ORCL18C




List of Permanent Datafiles

===========================

File Size(MB) Tablespace           RB segs Datafile Name

---- -------- -------------------- ------- ------------------------

1    870      SYSTEM               YES     /u01/app/oracle/oradata/ORCL18C/system01.dbf

3    490      SYSAUX               NO      /u01/app/oracle/oradata/ORCL18C/sysaux01.dbf

4    330      UNDOTBS1             YES     /u01/app/oracle/oradata/ORCL18C/undotbs01.dbf

5    290      PDB$SEED:SYSTEM      NO      /u01/app/oracle/oradata/ORCL18C/pdbseed/system01.dbf

6    340      PDB$SEED:SYSAUX      NO      /u01/app/oracle/oradata/ORCL18C/pdbseed/sysaux01.dbf

7    5        USERS                NO      /u01/app/oracle/oradata/ORCL18C/users01.dbf

8    100      PDB$SEED:UNDOTBS1    NO      /u01/app/oracle/oradata/ORCL18C/pdbseed/undotbs01.dbf

9    300      ORCL18P:SYSTEM       YES     /u01/app/oracle/oradata/ORCL18C/ORCL18P/system01.dbf

10   360      ORCL18P:SYSAUX       NO      /u01/app/oracle/oradata/ORCL18C/ORCL18P/sysaux01.dbf

11   100      ORCL18P:UNDOTBS1     YES     /u01/app/oracle/oradata/ORCL18C/ORCL18P/undotbs01.dbf

12   5        ORCL18P:USERS        NO      /u01/app/oracle/oradata/ORCL18C/ORCL18P/users01.dbf




List of Temporary Files

=======================

File Size(MB) Tablespace           Maxsize(MB) Tempfile Name

---- -------- -------------------- ----------- --------------------

1    37       TEMP                 32767       /u01/app/oracle/oradata/ORCL18C/temp01.dbf

2    62       PDB$SEED:TEMP        32767       /u01/app/oracle/oradata/ORCL18C/pdbseed/temp012019-05-25_23-51-33-086-PM.dbf

3    62       ORCL18P:TEMP         32767       /u01/app/oracle/oradata/ORCL18C/ORCL18P/temp01.dbf




RMAN> list backup of datafile 12 completed after "sysdate - 60/1440";




starting full resync of recovery catalog

full resync complete




List of Backup Sets

===================







BS Key  Type LV Size       Device Type Elapsed Time Completion Time

------- ---- -- ---------- ----------- ------------ ----------------

4610    Incr 1  40.00K     SBT_TAPE    00:00:02     02-07-2019_00-03

        BP Key: 4611   Status: AVAILABLE  Compressed: YES  Tag: TAG20190701T235918

        Handle: VB$_4142545763_4609I   Media:

  List of Datafiles in backup set 4610

  File LV Type Ckp SCN    Ckp Time         Abs Fuz SCN Sparse Name

  ---- -- ---- ---------- ---------------- ----------- ------ ----

  12   1  Incr 1714757    02-07-2019_00-03              NO    /u01/app/oracle/oradata/ORCL18C/ORCL18P/users01.dbf




BS Key  Type LV Size       Device Type Elapsed Time Completion Time

------- ---- -- ---------- ----------- ------------ ----------------

4614    Incr 0  36.50K     SBT_TAPE    00:00:02     02-07-2019_00-03

        BP Key: 4615   Status: AVAILABLE  Compressed: YES  Tag: TAG20190701T235918

        Handle: VB$_4142545763_4609_12   Media:

  List of Datafiles in backup set 4614

  File LV Type Ckp SCN    Ckp Time         Abs Fuz SCN Sparse Name

  ---- -- ---- ---------- ---------------- ----------- ------ ----

  12   0  Incr 1714757    02-07-2019_00-03              NO    /u01/app/oracle/oradata/ORCL18C/ORCL18P/users01.dbf




RMAN>

But look at the example below, where I try to restore the datafile (maybe because it was corrupted, whatever the reason):

RMAN> run{

2> ALLOCATE CHANNEL CH1 DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U' PARMS  "SBT_LIBRARY=/u01/app/oracle/product/18.6.0.0/dbhome_1/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/product/18.6.0.0/dbhome_1/dbs/ra_zdlra credential_alias=zdlra-scan:1521/zdlra:orcl')";

ALTER PLUGGABLE DATABASE ORCL18P CLOSE IMMEDIATE INSTANCES=ALL;

RESTORE DATAFILE 12;

RECOVER DATAFILE 12;

}3> 4> 5> 6>




allocated channel: CH1

channel CH1: SID=72 device type=SBT_TAPE

channel CH1: RA Library (ZDLRA) SID=8CA6E4CA5A751068E053010310AC0A07




Statement processed

starting full resync of recovery catalog

full resync complete




Starting restore at 02-07-2019_00-18

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=72 device type=DISK




RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of restore command at 07/02/2019 00:18:08

RMAN-06026: some targets not found - aborting restore

RMAN-06100: no channel to restore a backup or copy of datafile 12




RMAN>

And now this:

RMAN> run{

ALTER PLUGGABLE DATABASE ORCL18P CLOSE IMMEDIATE INSTANCES=ALL;

ALLOCATE CHANNEL CH1 DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U' PARMS  "SBT_LIBRARY=/u01/app/oracle/product/18.6.0.0/dbhome_1/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/product/18.6.0.0/dbhome_1/dbs/ra_zdlra credential_alias=zdlra-scan:1521/zdlra:orcl')";

RESTORE DATAFILE 12;

RECOVER DATAFILE 12;

} 2> 3> 4> 5> 6>




Statement processed

starting full resync of recovery catalog

full resync complete




allocated channel: CH1

channel CH1: SID=63 device type=SBT_TAPE

channel CH1: RA Library (ZDLRA) SID=8CA703A063E7127DE053010310ACC85B




Starting restore at 02-07-2019_00-18




channel CH1: starting datafile backup set restore

channel CH1: specifying datafile(s) to restore from backup set

channel CH1: restoring datafile 00012 to /u01/app/oracle/oradata/ORCL18C/ORCL18P/users01.dbf

channel CH1: reading from backup piece VB$_4142545763_4609_12

channel CH1: piece handle=VB$_4142545763_4609_12 tag=TAG20190701T235918

channel CH1: restored backup piece 1

channel CH1: restore complete, elapsed time: 00:00:25

Finished restore at 02-07-2019_00-19




Starting recover at 02-07-2019_00-19




starting media recovery

media recovery complete, elapsed time: 00:00:01




Finished recover at 02-07-2019_00-19

starting full resync of recovery catalog

full resync complete

released channel: CH1




RMAN> ALTER PLUGGABLE DATABASE ORCL18P OPEN INSTANCES=ALL;




Statement processed

starting full resync of recovery catalog

full resync complete




RMAN>

 

Did you see the bug? YES, the bug occurs when you allocate the channel and afterwards execute a close command. When you do the "close", RMAN internally cleans up everything related to the session (including the allocated channels) and you can hit "RMAN-06100: no channel to restore a backup" or other similar errors like "RMAN-06012: channel: CH1 not allocated".
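So the trick is just a matter of ordering. Based on the working example above, a minimal sketch of the safe pattern is to issue the close before allocating the channel (device type and PARMS simplified here, use your own):

run {
  ALTER PLUGGABLE DATABASE ORCL18P CLOSE IMMEDIATE INSTANCES=ALL;
  ALLOCATE CHANNEL CH1 DEVICE TYPE DISK;
  RESTORE DATAFILE 12;
  RECOVER DATAFILE 12;
  RELEASE CHANNEL CH1;
}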

 

 

The bug is Bug 28076886 – RMAN: RMAN-06012: “channel: %s not allocated” after ALTER PLUGGABLE DATABASE CLOSE command (Doc ID 28076886.8)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=28076886.8.

 

The error occurs for disk channels too (look at the full example):

 

 

RMAN> RUN {

ALLOCATE CHANNEL CH1 DEVICE TYPE DISK;

ALTER PLUGGABLE DATABASE ORCL18P CLOSE;

RECOVER DATABASE PREVIEW;

RELEASE CHANNEL CH1;

}2> 3> 4> 5> 6>




allocated channel: CH1

channel CH1: SID=110 device type=DISK




Statement processed

starting full resync of recovery catalog

full resync complete




Starting recover at 01-07-2019_00-34

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=110 device type=DISK

using channel ORA_DISK_1




no channel to restore a backup or copy of archived log for thread 1 with sequence 5 and starting SCN of 1552035

no channel to restore a backup or copy of archived log for thread 1 with sequence 6 and starting SCN of 1608615

no channel to restore a backup or copy of archived log for thread 1 with sequence 7 and starting SCN of 1669357

no channel to restore a backup or copy of archived log for thread 1 with sequence 8 and starting SCN of 1681533

no channel to restore a backup or copy of archived log for thread 1 with sequence 9 and starting SCN of 1681555

no channel to restore a backup or copy of archived log for thread 1 with sequence 10 and starting SCN of 1682610

List of Archived Log Copies for database with db_unique_name ORCL18C

=====================================================================




Key     Thrd Seq     S Low Time

------- ---- ------- - ----------------

3516    1    11      A 26-05-2019_19-13

        Name: /u01/app/oracle/oradata/ORCL18C/archivelog/2019_06_30/o1_mf_1_11_gklbz4oh_.arc




recovery will be done up to SCN 1603932

Media recovery start SCN is 1603932

Recovery must be done beyond SCN 1692620 to clear datafile fuzziness

Finished recover at 01-07-2019_00-35




RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-03002: failure of release command at 07/01/2019 00:35:01

RMAN-06012: channel: CH1 not allocated




RMAN> RUN {

ALTER PLUGGABLE DATABASE ORCL18P CLOSE;

ALLOCATE CHANNEL CH1 DEVICE TYPE DISK;

RECOVER DATABASE PREVIEW;

RELEASE CHANNEL CH1;

}2> 3> 4> 5> 6>




Statement processed

starting full resync of recovery catalog

full resync complete




allocated channel: CH1

channel CH1: SID=110 device type=DISK




Starting recover at 01-07-2019_00-35




no channel to restore a backup or copy of archived log for thread 1 with sequence 5 and starting SCN of 1552035

no channel to restore a backup or copy of archived log for thread 1 with sequence 6 and starting SCN of 1608615

no channel to restore a backup or copy of archived log for thread 1 with sequence 7 and starting SCN of 1669357

no channel to restore a backup or copy of archived log for thread 1 with sequence 8 and starting SCN of 1681533

no channel to restore a backup or copy of archived log for thread 1 with sequence 9 and starting SCN of 1681555

no channel to restore a backup or copy of archived log for thread 1 with sequence 10 and starting SCN of 1682610

List of Archived Log Copies for database with db_unique_name ORCL18C

=====================================================================




Key     Thrd Seq     S Low Time

------- ---- ------- - ----------------

3516    1    11      A 26-05-2019_19-13

        Name: /u01/app/oracle/oradata/ORCL18C/archivelog/2019_06_30/o1_mf_1_11_gklbz4oh_.arc




recovery will be done up to SCN 1603932

Media recovery start SCN is 1603932

Recovery must be done beyond SCN 1693086 to clear datafile fuzziness

Finished recover at 01-07-2019_00-36




released channel: CH1




RMAN>

But the failure can be trickier to discover. Look at the example below, where I try to back up datafile #12:

RMAN> run{

ALLOCATE CHANNEL CH1 DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U' PARMS  "SBT_LIBRARY=/u01/app/oracle/product/18.6.0.0/dbhome_1/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/product/18.6.0.0/dbhome_1/dbs/ra_zdlra credential_alias=zdlra-scan:1521/zdlra:orcl')";

ALTER PLUGGABLE DATABASE ORCL18P CLOSE IMMEDIATE INSTANCES=ALL;

BACKUP INCREMENTAL LEVEL 1 DATAFILE 12;

}2> 3> 4> 5>




allocated channel: CH1

channel CH1: SID=156 device type=SBT_TAPE

channel CH1: RA Library (ZDLRA) SID=8CBA2C4600D309E8E053010310AC3E17




Statement processed

starting full resync of recovery catalog

full resync complete




Starting backup at 02-07-2019_23-25

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=106 device type=DISK

channel ORA_DISK_1: starting incremental level 1 datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL18C/ORCL18P/users01.dbf

channel ORA_DISK_1: starting piece 1 at 02-07-2019_23-25

channel ORA_DISK_1: finished piece 1 at 02-07-2019_23-25

piece handle=/u01/app/oracle/oradata/ORCL18C/89BE7E168E3D28BDE053010310AC2497/backupset/2019_07_02/o1_mf_nnnd1_TAG20190702T232547_gkqlyx8j_.bkp tag=TAG20190702T232547 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 02-07-2019_23-25




Starting Control File and SPFILE Autobackup at 02-07-2019_23-25

piece handle=/u01/app/oracle/oradata/ORCL18C/autobackup/2019_07_02/o1_mf_s_1012605951_gkqlz1vm_.bkp comment=NONE

Finished Control File and SPFILE Autobackup at 02-07-2019_23-25




RMAN> list backup of datafile 12 summary;







List of Backups

===============

Key     TY LV S Device Type Completion Time  #Pieces #Copies Compressed Tag

------- -- -- - ----------- ---------------- ------- ------- ---------- ---

4270    B  0  A SBT_TAPE    01-07-2019_00-42 1       1       YES        TAG20190701T003820

4610    B  1  A SBT_TAPE    02-07-2019_00-03 1       1       YES        TAG20190701T235918

4614    B  0  A SBT_TAPE    02-07-2019_00-03 1       1       YES        TAG20190701T235918

5197    B  1  A DISK        02-07-2019_23-25 1       1       NO         TAG20190702T232547




RMAN>

And now:

RMAN> run{

ALTER PLUGGABLE DATABASE ORCL18P CLOSE IMMEDIATE INSTANCES=ALL;

ALLOCATE CHANNEL CH1 DEVICE TYPE 'SBT_TAPE' FORMAT '%d_%U' PARMS  "SBT_LIBRARY=/u01/app/oracle/product/18.6.0.0/dbhome_1/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/product/18.6.0.0/dbhome_1/dbs/ra_zdlra credential_alias=zdlra-scan:1521/zdlra:orcl')";

BACKUP INCREMENTAL LEVEL 1 DATAFILE 12;

}2> 3> 4> 5>




Statement processed

starting full resync of recovery catalog

full resync complete




allocated channel: CH1

channel CH1: SID=105 device type=SBT_TAPE

channel CH1: RA Library (ZDLRA) SID=8CBAA68CCD751232E053010310AC374B




Starting backup at 02-07-2019_23-44

channel CH1: starting incremental level 1 datafile backup set

channel CH1: specifying datafile(s) in backup set

input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL18C/ORCL18P/users01.dbf

channel CH1: starting piece 1 at 02-07-2019_23-44

channel CH1: finished piece 1 at 02-07-2019_23-44

piece handle=ORCL18C_3pu5ma33_1_1 tag=TAG20190702T234435 comment=API Version 2.0,MMS Version 12.2.0.2

channel CH1: backup set complete, elapsed time: 00:00:03

Finished backup at 02-07-2019_23-44




Starting Control File and SPFILE Autobackup at 02-07-2019_23-44

piece handle=c-551670416-20190702-02 comment=API Version 2.0,MMS Version 12.2.0.2

Finished Control File and SPFILE Autobackup at 02-07-2019_23-44

released channel: CH1




RMAN>

 

Note that, because of the bug, in the first example the backup was written to disk and not to tape as requested. In the last example, you can see that the backup was correctly written to tape. But, as you saw, no error was reported in either case.
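Since the run block completes cleanly, one way to catch the silent fallback is to check where the latest backup pieces actually landed. A minimal sketch, assuming you check from SQL*Plus shortly after the backup (the one-day window is an arbitrary choice):

-- List the device type and handle of recent backup pieces, so a piece that
-- silently went to DISK instead of SBT_TAPE stands out immediately.
SELECT device_type,
       handle,
       TO_CHAR(completion_time, 'DD-MM-YYYY_HH24-MI') AS completed
FROM   v$backup_piece
WHERE  completion_time > SYSDATE - 1
ORDER  BY completion_time DESC;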

 

Conclusion

 

As you saw, a tricky bug can change everything that we already know (and need to do) in a routine daily activity. One simple "close" of a PDB inside the run block can break the backup/restore policy that you have relied on for decades. Besides that, even without "RMAN-06100: no channel to restore a backup" or similar errors like "RMAN-06012: channel: CH1 not allocated", you can still have problems. As the two examples above show, the only difference between them is whether the PDB is closed before or after the channel allocation inside the run block.
Luckily, if you are running an affected version (everything below 19.2), the workaround is very simple to deploy.
 
This post was published in my personal blog too: http://www.fernandosimon.com/blog/rman-allocate-channel-cdb-and-close-bug/
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


Shrink ASM Diskgroup and Exadata Grid Disks
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Shrink ASM Diskgroup and Exadata Grid Disks

Here I will cover the shrink of an ASM diskgroup in an Exadata environment running VMs. The process is the opposite of what I wrote in the previous post, but it has a tricky part that demands attention to avoid errors. The same points that you checked when extending are still valid: the number of cells, the disks per cell, the ASM mirroring, and the VM that you want to change remain important, but now there is more to consider. Besides that, the post shows how to verify whether something in the ASM internal extent map can block the shrink, and how to "fix" it.
In this scenario, I will reduce the size of the grid disks linked with diskgroup DATAC8 (which runs in the cluster for VM #08).
And before continuing, be aware of the Exadata disk division:

 

ASM Extent Map and Moves

So, the current space usage for DATAC8 is:
 

 

ASMCMD> lsdg

State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  NORMAL  N         512             512   4096  4194304  29921280  29790140           498688        14645726              0             Y  DATAC8/

MOUNTED  NORMAL  N         512             512   4096  4194304   6144000   6142020           102400         3019810              0             N  RECOC8/

ASMCMD>




SQL> select name, total_mb, free_mb, total_mb - free_mb used_mb, round(100*free_mb/total_mb,2) pct_free

  2  from v$asm_diskgroup

  3  order by 1;




NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE

------------------------------ ---------- ---------- ---------- ----------

DATAC8                           29921280   29790140     131140      99.56

RECOC8                            6144000    6142020       1980      99.97




SQL>


As you can see above, the raw size is around 29.2 TB (14.6 TB after the ASM NORMAL mirroring) and the current free space is around 14.3 TB. So, there is plenty of room for the reduction in this case. Unfortunately, having 99% of free space does not mean that I can instantly shrink it to 1 TB.
This happens because the diskgroup is probably fragmented, and something may be allocated at the end of some disk, which will prevent me from reducing that disk. I will not dig into this topic, but I will show you how to check it. If you want more details, you can read these two articles: http://asmsupportguy.blogspot.com/2011/06/asm-file-extent-map.html and http://asmsupportguy.blogspot.com/2012/10/where-is-my-data.html.
The idea is to use the ASM extent map from X$KFFXP to discover the object that has the maximum AU_KFFXP on some disk (whichever one it is). So, I ran:

 

SQL> select GROUP_NUMBER, name from v$asm_diskgroup;




GROUP_NUMBER NAME

------------ ------------------------------

           1 DATAC8

           2 RECOC8




SQL>

SQL> select VALUE from V$ASM_ATTRIBUTE where NAME='au_size' and GROUP_NUMBER=1;




VALUE

--------------------------------------------------------------------------------

4194304




SQL>

SQL> select max(AU_KFFXP) from X$KFFXP where GROUP_KFFXP=1;




MAX(AU_KFFXP)

-------------

       114195




SQL>

SQL> select NUMBER_KFFXP from X$KFFXP where AU_KFFXP = 114195;




NUMBER_KFFXP

------------

         262




SQL>

SQL> select name from v$asm_alias where FILE_NUMBER = 262 and GROUP_NUMBER = 1;




NAME

----------------------------------------------------------------------

group_1.262.983716383




SQL>



Above I discovered that file 262 (which is a redo log, as I will show below) is allocated in allocation unit 114195 of diskgroup 1 (which has, as is the default for Exadata, an allocation unit of 4 MB). Doing a little math (114195*4 MB), this means that the file sits at an offset of roughly 446 GB on some disk. And if I try to reduce the disks below this value, I will receive an error. You can also derive this directly from the extent map, as in the sketch below.
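A minimal sketch of that check, assuming diskgroup number 1 and the 4 MB AU size confirmed above in V$ASM_ATTRIBUTE:

-- Report the highest allocated AU per ASM file in diskgroup 1 and its
-- approximate offset in GB (4 MB per AU assumed).
SELECT number_kffxp                    AS file_number,
       MAX(au_kffxp)                   AS max_au,
       ROUND(MAX(au_kffxp) * 4 / 1024) AS approx_offset_gb
FROM   x$kffxp
WHERE  group_kffxp = 1
GROUP  BY number_kffxp
ORDER  BY max_au DESC;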
To solve this we need to move files; in this case, the file belongs to the MGMTDB:

 

 

ASMCMD> ls -l +DATAC8/_MGMTDB/ONLINELOG/

Type       Redund  Striped  Time             Sys  Name

ONLINELOG  MIRROR  COARSE   MAY 03 10:00:00  Y    group_1.262.983716383

ONLINELOG  MIRROR  COARSE   MAY 03 10:00:00  Y    group_2.264.983716383

ONLINELOG  MIRROR  COARSE   MAY 03 10:00:00  Y    group_3.263.983716383

ASMCMD>

ASMCMD> ls -l +DATAC8

Type      Redund  Striped  Time         Sys  Name

                                        Y    ASM/

                                        Y    _MGMTDB/

                                        Y    exa-cl8/

PASSWORD  HIGH    COARSE   JAN 20 2017  N    orapwasm => +DATAC8/ASM/PASSWORD/pwdasm.256.933784119

PASSWORD  HIGH    COARSE   AUG 09 2018  N    orapwasm_backup => +DATAC8/ASM/PASSWORD/pwdasm.1471.983713235

ASMCMD> ls -l +RECOC8

ASMCMD>


To move the MGMTDB we can follow the steps from note 2065175.1 and use the script mdbutil.pl. In this case, I moved it to RECOC8:

 

[grid@exa01vm08 +ASM1]$ /tmp/MGMTDB/mdbutil.pl --mvmgmtdb --target=+RECOC8 -debug

mdbutil.pl version : 1.98

2019-06-21 11:43:33: D Executing: /u01/app/18.0.0/grid/bin/srvctl status diskgroup -g RECOC8

2019-06-21 11:43:34: D Exit code: 0

2019-06-21 11:43:34: D Output of last command execution:

Disk Group RECOC8 is running on exa01vm08

2019-06-21 11:43:34: D Executing: /u01/app/18.0.0/grid/bin/srvctl status mgmtdb

2019-06-21 11:43:35: D Exit code: 0

2019-06-21 11:43:35: D Output of last command execution:





2019-06-21 11:53:20: D Executing: /u01/app/18.0.0/grid/bin/crsctl query crs activeversion

2019-06-21 11:53:20: D Exit code: 0

2019-06-21 11:53:20: D Output of last command execution:

Oracle Clusterware active version on the cluster is [18.0.0.0.0]

2019-06-21 11:53:20: I Starting the Cluster Health Analysis Resource

2019-06-21 11:53:20: D Executing: /u01/app/18.0.0/grid/bin/srvctl start cha

2019-06-21 11:53:22: D Exit code: 0

2019-06-21 11:53:22: D Output of last command execution:

2019-06-21 11:53:22: I MGMTDB Successfully moved to +RECOC8!

[grid@exa01vm08 +ASM1]$


The output above was cropped; you can see the raw output here. After that, running the same query as above, I checked the extent map again:

 

SQL> select max(AU_KFFXP) from X$KFFXP where GROUP_KFFXP=1;




MAX(AU_KFFXP)

-------------

       114132




SQL> select NUMBER_KFFXP from X$KFFXP where AU_KFFXP = 114132;




NUMBER_KFFXP

------------

         255




SQL> select name from v$asm_alias where FILE_NUMBER = 255 and GROUP_NUMBER = 1;




NAME

----------------------------------------------------------------------

REGISTRY.255.933784121




SQL>

So, one more file to move, and now it is related to the OCR. To move it I did:

 

[root@exa01vm08 ~]# export ORACLE_HOME=/u01/app/18.0.0/grid

[root@exa01vm08 ~]# export PATH=$ORACLE_HOME/bin:$PATH

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# ocrconfig -add +RECOC8

[root@exa01vm08 ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          4

         Total space (kbytes)     :     491684

         Used space (kbytes)      :      88624

         Available space (kbytes) :     403060

         ID                       :  354072626

         Device/File Name         :    +DATAC8

                                    Device/File integrity check succeeded

         Device/File Name         :    +RECOC8

                                    Device/File integrity check succeeded




                                    Device/File not configured




                                    Device/File not configured




                                    Device/File not configured




         Cluster registry integrity check succeeded




         Logical corruption check succeeded




[root@exa01vm08 ~]#

[root@exa01vm08 ~]# ocrconfig -delete +DATAC8

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

         Version                  :          4

         Total space (kbytes)     :     491684

         Used space (kbytes)      :      88624

         Available space (kbytes) :     403060

         ID                       :  354072626

         Device/File Name         :    +RECOC8

                                    Device/File integrity check succeeded




                                    Device/File not configured




                                    Device/File not configured




                                    Device/File not configured




                                    Device/File not configured




         Cluster registry integrity check succeeded




         Logical corruption check succeeded




[root@exa01vm08 ~]#


Above I added a new location for the OCR files (RECOC8) and deleted the old one (DATAC8). We still have more to move (voting disks, older OCR backups, and the ASM password file). To move the voting disks I did:

 

[root@exa01vm08 ~]# crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------
  1. ONLINE 077367b255804f0abf804a3a3ca8045d (o/200.200.10.11;200.200.10.12/DATAC8_CD_02_exaceladm04) [DATAC8]
  2. ONLINE 59ef748a7d2a4f43bffb54a90fe1b1a9 (o/200.200.10.13;200.200.10.14/DATAC8_CD_02_exaceladm05) [DATAC8]
  3. ONLINE cbaad30809f94fdcbfe5d60f6529ba63 (o/200.200.10.7;200.200.10.8/DATAC8_CD_02_exaceladm02) [DATAC8]
Located 3 voting disk(s).

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# crsctl replace votedisk +RECOC8

Successful addition of voting disk 081e9e767bc44ff2bff6067229378db5.

Successful addition of voting disk 82649e3309d34fa0bf4fd3c89c93e42f.

Successful addition of voting disk 57f3ec44b44b4fcdbf35e716e13011e9.

Successful deletion of voting disk 077367b255804f0abf804a3a3ca8045d.

Successful deletion of voting disk 59ef748a7d2a4f43bffb54a90fe1b1a9.

Successful deletion of voting disk cbaad30809f94fdcbfe5d60f6529ba63.

Successfully replaced voting disk group with +RECOC8.

CRS-4266: Voting file(s) successfully replaced

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# crsctl query css votedisk

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------
  1. ONLINE 081e9e767bc44ff2bff6067229378db5 (o/200.200.10.11;200.200.10.12/RECOC8_CD_02_exaceladm04) [RECOC8]
  2. ONLINE 82649e3309d34fa0bf4fd3c89c93e42f (o/200.200.10.9;200.200.10.10/RECOC8_CD_02_exaceladm03) [RECOC8]
  3. ONLINE 57f3ec44b44b4fcdbf35e716e13011e9 (o/200.200.10.13;200.200.10.14/RECOC8_CD_03_exaceladm05) [RECOC8]
Located 3 voting disk(s).

[root@exa01vm08 ~]#


And for the OCR backups:

 

[root@exa01vm08 ~]# ocrconfig -showbackuploc

The Oracle Cluster Registry backup location is [+DATAC8].

[root@exa01vm08 ~]# ocrconfig -backuploc +RECOC8

[root@exa01vm08 ~]#

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# ocrconfig -showbackuploc

The Oracle Cluster Registry backup location is [+RECOC8].

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# ocrconfig -manualbackup




exa01vm08     2019/06/21 12:08:05     +RECOC8:/exa-cl8/OCRBACKUP/backup_20190621_120805.ocr.4722.1011528485     671056737




exa01vm08     2018/08/16 12:38:11     +DATAC8:/exa-cl8/OCRBACKUP/backup_20180816_123811.ocr.1511.984314291     671056737




exa01vm08     2018/08/09 15:56:08     +DATAC8:/exa-cl8/OCRBACKUP/backup_20180809_155608.ocr.1508.983721369     2960767134

[root@exa01vm08 ~]#



 

After taking one more backup, I manually deleted the old OCR backups. And to move the ASM password file:

 

[grid@exa01vm08 +ASM1]$ asmcmd pwget --asm

+DATAC8/orapwASM

[grid@exa01vm08 +ASM1]$ srvctl config asm -detail

ASM home: <CRS home>

Password file: +DATAC8/orapwASM

Backup of Password file: +DATAC8/orapwASM_backup

ASM listener: LISTENER

ASM is enabled.

ASM is individually enabled on nodes:

ASM is individually disabled on nodes:

ASM instance count: ALL

Cluster ASM listener: ASMNET1LSNR_ASM

[grid@exa01vm08 +ASM1]$

[grid@exa01vm08 +ASM1]$

[grid@exa01vm08 +ASM1]$ asmcmd pwmove --asm +DATAC8/orapwASM +RECOC8/orapwASM -f

moving +DATAC8/orapwASM -> +RECOC8/orapwASM

[grid@exa01vm08 +ASM1]$

[grid@exa01vm08 +ASM1]$ srvctl config asm -detail

ASM home: <CRS home>

Password file: +RECOC8/orapwASM

Backup of Password file: +DATAC8/orapwASM_backup

ASM listener: LISTENER

ASM is enabled.

ASM is individually enabled on nodes:

ASM is individually disabled on nodes:

ASM instance count: ALL

Cluster ASM listener: ASMNET1LSNR_ASM

[grid@exa01vm08 +ASM1]$

[grid@exa01vm08 +ASM1]$ asmcmd pwmove --asm +DATAC8/orapwASM_backup +RECOC8/orapwASM_backup -f

moving +DATAC8/orapwASM_backup -> +RECOC8/orapwASM_backup

[grid@exa01vm08 +ASM1]$

[grid@exa01vm08 +ASM1]$


Maybe you don't need to move everything; I recommend that after every move you check the ASM extent map and verify whether the highest allocated AU is already below the minimum size per disk that you want (see the sketch below).
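A minimal sketch of that verification, assuming diskgroup number 1, the 4 MB AU size, and a target of 30 GB per disk (the size used later in this post):

-- Flag the diskgroup if any allocation unit still sits beyond the planned
-- per-disk size; shrinking below that point would fail.
SELECT MAX(au_kffxp) * 4 / 1024 AS highest_au_offset_gb,
       CASE
         WHEN MAX(au_kffxp) * 4 / 1024 > 30
           THEN 'Data beyond target size - move it first'
         ELSE 'Safe to shrink to 30 GB per disk'
       END                      AS shrink_check
FROM   x$kffxp
WHERE  group_kffxp = 1;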
Since I changed a lot of things, I restarted the cluster on both nodes:

 

 

[root@exa01vm08 ~]# crsctl stop cluster -all

CRS-2673: Attempting to stop 'ora.crsd' on 'exa01vm08'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'exa01vm08'



CRS-2673: Attempting to stop 'ora.diskmon' on 'exa01vm08'

CRS-2677: Stop of 'ora.diskmon' on 'exa01vm08' succeeded

[root@exa01vm08 ~]#

[root@exa01vm08 ~]# crsctl start cluster -all

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'exa01vm08'





CRS-2672: Attempting to start 'ora.crsd' on 'exa01vm08'

CRS-2676: Start of 'ora.crsd' on 'exa01vm08' succeeded

[root@exa01vm08 ~]#

And after all of these moves, the extent map showed:

 

 

SQL> select max(AU_KFFXP) from X$KFFXP where GROUP_KFFXP=1;




MAX(AU_KFFXP)

-------------

        15149




SQL>

SQL> select NUMBER_KFFXP from X$KFFXP where AU_KFFXP = 15149;




NUMBER_KFFXP

------------

           1




SQL>
Basically, this means that something is allocated at around 60 GB into some disk. But it points to file #1. So, to check what this is, you can query X$KFFXP:

 

SQL> select NUMBER_KFFXP "ASM file number",

  2  DECODE (NUMBER_KFFXP,

  3          1, 'File directory'

  4          , 2, 'Disk directory'

  5          , 3, 'Active change directory'

  6          , 4, 'Continuing operations directory'

  7          , 5, 'Template directory'

  8          , 6, 'Alias directory'

  9          , 7, 'ADVM file directory'

 10          , 8, 'Disk free space directory'

 11          , 9, 'Attributes directory'

 12          , 10, 'ASM User directory'

 13          , 11, 'ASM user group directory'

 14          , 12, 'Staleness directory'

 15          , 253, 'spfile for ASM instance'

 16          , 254, 'Stale bit map space registry '

 17          , 255, 'Oracle Cluster Repository registry') "ASM metadata file name",

 18  count(AU_KFFXP) "Allocation units"

 19  from X$KFFXP

 20  where GROUP_KFFXP=1

 21  group by NUMBER_KFFXP

 22  order by 1;




ASM file number ASM metadata file name             Allocation units

--------------- ---------------------------------- ----------------

              1 File directory                                   15

              2 Disk directory                                    3

              3 Active change directory                          69

              4 Continuing operations directory                   6

              5 Template directory                                3

              6 Alias directory                                   3

              8 Disk free space directory                         3

              9 Attributes directory                              3

             12 Staleness directory                               3

             13                                                   3

             16                                                   3




ASM file number ASM metadata file name             Allocation units

--------------- ---------------------------------- ----------------

            120                                                   3

            121                                                 180

            253 spfile for ASM instance                           2

            254 Stale bit map space registry                     18

            255 Oracle Cluster Repository registry               83

            262                                                  99

            321                                                 185

            428                                                 221

            492                                                 305

            551                                                 129

            584                                                 705




ASM file number ASM metadata file name             Allocation units

--------------- ---------------------------------- ----------------

            740                                                1865

           1003                                                 105

           1417                                                   2

           4708                                                   6

           4710                                                   6

           4713                                                   6

           4716                                                   6

           4719                                                   6

           4722                                                   6

           4731                                                  52

           4733                                                  52




ASM file number ASM metadata file name             Allocation units

--------------- ---------------------------------- ----------------

           4736                                                  52

           4739                                                 379

           4742                                                 379

           4744                                                  52

           4747                                                 379

           4750                                                  24

           4753                                                 129

           4756                                                5125

           4759                                                7685

           4762                                                1029

           4765                                                 517




ASM file number ASM metadata file name             Allocation units


--------------- ---------------------------------- ----------------

           4768                                                  52

           4771                                                   4

           4773                                               11819




47 rows selected.




SQL>



In this case, it is the ASM file directory, and I will not try to move it; in my case it is not even possible, because it does not appear in the ASMCMD commands. Basically, it is an internal ASM file/directory that we cannot see. Since I know that the diskgroup is empty and the remaining allocation belongs to ASM internals, I continued.
Remember, at the end of the procedure, after shrinking the grid disks (which I will show later), to move these files back to the original diskgroup. I will not cover this move here in the post, but the steps are the same as the ones I showed before.

 

Shrink in ASM

The shrinking part starts in ASM. It is simple: you just need to define the value that you want per disk and reduce the disks to it. The most critical part here is to choose a disk size that is aligned with the 16 MB boundary of the grid disk. Unlike when you are increasing the space, for a shrink you define the size on the ASM side first and only afterwards go to the grid disks. And since the storage cell rounds the grid disk value down to the nearest 16 MB, you can end up misaligned, with an ASM disk larger than its grid disk (and this will be REALLY BAD). A simple trick to avoid the error is to think directly in GB.
For the 16 MB explanation, you can check the Exadata docs https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-administering-asm.html#GUID-42DA2512-667D-443C-93C5-6E5110DFAE21:
Find the closest 16 MB boundary for the new grid disk size. If you do not perform this check, then the cell will round down the grid disk size to the nearest 16 MB boundary automatically, and you could end up with a mismatch in size between the Oracle ASM disks and the grid disks.
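A minimal sketch of that boundary check, assuming the 30720 MB (30 GB) candidate size used below; any whole-GB value passes, since 1024 MB is a multiple of 16 MB:

-- Check whether a candidate disk size in MB is aligned to the 16 MB boundary
-- before using it in ALTER DISKGROUP ... RESIZE and ALTER GRIDDISK.
SELECT 30720 AS candidate_mb,
       CASE WHEN MOD(30720, 16) = 0
            THEN 'aligned'
            ELSE 'NOT aligned - cell will round down'
       END   AS boundary_check
FROM   dual;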
Since I decided to reduce DATAC8 to around 1.8 TB, I set each disk to 30 GB:

 

SQL> select ((30*12)*5) as sizeGB FROM dual;




    SIZEGB

----------

      1800




SQL>


Remember that at the beginning I said you need to know your environment? Here I have 12 disks per cell and 5 cells:

 

SQL> set linesize 250

SQL> select dg.name, d.failgroup, d.state, d.header_status, d.mount_status, d.mode_status, count(1) num_disks

  2  from v$asm_disk d, v$asm_diskgroup dg

  3  where d.group_number = dg.group_number

  4  and dg.name IN ('DATAC8')

  5  group by dg.name, d.failgroup, d.state, d.header_status, d.mount_status, d.mode_status

  6  order by 1,2,3;




NAME   FAILGROUP                      STATE    HEADER_STATU MOUNT_S MODE_ST  NUM_DISKS

------ ------------------------------ -------- ------------ ------- ------- ----------

DATAC8 EXACELADM01                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC8 EXACELADM02                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC8 EXACELADM03                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC8 EXACELADM04                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC8 EXACELADM05                    NORMAL   MEMBER       CACHED  ONLINE          12




SQL>



After choosing the size that you want, you need to check that all the grid disks are online. It is not recommended to do the shrink with missing disks or faulty failgroups. To check both at the same time you can run:

 

SQL> col path format a100

SQL> select dg.name, d.failgroup, d.path

  2  from v$asm_disk d, v$asm_diskgroup dg

  3  where d.group_number = dg.group_number

  4  and dg.name IN ('DATAC8') and state = 'NORMAL'

  5  order by 1,2,3;




NAME                           FAILGROUP                      PATH

------------------------------ ------------------------------ ----------------------------------------------------------------------------------------------------

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_00_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_01_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_02_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_03_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_04_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_05_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_06_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_07_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_08_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_09_exaceladm01

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_10_exaceladm01




NAME                           FAILGROUP                      PATH

------------------------------ ------------------------------ ----------------------------------------------------------------------------------------------------

DATAC8                         EXACELADM01                    o/200.200.10.5;200.200.10.6/DATAC8_CD_11_exaceladm01

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_00_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_01_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_02_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_03_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_04_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_05_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_06_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_07_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_08_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_09_exaceladm02




NAME                           FAILGROUP                      PATH

------------------------------ ------------------------------ ----------------------------------------------------------------------------------------------------

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_10_exaceladm02

DATAC8                         EXACELADM02                    o/200.200.10.7;200.200.10.8/DATAC8_CD_11_exaceladm02

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_00_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_01_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_02_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_03_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_04_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_05_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_06_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_07_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_08_exaceladm03




NAME                           FAILGROUP                      PATH

------------------------------ ------------------------------ ----------------------------------------------------------------------------------------------------

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_09_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_10_exaceladm03

DATAC8                         EXACELADM03                    o/200.200.10.9;200.200.10.10/DATAC8_CD_11_exaceladm03

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_00_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_01_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_02_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_03_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_04_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_05_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_06_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_07_exaceladm04




NAME                           FAILGROUP                      PATH

------------------------------ ------------------------------ ----------------------------------------------------------------------------------------------------

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_08_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_09_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_10_exaceladm04

DATAC8                         EXACELADM04                    o/200.200.10.11;200.200.10.12/DATAC8_CD_11_exaceladm04

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_00_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_01_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_02_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_03_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_04_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_05_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_06_exaceladm05




NAME                           FAILGROUP                      PATH

------------------------------ ------------------------------ ----------------------------------------------------------------------------------------------------

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_07_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_08_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_09_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_10_exaceladm05

DATAC8                         EXACELADM05                    o/200.200.10.13;200.200.10.14/DATAC8_CD_11_exaceladm05




60 rows selected.




SQL>


After all the checks (16 MB alignment, missing disks, and online failgroups) you can do the resize in ASM:

 

SQL> alter diskgroup DATAC8 resize all size 30720M rebalance power 1024;




Diskgroup altered.



As you can see above, I specified the size in MB, and I did this to show you where the error can occur. If I had defined (as an example) the disk size as 30700M, it would not be aligned to 16 MB (30700/16 = 1918.75), the grid disk would end up at 30688M, and if ASM allocated something at the end of the disk you would corrupt it.
After executing the change in ASM, only continue once there is nothing left in v$asm_operation for the diskgroup.
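A minimal sketch of that check, assuming DATAC8 is group number 1 as shown earlier:

-- Only proceed to the grid disk shrink when this returns no rows
-- (i.e. the rebalance triggered by the resize has finished).
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation
WHERE  group_number = 1;

Once the rebalance is done, ASM shows the new size for your diskgroup: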

 

SQL> select name, total_mb, free_mb, total_mb - free_mb used_mb, round(100*free_mb/total_mb,2) pct_free

  2  from v$asm_diskgroup

  3  order by 1;




NAME                             TOTAL_MB    FREE_MB    USED_MB   PCT_FREE

------------------------------ ---------- ---------- ---------- ----------

DATAC8                            1843200    1837896       5304      99.71

RECOC8                            6144000    6016048     127952      97.92




SQL>



Shrink for GRID DISK

After shrinking on the ASM side, you need to reduce the grid disks on the storage side to release the space back to the celldisk. The procedure is the same as for the increase: you use ALTER GRIDDISK to specify the new value.
Just to show the starting point, before the shrink in the storage cell I have the grid disk defined with 487 GB and the celldisk with 346 GB free:

 

CellCLI> list griddisk where name = 'DATAC8_CD_00_exaceladm01' detail;

         name:                   DATAC8_CD_00_exaceladm01

         asmDiskGroupName:       DATAC8

         asmDiskName:            DATAC8_CD_00_EXACELADM01

         asmFailGroupName:       EXACELADM01

         availableTo:

         cachedBy:               FD_00_exaceladm01

         cachingPolicy:          default

         cellDisk:               CD_00_exaceladm01

         comment:                "Cluster exa-cl8 diskgroup DATAC8"

         creationTime:           2017-01-20T16:19:29+01:00

         diskType:               HardDisk

         errorCount:             0

         id:                     0ddfb7c0-1351-4df3-b5d6-82d3bbffa6e2

         size:                   487G

         status:                 active




CellCLI>




CellCLI> list celldisk where name = 'CD_00_exaceladm01' detail;

         name:                   CD_00_exaceladm01

         comment:

         creationTime:           2016-11-29T10:44:55+01:00

         deviceName:             /dev/sda

         devicePartition:        /dev/sda3

         diskType:               HardDisk

         errorCount:             0

         freeSpace:              346.0625G

         id:                     f73cfdb7-aa40-47d9-99e0-e39e456b0b55

         physicalDisk:           PUUK3V

         size:                   7.1192474365234375T

         status:                 normal




CellCLI>



As before, you have two options: execute it manually disk by disk, or use a script with dcli. Below you can see that I created and called the script (I cropped the output, but you can see the raw execution here):

 

[DOM0 - root@exadbadm01 tmp]$  vi Change_Disk_Size_Of_DATAC8_Cluster_To_30G.sh

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$  chmod +x Change_Disk_Size_Of_DATAC8_Cluster_To_30G.sh

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$  cat Change_Disk_Size_Of_DATAC8_Cluster_To_30G.sh

dcli -l root -c exaceladm01 cellcli -e ALTER GRIDDISK DATAC8_CD_00_EXACELADM01 size=30720M;

dcli -l root -c exaceladm02 cellcli -e ALTER GRIDDISK DATAC8_CD_00_EXACELADM02 size=30720M;





dcli -l root -c exaceladm02 cellcli -e ALTER GRIDDISK DATAC8_CD_11_EXACELADM02 size=30720M;

dcli -l root -c exaceladm03 cellcli -e ALTER GRIDDISK DATAC8_CD_11_EXACELADM03 size=30720M;

dcli -l root -c exaceladm04 cellcli -e ALTER GRIDDISK DATAC8_CD_11_EXACELADM04 size=30720M;

dcli -l root -c exaceladm05 cellcli -e ALTER GRIDDISK DATAC8_CD_11_EXACELADM05 size=30720M;

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$




[DOM0 - root@exadbadm01 tmp]$  ./Change_Disk_Size_Of_DATAC8_Cluster_To_30G.sh

exaceladm01: GridDisk DATAC8_CD_00_exaceladm01 successfully altered

exaceladm02: GridDisk DATAC8_CD_00_exaceladm02 successfully altered





exaceladm02: GridDisk DATAC8_CD_11_exaceladm02 successfully altered

exaceladm03: GridDisk DATAC8_CD_11_exaceladm03 successfully altered

exaceladm04: GridDisk DATAC8_CD_11_exaceladm04 successfully altered

exaceladm05: GridDisk DATAC8_CD_11_exaceladm05 successfully altered

[DOM0 - root@exadbadm01 tmp]$

 

Note again that I used the value defined in MB, 30720M in this case. Again, be careful with the 16 MB alignment. After the change, the storage cell shows:

 

CellCLI> list griddisk where name = 'DATAC8_CD_00_exaceladm01' detail;

         name:                   DATAC8_CD_00_exaceladm01

         asmDiskGroupName:       DATAC8

         asmDiskName:            DATAC8_CD_00_EXACELADM01

         asmFailGroupName:       EXACELADM01

         availableTo:

         cachedBy:               FD_00_exaceladm01

         cachingPolicy:          default

         cellDisk:               CD_00_exaceladm01

         comment:                "Cluster exa-cl8 diskgroup DATAC8"

         creationTime:           2017-01-20T16:19:29+01:00

         diskType:               HardDisk

         errorCount:             0

         id:                     0ddfb7c0-1351-4df3-b5d6-82d3bbffa6e2

         size:                   30G

         status:                 active




CellCLI> list celldisk where name = 'CD_00_exaceladm01' detail;

         name:                   CD_00_exaceladm01

         comment:

         creationTime:           2016-11-29T10:44:55+01:00

         deviceName:             /dev/sda

         devicePartition:        /dev/sda3

         diskType:               HardDisk

         errorCount:             0

         freeSpace:              803.0625G

         id:                     f73cfdb7-aa40-47d9-99e0-e39e456b0b55

         physicalDisk:           PUUK3V

         size:                   7.1192474365234375T

         status:                 normal




CellCLI>


Conclusion

Executing a shrink on Exadata is not something that you do every day in your daily tasks. I suppose it is not even a quarterly task, because today it is always "add, add, add". Compared to increasing space, the shrink is trickier, and you really need to take care of more steps. Maybe you need to move some data, restart the cluster, and move data again.
But the most critical part is the 16 MB alignment, even more than when you add space. This is important because for a shrink you define the value in two places: in ASM and in the grid disk. If you choose a bad value, the grid disk size can differ from the ASM size and you will corrupt something. As I said, if you think directly in GB instead of MB for disk sizes, you are safer, because whole-GB values are always compatible with the 16 MB boundary.
 
This post was published in my personal blog too: http://www.fernandosimon.com/blog/shrink-asm-diskgroup-and-exadata-grid-disks/
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


Increase Size For Exadata Grid Disks
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Increase Size For Exadata Grid Disks

A quick article about a maintenance task for Oracle Exadata when you are using OVM and have divided your storage cell disks among the VMs. Here I will show you how to extend your grid disks to add more space to your ASM diskgroup.
The first thing is to be aware of your environment. Before anything else, you need to know the points below, because they are important to calculate the new space and to avoid doing something wrong:
  • Number of cells in your appliance.
  • Number of disks for each cell.
  • Mirroring for your ASM.
  • The VM that you want to add the space.
The “normal” Exadata storage cell has 12 disks; the Extreme Flash version uses 8 disks per storage cell. If you have doubts about how many disks you have per storage cell, you can connect to each one and check the number of celldisks. And before continuing, be aware of the Exadata disk division:

 

 

To do this change we execute three major steps: ASM, Exadata Storage, and ASM again.

 

For ASM

 

Inside ASM we can use this query to collect some information about the diskgroups:

 

SQL> col name format a12 head 'Disk Group';

SQL> col total_mb format 999999999 head 'Total GB|Raw';

SQL> col free_mb format 999999999 head 'Free GB|Raw';

SQL> col avail_mb format 999999999 head 'Total GB|Usable';

SQL> col usable_mb format 999999999 head 'Free GB|Usable';

SQL> col usable_mb format 999999999 head 'Free GB|Usable';

SQL> col cdisks format 99999 head 'Cell|Disksl';

SQL>

SQL> select a.name,a.total_mb,a.free_mb,a.type,

  2  decode(a.type,'NORMAL',a.total_mb/2/1024,'HIGH',a.total_mb/3/1024) avail_mb,

  3  decode(a.type,'NORMAL',a.free_mb/2/1024,'HIGH',a.free_mb/3/1024) usable_mb,

  4  count(b.path) cdisks

  5  from v$asm_diskgroup a, v$asm_disk b

  6  where a.group_number=b.group_number

  7  group by a.name,a.total_mb,a.free_mb,a.type,

  8  decode(a.type,'NORMAL',a.total_mb/2/1024,'HIGH',a.total_mb/3/1024) ,

  9  decode(a.type,'NORMAL',a.free_mb/2/1024,'HIGH',a.free_mb/3/1024)

 10  order by 2,1

 11  /




               Total GB    Free GB          Total GB    Free GB   Cell

Disk Group          Raw        Raw TYPE       Usable     Usable Disksl

------------ ---------- ---------- ------ ---------- ---------- ------

RECOC3          4239360    2465540 NORMAL       2070       1204     60

DATAC3         15790080    2253048 NORMAL       7710       1100     60




SQL>

SQL> select dg.name, d.failgroup, d.state, d.header_status, d.mount_status, d.mode_status, count(1) num_disks

  2  from v$asm_disk d, v$asm_diskgroup dg

  3  where d.group_number = dg.group_number

  4  and dg.name IN ('DATAC3')

  5  group by dg.name, d.failgroup, d.state, d.header_status, d.mount_status, d.mode_status

  6  order by 1,2,3;




NAME   FAILGROUP                      STATE    HEADER_STATU MOUNT_S MODE_ST  NUM_DISKS

------ ------------------------------ -------- ------------ ------- ------- ----------

DATAC3 EXACELADM01                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC3 EXACELADM02                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC3 EXACELADM03                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC3 EXACELADM04                    NORMAL   MEMBER       CACHED  ONLINE          12

DATAC3 EXACELADM05                    NORMAL   MEMBER       CACHED  ONLINE          12




SQL>

 

 

With that, I have three important pieces of information: the number of disks (60), the redundancy type (NORMAL), and the current total size (raw value – 15790080 MB). To discover the size of each disk in ASM (here I do it manually instead of checking v$asm_disk, just to show the steps and be more didactic) you can divide the raw space by the number of disks:

 

SQL> SELECT (15790080/1024)/60 as gbDISK FROM dual; 

GBDISK
----------      
257

SQL>

 

So, each disk has 257 GB of raw space. Since the current free space is 1100 GB (1.07 TB) and we want to add 2 TB more, we need to increase the value for each disk.

 

The formula is simple: NewValue(in GB) * #DisksPerCell * #Cells. Here I chose 330 GB per disk, so the new raw size for the diskgroup will be:

 

SQL> SELECT (330*12*5) AS newsizeGB FROM dual;  

NEWSIZEGB
----------    
19800 

SQL>

 

But this value is not the usable size, because it does not consider the mirroring type, so we need to divide it. If it is NORMAL, divide by 2; if HIGH, divide by 3. To compare the old and the new expected values:

 

SQL> SELECT (257*12*5)/2 as actualsizeGBUsable, (330*12*5)/2 AS newsizeGBUSable FROM dual;

ACTUALSIZEGBUSABLE NEWSIZEGBUSABLE
------------------ ---------------    
          7710            9900

SQL>


 

So, the new total usable space for the diskgroup will be around 9.6 TB (9900 GB), and we will add around 2.1 TB of free space. You will probably need to run these formulas more than once to find the desired size per disk.

 

I start by calculating per disk (and afterwards derive the final diskgroup size) instead of starting with the diskgroup size (and dividing it to discover the disk size), because this way the disk size will always be correct and aligned with the storage cell grid disk. Remember that grid disks are aligned to 16 MB and, if you start from an arbitrary target size for the ASM diskgroup, you can end up with a per-grid-disk value that is not 16 MB aligned. As an example, if I start by choosing 20 TB for the diskgroup, the size per disk will be (20*1024)/60 = 341.33 GB, which is not aligned to 16 MB. The sketch below derives the same numbers directly from the ASM views.
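A minimal sketch of that calculation, assuming the DATAC3 diskgroup from this example and the candidate of 330 GB per disk:

-- Derive the current per-disk raw size and the projected usable size
-- for a candidate per-disk size of 330 GB, respecting the mirroring type.
SELECT dg.name,
       COUNT(d.path)                                      AS disks,
       ROUND(dg.total_mb / COUNT(d.path) / 1024)          AS current_gb_per_disk,
       DECODE(dg.type, 'NORMAL', 330 * COUNT(d.path) / 2,
                       'HIGH',   330 * COUNT(d.path) / 3) AS projected_usable_gb
FROM   v$asm_diskgroup dg, v$asm_disk d
WHERE  dg.group_number = d.group_number
AND    dg.name = 'DATAC3'
GROUP  BY dg.name, dg.total_mb, dg.type;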

 

For the 16 MB explanation, you can check the Exadata docs:

 

Find the closest 16 MB boundary for the new grid disk size. If you do not perform this check, then the cell will round down the grid disk size to the nearest 16 MB boundary automatically, and you could end up with a mismatch in size between the Oracle ASM disks and the grid disks.

 

For Storage Cell

 

On the Exadata side, first check some information about the current state of the grid disks. Here I connect to one cell (if you want, you can use dcli to call all cells) and check some information about the grid disks:

 

 

[root@exaceladm01 ~]# cellcli
CellCLI: Release 18.1.6.0.0 - Production on Fri Jun 21 16:57:49 CEST 2019

Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.

CellCLI> list celldisk

CD_00_exaceladm01       normal  
CD_01_exaceladm01       normal        
CD_02_exaceladm01       normal        
CD_03_exaceladm01       normal        
CD_04_exaceladm01       normal        
CD_05_exaceladm01       normal        
CD_06_exaceladm01       normal        
CD_07_exaceladm01       normal        
CD_08_exaceladm01       normal        
CD_09_exaceladm01       normal        
CD_10_exaceladm01       normal        
CD_11_exaceladm01       normal        
FD_00_exaceladm01       normal        
FD_01_exaceladm01       normal        
FD_02_exaceladm01       normal
FD_03_exaceladm01       normal

CellCLI> list celldisk CD_02_exaceladm01 detail

name:                   CD_02_exaceladm01        
comment:        
creationTime:           2016-11-29T10:23:35+01:00        
deviceName:             /dev/sdc        
devicePartition:        /dev/sdc        
diskType:               HardDisk        
errorCount:             0        
freeSpace:              2.6688079833984375T        
id:                     d57b31bb-6043-4cea-b992-ef8075f42e77        
physicalDisk:           PUT81V        
size:                   7.152252197265625T        
status:                 normal


CellCLI> 

CellCLI> list griddisk where name like 'DATAC3.*';

DATAC3_CD_00_exaceladm01        active        
DATAC3_CD_01_exaceladm01        active        
DATAC3_CD_02_exaceladm01        active        
DATAC3_CD_03_exaceladm01        active        
DATAC3_CD_04_exaceladm01        active        
DATAC3_CD_05_exaceladm01        active        
DATAC3_CD_06_exaceladm01        active        
DATAC3_CD_07_exaceladm01        active        
DATAC3_CD_08_exaceladm01        active        
DATAC3_CD_09_exaceladm01        active        
DATAC3_CD_10_exaceladm01        active        
DATAC3_CD_11_exaceladm01        active

CellCLI>

CellCLI> list griddisk where name = 'DATAC3_CD_04_exaceladm01' detail;

name:                   DATAC3_CD_04_exaceladm01        
asmDiskGroupName:       DATAC3        
asmDiskName:            DATAC3_CD_04_EXACELADM01        
asmFailGroupName:       EXACELADM01        
availableTo:        
cachedBy:               FD_00_exaceladm01        
cachingPolicy:          default        
cellDisk:               CD_04_exaceladm01        
comment:                "Cluster exa-cl3 diskgroup DATAC3"        
creationTime:           2017-01-20T17:23:21+01:00        
diskType:               HardDisk        
errorCount:             0        
id:                     2cb2aecb-cfa1-4282-b90d-3a08ed079778        
size:                   257G        
status:                 active

CellCLI>


Here you can see that I checked:
  •  The celldisk info for this cell.
  •  The details of one celldisk (look at the freeSpace attribute to verify that you have free space).
  •  The grid disks for this cell.
  •  The details of one grid disk (note that the size is the same value that I calculated manually).
This part was just to check and show you how to verify some information; with time, you will not need to do this for every maintenance (because you will be familiar with the environment). Be careful: if the grid disk space division differs between storage cells, you need to check that you have available space on all your storage celldisks. A complementary check from the ASM side is sketched below.
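A minimal sketch of a complementary ASM-side check, assuming the DATAC3 diskgroup from this example; it highlights failgroups whose disks do not all share the same size:

-- List the distinct disk sizes per failgroup; more than one row per failgroup
-- means the grid disks are not uniform and each cell needs its own free-space check.
SELECT dg.name, d.failgroup, d.total_mb, COUNT(*) AS disks
FROM   v$asm_disk d, v$asm_diskgroup dg
WHERE  d.group_number = dg.group_number
AND    dg.name = 'DATAC3'
GROUP  BY dg.name, d.failgroup, d.total_mb
ORDER  BY dg.name, d.failgroup;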

 

To expand the grid disks you have two options: enter each cell and expand them manually one by one, or create a script and call it with dcli (the option that I chose). So, create a script that executes the ALTER GRIDDISK command with the new desired size. Just remember to be careful and choose the correct grid disks (here it is for VM 03, which means DATAC3):

 

[DOM0 - root@exadbadm01 tmp]$  vi Change_Disk_Size_Of_DATAC3_Cluster_To_330G.sh

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$  cat Change_Disk_Size_Of_DATAC3_Cluster_To_330G.sh

dcli -l root -c exaceladm01 cellcli -e ALTER GRIDDISK DATAC3_CD_00_EXACELADM01 size=330G;

dcli -l root -c exaceladm02 cellcli -e ALTER GRIDDISK DATAC3_CD_00_EXACELADM02 size=330G;



dcli -l root -c exaceladm02 cellcli -e ALTER GRIDDISK DATAC3_CD_11_EXACELADM02 size=330G;

dcli -l root -c exaceladm03 cellcli -e ALTER GRIDDISK DATAC3_CD_11_EXACELADM03 size=330G;

dcli -l root -c exaceladm04 cellcli -e ALTER GRIDDISK DATAC3_CD_11_EXACELADM04 size=330G;

dcli -l root -c exaceladm05 cellcli -e ALTER GRIDDISK DATAC3_CD_11_EXACELADM05 size=330G;

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$  chmod +x Change_Disk_Size_Of_DATAC3_Cluster_To_330G.sh

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$  ./Change_Disk_Size_Of_DATAC3_Cluster_To_330G.sh

exaceladm01: GridDisk DATAC3_CD_00_exaceladm01 successfully altered

exaceladm02: GridDisk DATAC3_CD_00_exaceladm02 successfully altered

exaceladm03: GridDisk DATAC3_CD_00_exaceladm03 successfully altered





exaceladm01: GridDisk DATAC3_CD_11_exaceladm01 successfully altered

exaceladm02: GridDisk DATAC3_CD_11_exaceladm02 successfully altered

exaceladm03: GridDisk DATAC3_CD_11_exaceladm03 successfully altered

exaceladm04: GridDisk DATAC3_CD_11_exaceladm04 successfully altered

exaceladm05: GridDisk DATAC3_CD_11_exaceladm05 successfully altered

[DOM0 - root@exadbadm01 tmp]$

[DOM0 - root@exadbadm01 tmp]$




Above I cropped the output to reduce the size of the post, but you can check the raw output here. After the change you can check the grid disk info:

 

[root@exaceladm01 ~]# cellcli

CellCLI: Release 18.1.6.0.0 - Production on Fri Jun 21 17:24:03 CEST 2019




Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.




CellCLI> list griddisk where name = 'DATAC3_CD_04_exaceladm01' detail;

         name:                   DATAC3_CD_04_exaceladm01

         asmDiskGroupName:       DATAC3

         asmDiskName:            DATAC3_CD_04_EXACELADM01

         asmFailGroupName:       EXACELADM01

         availableTo:

         cachedBy:               FD_00_exaceladm01

         cachingPolicy:          default

         cellDisk:               CD_04_exaceladm01

         comment:                "Cluster exa-cl3 diskgroup DATAC3"

         creationTime:           2017-01-20T17:23:21+01:00

         diskType:               HardDisk

         errorCount:             0

         id:                     2cb2aecb-cfa1-4282-b90d-3a08ed079778

         size:                   330G

         status:                 active




CellCLI> list celldisk CD_02_exaceladm01 detail

         name:                   CD_02_exaceladm01

         comment:

         creationTime:           2016-11-29T10:23:35+01:00

         deviceName:             /dev/sdc

         devicePartition:        /dev/sdc

         diskType:               HardDisk

         errorCount:             0

         freeSpace:              2.5975189208984375T

         id:                     d57b31bb-6043-4cea-b992-ef8075f42e77

         physicalDisk:           PUT81V

         size:                   7.152252197265625T

         status:                 normal




CellCLI> exit

quitting




[root@exaceladm01 ~]#


For ASM – Part #2
After you change the grid disks on the storage side, you can go back to ASM and extend the diskgroup:

 

SQL> ALTER DISKGROUP DATAC3 RESIZE ALL;




Diskgroup altered.




SQL>


And you can check that the size was already added (note that the values match what we calculated before):

 

SQL> select a.name,a.total_mb,a.free_mb,a.type,

  2  decode(a.type,'NORMAL',a.total_mb/2/1024,'HIGH',a.total_mb/3/1024) avail_mb,

  3  decode(a.type,'NORMAL',a.free_mb/2/1024,'HIGH',a.free_mb/3/1024) usable_mb,

  4  count(b.path) cdisks

  5  from v$asm_diskgroup a, v$asm_disk b

  6  where a.group_number=b.group_number

  7  group by a.name,a.total_mb,a.free_mb,a.type,

  8  decode(a.type,'NORMAL',a.total_mb/2/1024,'HIGH',a.total_mb/3/1024) ,

  9  decode(a.type,'NORMAL',a.free_mb/2/1024,'HIGH',a.free_mb/3/1024)

 10  order by 2,1

 11  /




               Total GB    Free GB          Total GB    Free GB   Cell

Disk Group          Raw        Raw TYPE       Usable     Usable  Disks

------------ ---------- ---------- ------ ---------- ---------- ------

RECOC3          4239360    2465540 NORMAL       2070       1204     60

DATAC3         20275200    6738128 NORMAL       9900       3290     60




SQL>
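As a quick sanity check on those numbers: 60 grid disks x 330 GB = 19,800 GB = 20,275,200 MB of raw space, which is exactly the TOTAL_MB reported for DATAC3 above (and with NORMAL redundancy that gives the 9,900 GB usable shown in the output).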


And you can query v$asm_operation to check the rebalance progress:

 

SQL> select operation, EST_MINUTES, EST_RATE, EST_WORK, sofar from v$asm_operation;




OPERA EST_MINUTES   EST_RATE   EST_WORK      SOFAR

----- ----------- ---------- ---------- ----------

REBAL           0          0          0          0

REBAL           0      46158      18490       5761

REBAL           0          0          0          0

REBAL           0          0          0          0

REBAL           0          0          0          0




SQL>




Conclusion
As you can see, the steps are simple; you just need to take care of some details of your environment: the number of disks per cell, the number of cells, and the VM/cluster where you want to add the space are critical to make the correct change. Remember to keep the grid disk size aligned to 16 MB: when you are growing the grid disks this is not a big deal, but if you shrink a misaligned grid disk it can break your ASM diskgroup.
Note that the only thing you effectively change is the size of the grid disks; everything else follows automatically from that. The ASM diskgroup can grow up to the maximum space made available, and the space is usable right after the resize command.
The steps above are more detailed than what you will do in daily maintenance, but they help you to understand most of the details of this kind of change.
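Just to illustrate the alignment point, below is a minimal sketch of the arithmetic, using hypothetical values (3 cells, 12 grid disks per cell and a target raw size); adjust the numbers to your own environment before using the result in the cellcli resize:

#!/bin/bash
# Hypothetical values, adjust to your environment.
CELLS=3                 # storage cells serving the diskgroup
DISKS_PER_CELL=12       # grid disks per cell for this diskgroup
TARGET_DG_GB=11880      # desired total raw diskgroup size, in GB

# Raw size that each grid disk must provide, in MB.
PER_DISK_MB=$(( TARGET_DG_GB * 1024 / (CELLS * DISKS_PER_CELL) ))

# Round down to a multiple of 16 MB before using it in the
# "cellcli -e alter griddisk ... size=<value>M" command.
ALIGNED_MB=$(( PER_DISK_MB / 16 * 16 ))

echo "Resize each grid disk to ${ALIGNED_MB}M"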
 
Reference:
How to Resize Grid Disks in Exadata (Doc ID 2176737.1) – https://support.oracle.com/epmos/faces/DocContentDisplay?id=2176737.1
Resizing Grid Disks – https://docs.oracle.com/en/engineered-systems/exadata-database-machine/sagug/exadata-administering-asm.html#GUID-570A0C37-907C-4417-BC93-AC4ABAF7E3AD 

 

This post is published in my personal blog too: http://www.fernandosimon.com/blog/increase-size-for-exadata-grid-disks/
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose; specific data and identifications were removed to reach a generic audience and to be useful.”


ODA, JSON and Flash
Category: Engineer System Author: Fernando Simon (Board Member) Date: 6 years ago Comments: 0

ODA, JSON and Flash

Recently, in March, I reimaged an X5-2 HA ODA and saw a strange behavior during the diskgroup creation that I could not reproduce afterwards (because it would involve reimaging again). Basically, the FLASH diskgroup was not created.
But last May I reimaged another ODA using the same patch/image version (18.3.0.0, Patch 27604623) and it was possible to verify it again. In both cases I created the appliance with the CLI "odacli create-appliance" using a JSON file, because the network uses VLANs (which is impossible to configure using the web interface), and both appliances are identical (X5-2 HA with SSDs for RECO and FLASH).
To reimage, I followed the steps in the docs for this version and used the ISO to do the bare metal procedure. If you look in the docs at the options for storage (check here), you can see that there is not a single reference to the FLASH diskgroup (or that you need to create it). Checking the readme/reference JSON files that exist in the folder "/opt/oracle/dcs/sample", the file "sample-oda-ha-json-readme.txt" says:
 

grid:

    diskGroup: (ODA HA contains DATA, RECO and REDO Diskgroups)

        diskgroupName: DATA|RECO|REDO

        redundancy: Normal|High

            If the system has less than 5 NVMe storage devices, redundancy is Normal.

            If the system has 5 or more NVMes, then Normal or High are supported.

        diskPercentage: Percentage of NVMe drive capacity is used for this particular diskgroup.

 

And the example that comes with the image (sample-oda-ha.json):

  "grid" : {

    "diskGroup" : [ {

      "diskGroupName" : "DATA",

      "redundancy" : "",

      "diskPercentage" : 80

    }, {

      "diskGroupName" : "RECO",

      "redundancy" : "",

      "diskPercentage" : 20

    }, {

      "diskGroupName" : "REDO",

      "diskPercentage" : 100,

      "redundancy" : "HIGH"

    } ],

 
As you can see there is no reference to FLASH, nor any hint that it is possible to use it (because the options are diskgroupName: DATA|RECO|REDO). Based on that (and assuming that FLASH would be created automatically, since the X5-2 has these SSD disks dedicated to it), I made my JSON file with:

  "grid" : {

    "diskGroup" : [ {

      "diskGroupName" : "DATA",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 90

    }, {

      "diskGroupName" : "RECO",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 10

    }, {

      "diskGroupName" : "REDO",

      "redundancy" : "HIGH",

      "disks" : null,

      "diskPercentage" : null

    } ],

 

And after the create-appliance command I got just these diskgroups in ASM:

ASMCMD> lsdg

State    Type    Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  NORMAL  N         512             512   4096  4194304  108019712  108000740          6751232        50624754              0             Y  DATA/

MOUNTED  NORMAL  N         512             512   4096  4194304   11997184   11995456           749824         5622816              0             N  RECO/

MOUNTED  HIGH    N         512             512   4096  4194304     762880     749908           190720          186396              0             N  REDO/

ASMCMD>

 

And the describe-system reports:

[root@ODA1 ~]# /opt/oracle/dcs/bin/odacli describe-system

 

Appliance Information

----------------------------------------------------------------

                     ID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

               Platform: X5-2-HA

        Data Disk Count: 24

         CPU Core Count: 6

                Created: March 26, 2019 11:56:24 AM CET

 

System Information

----------------------------------------------------------------

                   Name: ODA

            Domain Name: XYZ.XYZ

              Time Zone: Europe/Luxembourg

             DB Edition: EE

            DNS Servers: 1.1.1.1 2.2.2.2

            NTP Servers: 3.3.3.3

 

Disk Group Information

----------------------------------------------------------------

DG Name                   Redundancy                Percentage

------------------------- ------------------------- ------------

Data                      Normal                    90

Reco                      Normal                    10

Redo                      High                      100

 

[root@ODA1 ~]#

 

After that, the only option was to create the FLASH diskgroup manually:

SQL> create diskgroup FLASH normal redundancy

  2     DISK 'AFD:SSD_E0_S16_1320409344P1' NAME SSD_E0_S16_1320409344P1

  3     ,  'AFD:SSD_E0_S17_1320408816P1' NAME SSD_E0_S17_1320408816P1

  4     ,  'AFD:SSD_E0_S18_1320404784P1' NAME SSD_E0_S18_1320404784P1

  5     ,  'AFD:SSD_E0_S19_1320406740P1' NAME SSD_E0_S19_1320406740P1

  6     attribute  'COMPATIBLE.ASM' = '18.0.0.0.0'

  7     , 'COMPATIBLE.rdbms' = '12.1.0.2'

  8     , 'compatible.advm' = '18.0.0.0'

  9     , 'au_size'='4M'

  10  ;

 

Diskgroup created.

 

SQL> select NAME, SECTOR_SIZE, ALLOCATION_UNIT_SIZE, COMPATIBILITY, DATABASE_COMPATIBILITY from v$asm_diskgroup;

 

NAME                           SECTOR_SIZE ALLOCATION_UNIT_SIZE COMPATIBILITY                                                DATABASE_COMPATIBILITY

—————————— ———– ——————– ———————————————————— ————————————————————

DATA                                   512              4194304 18.0.0.0.0                                                   12.1.0.2.0

RECO                                   512              4194304 18.0.0.0.0                                                   12.1.0.2.0

REDO                                   512              4194304 18.0.0.0.0                                                   12.1.0.2.0

FLASH                                  512              4194304 18.0.0.0.0                                                   12.1.0.2.0

 

SQL>

Reimaging again, or cleaning up everything, sometimes is not an option. Changing the JSON file to set the parameter "dbOnFlashStorage" is not correct either, since it will fail again, as you can see in the release notes for version 12.2.1.3.0, which say in the workaround: "Provide information for DATA, REDO, RECO, and FLASH disk groups at the time of provisioning. If you have provisioned your environment without creating the FLASH disk group, then create the FLASH disk group manually. This issue is tracked with Oracle bug 27721310."
 
Second try
For the second reimage I specified my JSON file as:

  "grid" : {

    "diskGroup" : [ {

      "diskGroupName" : "DATA",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 90

    }, {

      "diskGroupName" : "RECO",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : 10

    }, {

      "diskGroupName" : "REDO",

      "redundancy" : "HIGH",

      "disks" : null,

      "diskPercentage" : null

    }, {

      "diskGroupName" : "FLASH",

      "redundancy" : "NORMAL",

      "disks" : null,

      "diskPercentage" : null

    } ],

 

And describe-system now says:

[root@OAK1 ~]# /opt/oracle/dcs/bin/odacli describe-system

 

Appliance Information

----------------------------------------------------------------

                     ID: XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

               Platform: X5-2-HA

        Data Disk Count: 24

         CPU Core Count: 36

                Created: May 22, 2019 11:03:36 AM CEST

 

System Information

----------------------------------------------------------------

                   Name: OAK

            Domain Name: XYZ.XYZ

              Time Zone: Europe/Luxembourg

             DB Edition: EE

            DNS Servers: 1.1.1.1 2.2.2.2

            NTP Servers: 3.3.3.3

 

Disk Group Information

----------------------------------------------------------------

DG Name                   Redundancy                Percentage

------------------------- ------------------------- ------------

Data                      Normal                    90

Reco                      Normal                    10

Redo                      High                      100

Flash                     Normal                    100

 

[root@OAK1 ~]#

And I can confirm that everything is as expected in ASM:

ASMCMD> lsdg

State    Type    Rebal  Sector  Logical_Sector  Block       AU   Total_MB    Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  NORMAL  N         512             512   4096  4194304  108019712  108007188          6751232        50627978              0             Y  DATA/

MOUNTED  NORMAL  N         512             512   4096  4194304    1525760    1518672           381440          568616              0             N  FLASH/

MOUNTED  NORMAL  N         512             512   4096  4194304   11997184   11995736           749824         5622956              0             N  RECO/

MOUNTED  HIGH    N         512             512   4096  4194304     762880     749908           190720          186396              0             N  REDO/

ASMCMD>

If you compare the job report for the new create-appliance execution:

Post cluster OAKD configuration          May 22, 2019 11:53:45 AM CEST       May 22, 2019 11:59:11 AM CEST       Success

Disk group 'RECO'creation                May 22, 2019 11:59:20 AM CEST       May 22, 2019 11:59:38 AM CEST       Success

Disk group 'REDO'creation                May 22, 2019 11:59:38 AM CEST       May 22, 2019 11:59:47 AM CEST       Success

Disk group 'FLASH'creation               May 22, 2019 11:59:47 AM CEST       May 22, 2019 11:59:58 AM CEST       Success

Volume 'commonstore'creation             May 22, 2019 11:59:58 AM CEST       May 22, 2019 12:00:48 PM CEST       Success

It is different from the previous execution:

Post cluster OAKD configuration          March 26, 2019 1:47:06 PM CET       March 26, 2019 1:52:32 PM CET       Success

Disk group 'RECO'creation                March 26, 2019 1:52:41 PM CET       March 26, 2019 1:52:57 PM CET       Success

Disk group 'REDO'creation                March 26, 2019 1:52:57 PM CET       March 26, 2019 1:53:07 PM CET       Success

Volume 'commonstore'creation             March 26, 2019 1:53:07 PM CET       March 26, 2019 1:53:53 PM CET       Success

 

Know your environment and your target
So, be careful during your ODA deployment: even if the docs say something, check again to avoid errors. It is important to know your environment and to be aware of what is expected as a result. In this case, even though the docs say nothing about specifying the FLASH diskgroup, it is needed, and if you forget it, no FLASH for you.
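As a minimal sketch of that kind of check, assuming the provisioning file is /tmp/oda_ha.json (a hypothetical path) and that Python is available on the node, you could validate the JSON before running "odacli create-appliance":

#!/bin/bash
# Hypothetical path to the provisioning JSON, adjust to your deployment.
JSON_FILE=/tmp/oda_ha.json

# Validate the JSON syntax (smart quotes copied from a browser will break the parsing).
if python -m json.tool "$JSON_FILE" > /dev/null 2>&1; then
    echo "JSON syntax OK"
else
    echo "JSON syntax error in $JSON_FILE"
fi

# Warn if no FLASH diskgroup is defined in the grid section.
if grep -q '"diskGroupName" *: *"FLASH"' "$JSON_FILE"; then
    echo "FLASH diskgroup is defined"
else
    echo "WARNING: no FLASH diskgroup defined - it will not be created automatically"
fi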
 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose; specific data and identifications were removed to reach a generic audience and to be useful.”

 

 

Fernando Simon – http://www.fernandosimon.com/blog
