
19c Grid Infrastructure Upgrade

 
Upgrading the Grid Infrastructure is one of those activities that usually gets postponed because it touches a sensitive area that, when it does not work, causes a long downtime until it is fixed. But in recent versions it is not a complicated task, and if you follow the basic rules it works without problems.
Here I will show a small example of how to upgrade the GI from 18.6.0 to 19.5. The steps below were executed on an Exadata running version 19.2.7.0.0.191012 with GI 18.6.0.0, but they can be followed in any environment that supports Oracle GI.

 

Requirements and Plan

 

The basis for every upgrade is, first, checking the requirements. This means not just the operating system version, but also the current GI version (the one that will be upgraded). For Exadata and GI 19 you can follow the note 19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running on Oracle Linux (Doc ID 2542082.1), which can be used as a guide even in non-Exadata environments.
Every upgrade or patch needs to be planned to be successful. Checking the current versions and taking backups beforehand are the basis. There is no single rule of thumb, but a base plan includes:
  • Check operating system requirements (like version).
  • Check current GI requirements (maybe install additional patches, like 28553832 for Exadata).
  • Check system requirements (space and others).
  • Check Oracle Home requirements (maybe you need to apply additional patches to the Oracle Homes).
  • Download the software and copy it to the server that will be patched (this includes the patches to be applied over the new GI).
  • Prepare X access (you can use xauth and redirection through ssh); a sketch follows below.
You can follow the note mentioned previously to see all the requirements. Another REQUIRED reference is Mike Dietrich's blog, which is dedicated to Oracle upgrade practices. Since I am doing this on Oracle Exadata, I downloaded the compatible versions from the note Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1).
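As an illustration of the last item of the plan, one common way to prepare X access is plain ssh X11 forwarding. This is only a sketch; hostname and user are placeholders:

# connect with X11 forwarding; ssh creates the xauth cookie and sets DISPLAY
ssh -X grid@exacl04n1
# on the server, confirm that the forwarding works
xauth list
echo $DISPLAY     # something like localhost:10.0, to be exported later before gridSetup.sh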

 

Pre-Upgrade

 

When you start the upgrade phase you should already have all the requirements OK (operating system, previous GI patched). The steps here are based on the note 19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running on Oracle Linux (Doc ID 2542082.1), but I will include some extra details and tricks that will help you.

 

Memory parameters

It is necessary to have at least 3GB of SGA to correctly upgrade (and run) GI 19c. Check this in the current GI to guarantee that it is OK (executed as the grid user):

 

[grid@exacl04n1 +ASM1]$ sqlplus / as sysdba




SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 24 14:57:24 2020

Version 18.6.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Version 18.6.0.0.0




SQL> show parameter memory_target




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

memory_target                        big integer 0

SQL> show parameter memory_max_target




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

memory_max_target                    big integer 0

SQL> show parameter use_large_pages




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

use_large_pages                      string      TRUE

SQL> show parameter sga_max_size




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

sga_max_size                         big integer 3G

SQL> show parameter sga_target




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

sga_target                           big integer 3G

SQL> exit

Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Version 18.6.0.0.0

[grid@exacl04n1 +ASM1]$
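If sga_target/sga_max_size are below 3G, the sketch below shows one way to raise them. This is only a sketch: the values are examples, it changes the ASM SPFILE, and it requires a rolling restart of the GI stack on each node before you continue.

# run as the grid user against the local ASM instance; adjust the values to your environment
echo "ALTER SYSTEM SET sga_max_size=3G SCOPE=SPFILE SID='*';
ALTER SYSTEM SET sga_target=3G SCOPE=SPFILE SID='*';" | sqlplus -s / as sysasm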

 

Folders and Unzip

Create all the folders for the Oracle GI installation. This usually means /u01/app/19.0.0.0/grid for GI on Exadata.

 

[root@exacl04n1 ~]# dcli -g /zfs/EXADATA/exacl04.txt -l root mkdir -p /u01/app/19.0.0.0/grid

[root@exacl04n1 ~]# dcli -g /zfs/EXADATA/exacl04.txt -l root chown grid /u01/app/19.0.0.0/grid

[root@exacl04n1 ~]# dcli -g /zfs/EXADATA/exacl04.txt -l root chgrp -R oinstall /u01/app/19.0.0.0/grid

[root@exacl04n1 ~]#

 

On Exadata, I used dcli to do that automatically on every node of the cluster. But remember to check the ownership of the folder on every node.
After you create the folders, on the first node (where +ASM1 runs), you can unzip the 19c GI base release (downloaded from edelivery.oracle.com) into the GI destination folder. Remember to execute this as the user that will run the GI; in this case I did it with the grid user and unzipped directly from an NFS folder (but you can copy it to a local folder and unzip from there):

 

[grid@exacl04n1 +ASM1]$ unzip -q /nfs/19c/Exadata-Patch/19c-Grid/V982068-01.zip -d /u01/app/19.0.0.0/grid

[grid@exacl04n1 +ASM1]$

 

runcluvfy

The cluster verify utility is great because it automates a lot of tests and checks. It needs to be executed before you continue because it will report success (if everything is OK) or the errors and points to be fixed.
In this case I ran (the output was cropped, but the full output can be checked here):

 

[grid@exacl04n1 +ASM1]$ cd /u01/app/19.0.0.0/grid/

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/18.0.0/grid -dest_crshome /u01/app/19.0.0.0/grid -dest_version 19.0.0.0.0 -fixup -verbose




Verifying Physical Memory ...

  Node Name     Available                 Required                  Status

  ------------  ------------------------  ------------------------  ----------

  exacl04n2     84.1372GB (8.8224204E7KB)  8GB (8388608.0KB)         passed

  exacl04n1     84.1372GB (8.8224204E7KB)  8GB (8388608.0KB)         passed



Verifying ASM Filter Driver configuration ...PASSED

Verifying Systemd login manager IPC parameter ...PASSED




Pre-check for cluster services setup was successful.

Verifying RPM Package Manager database ...INFORMATION

PRVG-11250 : The check "RPM Package Manager database" was not performed because

it needs 'root' user privileges.







CVU operation performed:      stage -pre crsinst

Date:                         Jan 24, 2020 3:17:24 PM

CVU home:                     /u01/app/19.0.0.0/grid/

User:                         grid

[grid@exacl04n1 +ASM1]$

 

As you can see, it was a success. One example of a failed report is:

 

Checks did not pass for the following nodes:

        zeroing02,zeroing01







Failures were encountered during execution of CVU verification request "stage -pre crsinst".




Verifying Node Connectivity ...FAILED

zeroing02: PRVG-11068 : Highly Available IP (HAIP) is enabled on the nodes

            "zeroing01,zeroing02".




zeroing01: PRVG-11068 : Highly Available IP (HAIP) is enabled on the nodes

            "zeroing01,zeroing02".




Verifying RPM Package Manager database ...INFORMATION

PRVG-11250 : The check "RPM Package Manager database" was not performed because

it needs 'root' user privileges.







CVU operation performed:      stage -pre crsinst

Date:                         Dec 9, 2019 1:38:46 PM

CVU home:                     /u01/app/19.0.0.0/grid/

User:                         oracle

[root@zeroing01 ~]#

 

Prepare Patch for the New GI

It is possible to patch the new GI even before installing it (the same was possible with 12c and 18c too), and it is recommended. Fortunately, you don't need to patch manually: you just call gridSetup.sh with -applyRU if it is a Release Update (or -applyRUR for an RUR patch).
Since some files will be overwritten, I created on node 1 a new folder to back them up (executed as the same owner as the GI):

 

[grid@exacl04n1 +ASM1]$ mkdir /u01/patches/grid-19c

[grid@exacl04n1 +ASM1]$

 

To apply these patches correctly it is necessary to update OPatch in the GI base release (that was unzipped previously). Execute this as the same user that will run the GI:

 

[grid@exacl04n1 +ASM1]$ #backup current opatch to the folder created before

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ cp /u01/app/19.0.0.0/grid/OPatch /u01/patches/grid-19c/OPatch-ORG -R

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ # go to GI home folder

[grid@exacl04n1 +ASM1]$ cd /u01/app/19.0.0.0/grid

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ # unzip the new version of Opatch for 19c

[grid@exacl04n1 +ASM1]$ unzip -qa /zfs/EXADATA_PATCHING/19c/Exadata-Patch/p6880880_190000_Linux-x86-64.zip

replace OPatch/emdpatch.pl? [y]es, [n]o, [A]ll, [N]one, [r]ename: A

[grid@exacl04n1 +ASM1]$

 

As you saw, I created a backup of OPatch and overwrote it with the new version. Remember to execute this with the same user that will run the GI.
Another point is unzipping the patch that you want to apply. Here I used the same folder created before to store the patch. Everything was executed on the first node as the GI user:

 

[grid@exacl04n1 +ASM1]$ cd /u01/patches/grid-19c/

[grid@exacl04n1 +ASM1]$ unzip -qa /zfs/EXADATA_PATCHING/19c/Exadata-Patch/19c-Grid/p30116789_190000_Linux-x86-64.zip

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ ls -l

total 232

drwxr-x---  7 grid oinstall   4096 Oct  9 17:11 30116789

drwxr-x--- 14 grid oinstall   4096 Jan 24 15:23 OPatch-ORG

-rw-rw-r--  1 grid oinstall 225499 Oct 15 13:24 PatchSearch.xml

[grid@exacl04n1 +ASM1]$

 

The version here was downloaded from note 888828.1 for Exadata and it is the GI Release Update 19.5.0.0 – Patch 30116789.

 

Fix known issues

A few known issues need to be fixed before you upgrade the GI. These are based on my experience during past upgrades (they were not reported by the cluster verify utility).

 

Lock for oraInventory

The first is related to the inventory lock files. It is common to have an environment with role separation (where the GI runs with a different user than the Oracle Homes), and it can happen that the lock file ends up with the wrong ownership. To fix it, execute on both nodes (or with dcli on Exadata):

 

[root@exacl04n1 ~]# dcli -g /zfs/EXADATA/exacl04.txt -l root "chown grid:oinstall /u01/app/oraInventory/locks -R"

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# dcli -g /zfs/EXADATA/exacl04.txt -l root "ls -l /u01/app/oraInventory/locks"

exacl04n1: total 0

exacl04n1: -rw-r--r-- 1 grid oinstall 0 Jan 20 13:07 inventory.lock

exacl04n2: total 0

exacl04n2: -rw-r--r-- 1 grid oinstall 0 Jan 21 13:36 inventory.lock

[root@exacl04n1 ~]#

 

ACFS for ghchkpt

It can happen that the GHCHKPT filesystem was removed (intentionally or not) from the current GI home, and during rootupgrade.sh you can hit the error below because it does not exist:

 

2019/12/04 17:09:28 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.

PRCR-1129 : Failed to upgrade resource type ora.rhpserver.type

PRCR-1071 : Failed to register or update resource ora.rhpserver

CRS-2510: Resource 'ora.datac3.ghchkpt.acfs' used in dependency 'hard' does not exist or is not registered.

CRS-2514: Dependency attribute specification 'hard' is invalid in resource 'ora.rhpserver'

2019/12/04 17:11:14 CLSRSC-180: An error occurred while executing the command '/u01/app/19.0.0.0/grid/bin/srvctl upgrade model  -s 18.0.0.0.0 -d 19.0.0.0.0 -p first'

2019/12/04 17:11:17 CLSRSC-694: failed to validate CRS entities for upgrade, aborting the upgrade

2019/12/04 17:11:17 CLSRSC-362: The pre-upgrade checks failed, aborting the upgrade

Died at /u01/app/19.0.0.0/grid/crs/install/crsupgrade.pm line 3772.

[root@exacl03n1 ~]#

 

To avoid this, you need to recreate it manually:

 

[root@exacl04n1 ~]# su - grid

Last login: Fri Jan 24 14:51:16 CET 2020 from 35.213.248.138 on ssh

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ #CHECK if ghchkpt exists. In this case NO

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME

AUX_VOLUMES=

CANONICAL_VOLUME_DEVICE=/dev/asm/acfsoh-256

VOLUME_DEVICE=/dev/asm/acfsoh-256

AUX_VOLUMES=

CANONICAL_VOLUME_DEVICE=/dev/asm/acfsoh-256

VOLUME_DEVICE=/dev/asm/acfsoh-256

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ #Create the volume at ASM. The size it is important

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ asmcmd

ASMCMD>

ASMCMD> volcreate -G DATAC4 -s 1536M ghchkpt

ASMCMD>

ASMCMD> volinfo -G DATAC4 ghchkpt

Diskgroup Name: DATAC4




         Volume Name: GHCHKPT

         Volume Device: /dev/asm/ghchkpt-256

         State: ENABLED

         Size (MB): 1536

         Resize Unit (MB): 512

         Redundancy: MIRROR

         Stripe Columns: 8

         Stripe Width (K): 1024

         Usage:

         Mountpath:




ASMCMD>

ASMCMD> exit

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ #Format the volume as ACFS

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ /sbin/mkfs -t acfs /dev/asm/ghchkpt-256

mkfs.acfs: version                   = 18.0.0.0.0

mkfs.acfs: on-disk version           = 46.0

mkfs.acfs: volume                    = /dev/asm/ghchkpt-256

mkfs.acfs: volume size               = 1610612736  (   1.50 GB )

mkfs.acfs: Format complete.

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$




[root@exacl04n1 ~]#

[root@exacl04n1 ~]# #As root, add the filesystem at current GI

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# export ORACLE_HOME=/u01/app/18.0.0/grid

[root@exacl04n1 ~]# $ORACLE_HOME/bin/srvctl add filesystem -d /dev/asm/ghchkpt-256 -g DATAC4 -v GHCHKPT -m /mnt/oracle/rhpimages/chkbase/ -user oracle,grid

[root@exacl04n1 ~]#

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# #Check that everything is OK

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# $ORACLE_HOME/bin/crsctl stat res -t |grep acfs

ora.datac4.acfsoh.acfs

               ONLINE  ONLINE       exacl04n1                mounted on /u01/acfs

               ONLINE  ONLINE       exacl04n2                mounted on /u01/acfs

ora.datac4.ghchkpt.acfs

[root@exacl04n1 ~]#

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# $ORACLE_HOME/bin/crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME

AUX_VOLUMES=

CANONICAL_VOLUME_DEVICE=/dev/asm/acfsoh-256

VOLUME_DEVICE=/dev/asm/acfsoh-256

AUX_VOLUMES=

CANONICAL_VOLUME_DEVICE=/dev/asm/acfsoh-256

VOLUME_DEVICE=/dev/asm/acfsoh-256

AUX_VOLUMES=

CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-256

VOLUME_DEVICE=/dev/asm/ghchkpt-256

AUX_VOLUMES=

CANONICAL_VOLUME_DEVICE=/dev/asm/ghchkpt-256

VOLUME_DEVICE=/dev/asm/ghchkpt-256

[root@exacl04n1 ~]#

[root@exacl04n1 ~]#

 

The important point here is the size of the GHCHKPT volume, which needs to be 1536M, and it must be registered at the current GI. When registering the filesystem, remember to set the volume name correctly.
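A quick way to double-check the registration is shown below. This is only a sketch, reusing the same device name and the 18c GI ORACLE_HOME exported in the commands above:

# show the filesystem resource configuration registered in the current GI
$ORACLE_HOME/bin/srvctl config filesystem -d /dev/asm/ghchkpt-256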

 

ACFS filesystem and CRS resources

If you have a mountpoint over ACFS, it is recommended to stop it “safely” before executing gridSetup. During the upgrade phase the script will try to shut down the entire cluster stack on the node, and if the unmount of ACFS goes wrong you will receive an error (and this can be boring and stressful to handle). The same is valid for resources that you registered in CRS.
So, just verify ACFS and unmount it (including stopping the ACFS filesystem) and stop any additional CRS resources:

 

[root@exacl04n1 ~]# /u01/app/18.0.0/grid/bin/crsctl stop res flk.mountbind -n exacl04n1

CRS-2673: Attempting to stop 'flk.mountbind' on 'exacl04n1'

CRS-2675: Stop of 'flk.mountbind' on 'exacl04n1' failed

CRS-2679: Attempting to clean 'flk.mountbind' on 'exacl04n1'

CRS-2681: Clean of 'flk.mountbind' on 'exacl04n1' succeeded

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# #Stopped two times (the error above was expected)

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# /u01/app/18.0.0/grid/bin/crsctl stop res flk.mountbind -n exacl04n1

CRS-2500: Cannot stop resource 'flk.mountbind' as it is not running

CRS-4000: Command Stop failed, or completed with errors.

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# #Stop ACFS

[root@exacl04n1 ~]#

[root@exacl04n1 ~]# /u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsoh-256 -n exacl04n1

[root@exacl04n1 ~]#
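Before moving on, it is worth confirming that the filesystem is really offline on the node. A sketch using the same device as above:

# verify the ACFS filesystem resource state on this node
/u01/app/18.0.0/grid/bin/srvctl status filesystem -d /dev/asm/acfsoh-256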

 

Run gridSetup.sh

After preparing everything (patches, folders) and fixing the known issues, we can call gridSetup.sh to install Oracle GI 19c.
The first step is to clear all the current session variables that point to the current GI, and set some others (like DISPLAY for X) before moving to the new GI home to call the grid setup:

 

[root@exacl04n1 ~]# su - grid

.Last login: Fri Jan 24 16:06:12 CET 2020 from 35.213.248.138 on ssh

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ unset ORACLE_HOME

[grid@exacl04n1 +ASM1]$ unset ORACLE_BASE

[grid@exacl04n1 +ASM1]$ unset ORACLE_SID

[grid@exacl04n1 ]$

[grid@exacl04n1 ]$ cd /u01/app/19.0.0.0/grid/

[grid@exacl04n1 ]$

[grid@exacl04n1 ]$ export DISPLAY=35.213.248.116:1.0

[grid@exacl04n1 ]$

 

Now gridSetup.sh can be called, and I do this with some special parameters:
  • -applyRU: This applies the Release Update passed as a parameter BEFORE starting to install the 19c GI.
  • mgmtDB parameters: With 19c the MGMTDB is no longer needed; these parameters disable its installation, and the related options are not even shown during the graphical installation.
So, the call is:

 

[grid@exacl04n1 ]$ ./gridSetup.sh -applyRU /u01/patches/grid-19c/30116789 -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false

Preparing the home to patch...

Applying the patch /u01/patches/grid-19c/30116789...

Successfully applied the patch.

The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2020-01-17_04-49-57PM/installerPatchActions_2020-01-17_04-49-57PM.log

Launching Oracle Grid Infrastructure Setup Wizard...




The response file for this session can be found at:

 /u01/app/19.0.0.0/grid/install/response/grid_2020-01-17_04-49-57PM.rsp

 

As you saw, the output reports that patch 30116789 was installed over the GI home (this took around 10 minutes). Right after that the graphical installer appears and the installation is basically NNF (next, next, finish) until the request for the root upgrade scripts appears:

 

 

rootupgrade.sh – Node 1

At this point it is necessary to execute, node by node, the rootupgrade.sh to “finish” the installation. The rootupgrade.sh executes important steps like:
  • TFA upgrade
  • Upgrade OCR
  • Upgrade ACFS driver
  • Upgrade GI scripts
If something goes wrong here (mainly after step 3), the recommendation is to open an SR to verify the errors and the issue.
Another point is to stop all Oracle Databases running on the node before executing the script (a sketch follows below).
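A sketch of stopping the local database instances (the 12c+ srvctl syntax is assumed; run it from each database's Oracle Home, and the database and node names are examples):

# stop only the instance running on the node being upgraded
srvctl stop instance -db MYDB -node exacl04n1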
So, on the first node, rootupgrade.sh was called:

 

[root@exacl04n1 ~]# /u01/app/19.0.0.0/grid/rootupgrade.sh

Performing root user operation.




The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.0.0.0/grid




Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...




Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/19.0.0.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/exacl04n1/crsconfig/rootcrs_exacl04n1_2020-01-24_04-42-49PM.log

2020/01/24 16:43:14 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.

2020/01/24 16:43:14 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2020/01/24 16:43:14 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.

2020/01/24 16:43:19 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.

2020/01/24 16:43:19 CLSRSC-464: Starting retrieval of the cluster configuration data

2020/01/24 16:47:50 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.

2020/01/24 16:48:25 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2020/01/24 16:50:06 CLSRSC-693: CRS entities validation completed successfully.

2020/01/24 16:50:12 CLSRSC-515: Starting OCR manual backup.

2020/01/24 16:50:48 CLSRSC-516: OCR manual backup successful.

2020/01/24 16:50:57 CLSRSC-486:

 At this stage of upgrade, the OCR has changed.

 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2020/01/24 16:50:57 CLSRSC-541:

 To downgrade the cluster:
  1. All nodes that have been upgraded must be downgraded.
2020/01/24 16:50:57 CLSRSC-542:
  1. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2020/01/24 16:51:07 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2020/01/24 16:51:07 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.

2020/01/24 16:51:08 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.

2020/01/24 16:51:13 CLSRSC-363: User ignored prerequisites during installation

2020/01/24 16:51:25 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.

2020/01/24 16:51:25 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.

2020/01/24 16:55:02 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2020/01/24 16:55:02 CLSRSC-482: Running command: '/u01/app/18.0.0/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'

CRS-1131: The cluster was successfully set to rolling upgrade mode.

2020/01/24 16:55:09 CLSRSC-482: Running command: '/u01/app/19.0.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/18.0.0/grid -oldCRSVersion 18.0.0.0.0 -firstNode true -startRolling false '




ASM configuration upgraded in local node successfully.




2020/01/24 16:55:12 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2020/01/24 16:55:18 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2020/01/24 16:56:08 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

2020/01/24 16:56:11 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.

2020/01/24 16:56:12 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.

2020/01/24 16:56:23 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.

2020/01/24 16:56:23 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.

2020/01/24 16:56:30 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.

2020/01/24 16:56:37 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.

2020/01/24 16:56:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

2020/01/24 16:57:29 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.

2020/01/24 16:58:01 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.

2020/01/24 16:58:21 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.

2020/01/24 17:00:27 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.

Successfully taken the backup of node specific configuration in OCR.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

2020/01/24 17:00:52 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.

2020/01/24 17:00:57 CLSRSC-474: Initiating upgrade of resource types

2020/01/24 17:02:09 CLSRSC-475: Upgrade of resource types successfully initiated.

2020/01/24 17:02:24 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.

2020/01/24 17:02:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@exacl04n1 ~]#

 

After it finishes successfully, you can start the Oracle Databases on this node, or even relocate RAC One Node databases to this node (see the sketch below).
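A relocation sketch for RAC One Node (the 12c+ srvctl syntax is assumed; database and node names are examples, run from the database's Oracle Home):

# move a RAC One Node database onto the already upgraded node
srvctl relocate database -db MYRACONE -node exacl04n1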

 

rootupgrade.sh – Node 2

Here the script is called on the second node. But some details before executing it:
  • Stop and unmount the ACFS filesystems running on the node
  • Stop any additional CRS resources running on this node
  • Stop all databases running on the node (or relocate them to nodes already upgraded)
After checking the points above, you can call the script:

 

[root@exacl04n2 ~]# /u01/app/18.0.0/grid/bin/crsctl stop res flk.mountbind -n exacl04n2

CRS-2673: Attempting to stop 'flk.mountbind' on 'exacl04n2'

CRS-2675: Stop of 'flk.mountbind' on 'exacl04n2' failed

CRS-2679: Attempting to clean 'flk.mountbind' on 'exacl04n2'

CRS-2681: Clean of 'flk.mountbind' on 'exacl04n2' succeeded

[root@exacl04n2 ~]#

[root@exacl04n2 ~]#

[root@exacl04n2 ~]# /u01/app/18.0.0/grid/bin/crsctl stop res flk.mountbind -n exacl04n2

CRS-2500: Cannot stop resource 'flk.mountbind' as it is not running

CRS-4000: Command Stop failed, or completed with errors.

[root@exacl04n2 ~]#

[root@exacl04n2 ~]#

[root@exacl04n2 ~]# /u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsoh-256 -n exacl04n2

[root@exacl04n2 ~]#

[root@exacl04n2 ~]#

[root@exacl04n2 ~]# /u01/app/19.0.0.0/grid/rootupgrade.sh

Performing root user operation.




The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.0.0.0/grid




Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...




Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/19.0.0.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/exacl04n2/crsconfig/rootcrs_exacl04n2_2020-01-24_06-01-55PM.log

2020/01/24 18:02:12 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.

2020/01/24 18:02:12 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2020/01/24 18:02:12 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.

2020/01/24 18:02:13 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.

2020/01/24 18:02:13 CLSRSC-464: Starting retrieval of the cluster configuration data

2020/01/24 18:02:42 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2020/01/24 18:02:42 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.

2020/01/24 18:02:43 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.

2020/01/24 18:02:43 CLSRSC-363: User ignored prerequisites during installation

2020/01/24 18:02:44 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.

2020/01/24 18:02:44 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.




ASM configuration upgraded in local node successfully.




2020/01/24 18:03:01 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2020/01/24 18:04:45 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

2020/01/24 18:04:57 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.

2020/01/24 18:04:57 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.

2020/01/24 18:05:05 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.

2020/01/24 18:05:06 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.

2020/01/24 18:05:07 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.

2020/01/24 18:05:09 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.

2020/01/24 18:05:09 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

2020/01/24 18:05:54 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.

2020/01/24 18:06:18 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2020/01/24 18:06:20 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.

2020/01/24 18:06:34 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.

2020/01/24 18:08:31 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 19 detected.

Successfully taken the backup of node specific configuration in OCR.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

2020/01/24 18:08:46 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.

Start upgrade invoked..

2020/01/24 18:08:50 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2020/01/24 18:08:50 CLSRSC-482: Running command: '/u01/app/19.0.0.0/grid/bin/crsctl set crs activeversion'

Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.

Started to upgrade CSS.

CSS was successfully upgraded.

Started to upgrade Oracle ASM.

Started to upgrade CRS.

CRS was successfully upgraded.

Started to upgrade Oracle ACFS.

Oracle ACFS was successfully upgraded.

Successfully upgraded the active version of Oracle Clusterware.

Oracle Clusterware active version was successfully set to 19.0.0.0.0.

2020/01/24 18:10:02 CLSRSC-479: Successfully set Oracle Clusterware active version

2020/01/24 18:10:04 CLSRSC-476: Finishing upgrade of resource types

2020/01/24 18:10:18 CLSRSC-477: Successfully completed upgrade of resource types

2020/01/24 18:10:45 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.

Successfully updated XAG resources.

2020/01/24 18:11:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@exacl04n2 ~]#

[root@exacl04n2 ~]#

 

As you can see, everything went fine and the cluster was upgraded on both nodes. If you have more nodes, continue with the others.

Continue with Graphical Installation – AND INTENTIONAL ERROR

After that it is possible to continue the installation on the graphical side. But we will hit an intentional error.

 

 

During the Upgrade RHP Repository phase an error will occur, but this is expected. Remember that during gridSetup.sh the mgmtDB parameters were disabled? Since it was not created, the RHP step fails because the mgmtDB was not found (this occurred in all of the 10 clusters that I upgraded recently).
If you check the log you will see details informing that the mgmtDB was not found. To solve it, just click OK and Skip. Note below that the Upgrade RHP step was Ignored.

Post Upgrade

After finishing the upgrade above, everything will be working correctly. But some details need to be checked to deliver everything at 100%.

 

ASM Compatibility

Remember that the ASM compatibility (the compatible.asm attribute) can now be upgraded to allow you to use some new features. Remember, just compatible.asm, and for all diskgroups:

 

[grid@exacl04n1 +ASM1]$ echo "ALTER DISKGROUP DATAC4 SET ATTRIBUTE 'compatible.asm' = '19.3.0.0.0';" | sqlplus -s / as sysasm




Diskgroup altered.




[grid@exacl04n1 +ASM1]$ echo "ALTER DISKGROUP RECOC4 SET ATTRIBUTE 'compatible.asm' = '19.3.0.0.0';" | sqlplus -s / as sysasm




Diskgroup altered.




[grid@exacl04n1 +ASM1]$

 

Inventory Update

Since Oracle 19c, the GI no longer registers the cluster nodes in the oraInventory. So, it is necessary to add them to allow older versions (11, 12 and 18) to see the GI nodes. A simple call resolves this (pay attention to the correct path and node names):

 

[grid@exacl04n1 +ASM1]$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={exacl04n1,exacl04n2}" CRS=true LOCAL_NODE=exacl04n1

Starting Oracle Universal Installer...




Checking swap space: must be greater than 500 MB.   Actual 16174 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

[grid@exacl04n1 +ASM1]$

 

Remove previous GI

Remember to delete (or at least mark in the Oracle inventory) that the previous GI was removed. To do that, you need to call runInstaller (from the previous Oracle Home) with the detachHome option. Be careful to set ORACLE_HOME correctly (and do it on just one node):

 

[grid@exacl04n1 +ASM1]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml |grep grid

<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="6">

<HOME NAME="OraGI19Home1" LOC="/u01/app/19.0.0.0/grid" TYPE="O" IDX="11" CRS="true">

<HOME NAME="OraGridHome" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" REMOVED="T"/>

<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="5" REMOVED="T"/>

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ unset ORACLE_HOME

[grid@exacl04n1 +ASM1]$ export ORACLE_HOME=/u01/app/18.0.0/grid/

[grid@exacl04n1 +ASM1]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=/u01/app/18.0.0/grid

Starting Oracle Universal Installer...




Checking swap space: must be greater than 500 MB.   Actual 16174 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

'DetachHome' was successful.

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml |grep grid

<HOME NAME="OraGI19Home1" LOC="/u01/app/19.0.0.0/grid" TYPE="O" IDX="11" CRS="true">

<HOME NAME="OraGridHome" LOC="/u01/app/12.1.0.2/grid" TYPE="O" IDX="1" REMOVED="T"/>

<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="5" REMOVED="T"/>

<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="6" REMOVED="T"/>

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$

 

After that, you can delete the previous GI Home.
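A possible sketch for the removal itself, once the home is detached and nothing references it anymore (the formal alternative is the deinstall tool shipped inside the old home):

# run as root; double-check the path before removing anything
dcli -g /zfs/EXADATA/exacl04.txt -l root "rm -rf /u01/app/18.0.0/grid"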

 

Others details

To finish, remember to fix the .bash_profile and the oratab (if you have something specific set there). Remember to do this on both nodes:

 

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ cat .bash_profile |grep HOME

export ORACLE_HOME=/u01/app/18.0.0/grid

export PATH=$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ vi .bash_profile

[grid@exacl04n1 +ASM1]$

[grid@exacl04n1 +ASM1]$ cat .bash_profile |grep HOME

export ORACLE_HOME=/u01/app/19.0.0.0/grid

export PATH=$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin

[grid@exacl04n1 +ASM1]$




#################

#Second node

#################




[root@exacl04n2 ~]# su - grid

Last login: Fri Jan 24 18:17:43 CET 2020 from exacl04n1.flisk.rulz.org on ssh

[grid@exacl04n2 +ASM2]$ cat .bash_profile |grep HOME

export ORACLE_HOME=/u01/app/18.0.0/grid

export PATH=$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin

[grid@exacl04n2 +ASM2]$

[grid@exacl04n2 +ASM2]$ vi .bash_profile

[grid@exacl04n2 +ASM2]$

[grid@exacl04n2 +ASM2]$ cat .bash_profile |grep HOME

export ORACLE_HOME=/u01/app/19.0.0.0/grid

export PATH=$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin

[grid@exacl04n2 +ASM2]$

 

Conclusion

Upgrading the GI to 19c is easier than in previous versions. If you follow the requirements correctly, you will probably not see errors. At the beginning of 2018 I did the same for the 18c upgrade (migrating from 12c) and the process was similar; you can check my post Reaching Exadata 18c.
But be careful with ACFS and with the report from the cluster verify script. Its output (even if it still requires additional checks, as I showed before) is a good indication of system health and of “OK to continue”.
The process that I showed above was executed on Exadata, but with small adaptations (mainly for dcli) it can be used in other environments as well.
The post-upgrade has some additional tricks, as I showed in previous posts about Exadata: the workaround for the oracka.ko error and the TFA error after GI upgrade to 19c. Nothing special, but they require additional attention too.

 

 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”



ZDLRA, Protection Policies

For ZDLRA, protection policies play a significant role not just in the appliance management, but in the architecture design too. And usually (and unfortunately) policies do not receive the attention they deserve.
To create a good ZDLRA design and avoid future problems, it is important to understand all the requirements for the protection policies and all their impacts. You can check the official documentation for this, but I will explain in depth the details that can slip past you in the documentation.

 

DBMS_RA.CREATE_PROTECTION_POLICY

Creating the policy is easy: you just need to call DBMS_RA.CREATE_PROTECTION_POLICY and set the parameters:

 

SQL> BEGIN

  2  DBMS_RA.CREATE_PROTECTION_POLICY(

  3      protection_policy_name => 'ZDLRA_BRONZE'

  4      , description => 'Policy ZDLRA MAA BRONZE'

  5      , storage_location_name => 'DELTA'

  6      , recovery_window_goal => INTERVAL '30' DAY

  7      , max_retention_window => INTERVAL '60' DAY

  8      , recovery_window_sbt => INTERVAL '120' DAY

  9      , guaranteed_copy => 'NO'

 10      , allow_backup_deletion => 'YES'

 11  );

 12  END;

 13  /




PL/SQL procedure successfully completed.




SQL>

 

As you can see, the parameters are self-explanatory: you just define the name, the description, and the recovery/retention goals. But these retention windows are important and need some attention. Inside the ZDLRA database, you can check them in the table RASYS.RA_PROTECTION_POLICY.
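A quick way to look at what was stored (a sketch; run on the ZDLRA database node with a user that can read the RASYS schema):

echo "SET LINESIZE 200
SELECT * FROM rasys.ra_protection_policy;" | sqlplus -s / as sysdba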

 

Retention Window

As you saw above, when you create the policy you have three parameters related to the retention window:
  • RECOVERY_WINDOW_GOAL: Defines how long ZDLRA will keep the backups (on the appliance disks) for the databases covered by this policy. In the example above, backups will be kept for 30 days; older backups are not guaranteed and can be deleted.
  • MAX_RETENTION_WINDOW: If ZDLRA has free space, backups can be retained inside ZDLRA up to this many days. In the example above, it will be 60 days. If you do not specify it, backups are kept “as long as there is space”; if you do, ZDLRA will delete everything older than that period.
  • RECOVERY_WINDOW_SBT: The window during which ZDLRA retains backups cloned to tape. In the example above, it will be 120 days; after that the backup is no longer valid and will be expired.
The important point here is understanding the small details. By the ZDLRA rules, it always tries to support (for each database) point-in-time recovery from today back to the recovery window goal. As the documentation says: “For each protected database, the Recovery Appliance attempts to ensure that the oldest backup on disk can support a point-in-time recovery to any time within the specified interval, counting backward from the current time.”.

 

One side effect of the recovery window parameters is that they are global per policy (and not per database). And if you remember, when you enroll a database in ZDLRA you need to define its “reserved_space”, and this value needs to cover the recovery_window_goal. So, if your database changes a lot (or it is a big database) you need to constantly check the recovery window goal and adjust the reserved space for your database. You can read some best practices here (page 15).
MAX_RETENTION_WINDOW is the maximum time that your backups will stay inside ZDLRA. Think of the period between RECOVERY_WINDOW_GOAL and MAX_RETENTION_WINDOW as a bonus: backups in that range are not guaranteed to remain there. If ZDLRA needs to delete some backups (because of lack of space), it will delete backups in this range first. And since the management is based on backupsets, it can happen that one backupset of your database is deleted and you can no longer use that point in time to restore the database.
One detail here is that backups past MAX_RETENTION_WINDOW are forcibly deleted by ZDLRA. So, if RECOVERY_WINDOW_GOAL and MAX_RETENTION_WINDOW are too close (like 10 days for the first and 11 for the second), you can put high pressure on ZDLRA because it will never stop running delete tasks for backups. In the ZDLRA best practices (or the old version) there is some vague indication of how to set it, but the idea is not to be aggressive with this value. From experience, I recommend that MAX_RETENTION_WINDOW be at least 20% higher than RECOVERY_WINDOW_GOAL (and the same for reserved_space: at least 20% higher than the database size). But if you have big databases, this value needs to be higher because the delete tasks will need more time to finish, and you can end up with a never-ending delete queue/load inside ZDLRA.
RECOVERY_WINDOW_SBT defines the period during which backups remain available (for recovery purposes) at the clone destination (tape or cloud). Since these backups are not inside ZDLRA, they do not add to the pressure for space.
More than one Policy

 

For ZDLRA you will probably have more than one policy to protect your databases, since you will probably have databases (PROD, UAT, TST, DEV) with different recovery window requirements. And even inside the same type (like PROD) it is possible to have different requirements (because of legal regulations, as an example), and these lead/force you to create more than one policy.

 

Whatever the case, all databases will “fight each other” for disk space, and if you design your policies badly, or leave a database in the wrong protection policy, you can end up with a system under high pressure for disk usage. ZDLRA will always accept new backups and, if needed, will delete the oldest backups (which is adequate if you consider that the newest data is probably more important). But it is also true that ZDLRA will try to support point-in-time recovery for all databases to reach what was defined in the policy. If you want to control this behavior you can set the parameter GUARANTEED_COPY to YES; doing this, ZDLRA will delete old backups only if they were already copied to tape or replicated.
Don't be afraid to create more than one policy, because moving a database from one policy to another is a simple command (see the sketch below) and it will be more adequate for managing space usage if needed. One drawback of a huge number of protection policies is that clone-to-tape backups are based/scheduled on protection policies: if you have a lot of them, you need to create one clone-to-tape job for each one.
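A minimal sketch of that move, assuming DBMS_RA.UPDATE_DB accepts the protection_policy_name parameter (database name, policy name, password and TNS alias are placeholders):

echo "BEGIN
  DBMS_RA.UPDATE_DB(
      db_unique_name => 'ORCL18'
    , protection_policy_name => 'ZDLRA_BRONZE'
  );
END;
/" | sqlplus -s rasys/rasys@zdlras1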

 

 

Protection Policies and ZDLRA Replication

One important detail is ZDLRA replication and how it interacts with protection policies. This is important because the replication between ZDLRAs is purely based on policies, meaning that it replicates all databases of the protection policy that you defined as a parameter. So, as you can imagine, if you want to replicate just part of your databases between ZDLRAs you need to create a specific protection policy.
Another interesting point is that on both sides of a replicated ZDLRA pair the protection policies can have different recovery window goals. As an example, in the primary site the upstream ZDLRA can have 30 days of recovery window and guaranteed copy set to YES (because this ZDLRA receives more backups), while in the downstream ZDLRA the destination protection policy can have 120 days as the recovery window goal (because this ZDLRA protects fewer databases and the pressure for space usage will be lower).
Let's imagine a protection policy for SILVER databases (https://www.oracle.com/a/tech/docs/maa-overview-onpremise-2019.pdf) where you want to replicate just some of them. In this case both ZDLRAs (upstream and downstream) will have the “normal” silver protection policy (named policy_silver, as an example), as well as another policy just for the replicated silver databases (named policy_replicated_silver).

 

Architecture Design

 

The correct definition of your protection policies is important for ZDLRA maintenance and usability. Designing the policies correctly is important to avoid high pressure on the ZDLRA storage location, even if you start to use (or maintain) an already deployed ZDLRA.
Understanding the recovery window goal and max retention window constraints will help you avoid reaching full space utilization. You don't need to create just one or two protection policies for your ZDLRA, but be careful with your design if you have replicated ZDLRAs or protect a mix of database types. Group them correctly.
As explained before, there is a direct link between the recovery window and the reserved_space of your databases. If you create a single protection policy for all of your databases, you may end up setting a high value for reserved space and this can cause problems (like ZDLRA denying the addition of new databases because you already reserved all the available space, even with free space still existing).
There is no rule of thumb to follow, like “create policy A or B with X or Y values for the recovery window”. The correct way is to check the requirements (and rules) that you need to follow and design the architecture that meets them. Don't worry if you need to change it in the future; it is possible and easy to do.
So, the most important thing is to know and understand the links that exist between the ZDLRA functionalities. Protection policies, replicated backups, and reserved space are some examples. Time spent understanding them will reduce rework in the future.
 
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”



ZDLRA, Virtual Private Catalog User - VPC

The Virtual Private Catalog (VPC) user is a key piece of a good ZDLRA architecture design. The point is not how to create it, but how to correctly integrate it into your design, and this is even more important if you have replicated ZDLRAs or use Real-Time Redo transport.
Here I will show and discuss the VPC implications for your architecture design when deploying ZDLRA, whether for a complete new implementation (together with the databases) or when adding ZDLRA to an already running environment. All the points here try to show perspectives and key details that can help you correctly define and use VPCs.

 

VPC

For the simple definition: the VPC user is created directly inside the ZDLRA database using “racli” (or EM/CC), as you can see below and here in the doc:

 

[root@zdlras1n1 ~]# /opt/oracle.RecoveryAppliance/bin/racli add vpc_user --user_name=vpcsrc

[vpcsrc] New Password:

Mon Nov 25 23:41:45 2019: Start: Add vpc user vpcsrc.

Mon Nov 25 23:41:46 2019:        Add vpc user vpcsrc successfully.

Mon Nov 25 23:41:46 2019: End:   Add vpc user vpcsrc.

[root@zdlras1n1 ~]#

 

The VPC user is used when you connect with RMAN from the client; it is the catalog owner for your RMAN:

 

[oracle@orcloel7 ~]$ rman target=/ catalog=vpcsrc/vpcsrc@zdlras1




Recovery Manager: Release 18.0.0.0.0 - Production on Tue Nov 26 00:23:40 2019

Version 18.6.0.0.0




Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.




connected to target database: ORCL18 (DBID=3914023082)

connected to recovery catalog database

recovery catalog schema version 19.03.00.00. is newer than RMAN version




RMAN>

 

Inside ZDLRA it is just a logical organization for your connections; it does not store tables or views. It just links the RMAN catalog views (RC_*) between the VPC user itself and the internal RASYS (the ZDLRA user/schema that holds all the tables).
When you add the database to ZDLRA, you grant access to the database using the VPC user as a parameter too:

 

SQL> BEGIN

  2  DBMS_RA.GRANT_DB_ACCESS (

  3        db_unique_name => 'ORCL18'

  4        , username => 'VPCSRC'

  5  );

  6  END;

  7  /




PL/SQL procedure successfully completed.




SQL>

 

But to be honest, this is just the simple definition. The VPC is more important in the architecture of your solution than in the “creation and connection commands”. I will explain.
For ZDLRA, the VPC is just a logical definition (it does not store data itself), and you can create more than a single VPC. You can create multiple VPCs and separate databases based on your own rules: one VPC for DEV databases, one for PROD, or one per rack type (like Exadata, AIX, Power). You choose.
Remember that the VPC is the RMAN catalog user, so every database with the grant “shares” it. If you do “list db_unique_name all” (and then “set dbid”), everyone in the same catalog can access the backups.
So, when you are planning your architecture for the ZDLRA deployment, think about the VPC users that you will have and how you can divide/aggregate your databases. But don't worry if you need to create more VPCs in the future; remember that it is just a logical definition. If you want to move databases between VPCs, you just need to call DBMS_RA.GRANT_DB_ACCESS for your database and the new VPC (a sketch follows below). All the backups will appear automatically in the new catalog.
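A sketch of such a move between VPC users (user and database names, password and TNS alias are examples; REVOKE_DB_ACCESS is assumed to be the counterpart of GRANT_DB_ACCESS):

echo "BEGIN
  DBMS_RA.GRANT_DB_ACCESS(db_unique_name => 'ORCL18', username => 'VPCNEW');
  DBMS_RA.REVOKE_DB_ACCESS(db_unique_name => 'ORCL18', username => 'VPCSRC');
END;
/" | sqlplus -s rasys/rasys@zdlras1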
But it is not just the RMAN catalog that the VPC defines. There are other points that can change your architecture, as I explain below.

 

VPC and Real-Time Redo transport

Real-Time Redo is the ZDLRA feature that guarantees zero RPO; you can see more details in my previous posts (here and here). The point is that the VPC user needs to be set as the REDO_TRANSPORT_USER parameter of your database for real-time redo to work.
This is needed because ZDLRA is one of the archive log destinations (log_archive_dest parameter) and, since ZDLRA “simulates” Data Guard in this case, both sides need to use the same user (as normal DG does). Usually you don't change this for DG, since both sides talk using the sys user; but for ZDLRA you need to use the VPC as the redo user. Besides that, the automatic backups of archivelogs need to appear inside the RMAN catalog (“list backup of archivelog” as an example). As told in my post here, create the user inside your database (with at least the sysoper permission) and change the database parameter (a sketch follows below).
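A minimal sketch of that change on the protected database (the username must match the VPC user; password and scope are examples):

echo "CREATE USER vpcsrc IDENTIFIED BY \"SomePassword123\";
GRANT sysoper TO vpcsrc;
ALTER SYSTEM SET redo_transport_user='VPCSRC' SCOPE=BOTH SID='*';" | sqlplus -s / as sysdba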
So, unfortunately, this can again impact the architecture design, and not just for ZDLRA but for all databases. Sometimes database usernames need to follow naming/security rules, and the VPC definition will need to follow them too.

 

VPC and Real-Time Redo transport and DataGuard

The point about VPC and Real-Time Redo when using Data Guard is that in this scenario you have at least two sites, two ZDLRAs, and two databases in Data Guard with real-time redo transport enabled. But everything needs to use the same, and only one, VPC username.
The database in site 1 will talk to the ZDLRA at site 1 using the VPC for backup and real-time redo. And since it is Data Guard, the database in site 1 talks to the database in site 2 (for DG) using the redo_transport_user. So, if you are backing up the database in site 2 to the ZDLRA in site 2, and even using real-time redo, the VPC for the ZDLRA in site 2 needs to be the same as the redo_transport_user. As you can imagine, that means the same VPC username on both ZDLRAs in this case. If you are not planning to use real-time redo for your database, you can use different VPC usernames.
So, if you are planning to use multiple ZDLRAs to protect your sites, the architecture design needs to be well planned. If you were planning to use different VPC usernames (to represent different RMAN catalogs for each site), you need to review that. You can use that approach for databases that don't use Data Guard, but it will not fit DG databases.

 

VPC and Replicated ZDLRA

Another point that needs attention for the VPC is when using replicated ZDLRAs. Here, besides the real-time redo considerations, you need two additional VPCs (three in total):
  1. ZDLRA site 1 VPC: The normal VPC used by the databases (from the same site) to connect, do backups, and use real-time redo.
  2. ZDLRA site 2 VPC: The VPC at the second site that will be used as a normal RMAN catalog. This is needed if you want to connect the database from site 1 to the ZDLRA in site 2 (think of the case where your ZDLRA from site 1 is down).
  3. ZDLRA site 2 VPC for replication: The VPC user that receives the replicated backups from the ZDLRA in the other site. This is used internally for ZDLRA intercommunication, but it is still a VPC.
When using replicated ZDLRAs you have a database doing backups to the local ZDLRA in the same site, and this ZDLRA sends the backups to the other, replicated ZDLRA. The connection between the database and the local ZDLRA uses the VPC from point 1 above. The replication between ZDLRAs uses the VPC from point 3. And, if you want, you can have the VPC from point 2 to do backups when the “source” ZDLRA is down.
This scenario can get a little more complicated if you are planning to use real-time redo, because the VPCs from points 1 and 2 need to be the same.
As you can see, these are other points that you need to verify and take care of in the ZDLRA architecture design.

 

Architecture

 

At this level of architecture design you will probably have more than one ZDLRA, protecting more than one site, or doing replication to achieve the desired protection. But even if you use only one ZDLRA, the concerns are the same.
Independently of the number of ZDLRAs, it is possible that you will mix all the available features: real-time redo and replication. It is common to have different requirements for your databases; you may need to handle (and protect) single/isolated databases (like DEV), databases with just real-time redo in use, databases with ZDLRA replication (for multi-site protection), and probably databases with DG that need to be protected on both sides.
Usually, when deploying ZDLRA you are dealing with huge and mixed environments that need different levels of protection. But before “just deploying” ZDLRA you need to start with a good architecture design, identify the needs, and decide how the VPCs will handle everything correctly.
Maybe you will need to create more than one VPC; this totally depends on your requirements. There is no magic recipe for that, but if you know all the details that can impact the VPC definition, success will be achieved. As you saw, the VPC is not just the user for the RMAN connection.
The idea of this post was to point out most of the details that are important to know about the VPC when designing the architecture to deploy and integrate ZDLRA.
 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”



Exadata, Missing Metric

Understanding the metrics of the Exadata Storage Server is important to understand how all the software features are being used and all the details behind them. Here I will discuss one case where the FC_IO_BY_R_SEC metric can show imprecise values, and I will discuss one missing metric that could save a lot.
If you have doubts about metrics, you can check my post about metrics; it was an introduction, but it covers some aspects of how to read and use them. You can also check my other post, where I show how to use the metric DB_FC_IO_BY_SEC to identify database problems that can stay hidden when checking only from the database side.

 

Metrics collection

For this post, all metrics were collected from one storage server, but the behavior occurs in all cells of the Exadata, and in all versions. They were collected using the same timestamp to be accurate, so they represent one minute; but again, the same behavior that I show here occurs all the time. When reading the metrics, please look at the metric name and the collection moment.
The idea for this post came when I was investigating the storage usage of one database to see if we were hitting the limits of the Exadata Storage. I needed to check disk, flash cache, and other details. For the behavior I show below, I have already worked with Oracle on it (SR, BUG, ER).

 

DB_FC_IO_BY_SEC

I already discussed this metric in a previous post, but it reports (according to the official docs) “The number of megabytes of I/O per second for this database to flash cache”. Look at the usage for the database DBPR1_EXA:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name = 'DB_FC_IO_BY_SEC' and metricObjectName like 'DBPR1_EXA'

         DB_FC_IO_BY_SEC         DBPR1_EXA       850 MB/sec      2019-12-13T15:42:03+01:00




CellCLI>

 

As you can see, at 15:42:03 of 2019-12-13 the storage server reported that during the last minute this database read around 850 MB/s from flash cache.
And it was by far the highest consumer among the databases that used more than 0.5 MB/s:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name like 'DB_.*BY.*SEC' and metricvalue > 0.5

         DB_FC_IO_BY_SEC         ORADB01P                1 MB/sec        2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         ORADB02P                7 MB/sec        2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         ORADB01V                1 MB/sec        2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         ORADB03P                5 MB/sec        2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         ORAD01P                 6 MB/sec        2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         ORADBPR_EXA             2 MB/sec        2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         DBPR1_EXA               850 MB/sec      2019-12-13T15:42:03+01:00

         DB_FC_IO_BY_SEC         DBP01                   104 MB/sec      2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         ORADB01P                1 MB/sec        2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         ORADB02P                7 MB/sec        2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         ORADB01V                1 MB/sec        2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         ORADB03P                5 MB/sec        2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         ORAD01P                 6 MB/sec        2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         ORADBPR_EXA             2 MB/sec        2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         DBPR1_EXA               850 MB/sec      2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         DBP01                   105 MB/sec      2019-12-13T15:42:03+01:00

         DB_FD_IO_BY_SEC         _OTHER_DATABASE_        1 MB/sec        2019-12-13T15:42:03+01:00

         DB_FL_IO_BY_SEC         DBP01                   1.150 MB/sec    2019-12-13T15:42:03+01:00

         DB_IO_BY_SEC            ORAD01P                 2 MB/sec        2019-12-13T15:42:03+01:00

         DB_IO_BY_SEC            ORADB1P_EXA             1 MB/sec        2019-12-13T15:42:03+01:00

         DB_IO_BY_SEC            DBP01                   2 MB/sec        2019-12-13T15:42:03+01:00




CellCLI>

 

If I sum all the database usage from flash cache, it will be around 950 MB/s.
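If you want to compute that sum directly on the cell, a minimal sketch with cellcli and awk is shown below (an assumption on my side: the metric value is the third column of the default output, so you may need to adjust the field number for your output format):

cellcli -e "list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name = 'DB_FC_IO_BY_SEC'" | awk '{sum += $3} END {print sum " MB/sec"}'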

 

FC_IO_BY_R_SEC

 

Since DB_FC_IO_BY_SEC reports just the per-database usage, I went to check the metric for the whole flash cache. So, I went to FC_IO_BY_R_SEC. This metric reports (according to the docs): “The number of megabytes read per second from flash cache”. And since we are reading the metric of the flash cache directly, it reports the number for all databases.
But look at the numbers:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name = 'FC_IO_BY_R_SEC';

         FC_IO_BY_R_SEC  FLASHCACHE      19.343 MB/sec   2019-12-13T15:42:03+01:00




CellCLI>
 
So, at the same moment that one database metric reported around 850 MB/s of reads from FC, the metric from FC reported around 19 MB/s. Something is not correct.
And even if I check the minute before and after (in case it was a glitch in the collection time), the numbers are not very different:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:41:00+01:00' and collectionTime < '2019-12-13T15:44:00+01:00' and name = 'FC_IO_BY_R_SEC';

         FC_IO_BY_R_SEC  FLASHCACHE      27.519 MB/sec   2019-12-13T15:41:03+01:00

         FC_IO_BY_R_SEC  FLASHCACHE      19.343 MB/sec   2019-12-13T15:42:03+01:00

         FC_IO_BY_R_SEC  FLASHCACHE      39.600 MB/sec   2019-12-13T15:43:03+01:00




CellCLI>

 

Other flash metrics

So, since the numbers appeared to be completely wrong, I searched for other metrics that report flash usage (cell disks and flash cache, for example).

 

CD_IO_BY_*_*_SEC

 

I started the search at the lowest level possible, going directly to the cell disk metrics. For that I used CD_IO_BY_R_LG_SEC and CD_IO_BY_R_SM_SEC (for reads), and CD_IO_BY_W_LG_SEC and CD_IO_BY_W_SM_SEC (for writes). Basically, they report: “The rate which is the number of megabytes read in large blocks per second from a cell disk”.
Filtering just the flash disk type, I got these numbers:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name like 'CD_IO_BY.*SEC.*' and metricobjectname like 'FD_.*' and metricvalue > 0

         CD_IO_BY_R_LG_SEC       FD_00_exastradm01       179 MB/sec      2019-12-13T15:42:03+01:00

         CD_IO_BY_R_LG_SEC       FD_01_exastradm01       296 MB/sec      2019-12-13T15:42:03+01:00

         CD_IO_BY_R_LG_SEC       FD_02_exastradm01       200 MB/sec      2019-12-13T15:42:03+01:00

         CD_IO_BY_R_LG_SEC       FD_03_exastradm01       250 MB/sec      2019-12-13T15:42:03+01:00

         CD_IO_BY_R_SM_SEC       FD_00_exastradm01       3.161 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_R_SM_SEC       FD_01_exastradm01       3.152 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_R_SM_SEC       FD_02_exastradm01       2.990 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_R_SM_SEC       FD_03_exastradm01       3.741 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_LG_SEC       FD_00_exastradm01       0.859 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_LG_SEC       FD_01_exastradm01       1.125 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_LG_SEC       FD_02_exastradm01       1.028 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_LG_SEC       FD_03_exastradm01       0.801 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_SM_SEC       FD_00_exastradm01       0.982 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_SM_SEC       FD_01_exastradm01       0.998 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_SM_SEC       FD_02_exastradm01       1.006 MB/sec    2019-12-13T15:42:03+01:00

         CD_IO_BY_W_SM_SEC       FD_03_exastradm01       0.937 MB/sec    2019-12-13T15:42:03+01:00




CellCLI>

 

As you can see, just picking up CD_IO_BY_R_LG_SEC I got around 925 MB/s, a number close to the one I got from the database metric for flash cache reads. Another hint from this is that the reads are requesting more than 128 KB per access, since they were counted as large (LG).

 

FL_IO_DB_BY_W_SEC and FL_IO_FL_BY_W_SEC

 

Since I was checking for bad flash reporting, I also checked the flash log metrics, but they reported values consistent with the cell disk metrics:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name like 'FL_IO.*_SEC'

         FL_IO_DB_BY_W_SEC       FLASHLOG        1.974 MB/sec    2019-12-13T15:42:03+01:00

         FL_IO_FL_BY_W_SEC       FLASHLOG        2.673 MB/sec    2019-12-13T15:42:03+01:00




CellCLI>

 

So, the metrics for disk usage were OK, reporting correct values.

 

SIO_IO_RD_FC_SEC

 

As you can see, the metrics for database usage of flash cache report the same values that we can check closest to the hardware (cell disk). So, maybe it could be the software misreading in-memory values.
Because of that, I checked other metrics that report on features that interact with flash cache. So, I checked Smart I/O:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name like 'SIO_.*SEC.*'

         SIO_IO_EL_OF_SEC        SMARTIO         1,223 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_OF_RE_SEC        SMARTIO         34.688 MB/sec   2019-12-13T15:42:03+01:00

         SIO_IO_PA_TH_SEC        SMARTIO         0.000 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_RD_FC_HD_SEC     SMARTIO         0.174 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_RD_FC_SEC        SMARTIO         843 MB/sec      2019-12-13T15:42:03+01:00

         SIO_IO_RD_HD_SEC        SMARTIO         0.101 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_RD_RQ_FC_HD_SEC  SMARTIO         0.183 IO/sec    2019-12-13T15:42:03+01:00

         SIO_IO_RD_RQ_FC_SEC     SMARTIO         850 IO/sec      2019-12-13T15:42:03+01:00

         SIO_IO_RD_RQ_HD_SEC     SMARTIO         0.000 IO/sec    2019-12-13T15:42:03+01:00

         SIO_IO_RV_OF_SEC        SMARTIO         3.392 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_SI_SV_SEC        SMARTIO         362 MB/sec      2019-12-13T15:42:03+01:00

         SIO_IO_WR_FC_SEC        SMARTIO         0.008 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_WR_HD_SEC        SMARTIO         0.000 MB/sec    2019-12-13T15:42:03+01:00

         SIO_IO_WR_RQ_FC_SEC     SMARTIO         0.017 IO/sec    2019-12-13T15:42:03+01:00

         SIO_IO_WR_RQ_HD_SEC     SMARTIO         0.000 IO/sec    2019-12-13T15:42:03+01:00




CellCLI>

 

And as you can see, SIO_IO_RD_FC_SEC (which means “The number of megabytes per second read from flash cache by smart I/O”) reported almost the same value (843 MB/s) as the database and cell disk metrics.

 

The Missing Metric

 

As you can see above, the metrics closest to the HW reported almost the same value that was reported by the DB. And from the feature usage side, almost the same too. To be more clear: DB_FC_IO_BY_SEC reported around 850 MB/s, the sum of CD_IO_BY_R_LG_SEC was around 925 MB/s, SIO_IO_RD_FC_SEC was around 843 MB/s, but FC_IO_BY_R_SEC reported only around 19 MB/s.

 

 

So, it is not a misreading of in-memory values, since Smart I/O reports correctly. Returning to the review of the flash cache, all of its metrics are:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:42:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name like 'FC_IO_BY.*' and metricvalue > 0;

         FC_IO_BYKEEP_W                          FLASHCACHE      519 MB                          2019-12-13T15:42:03+01:00

         FC_IO_BY_DISK_WRITE                     FLASHCACHE      111,626,264 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_DISK_WRITE_SEC                 FLASHCACHE      2.052 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_R                              FLASHCACHE      1,475,538,050 MB                2019-12-13T15:42:03+01:00

         FC_IO_BY_R_ACTIVE_SECONDARY             FLASHCACHE      1,132,686 MB                    2019-12-13T15:42:03+01:00

         FC_IO_BY_R_ACTIVE_SECONDARY_MISS        FLASHCACHE      237,442 MB                      2019-12-13T15:42:03+01:00

         FC_IO_BY_R_DISK_WRITER                  FLASHCACHE      114,172,096 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_R_DISK_WRITER_SEC              FLASHCACHE      2.057 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_R_DW                           FLASHCACHE      13,071,021,100 MB               2019-12-13T15:42:03+01:00

         FC_IO_BY_R_MISS                         FLASHCACHE      42,381,380 MB                   2019-12-13T15:42:03+01:00

         FC_IO_BY_R_MISS_DW                      FLASHCACHE      133,613,408 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_R_MISS_SEC                     FLASHCACHE      0.438 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_R_SEC                          FLASHCACHE      19.343 MB/sec                   2019-12-13T15:42:03+01:00

         FC_IO_BY_R_SKIP                         FLASHCACHE      97,002,568 MB                   2019-12-13T15:42:03+01:00

         FC_IO_BY_W                              FLASHCACHE      1,761,639,940 MB                2019-12-13T15:42:03+01:00

         FC_IO_BY_W_DISK_WRITER                  FLASHCACHE      111,615,088 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_W_DISK_WRITER_SEC              FLASHCACHE      2.052 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_W_FIRST                        FLASHCACHE      183,171,872 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_W_FIRST_SEC                    FLASHCACHE      2.452 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_W_OVERWRITE                    FLASHCACHE      1,475,454,720 MB                2019-12-13T15:42:03+01:00

         FC_IO_BY_W_OVERWRITE_SEC                FLASHCACHE      1.346 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_W_POPULATE                     FLASHCACHE      103,121,912 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_W_POPULATE_SEC                 FLASHCACHE      0.381 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_W_SEC                          FLASHCACHE      4.179 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_W_SKIP                         FLASHCACHE      1,039,399,810 MB                2019-12-13T15:42:03+01:00

         FC_IO_BY_W_SKIP_LG                      FLASHCACHE      605,535,040 MB                  2019-12-13T15:42:03+01:00

         FC_IO_BY_W_SKIP_LG_SEC                  FLASHCACHE      2.695 MB/sec                    2019-12-13T15:42:03+01:00

         FC_IO_BY_W_SKIP_SEC                     FLASHCACHE      6.589 MB/sec                    2019-12-13T15:42:03+01:00




CellCLI>

 

As you can see above, it is not the case of FC_IO_BY_R_SKIP (“The number of megabytes read from disks for I/O requests that bypass flash cache”) or FC_IO_BY_R_MISS_SEC (“The number of megabytes read from disks per second because not all requested data was in flash cache”). And even if we consider a few minutes before and after:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:41:00+01:00' and collectionTime < '2019-12-13T15:44:00+01:00' and name = 'FC_IO_BY_R_SKIP';

         FC_IO_BY_R_SKIP         FLASHCACHE      97,002,568 MB   2019-12-13T15:41:03+01:00

         FC_IO_BY_R_SKIP         FLASHCACHE      97,002,568 MB   2019-12-13T15:42:03+01:00

         FC_IO_BY_R_SKIP         FLASHCACHE      97,002,568 MB   2019-12-13T15:43:03+01:00




CellCLI> list metrichistory where collectionTime > '2019-12-13T15:41:00+01:00' and collectionTime < '2019-12-13T15:44:00+01:00' and name = 'FC_IO_BY_W_SKIP';

         FC_IO_BY_W_SKIP         FLASHCACHE      1,039,399,420 MB        2019-12-13T15:41:03+01:00

         FC_IO_BY_W_SKIP         FLASHCACHE      1,039,399,810 MB        2019-12-13T15:42:03+01:00

         FC_IO_BY_W_SKIP         FLASHCACHE      1,039,401,150 MB        2019-12-13T15:43:03+01:00




CellCLI> list metrichistory where collectionTime > '2019-12-13T15:41:00+01:00' and collectionTime < '2019-12-13T15:44:00+01:00' and name like 'FC_IO_BY_.*_MISS.*SEC' and  metricvalue > 0

         FC_IO_BY_R_MISS_SEC     FLASHCACHE      0.702 MB/sec    2019-12-13T15:41:03+01:00

         FC_IO_BY_R_MISS_SEC     FLASHCACHE      0.438 MB/sec    2019-12-13T15:42:03+01:00

         FC_IO_BY_R_MISS_SEC     FLASHCACHE      0.521 MB/sec    2019-12-13T15:43:03+01:00




CellCLI>

 

But if we look closely at the FC metrics above, we can see the high value for FC_IO_BY_R_DW (“The number of megabytes of DW data read from flash cache”). And if we check a few minutes, we can see the usage:

 

CellCLI> list metrichistory where collectionTime > '2019-12-13T15:41:00+01:00' and collectionTime < '2019-12-13T15:44:00+01:00' and name = 'FC_IO_BY_R_DW';

         FC_IO_BY_R_DW   FLASHCACHE      13,070,966,800 MB       2019-12-13T15:41:03+01:00

         FC_IO_BY_R_DW   FLASHCACHE      13,071,021,100 MB       2019-12-13T15:42:03+01:00

         FC_IO_BY_R_DW   FLASHCACHE      13,071,055,900 MB       2019-12-13T15:43:03+01:00




CellCLI>

 

From that, we can see that at 15:41 the value was 13,070,966,800 MB and at 15:42 it was 13,071,021,100 MB. If we do the diff, the difference over that minute was 54,300 MB. And if we divide it by the seconds, we have 54,300/60 = 905 MB/s.
So, the missing values were found: the flash cache read usage was counted as DW. But if you followed everything, these were legitimate reads from flash cache that were not counted in FC_IO_BY_R_SEC. So, the missing metric, in this case, is FC_IO_BY_R_DW_SEC.
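Since FC_IO_BY_R_DW_SEC does not exist, you can derive the rate yourself from two consecutive cumulative samples. A minimal sketch (an assumption on my side: one-minute collection intervals and the cumulative value in the third column of the default cellcli output):

cellcli -e "list metrichistory where collectionTime > '2019-12-13T15:41:00+01:00' and collectionTime < '2019-12-13T15:43:00+01:00' and name = 'FC_IO_BY_R_DW'" | awk '{gsub(",", "", $3); v[NR] = $3} END {if (NR >= 2) printf "%.0f MB/sec\n", (v[NR] - v[1]) / ((NR - 1) * 60)}'

With the two samples above, this gives (13,071,021,100 - 13,070,966,800) / 60 = 905 MB/s.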

 

 

Reasons

The reason why the reads were counted in FC_IO_BY_R_DW and not in the generic FC_IO_BY_R_SEC is not clear. Maybe because the block size of the tablespaces for this database is different from 8K, maybe because the requests were large (more than 128 KB, as we saw in the cell disk metric). It is not clear why. Or maybe FC_IO_BY_R_SEC just reports part of what was really read from flash cache by the database. It may be that the documentation is not clear, and some details about why this behavior occurs need to be clarified.
But we can see that FC_IO_BY_R_SEC does not reflect the current usage in some cases. If you want to verify the FC usage, the metric FC_IO_BY_R_SEC can, sometimes, report wrong values. Whether the other metrics for FC are correct (like SKIP or MISS) was impossible to check at this point. One option is to check directly at the cell disk level for flash disk usage, to verify if you have reached the limits for flash, or to check directly from the database usage.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”

 


TFA error after GI upgrade to 19c
Category: Database Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

TFA error after GI upgrade to 19c

Recently I made an Exadata stack upgrade/update to the latest 19.2 version (19.2.7.0.0.191012) and upgraded the GI from 18c to 19c (latest 19c version – 19.5.0.0.191015), and after that, TFA did not work.
Since I did not want to execute a complete TFA clean-up and reinstallation, I tried to find the error and the solution. Here I want to share with you the workaround (since there is no solution yet) that I discovered and used to fix the error.

 

The environment

 

The actual environment is:
  • Old Grid Infrastructure: Version 18.6.0.0.190416
  • New Grid Infrastructure: Version 19.5.0.0.191015
  • Exadata domU: Version 19.2.7.0.0.191012 running kernel 4.1.12-124.30.1.el7uek.x86_64
 

TFA error

 

After upgrading the GI from 18c to 19c, TFA does not work. If you try to start it or collect logs using it, you receive errors. In the environment described here, TFA was running fine with the 18c version, and the rootupgrade script from 18c to 19c did not report any error.
And to be more precise, the TFA upgrade from 18c to 19c called by rootupgrade was OK (according to the log – I will show it later). But even so, the error occurs.
The solution provided, as usual (by MOS support): download the latest TFA and reinstall over the current one. Unfortunately, I do not like this approach because it can lead to errors during the GI upgrade to the next releases (like 20) and updates (19.6, as an example).
So, when I tried to collect with TFA:

 

[root@exsite1c1 ~]# /u01/app/19.0.0.0/grid/tfa/bin/tfactl diagcollect

WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

TFA-00002 Oracle Trace File Analyzer (TFA) is not running

Please start TFA before running collections

[root@exsite1c1 ~]#

 

So, checking whether TFA was running, I ran ps -ef and did not see the process:

 

[root@exsite1c1 ~]# ps -ef |grep tfa

root      10665      1  0 Nov21 ?        00:00:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

root      40285  37137  0 11:05 pts/0    00:00:00 grep --color=auto tfa

[root@exsite1c1 ~]#

 

And if I tried to start TFA (as root), nothing was reported (neither error nor OK):

 

[root@exsite1c1 ~]# /etc/init.d/init.tfa start

Starting TFA..

Waiting up to 100 seconds for TFA to be started..

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

[root@exsite1c1 ~]#

[root@exsite1c1 ~]# ps -ef |grep tfa

root      10665      1  0 Nov21 ?        00:00:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

root      46031  37137  0 11:07 pts/0    00:00:00 grep --color=auto tfa

[root@exsite1c1 ~]#

 

Checking in MOS, I saw related problems caused by a bad Perl version. For this TFA release, at least version 5.10 is needed. But that was not the case here:

 

[root@exsite1c1 ~]# perl -v




This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi

(with 39 registered patches, see perl -V for more detail)




Copyright 1987-2012, Larry Wall




Perl may be copied only under the terms of either the Artistic License or the

GNU General Public License, which may be found in the Perl 5 source kit.




Complete documentation for Perl, including FAQ lists, should be found on

this system using "man perl" or "perldoc perl".  If you have access to the

Internet, point your browser at http://www.perl.org/, the Perl Home Page.




[root@exsite1c1 ~]#

 

Searching the problem

 

Digging for the source of the problem, I checked the rootupgrade log, but the report was good. The TFA upgrade completed with success:

 

[root@exsite1c1 ~]# vi /u01/app/grid/crsdata/exsite1c2/crsconfig/rootcrs_exsite1c2_2019-11-15_12-12-21AM.log

...

...

2019-11-14 14:18:40: Executing the [UpgradeTFA] step with checkpoint [null] ...

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/clsecho -p has -f clsrsc -m 595 '1' '18' 'UpgradeTFA'

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/clsecho -p has -f clsrsc -m 595 '1' '18' 'UpgradeTFA'

2019-11-14 14:18:40: Command output:

>  CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.

>End Command output

2019-11-14 14:18:40: CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/clsecho -p has -f clsrsc -m 4015

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/clsecho -p has -f clsrsc -m 4015

2019-11-14 14:18:40: Command output:

>  CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

>End Command output

2019-11-14 14:18:40: CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2019-11-14 14:18:40: Executing the [ValidateEnv] step with checkpoint [null] ...

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/crs/install/tfa_setup -silent -crshome /u01/app/19.0.0.0/grid

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/clsecho -p has -f clsrsc -m 595 '2' '18' 'ValidateEnv'

2019-11-14 14:18:40: Executing cmd: /u01/app/19.0.0.0/grid/bin/clsecho -p has -f clsrsc -m 595 '2' '18' 'ValidateEnv'

2019-11-14 14:18:40: Command output:

>  CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.

...

...

2019-11-14 14:23:45: Command output:



>  TFA Installation Log will be written to File : /tmp/tfa_install_293046_2019_11_14-14_18_40.log

...

...

2019-11-14 14:23:45: Command output:

>  CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

>End Command output

 

And other related logs reported complete success:

 

[root@exsite1c1 ~]# cat /tmp/tfa_install_293046_2019_11_14-14_18_40.log

[2019-11-14 14:18:40] Log File written to : /tmp/tfa_install_293046_2019_11_14-14_18_40.log

[2019-11-14 14:18:40]

[2019-11-14 14:18:40] Starting TFA installation

[2019-11-14 14:18:40]

[2019-11-14 14:18:40] TFA Version: 192000 Build Date: 201904260414

[2019-11-14 14:18:40]

[2019-11-14 14:18:40] About to check previous TFA installations ...

[2019-11-14 14:18:40] TFA HOME : /u01/app/18.0.0/grid/tfa/exsite1c1/tfa_home

[2019-11-14 14:18:40]

[2019-11-14 14:18:40] Installed Build Version: 184100 Build Date: 201902260236

[2019-11-14 14:18:40]

[2019-11-14 14:18:40] INSTALL_TYPE GI

[2019-11-14 14:18:40] Shutting down TFA for Migration...

[2019-11-14 14:20:24]

[2019-11-14 14:20:24] Removing /etc/init.d/init.tfa...

[2019-11-14 14:20:24]

[2019-11-14 14:20:24] Migrating TFA to /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home...

[2019-11-14 14:20:50]

[2019-11-14 14:20:50] Starting TFA on exsite1c1...

[2019-11-14 14:20:50]

[2019-11-14 14:21:05]

[2019-11-14 14:21:05] TFA_INSTALLER /u01/app/19.0.0.0/grid/crs/install/tfa_setup

[2019-11-14 14:21:05] TFA is already installed. Upgrading TFA

[2019-11-14 14:21:05]

[2019-11-14 14:21:05] TFA patching CRS or DB from zipfile extracted to /tmp/.293046.tfa

[2019-11-14 14:21:06] TFA Upgrade Log : /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfapatch.log

[2019-11-14 14:23:31] Patch Status : 0

[2019-11-14 14:23:31] Patching OK : Running install_ext

[2019-11-14 14:23:32] Installing oratop extension..

[2019-11-14 14:23:32]

.-----------------------------------------------------------------.

| Host      | TFA Version | TFA Build ID         | Upgrade Status |

+-----------+-------------+----------------------+----------------+

| exsite1c1 |  19.2.0.0.0 | 19200020190426041420 | UPGRADED       |

| exsite1c2 |  18.4.1.0.0 | 18410020190226023629 | NOT UPGRADED   |

'-----------+-------------+----------------------+----------------'




[2019-11-14 14:23:44] Removing Old TFA /u01/app/18.0.0/grid/tfa/exsite1c1/tfa_home...

[2019-11-14 14:23:45] Cleanup serializable files

[2019-11-14 14:23:45]

[root@exsite1c1 ~]#

[root@exsite1c1 ~]# cat /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfapatch.log




TFA will be upgraded on Node exsite1c1:







Upgrading TFA on exsite1c1 :




Stopping TFA Support Tools...




Shutting down TFA for Patching...




Shutting down TFA

. . . . .

. . .

Successfully shutdown TFA..




No Berkeley DB upgrade required




Copying TFA Certificates...







Starting TFA in exsite1c1...




Starting TFA..

Waiting up to 100 seconds for TFA to be started..

. . . . .

Successfully started TFA Process..

. . . . .

TFA Started and listening for commands




Enabling Access for Non-root Users on exsite1c1...




[root@exsite1c1 ~]#

 

One known problem occurs when (for some reason) the nodes of the cluster lose the TFA sync. I tried to do the sync, and this gave me one clue:

 

[root@exsite1c1 ~]# /u01/app/19.0.0.0/grid/tfa/bin/tfactl syncnodes

WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.




/u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/bin/synctfanodes.sh: line 237: /u01/app/18.0.0/grid/perl/bin/perl: No such file or directory

TFA-00002 Oracle Trace File Analyzer (TFA) is not running




Current Node List in TFA :

1.




Unable to determine Node List to be synced. Please update manually.




Do you want to update this node list? [Y|N] [N]: ^C[root@exsite1c1 ~]#

[root@exsite1c1 ~]#

 

As you can see, synctfanodes.sh made a reference to the old 18c GI home. And inside the sync script, you can see that line 237 (my mark below) uses $PERL, and this value comes from the file tfa_setup.txt (read at the line marked TFA CHECK).

 

[root@exsite1c1 ~]# vi /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/bin/synctfanodes.sh

...

...

        if [ `$GREP -c '^PERL=' $tfa_home/tfa_setup.txt` -ge 1 ]    <== TFA CHECK

        then

                PERL=`$GREP '^PERL=' $tfa_home/tfa_setup.txt | $AWK -F"=" '{print $2}'`;

        fi




        if [ `$GREP -c '^CRS_HOME=' $tfa_home/tfa_setup.txt` -ge 1 ]

        then

                CRS_HOME=`$GREP '^CRS_HOME=' $tfa_home/tfa_setup.txt | $AWK -F"=" '{print $2}'`;

        fi




        if [ `$GREP -c '^RUN_MODE=' $tfa_home/tfa_setup.txt` -ge 1 ]

        then

                RUN_MODE=`$GREP '^RUN_MODE=' $tfa_home/tfa_setup.txt | $AWK -F"=" '{print $2}'`;

        fi

fi




RUSER=`$RUID | $AWK '{print $1}' | $AWK -F\( '{print $2}' | $AWK -F\) '{print $1}'`;




if [ $RUSER != $DAEMON_OWNER ]

then

        $ECHO "User '$RUSER' does not have permissions to run this script.";

        exit 1;

fi




SSH_USER="$DAEMON_OWNER";




HOSTNAME=`hostname | $CUT -d. -f1 | $PERL -ne 'print lc'`;    <===== LINE 237

...

...

 

Checking tfa_setup.txt

Checking the file we can see the error:

 

[root@exsite1c1 ~]# cat /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/tfa_setup.txt

CRS_HOME=/u01/app/18.0.0/grid

exsite1c1%CRS_INSTALLED=1

NODE_NAMES=exsite1c1

ORACLE_BASE=/u01/app/grid

JAVA_HOME=/u01/app/18.0.0/grid/jdk/jre

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/OPatch/crs/log

exsite1c1%CFGTOOLS%DIAGDEST=/u01/app/12.1.0.2/grid/cfgtoollogs

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/crf/db/exsite1c1

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/crs/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/cv/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/evm/admin/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/evm/admin/logger

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/evm/log

exsite1c1%INSTALL%DIAGDEST=/u01/app/12.1.0.2/grid/install

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/network/log

exsite1c1%DBWLM%DIAGDEST=/u01/app/12.1.0.2/grid/oc4j/j2ee/home/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/opmn/logs

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/racg/log

exsite1c1%ASM%DIAGDEST=/u01/app/12.1.0.2/grid/rdbms/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/scheduler/log

exsite1c1%CRS%DIAGDEST=/u01/app/12.1.0.2/grid/srvm/log

exsite1c1%ACFS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/acfs

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/core

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/crsconfig

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/crsdiag

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/cvu

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/evm

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/output

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/trace

exsite1c1%INSTALL%DIAGDEST=/u01/app/oraInventory/ContentsXML

exsite1c1%INSTALL%DIAGDEST=/u01/app/oraInventory/logs

TRACE_LEVEL=1

INSTALL_TYPE=GI

PERL=/u01/app/18.0.0/grid/perl/bin/perl

RDBMS_ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1||

RDBMS_ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1||

RDBMS_ORACLE_HOME=/u01/app/12.2.0.1/grid||

TZ=Europe/Luxembourg

RDBMS_ORACLE_HOME=/u01/app/18.0.0/grid||

localnode%ADRBASE=/u01/app/grid

RDBMS_ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1||

localnode%ADRBASE=/u01/app/oracle

RDBMS_ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/financ||

localnode%ADRBASE=/u01/app/oracle

RDBMS_ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/financ||

localnode%ADRBASE=/u01/app/oracle

DAEMON_OWNER=root

RDBMS_ORACLE_HOME=/u01/app/oracle/agent/13.2.0/agent_13.2.0.0.0||

RDBMS_ORACLE_HOME=/u01/app/12.1.0.2/grid||

RDBMS_ORACLE_HOME=/u01/app/19.0.0.0/grid||

localnode%ADRBASE=/u01/app/grid

CRS_ACTIVE_VERSION=

[root@exsite1c1 ~]#

 

As you can see above, the CRS_HOME, JAVA_HOME, and PERL parameters (and several DIAGDEST paths) are pointing to the old GI folders. As a workaround, I edited the tfa_setup.txt on both nodes and fixed the GI folder from 18.0 to 19.0:
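If you prefer to script the change instead of editing with vi, a minimal sketch is below. It is only a sketch: it rewrites the old 18.0.0 GI home to the 19c home (covering CRS_HOME, JAVA_HOME, and PERL); other lines, like the old DIAGDEST paths and CRS_ACTIVE_VERSION, may still need manual attention, so back up the file first and review the result:

cp /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/tfa_setup.txt /tmp/tfa_setup.txt.bak
sed -i 's|/u01/app/18.0.0/grid|/u01/app/19.0.0.0/grid|g' /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/tfa_setup.txt
grep -E '^(CRS_HOME|JAVA_HOME|PERL)=' /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/tfa_setup.txt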

 

[root@exsite1c1 ~]# vi /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/tfa_setup.txt

[root@exsite1c1 ~]#

[root@exsite1c1 ~]#

[root@exsite1c1 ~]#

[root@exsite1c1 ~]# cat /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home/tfa_setup.txt

CRS_HOME=/u01/app/19.0.0.0/grid

exsite1c1%CRS_INSTALLED=1

NODE_NAMES=exsite1c1

ORACLE_BASE=/u01/app/grid

JAVA_HOME=/u01/app/19.0.0.0/grid/jdk/jre

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/OPatch/crs/log

exsite1c1%CFGTOOLS%DIAGDEST=/u01/app/19.0.0.0/grid/cfgtoollogs

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/crf/db/exsite1c1

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/crs/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/cv/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/evm/admin/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/evm/admin/logger

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/evm/log

exsite1c1%INSTALL%DIAGDEST=/u01/app/19.0.0.0/grid/install

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/network/log

exsite1c1%DBWLM%DIAGDEST=/u01/app/19.0.0.0/grid/oc4j/j2ee/home/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/opmn/logs

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/racg/log

exsite1c1%ASM%DIAGDEST=/u01/app/19.0.0.0/grid/rdbms/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/scheduler/log

exsite1c1%CRS%DIAGDEST=/u01/app/19.0.0.0/grid/srvm/log

exsite1c1%ACFS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/acfs

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/core

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/crsconfig

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/crsdiag

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/cvu

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/evm

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/output

exsite1c1%CRS%DIAGDEST=/u01/app/grid/crsdata/exsite1c1/trace

exsite1c1%INSTALL%DIAGDEST=/u01/app/oraInventory/ContentsXML

exsite1c1%INSTALL%DIAGDEST=/u01/app/oraInventory/logs

TRACE_LEVEL=1

INSTALL_TYPE=GI

PERL=/u01/app/19.0.0.0/grid/perl/bin/perl

RDBMS_ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1||

RDBMS_ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1||

TZ=Europe/Luxembourg

RDBMS_ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1||

localnode%ADRBASE=/u01/app/oracle

RDBMS_ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/financ||

localnode%ADRBASE=/u01/app/oracle

RDBMS_ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/financ||

localnode%ADRBASE=/u01/app/oracle

DAEMON_OWNER=root

RDBMS_ORACLE_HOME=/u01/app/oracle/agent/13.2.0/agent_13.2.0.0.0||

RDBMS_ORACLE_HOME=/u01/app/19.0.0.0/grid||

localnode%ADRBASE=/u01/app/grid

CRS_ACTIVE_VERSION=19.0.0.0

[root@exsite1c1 ~]#

 

And after the edit, it was possible to start TFA correctly:

 

[root@exsite1c1 ~]# /etc/init.d/init.tfa start

Starting TFA..

Waiting up to 100 seconds for TFA to be started..

. . . . .

Successfully started TFA Process..

. . . . .

WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

TFA Started and listening for commands

[root@exsite1c1 ~]#

[root@exsite1c1 ~]#

[root@exsite1c1 ~]# ps -ef |grep tfa

root     113905      1  0 11:31 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

root     115917      1 99 11:31 ?        00:00:24 /u01/app/19.0.0.0/grid/jdk/jre/bin/java -server -Xms256m -Xmx512m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /u01/app/19.0.0.0/grid/tfa/exsite1c1/tfa_home

root     117853  37137  0 11:31 pts/0    00:00:00 grep --color=auto tfa

[root@exsite1c1 ~]#

 

And to execute the diagcollect:

 

[root@exsite1c1 ~]# /u01/app/19.0.0.0/grid/tfa/bin/tfactl diagcollect

WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

 

By default TFA will collect diagnostics for the last 12 hours. This can result in large collections

For more targeted collections enter the time of the incident, otherwise hit <RETURN> to collect for the last 12 hours

[YYYY-MM-DD HH24:MI:SS,<RETURN>=Collect for last 12 hours] :

 

Collecting data for the last 12 hours for all components...

Collecting data for all nodes

 

Collection Id : 20191122124148exsite1c1

 

Detailed Logging at : /u01/app/grid/tfa/repository/collection_Fri_Nov_22_12_41_49_CET_2019_node_all/diagcollect_20191122124148_exsite1c1.log

2019/11/22 12:41:53 CET : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom

2019/11/22 12:41:53 CET : Collection Name : tfa_Fri_Nov_22_12_41_49_CET_2019.zip

2019/11/22 12:41:54 CET : Collecting diagnostics from hosts : [exsite1c1, exsite1c2]

2019/11/22 12:41:54 CET : Scanning of files for Collection in progress...

2019/11/22 12:41:54 CET : Collecting additional diagnostic information...

2019/11/22 12:44:13 CET : Completed collection of additional diagnostic information...

2019/11/22 13:15:39 CET : Getting list of files satisfying time range [11/22/2019 00:41:53 CET, 11/22/2019 13:15:39 CET]

2019/11/22 13:40:42 CET : Collecting ADR incident files...

2019/11/22 13:40:48 CET : Completed Local Collection

2019/11/22 13:40:48 CET : Remote Collection in Progress...

.---------------------------------------.

|           Collection Summary          |

+-----------+-----------+-------+-------+

| Host      | Status    | Size  | Time  |

+-----------+-----------+-------+-------+

| exsite1c2 | Completed | 412MB |  318s |

| exsite1c1 | Completed | 284MB | 3534s |

'-----------+-----------+-------+-------'

 

Logs are being collected to: /u01/app/grid/tfa/repository/collection_Fri_Nov_22_12_41_49_CET_2019_node_all

/u01/app/grid/tfa/repository/collection_Fri_Nov_22_12_41_49_CET_2019_node_all/exsite1c1.tfa_Fri_Nov_22_12_41_49_CET_2019.zip

/u01/app/grid/tfa/repository/collection_Fri_Nov_22_12_41_49_CET_2019_node_all/exsite1c2.tfa_Fri_Nov_22_12_41_49_CET_2019.zip

[root@exsite1c1 ~]#

[root@exsite1c1 ~]# /u01/app/19.0.0.0/grid/tfa/bin/tfactl diagcollect -since 1h

WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

Collecting data for all nodes

 

Collection Id : 20191122134319exsite1c1

 

Detailed Logging at : /u01/app/grid/tfa/repository/collection_Fri_Nov_22_13_43_20_CET_2019_node_all/diagcollect_20191122134319_exsite1c1.log

2019/11/22 13:43:24 CET : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom

2019/11/22 13:43:24 CET : Collection Name : tfa_Fri_Nov_22_13_43_20_CET_2019.zip

2019/11/22 13:43:24 CET : Collecting diagnostics from hosts : [exsite1c1, exsite1c2]

2019/11/22 13:43:24 CET : Scanning of files for Collection in progress...

2019/11/22 13:43:24 CET : Collecting additional diagnostic information...

2019/11/22 13:44:49 CET : Getting list of files satisfying time range [11/22/2019 12:43:24 CET, 11/22/2019 13:44:49 CET]

2019/11/22 13:45:50 CET : Completed collection of additional diagnostic information...

2019/11/22 13:59:19 CET : Collecting ADR incident files...

2019/11/22 13:59:19 CET : Completed Local Collection

2019/11/22 13:59:19 CET : Remote Collection in Progress...

.--------------------------------------.

|          Collection Summary          |

+-----------+-----------+-------+------+

| Host      | Status    | Size  | Time |

+-----------+-----------+-------+------+

| exsite1c2 | Completed | 230MB | 295s |

| exsite1c1 | Completed | 105MB | 955s |

'-----------+-----------+-------+------'

 

Logs are being collected to: /u01/app/grid/tfa/repository/collection_Fri_Nov_22_13_43_20_CET_2019_node_all

/u01/app/grid/tfa/repository/collection_Fri_Nov_22_13_43_20_CET_2019_node_all/exsite1c2.tfa_Fri_Nov_22_13_43_20_CET_2019.zip

/u01/app/grid/tfa/repository/collection_Fri_Nov_22_13_43_20_CET_2019_node_all/exsite1c1.tfa_Fri_Nov_22_13_43_20_CET_2019.zip

[root@exsite1c1 ~]#

 

TFA error #2

Another error that I got in another cluster, which went through the same update/upgrade process, was related to *.ser files in the TFA home. If I tried to use TFA (with diagcollect as an example), I received this error:

 

[root@exsite2c1 ~]# /u01/app/19.0.0.0/grid/tfa/exsite2c1/tfa_home/bin/tfactl diagcollect

WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

Storable binary image v2.10 contains data of type 101. This Storable is v2.9 and can only handle data types up to 30 at /usr/lib64/perl5/vendor_perl/Storable.pm line 381, at /u01/app/19.0.0.0/grid/tfa/exsite2c1/tfa_home/bin/common/tfactlshare.pm line 25611.

[root@exsite2c1 ~]#   

 

If you look in MOS, it will point to the Perl version. But that is not the case here: Perl is newer than version 5.10 in this version of Exadata. The solution was to move the *.ser files to another folder (removing them from the TFA home), or to delete them. After that, no more “Storable binary” errors (but the error with tfa_setup.txt described above remains):

 

[root@exsite2c1 ~]# mv /u01/app/19.0.0.0/grid/tfa/exsite2c1/tfa_home/internal/*ser /tmp

[root@exsite2c1 ~]# ls -l /u01/app/19.0.0.0/grid/tfa/exsite2c1/tfa_home/internal/*ser

ls: cannot access /u01/app/19.0.0.0/grid/tfa/exsite2c1/tfa_home/internal/*ser: No such file or directory

[root@exsite2c1 ~]#

 

Problem and Solution

It is not clear what the source of the problem is in this case. As you saw above, the logs of the GI upgrade/update from 18c to 19c reported success, even for TFA. But it is clear that tfa_setup.txt was left with wrong parameters inside. And if you look closely, you can see that a reference to the new GI home already existed in the first version of the file.

But unfortunately, the needed parameters were left with the wrong values. The workaround was just to change tfa_setup.txt and fix the wrong folders in the parameters. I did not test executing $GI_HOME/crs/install/tfa_setup -silent -crshome $GI_HOME to fix the file, but you can try it. The idea was to try to identify the issue instead of just removing TFA and reinstalling it.

Again, this is a workaround that was tested in my environment and worked. You need to verify the logs and other files to see if you hit the same issues. If yes, at least, you can try it.

 
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”

 


Exadata, Workaround for oracka.ko error
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Exadata, Workaround for oracka.ko error

Recently I made an Exadata stack upgrade/update to the latest 19.2 version (19.2.7.0.0.191012), released in October of 2019, and updated the GI to the latest 19c version (19.5.0.0.191015), and after that, I had some issues creating 11G databases.
So, when I tried to create an 11G RAC database, the error “File -oracka.ko- was not found” appeared and the creation failed. Here I want to share with you the workaround (since there is no solution yet) that I discovered and used to bypass the error.

 

The environment

 

The actual environment is:
  • Grid Infrastructure: Version 19.5.0.0.191015
  • Exadata domU: Version 19.2.7.0.0.191012 running kernel 4.1.12-124.30.1.el7uek.x86_64
  • 11G Database: Version 11.2.0.4.180717
  • ACFS: Used to store some files

oracka.ko

So, calling dbca:

 

[DEV-oracle@exsite1c1-]$ /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName D11TST19 -adminManaged -sid D11TST19 -sysPassword oracle11 -systemPassword oracle11 -characterSet WE8ISO8859P15 -emConfiguration NONE -storageType ASM -diskGroupName DATAC8 -recoveryGroupName RECOC8 -nodelist exsite1c1,exsite1c2 -sampleSchema false

Copying database files

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/D11TST19/D11TST19.log" for further details.

[DEV-oracle@exsite1c1-]$

 

And in the log:

 

[DEV-oracle@exsite1c1-]$ cat /u01/app/oracle/cfgtoollogs/dbca/D11TST19/D11TST19.log

Creating and starting Oracle instance

PRCR-1006 : Failed to add resource ora.d11tst19.db for d11tst19

PRCD-1094 : Failed to create start dependency for database d11tst19 on the ASM cluster filesystem where ACFS path /u01/app/oracle/product/11.2.0.4/dbhome_1 resides

PRCT-1129 : Execution of ACFS driver state check failed on local node with result Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

ACFS-9200: Supported

DBCA_PROGRESS : 100%

[DEV-oracle@exsite1c1-]$

 

If you check the trace of the database creation:

 

[DEV-oracle@exsite1c1-]$ vi /u01/app/oracle/cfgtoollogs/dbca/D11TST19/trace.log

[Thread-166] [ 2019-11-07 08:30:37.860 CET ] [ASMFactoryImpl.isACFSSupported:954]  Entry: oraHome=/u01/app/oracle/product/11.2.0.4/dbhome_1, m_crsHome=/u01/app/19.0.0.0/grid

[Thread-166] [ 2019-11-07 08:30:37.860 CET ] [ASMFactoryImpl.isACFSSupported:958]  Checking if ACFS now...

[Thread-166] [ 2019-11-07 08:30:37.861 CET ] [USMDriverUtil.<init>:117]  Checking file exists for: /u01/app/19.0.0.0/grid/bin/acfsdriverstate

[Thread-166] [ 2019-11-07 08:30:37.862 CET ] [NativeSystem.isCmdScv:502]  isCmdScv: cmd=[]

[Thread-182] [ 2019-11-07 08:30:37.866 CET ] [StreamReader.run:61]  In StreamReader.run

[Thread-181] [ 2019-11-07 08:30:37.868 CET ] [StreamReader.run:61]  In StreamReader.run

[Thread-182] [ 2019-11-07 08:30:38.230 CET ] [StreamReader.run:65]  ERROR>Argument "2/8" isn't numeric in numeric ne (!=) at /u01/app/19.0.0.0/grid/lib/osds_acfsroot.pm line 2093.

[Thread-182] [ 2019-11-07 08:30:38.230 CET ] [StreamReader.run:65]  ERROR>Argument "2/8" isn't numeric in numeric ne (!=) at /u01/app/19.0.0.0/grid/lib/osds_acfsroot.pm line 2093.

[Thread-182] [ 2019-11-07 08:30:38.235 CET ] [StreamReader.run:65]  ERROR>error:  cannot delete old /u01/app/19.0.0.0/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12-112.16.4/4.1.12-112.16.4-x86_64/bin/oracka.ko

[Thread-182] [ 2019-11-07 08:30:38.235 CET ] [StreamReader.run:65]  ERROR>        Permission denied

[Thread-181] [ 2019-11-07 08:30:38.310 CET ] [StreamReader.run:65]  OUTPUT>Errors uncompressing file: oracka.zip.

[Thread-181] [ 2019-11-07 08:30:38.405 CET ] [StreamReader.run:65]  OUTPUT>File -oracka.ko- was not found.

[Thread-181] [ 2019-11-07 08:30:38.485 CET ] [StreamReader.run:65]  OUTPUT>Error # 50 - . Output: .

[Thread-182] [ 2019-11-07 08:30:38.504 CET ] [StreamReader.run:65]  ERROR>error:  cannot delete old /u01/app/19.0.0.0/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12-112.16.4/4.1.12-112.16.4-x86_64/bin/oracka.ko

[Thread-182] [ 2019-11-07 08:30:38.504 CET ] [StreamReader.run:65]  ERROR>        Permission denied

[Thread-181] [ 2019-11-07 08:30:38.587 CET ] [StreamReader.run:65]  OUTPUT>Errors uncompressing file: oracka.zip.

[Thread-181] [ 2019-11-07 08:30:38.667 CET ] [StreamReader.run:65]  OUTPUT>File -oracka.ko- was not found.

[Thread-181] [ 2019-11-07 08:30:38.753 CET ] [StreamReader.run:65]  OUTPUT>Error # 50 - . Output: .

[Thread-181] [ 2019-11-07 08:30:40.877 CET ] [StreamReader.run:65]  OUTPUT>ACFS-9200: Supported

[Thread-166] [ 2019-11-07 08:30:40.881 CET ] [UnixSystem.dorunRemoteExecCmd:3232]  retval = 0

[Thread-166] [ 2019-11-07 08:30:40.881 CET ] [UnixSystem.dorunRemoteExecCmd:3256]  exitValue = 0

[Thread-166] [ 2019-11-07 08:30:40.882 CET ] [USMDriverUtil.checkACFSState:267]  Printing ACFS output

[Thread-166] [ 2019-11-07 08:30:40.882 CET ] [USMDriverUtil.checkACFSState:268]  Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

ACFS-9200: Supported

[Thread-166] [ 2019-11-07 08:30:40.882 CET ] [InstanceStepOPS.executeImpl:1014]  PRCR-1006 : Failed to add resource ora.d11tst19.db for d11tst19

PRCD-1094 : Failed to create start dependency for database d11tst19 on the ASM cluster filesystem where ACFS path /u01/app/oracle/product/11.2.0.4/dbhome_1 resides

PRCT-1129 : Execution of ACFS driver state check failed on local node with result Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

ACFS-9200: Supported

[Thread-166] [ 2019-11-07 08:30:40.883 CET ] [BasicStep.configureSettings:304]  messageHandler being set=oracle.sysman.assistants.util.SilentMessageHandler@5a943dc4

[Thread-166] [ 2019-11-07 08:30:40.883 CET ] [BasicStep.configureSettings:304]  messageHandler being set=oracle.sysman.assistants.util.SilentMessageHandler@5a943dc4

oracle.sysman.assistants.util.step.StepExecutionException: PRCR-1006 : Failed to add resource ora.d11tst19.db for d11tst19

PRCD-1094 : Failed to create start dependency for database d11tst19 on the ASM cluster filesystem where ACFS path /u01/app/oracle/product/11.2.0.4/dbhome_1 resides

PRCT-1129 : Execution of ACFS driver state check failed on local node with result Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

Errors uncompressing file: oracka.zip.

File -oracka.ko- was not found.

Error # 50 - . Output: .

ACFS-9200: Supported

        at oracle.sysman.assistants.dbca.backend.InstanceStepOPS.executeImpl(InstanceStepOPS.java:1015)

        at oracle.sysman.assistants.util.step.BasicStep.execute(BasicStep.java:210)

        at oracle.sysman.assistants.util.step.BasicStep.callStep(BasicStep.java:251)

        at oracle.sysman.assistants.dbca.backend.DBEntryStep.executeImpl(DBEntryStep.java:229)

        at oracle.sysman.assistants.util.step.BasicStep.execute(BasicStep.java:210)

        at oracle.sysman.assistants.util.step.Step.execute(Step.java:140)

        at oracle.sysman.assistants.util.step.StepContext$ModeRunner.run(StepContext.java:2711)

        at java.lang.Thread.run(Thread.java:637)

[Thread-166] [ 2019-11-07 08:30:40.883 CET ] [SQLEngine.done:2189]  Done called

 

So, as you can see in the trace, the database creation process, when detecting that ACFS is in place, tries to unzip some files (.ko, kernel objects) related to ACFS access, but receives errors: it cannot delete the old oracka.ko and cannot uncompress the file oracka.zip.
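A quick way to likely reproduce the check outside of dbca (just a sketch on my side: run it as the same OS user that runs dbca, since the failure is a permission problem when the script refreshes the kernel modules) is to call the same driver state check that appears in the trace:

/u01/app/19.0.0.0/grid/bin/acfsdriverstate supported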

 

Source of the problem

Checking the problem, we can see that the permission denied occurs for the folder $GI_HOME/usm/install/Oracle/EL7UEK/x86_64/4.1.12-112.16.4/4.1.12-112.16.4-x86_64/bin/ inside the GI home. And as you can see, it is the same family as our domU kernel version. Inside MOS there is no reference for this error, and after opening an SR, Oracle sent it to the dev team without an ETA (as usual).
Because of that, I continued to investigate and checked the folder:

 

[root@exsite1c1 ~]# cd /u01/app/19.0.0.0/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12-112.16.4/4.1.12-112.16.4-x86_64/bin/

[root@exsite1c1 bin]#

[root@exsite1c1 bin]# ls -l

total 151244

-rw-r--r-- 1 root oinstall  3085340 Sep  1 12:40 oracka.ko

-rw-r--r-- 1 root oinstall  2304100 Sep  1 12:44 oracka_mod_kga.ko

-rw-r--r-- 1 grid oinstall   623207 Oct  9 17:13 oracka_mod_kga.zip

-rw-r--r-- 1 grid oinstall   889060 Oct  9 17:14 oracka.zip

-rw-r--r-- 1 root oinstall 64000932 Sep  1 12:04 oracleacfs.ko

-rw-r--r-- 1 grid oinstall 19246647 Oct  9 17:13 oracleacfs.zip

-rw-r--r-- 1 root oinstall 27882452 Sep  1 11:25 oracleadvm.ko

-rw-r--r-- 1 grid oinstall  8777000 Oct  9 17:13 oracleadvm.zip

-rw-r--r-- 1 root oinstall  8776676 Sep  1 12:55 oracleafd.ko

-rw-r--r-- 1 grid oinstall  2879744 Oct  9 17:14 oracleafd.zip

-rw-r--r-- 1 root oinstall 10696156 Sep  1 11:04 oracleoks.ko

-rw-r--r-- 1 grid oinstall  3346594 Oct  9 17:14 oracleoks.zip

-rw-r--r-- 1 root oinstall  1757116 Sep  1 13:02 oracleolfs.ko

-rw-r--r-- 1 grid oinstall   567865 Oct  9 17:14 oracleolfs.zip

[root@exsite1c1 bin]#

 

As you can see, the GI home came with some .ko files already uncompressed and with read-only permission for the group. So, the delete and uncompress done by dbca will not work properly. As a workaround to solve the issue, I changed the permission of the bin folder and the ownership of the .ko files to allow changes by oinstall – done on all nodes of the cluster:

 

[root@exsite1c1 4.1.12-112.16.4-x86_64]# pwd

/u01/app/19.0.0.0/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12-112.16.4/4.1.12-112.16.4-x86_64

[root@exsite1c1 4.1.12-112.16.4-x86_64]#

[root@exsite1c1 4.1.12-112.16.4-x86_64]#

[root@exsite1c1 4.1.12-112.16.4-x86_64]# chmod g+w bin

[root@exsite1c1 4.1.12-112.16.4-x86_64]#

[root@exsite1c1 4.1.12-112.16.4-x86_64]#

[root@exsite1c1 4.1.12-112.16.4-x86_64]# ls -l

total 4

drwxrwxr-x 2 grid oinstall 4096 Nov  8 15:08 bin

[root@exsite1c1 4.1.12-112.16.4-x86_64]# cd bin

[root@exsite1c1 bin]# chown grid:oinstall oracka.ko oracka_mod_kga.ko oracleacfs.ko oracleadvm.ko oracleafd.ko oracleoks.ko oracleolfs.ko

[root@exsite1c1 bin]#
After executing this on both nodes, dbca finished:

 

[DEV-oracle@exsite1c1-]$ /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName D11TST19 -adminManaged -sid D11TST19 -sysPassword oracle11 -systemPassword orcle11 -characterSet WE8ISO8859P15 -emConfiguration NONE -storageType ASM -diskGroupName DATAC8 -recoveryGroupName RECOC8 -nodelist exsite1c1,exsite1c2 -sampleSchema false

Creating and starting Oracle instance

1% complete



96% complete

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/D11TST19/D11TST19.log" for further details.

[DEV-oracle@exsite1c1-]$

[DEV-oracle@exsite1c1-]$ $ORACLE_HOME/bin/srvctl status database -d D11TST19

Instance D11TST191 is running on node exsite1c1

Instance D11TST192 is running on node exsite1c2

[DEV-oracle@exsite1c1-]$

 

Just to confirm, the .ko files and the bin folder already came with the wrong permissions in the original 19c installation package:

 

[grid@exsite1c1 +ASM1]$ cd /u01/patches/grid

[grid@exsite1c1 +ASM1]$ unzip -q V982068-01.zip

[grid@exsite1c1 +ASM1]$ cd usm/install/Oracle/EL7UEK/x86_64/4.1.12-112.16.4/4.1.12-112.16.4-x86_64/bin

[grid@exsite1c1 +ASM1]$ ls -l

total 149116

-rw-r--r-- 1 grid oinstall  3085340 Feb 23  2019 oracka.ko

-rw-r--r-- 1 grid oinstall  2304100 Feb 23  2019 oracka_mod_kga.ko

-rw-r--r-- 1 grid oinstall   623214 Feb 23  2019 oracka_mod_kga.zip

-rw-r--r-- 1 grid oinstall   889106 Feb 23  2019 oracka.zip

-rw-r--r-- 1 grid oinstall 62740980 Feb 23  2019 oracleacfs.ko

-rw-r--r-- 1 grid oinstall 18881690 Feb 23  2019 oracleacfs.zip

-rw-r--r-- 1 grid oinstall 27485580 Feb 23  2019 oracleadvm.ko

-rw-r--r-- 1 grid oinstall  8643808 Feb 23  2019 oracleadvm.zip

-rw-r--r-- 1 grid oinstall  8773444 Feb 23  2019 oracleafd.ko

-rw-r--r-- 1 grid oinstall  2878816 Feb 23  2019 oracleafd.zip

-rw-r--r-- 1 grid oinstall 10655524 Feb 23  2019 oracleoks.ko

-rw-r--r-- 1 grid oinstall  3335912 Feb 23  2019 oracleoks.zip

-rw-r--r-- 1 grid oinstall  1757348 Feb 23  2019 oracleolfs.ko

-rw-r--r-- 1 grid oinstall   567876 Feb 23  2019 oracleolfs.zip

[grid@exsite1c1 +ASM1]$ cd ..

[grid@exsite1c1 +ASM1]$ ls -l

total 4

drwxr-xr-x 2 grid oinstall 4096 Apr 18  2019 bin

[grid@exsite1c1 +ASM1]$

 

The sum of all problems

This error with oracka.zip and oracka.ko occurs because of a sum of little details. The GI 19c release already came with bad folder permissions. The release update 19.5.0.0.191015 does not solve the issue either (and neither does the installation or the rootupgrade script). This bin folder was checked by dbca because the kernel image for Exadata 19.2.7.0.0 (19.2.7.0.0.191012) comes with kernel 4.1.12-124.30.1.el7uek.x86_64, and the ACFS modules for this kernel family are there. And to finish the sum of all problems, ACFS is in use, and for that reason dbca tries to use those kernel modules to check access.
So, since there is no solution in MOS (today, while I am publishing this post) and there is no official workaround, it was necessary to find my own solution. So, the workaround described here is just temporary, to allow 11g RAC database creation for this specific sum of Exadata version + GI Home 19c + ACFS in use.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


Exadata, Understanding Metrics
Category: Engineer System Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Exadata, Understanding Metrics

Exadata metrics give you a way to see, in depth, what is happening in the Exadata Storage Servers and in the Exadata software. Understanding them is fundamental to identify and solve problems that can stay hidden (or even unseen) from the database side. In this post, I will explain the details of these metrics and what you can do with them.
My last article about Exadata Storage Server metrics showed one example of how to use them to identify problems that do not appear on the database side. In that post, I used the metric DB_FC_IO_BY_SEC to identify bad queries.
The point I made in that article is that, most of the time, Exadata is so powerful that bad statements are absorbed without visible impact because of its features (flashcache, smartio, and others). But Exadata is also usually a highly consolidated environment, where you host a lot of databases, and it is normal that some of them have different workloads and needs. Metrics help you fine-tune the environment and, beyond that, give you a way to check and control everything that is happening.
In this post, I will not explain each metric one by one, but rather guide you through understanding the metrics and some interesting and important details about them.

 

Understanding metrics

 

Exadata metrics are values extracted directly from the hardware or from the storage server software. Values like “IOPS from each disk” or “MB/s read by SMARTIO” are examples of what you can discover. Directly from the docs:
“Metrics are recorded observations of important run-time properties or internal instrumentation values of the storage cell and its components, such as cell disks or grid disks. Metrics are a series of measurements that are computed and retained in memory for an interval of time, and stored on a disk for a more permanent history.”
To check the definition of Exadata metrics, and all the metrics available, the best place is the official Exadata User Guide, chapter 6. There you can see the definition of every metric and other important information. I really recommend reading it to be aware of what you can extract from the metrics.
When reading metrics, you can read the current values (from the last minute) or the historical values. The history keeps one value per minute for the last 7 days, so metrics give you 24×7 coverage, minute by minute, over the last week, which makes them a good source of information. And, most important, they are individual: you read them from each storage server.

 

Reading metrics

 

To read the metrics you can connect directly to the storage server and, with cellcli, use the “list metriccurrent” or “list metrichistory” commands:

 

[root@exacellsrvx-01 ~]# cellcli

CellCLI: Release 18.1.9.0.0 - Production on Sun Dec 08 15:01:42 BRST 2019




Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.




CellCLI> list metriccurrent

         CD_BY_FC_DIRTY                          CD_00_exacellsrvx-01                            0.000 MB





         SIO_IO_WR_RQ_FC_SEC                     SMARTIO                                         0.000 IO/sec

         SIO_IO_WR_RQ_HD                         SMARTIO                                         2,768,097 IO requests

         SIO_IO_WR_RQ_HD_SEC                     SMARTIO                                         0.000 IO/sec

 

Since these are list commands, you can ask for details, restrict the output with a where clause, or change the attributes displayed:

 

CellCLI> list metriccurrent where name = 'FL_IO_DB_BY_W_SEC' detail

         name:                   FL_IO_DB_BY_W_SEC

         alertState:             normal

         collectionTime:         2019-12-08T15:10:14-02:00

         metricObjectName:       FLASHLOG

         metricType:             Instantaneous

         metricValue:            0.189 MB/sec

         objectType:             FLASHLOG




CellCLI> list metriccurrent where name = 'FL_IO_DB_BY_W_SEC' attributes name, metricvalue, collectionTime

         FL_IO_DB_BY_W_SEC       0.133 MB/sec    2019-12-08T15:11:14-02:00




CellCLI>

 

You can filter the metrics by any of their attributes, for example all the metrics for IORM, or all the metrics that have FC in the name (see the regular-expression example after the next listing). If you want to query values from the past, you need to use list metrichistory:

 

CellCLI> list metrichistory where name = 'FL_IO_DB_BY_W_SEC' and collectionTime = '2019-12-08T15:21:14-02:00' attributes name, metricvalue, collectionTime

         FL_IO_DB_BY_W_SEC       0.196 MB/sec    2019-12-08T15:21:14-02:00




CellCLI>
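The where clause also accepts regular expressions with like, which is handy for pulling whole groups of metrics at once. A minimal illustration (assuming the same cellcli session as above):

CellCLI> list metriccurrent where name like 'FC_.*' attributes name, metricValue

CellCLI> list metriccurrent where objectType like 'IORM_.*' attributes name, metricObjectName, metricValue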

 

Metric types

There are three types of metrics: Instantaneous (the value reflects the moment it was read), Rate (values computed over a period of time), and Cumulative (values accumulated since the storage server was last restarted).
The Rate metrics are usually expressed per second: Exadata counts the values over the last minute and divides them by the number of seconds, so the *_SEC metrics are averages over the last minute.
One important detail is that some metrics have “small” and “large” variants. If a request coming from the database is larger than 128KB it is counted as large (*LG*), otherwise as small (*SM*).
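The type is exposed through the metricType attribute, so you can check how a given metric is computed, or list every metric of one type, directly from cellcli. A small sketch (assuming metricType can be used in the where clause like the other attributes shown earlier):

CellCLI> list metriccurrent where name = 'CL_CPUT' attributes name, metricType, metricValue

CellCLI> list metriccurrent where metricType = 'Cumulative' attributes name, metricObjectName, metricValue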

 

Using metrics

 

To make sense of Exadata metrics it is important to know the limits of your hardware, and for Exadata the good (and quick) reference is the datasheet. Using the X8M datasheet as an example, the maximum throughput per storage server is around 1.78 GB/s (25 GB/s for a full rack divided by 14 storage servers). Understanding these numbers matters, so I recommend reading the datasheet carefully.
Since every storage server computes its metrics separately, you need to query each one to get the big picture. But that does not mean everything should be analyzed per server: some metrics are better analyzed globally, summing the values from all servers. I usually divide Exadata metrics in two groups, Isolated and Per Database.

 

Isolated Metrics

 

By isolated metrics I mean those that are worth checking individually on each storage server, because they express values that only make sense per cell. Some metrics that I like to check this way:
  • CL_CPUT: The cell CPU utilization.
  • CL_MEMUT: The percentage of total physical memory used.
  • N_HCA_MB_RCV_SEC: The number of megabytes received by the InfiniBand interfaces per second.
  • N_HCA_MB_TRANS_SEC: The number of megabytes transmitted by the InfiniBand interfaces per second.
  • CD_IO_BY_R_LG_SEC: The rate which is the number of megabytes read in large blocks per second from a cell disk.
  • CD_IO_BY_R_SM_SEC: The rate which is the number of megabytes read in small blocks per second from a cell disk.
  • CD_IO_BY_W_LG_SEC: The rate which is the number of megabytes written in large blocks per second on a cell disk.
  • CD_IO_BY_W_SM_SEC: The rate which is the number of megabytes written in small blocks per second on a cell disk.
  • CD_IO_RQ_R_LG_SEC: The rate which is the number of requests to read large blocks per second from a cell disk.
  • CD_IO_RQ_R_SM_SEC: The rate which is the number of requests to read small blocks per second from a cell disk.
  • CD_IO_RQ_W_LG_SEC: The rate which is the number of requests to write large blocks per second to a cell disk.
  • CD_IO_RQ_W_SM_SEC: The rate which is the number of requests to write small blocks per second to a cell disk.
  • CD_IO_TM_R_LG_RQ: The rate which is the average latency of reading large blocks per request to a cell disk.
  • CD_IO_TM_R_SM_RQ: The rate which is the average latency of reading small blocks per request from a cell disk.
  • CD_IO_TM_W_LG_RQ: The rate which is the average latency of writing large blocks per request to a cell disk.
  • CD_IO_TM_W_SM_RQ: The rate which is the average latency of writing small blocks per request to a cell disk.
  • CD_IO_UTIL: The percentage of device utilization for the cell disk.
  • FC_BY_ALLOCATED: The number of megabytes allocated in flash cache.
  • FC_BY_USED: The number of megabytes used in flash cache.
  • FC_BY_DIRTY: The number of megabytes in flash cache that are not synchronized to the grid disks.
  • FC_IO_BY_R_SEC: The number of megabytes read per second from flash cache.
  • FC_IO_BY_R_SKIP_SEC: The number of megabytes read from disks per second for I/O requests that bypass flash cache. Read I/O requests that bypass flash cache go directly to disks. These requests do not populate flash cache after reading the requested data.
  • FC_IO_BY_R_MISS_SEC: The number of megabytes read from disks per second because not all requested data was in flash cache.
  • FC_IO_BY_W_SEC: The number of megabytes per second written to flash cache.
  • FC_IO_BY_W_SKIP_SEC: The number of megabytes written to disk per second for I/O requests that bypass flash cache.
  • FC_IO_RQ_R_SEC: The number of read I/O requests satisfied per second from flash cache.
  • FC_IO_RQ_W_SEC: The number of I/O requests per second which resulted in flash cache being populated with data.
  • FC_IO_RQ_R_SKIP_SEC: The number of read I/O requests per second that bypass flash cache. Read I/O requests that bypass flash cache go directly to disks.
  • FC_IO_RQ_W_SKIP_SEC: The number of write I/O requests per second that bypass flash cache.
  • FL_IO_DB_BY_W_SEC: The number of megabytes per second written to hard disk by Exadata Smart Flash Log.
  • FL_IO_FL_BY_W_SEC: The number of megabytes per second written to flash by Exadata Smart Flash Log.
  • FL_IO_TM_W_RQ: Average redo log write latency. It includes write I/O latency only.
  • FL_RQ_TM_W_RQ: Average redo log write request latency.
  • FL_IO_W_SKIP_BUSY: The number of redo writes during the last minute that could not be serviced by Exadata Smart Flash Log.
  • N_MB_RECEIVED_SEC: The rate which is the number of megabytes received per second from a particular host.
  • N_MB_SENT_SEC: The rate which is the number of megabytes transmitted per second to a particular host.
  • SIO_IO_EL_OF_SEC: The number of megabytes per second eligible for offload by smart I/O.
  • SIO_IO_OF_RE_SEC: The number of interconnect megabytes per second returned by smart I/O.
  • SIO_IO_RD_FC_SEC: The number of megabytes per second read from flash cache by smart I/O.
  • SIO_IO_RD_HD_SEC: The number of megabytes per second read from hard disk by smart I/O.
  • SIO_IO_RD_RQ_FC_SEC: The number of read I/O requests per second from flash cache by smart I/O.
  • SIO_IO_RD_RQ_HD_SEC: The number of read I/O requests per second from hard disk by smart I/O.
  • SIO_IO_WR_FC_SEC: The number of megabytes per second of flash cache population writes by smart I/O.
  • SIO_IO_SI_SV_SEC: The number of megabytes per second saved by the storage index.
With these metrics you can discover, for example, on each server:
  • Percentage of CPU and memory utilization.
  • GB/s returned by Smart I/O to the database servers. If you compare it with the amount that was eligible for offload, you can see the percentage saved by smartio.
  • The number of flashcache reads that were redirected to disk because the data was not there. If this value trends up, maybe you have too much data pinned (or simply a lot of data) in flash cache and your queries are demanding other tables.
  • For the celldisk (CD_*) metrics, it is important to break them down by the metricObjectName attribute to separate reads from disks and from flash. There are no dedicated FD_* metrics; flash disks appear at the storage level as cell disks, just with different throughput values. The same is true for EF (Extreme Flash) racks, which only have flash.
  • For flashcache directly, you can check the allocated values (usually 100%), but also the “dirty” usage, when data is not yet synchronized between flash and disks. This can indicate a lot of writes from your databases, bad query design, or high pressure between the consolidated databases (maybe you can disable flashcache for one category/database through IORM).
  • From smartscan you can check the MB/s read from flashcache or disk to offload your queries, or even the MB/s saved by the storage index.
So, as you can see, there is a lot of information that you can extract from the storage servers. I prefer to read these metrics separately (per storage server) because if I aggregate them globally (smartio or flashcache, as an example), I don't get a correct representation of what is happening under the hood: a good value from one storage server can hide a bad one from another when I calculate averages.
The idea of these metrics is to give you an overview of the Exadata storage software: the amount of data being read from the hardware (CD_* metrics) and how it is used by the features. You can see how much was saved by the storage index or by smartscan, for example, or whether the flashcache is correctly populated (and not dirty). They even help to identify queries that may be bypassing the flashcache or not using smartio.
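Because each cell only knows its own numbers, I find it handy to read these isolated metrics from all storage servers in a single shot with dcli. A minimal sketch (the database-node hostname and the cell_group file, with one storage server per line, are assumptions following the usual Exadata convention):

# CPU and memory utilization on every storage server
[root@exadb01 ~]# dcli -g /root/cell_group -l root cellcli -e list metriccurrent CL_CPUT
[root@exadb01 ~]# dcli -g /root/cell_group -l root cellcli -e list metriccurrent CL_MEMUT

# Large-read throughput per cell disk, on every cell
[root@exadb01 ~]# dcli -g /root/cell_group -l root cellcli -e list metriccurrent CD_IO_BY_R_LG_SEC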
 

Database Metrics

 

The concept of global metrics does not exist directly in Exadata; you still need to read each storage server separately. But I recommend analyzing these metrics globally, summing the values from every storage server. One example is the IOPS (or MB/s) for a database: you usually want to know the value for the entire Exadata and not per server.
In the list below I include only the database metrics, but the same exist for PDBs, Consumer Groups, and Categories. Remember that, for IORM, the hierarchy when creating the plans is Categories first and Databases after.
  • DB_FC_BY_ALLOCATED: The number of megabytes allocated in flash cache for this database.
  • DB_FC_IO_BY_SEC: The number of megabytes of I/O per second for this database to flash cache.
  • DB_FC_IO_RQ_LG_SEC: The number of large I/O requests issued by a database to flash cache per second.
  • DB_FC_IO_RQ_SM_SEC: The number of small I/O requests issued by a database to flash cache per second.
  • DB_FL_IO_BY_SEC: The number of megabytes written per second to Exadata Smart Flash Log.
  • DB_FL_IO_RQ_SEC: The number of I/O requests per second issued to Exadata Smart Flash Log.
  • DB_IO_BY_SEC: The number of megabytes of I/O per second for this database to hard disks.
  • DB_IO_LOAD: The average I/O load from this database for hard disks. For a description of I/O load, see CD_IO_LOAD.
  • DB_IO_RQ_LG_SEC: The rate of large I/O requests issued to hard disks by a database per second over the past minute.
  • DB_IO_RQ_SM_SEC: The rate of small I/O requests issued to hard disks by a database per second over the past minute.
  • DB_IO_TM_LG_RQ: The rate which is the average latency of reading or writing large blocks per request by a database from hard disks.
  • DB_IO_TM_SM_RQ: The rate which is the average latency of reading or writing small blocks per request by a database from hard disks.
  • DB_IO_UTIL_LG: The percentage of hard disk resources utilized by large requests from this database.
  • DB_IO_UTIL_SM: The percentage of hard disk resources utilized by small requests from this database.
  • DB_IO_WT_LG_RQ: The average IORM wait time per request for large I/O requests issued to hard disks by a database.
  • DB_IO_WT_SM_RQ: The average IORM wait time per request for small I/O requests issued to hard disks by a database.
With these metrics you can see how each database/PDB/consumer group/category is using the Exadata. As an example, you can check the MB/s (or IOPS) read from flashcache per second, and if you compare it with the CD_* metrics you can discover which database is using it the most. The same can be done for flashlog.
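To look at one specific database you can filter by metricObjectName, which for these metrics is the database name (MYDB below is just a placeholder), and then sum the values returned by each storage server:

CellCLI> list metriccurrent where name = 'DB_IO_BY_SEC' and metricObjectName = 'MYDB' attributes name, metricObjectName, metricValue

CellCLI> list metrichistory where name = 'DB_IO_UTIL_LG' and metricObjectName = 'MYDB' attributes metricValue, collectionTime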

 

What you can discover

 

With metrics you can discover a lot of things that stay hidden when you only look from the database side (AWR, as an example). This is even more important when your Exadata is a consolidated environment. You can compare values from different databases and get insights into something that is not right (as in my example from the previous post).
Here I will show you another example. Again, all the values were collected using my script, which you can check in my post at OTN. The script connects to each storage server and retrieves the values from the last 10 minutes; I then store them and compute the average of the minutes reported. This means that every point plotted below is the average computed over 10 minutes.

 

 

Look at the graph above, which shows the values of the DB_IO_BY_SEC metric. As you can see, the database reached around 1.6 to 1.7 GB/s of disk usage, and if you remember the Exadata datasheet, that means this single database used almost all of the disk capacity/throughput of the Exadata storage. This can be confirmed by the I/O utilization:
As you can see, the average was around 85 to 90 percent of I/O utilization for this database, and looking at the values minute by minute, there are peaks of usage around 95%.
And, as you can imagine, other databases started to see wait time at the same moment:
If you compare the three graphs for the same time window, you can see that when one database started to demand a lot from the storage servers (close to the maximum throughput of the hardware), the other databases started to see more wait time per request. The values are expressed in ms/r (milliseconds per request).
This is one example of what you can discover using metrics. In this specific case, the analysis started because one application DBA team reported slow queries during the morning, but the application had not changed in the previous 4 months, and AWR and query tuning did not help. Using the metrics, it was discovered that another database had deployed a new release with bad SQL; looking only at the first database, everything appeared normal.

 

How to read

 

If you want to read the metrics, the best way is through cellcli, but sometimes you don't need it or you can't use it. You can use Enterprise Manager/Cloud Control with the Exadata plugin to check some information, like IOPS and MB/s, but not all the metrics are available there.
Still in EM/CC, you can check the KPIs for the storage servers and write some queries directly against the EM/CC repository database. Oracle has a good reference for this procedure in this PDF: https://www.oracle.com/technetwork/database/availability/exadata-storage-server-kpis-2855362.pdf
You can also use Oracle Management Cloud for Exadata to help you identify bottlenecks in your environment, using some AI and Oracle expertise to spot trends more easily and be more proactive than reactive: https://cloudcustomerconnect.oracle.com/files/32e7361eed/OMC_For_EXADATA_whitepaper_v28.pdf or this blog post from Oracle.
Another way is writing (or reusing) a script to integrate with your monitoring tool. You can use my script as a guide and adapt it to your needs. The script is in my post at Oracle OTN (Portuguese Version, Spanish Version; unfortunately there is no English version, it was sent to be published but never was, and I don't know why). The published version reads all the metrics and outputs each one as a single line, which you can insert into a database or save to a file as you wish.
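As a very small sketch of that idea (this is not my published script; the cell_group file, password-less ssh as root to each cell, and the output file are assumptions), you can dump the current metrics from every storage server into one flat file and load it wherever your monitoring tool expects:

#!/bin/bash
# Minimal sketch: dump the current metrics of every storage server into one flat file.
# Assumes /root/cell_group lists one storage server per line and ssh keys are in place.
TS=$(date '+%Y-%m-%dT%H:%M:%S')
OUT=/tmp/exadata_metrics.txt

while read -r CELL; do
    # Read all current metrics from this cell and prefix each line with timestamp and cell name
    ssh "root@${CELL}" "cellcli -e list metriccurrent attributes name, metricObjectName, metricValue" \
        | awk -v cell="${CELL}" -v ts="${TS}" '{print ts ";" cell ";" $0}' >> "${OUT}"
done < /root/cell_group

You can schedule something like this every minute from cron and later aggregate the file per database or per metric, which is essentially what the published script does in a single line of output per metric.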

 

Insights

 

The idea of metric analysis is to check the metrics and get insights into what can be wrong. Sometimes it is easy to identify (like the example before), but other times you need to dig into a lot of metrics to find the problem (smartio, flashcache, dirty cache, etc.).
Unfortunately, to understand the metrics you need to truly understand Exadata. It is more than knowing the hardware or how to patch Exadata: you need to understand the limits of your machine (the datasheet helps), database tuning (to recognize a bad query plan), and where to search (which metric to use). You need to put all of this together to get the insights, and that only comes with day-to-day usage and practice.
The idea of this post was to give you a guide, or even a shortcut, to understand Exadata metrics and do more with them. I did not cover every metric, or every detail of each metric, but I tried to show you how to read, analyze, and use them to identify problems.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”

 

