Feature – Auto-tune Detached Block Volumes
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

Feature: Auto-tune Detached Block Volumes

Last month, Oracle launched a great feature that helps reduce block volume costs. Let's take a look at it.
 
You can now tune the performance of your detached volumes to the Lower Cost setting automatically. With this new capability, while your volumes stay in a detached state, you can achieve significant cost savings.
 
When you’re ready to use them for your workloads, simply attach them, and their performance and cost are automatically and instantaneously adjusted to the performance setting you originally configured. 
 
When you enable this feature for your volumes, each volume is monitored and changed to the Lower Cost performance option automatically when it is disconnected, and this setting is maintained until you reconnect it. This feature is integrated with the storage platform: you can take advantage of it with a click in the Console or with a command-line option in the CLI for each of your volumes.
 
Auto-tune for detached volumes capability is generally available for all existing and new boot and block volumes in all regions globally, on the Console, and through CLI, SDK, API, and Terraform.
 

 

Enabling and Managing Auto-tune for Detached Volumes

Enabling the auto-tune feature for detached volumes is straightforward with a click in the Oracle Cloud Infrastructure Console.
To enable auto-tune for a volume, on the Block Volume Details screen of the Console, click Edit and slide the Auto-Tune Performance setting to On. The Edit dialog window was also revised as part of this feature update. The same setting can be managed per volume from the CLI.
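For the CLI, here is a minimal sketch of enabling the setting on an existing block volume (the --is-auto-tune-enabled flag name and the OCID below are assumptions on my part; check "oci bv volume update --help" on your CLI version before using it, and use "oci bv boot-volume" analogously for boot volumes):

oci bv volume update --volume-id ocid1.volume.oc1..example --is-auto-tune-enabled true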

 

When Auto-Tune Performance is set to On for a detached volume, auto-tuning takes effect after 24 hours. After that, if the volume is still detached, its performance and cost are lowered to the Lower Cost setting automatically.
[Screenshots: volume performance after Auto-Tune is applied, i.e. after 24 hours detached]
When the volume is attached again, its performance is set to the Default Performance setting immediately and automatically.
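To confirm the currently applied performance after the 24-hour window, or after re-attaching the volume, you can query the volume again. This is a hedged sketch: the vpus-per-gb and is-auto-tune-enabled field names are assumptions based on the Block Volume API, and the OCID is a placeholder.

oci bv volume get --volume-id ocid1.volume.oc1..example --query 'data.{"vpus":"vpus-per-gb","autotune":"is-auto-tune-enabled"}'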

 

Highlights

 

  • Built-in feature – works for new and existing volumes
  • Applicable to block and boot volumes
  • Available in all regions
  • Reduces operational expenses
  • Once Auto-tune is enabled, it takes 24 hours to take effect
  • Done via Console, CLI, SDK, API, and Terraform
  • Switches the volume to the Lower Cost performance option automatically when it is detached from an instance
  • When attached again, the volume returns to its previously defined performance setting
  • Optional feature – you have to enable it per volume (at creation or later via Edit)
 

 

I hope this helps you!!!
Andre Luiz Dutra Ontalba

 

 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed so that it remains generic and useful.”


HOW TO REMOVE HAIP ON ODA 18.8.0.0.0
Category: Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

HOW TO REMOVE HAIP ON ODA 18.8.0.0.0

I needed to remove HAIP from ODA after migrating to version 18.8.0.0.0 and decided to prepare this procedure.
 
This action plan should require only one clusterware restart, versus patching, which can result in two or three clusterware restarts.
 
Let’s go to the procedure.
1. Backup gpnp profile

 

[grid@testoda1 peer]$ cd /u01/app/18.0.0.0/grid/gpnp/$(hostname)/profiles/peer
[grid@testoda1 peer]$ cp -p profile.xml profile.xml.bkp
[grid@testoda2 peer]$ cd /u01/app/18.0.0.0/grid/gpnp/$(hostname)/profiles/peer
[grid@testoda2 peer]$ cp -p profile.xml profile.xml.bkp
2. Get the cluster_interconnect interfaces (only on node0)
[grid@testoda1 ~]$ /u01/app/18.0.0.0/grid/bin/oifcfg getif

btbond1 10.32.16.0 global public
p1p1 192.168.16.0 global cluster_interconnect,asm
p1p2 192.168.17.0 global cluster_interconnect,asm

Please note that the private interface names might be different depending on the model and/or the ODA version used to deploy the machine.

For the rest of this note, we are using p1p1/p1p2 as an example in the steps below.
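If you are not sure which interfaces carry the private interconnect on your ODA, oifcfg can list every interface it knows about together with its subnet and type, which you can cross-check against the getif output above:

[grid@testoda1 ~]$ /u01/app/18.0.0.0/grid/bin/oifcfg iflist -p -n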

 
3. Backup existing ifcfg- files
[root@testoda1 ~]# cd /etc/sysconfig/network-scripts
[root@testoda1 network-scripts]# mkdir -p backupifcfgFiles
[root@testoda1 network-scripts]# cp ifcfg-p1p1 backupifcfgFiles/ifcfg-p1p1.bak
[root@testoda1 network-scripts]# cp ifcfg-p1p2 backupifcfgFiles/ifcfg-p1p2.bak
[root@testoda2 ~]# cd /etc/sysconfig/network-scripts
[root@testoda2 network-scripts]# mkdir -p backupifcfgFiles
[root@testoda2 network-scripts]# cp ifcfg-p1p1 backupifcfgFiles/ifcfg-p1p1.bak
[root@testoda2 network-scripts]# cp ifcfg-p1p2 backupifcfgFiles/ifcfg-p1p2.bak
4. Create ifcfg-icbond0 and modify ifcfg- files
[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-icbond0

# This file is automatically created by the ODA software.

DEVICE=icbond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1"
IPADDR=192.168.16.24
NETMASK=255.255.255.0

[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p1

# This file is automatically created by the ODA software.

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p2


# This file is automatically created by the ODA software.

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-icbond0

# This file is automatically created by the ODA software.

DEVICE=icbond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1"
IPADDR=192.168.16.25
NETMASK=255.255.255.0

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p1

# This file is automatically created by the ODA software.

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p2

# This file is automatically created by the ODA software.

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

 
5. Creating/replacing init.ora-s for APX instances
[grid@testoda1]$ echo "+APX1.cluster_interconnects='192.168.16.24'" > $ORACLE_HOME/dbs/init+APX1.ora

[grid@testoda2]$ echo "+APX2.cluster_interconnects='192.168.16.25'" > $ORACLE_HOME/dbs/init+APX2.ora

6. Stop the Clusterware on node2
[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
7. Set the new, bonded cluster_interconnect interface and remove p1p1/p1p2 interfaces from the configuration (only on node0)
[grid@testoda1 ~]$ oifcfg setif -global icbond0/192.168.16.0:cluster_interconnect,asm

[grid@testoda1 ~]$ oifcfg getif

btbond1 10.209.244.0 global public
p1p1 192.168.16.0 global cluster_interconnect,asm
p1p2 192.168.17.0 global cluster_interconnect,asm
icbond0 192.168.16.0 global cluster_interconnect,asm

[grid@testoda1 ~]$ oifcfg delif -global p1p1/192.168.16.0

[grid@testoda1 ~]$ oifcfg delif -global p1p2/192.168.17.0

[grid@testoda1 ~]$ oifcfg getif

btbond1 10.209.244.0 global public
icbond0 192.168.16.0 global cluster_interconnect,asm

8. Remove HAIP dependency in ora.asm
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.cluster_interconnect.haip -attr ENABLED=0 -init

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.cluster_interconnect.haip -attr ENABLED=0 -init

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd) pullup(ora.cssd,ora.ctssd) weak(ora.drivers.acfs)', STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd) pullup(ora.cssd,ora.ctssd) weak(ora.drivers.acfs)', STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init

9. Removing ora.cluster_interconnect.haip resource
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl delete resource ora.cluster_interconnect.haip -init -f

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl delete resource ora.cluster_interconnect.haip -init -f

10. Stop the Clusterware on node1
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
11. Restart the network
[root@testoda1 network-scripts]# service network restart

[root@testoda1 network-scripts]# ifconfig -a

[root@testoda1 network-scripts]# cat /proc/net/bonding/icbond0

[root@testoda2 network-scripts]# service network restart

[root@testoda2 network-scripts]# ifconfig -a

[root@testoda2 network-scripts]# cat /proc/net/bonding/icbond0

 
12. Restart the Clusterware
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl start crs

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl start crs

13. Restart dcs-agent to rediscover the interfaces automatically
[root@testoda1 ~]# /opt/oracle/dcs/bin/restartagent.sh

[root@testoda2 ~]# /opt/oracle/dcs/bin/restartagent.sh

14. Check the cluster status after removing HAIP

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
testoda1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
testoda2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
I hope this procedure helps you!
Andre Luiz Dutra Ontalba
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed so that it remains generic and useful.”


Oracle Autonomous Data Guard
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

Oracle Autonomous Data Guard

Oracle launched several services for customers last week. We already talked a little about the Oracle Dedicated Region in a previous article.
 
Now let’s talk a little bit about Oracle Autonomous Data Guard.
 
When you enable Autonomous Data Guard, the system creates a standby database that continuously gets updated with the changes from the primary database.
With Autonomous Data Guard enabled, Autonomous Database provides one identical standby database that allows the following, depending on the state of the primary database:
1. If your primary database goes down, Autonomous Data Guard converts the standby database to the primary database with minimal interruption. After failover completes, Autonomous Data Guard creates a new standby database for you.
2. You can perform a switchover operation, where the primary database becomes the standby database, and the standby database becomes the primary database.
Autonomous Database does not provide access to the standby database. You perform all operations, such as scaling up the OCPU Count and enabling Auto Scaling, on the primary database, and Autonomous Data Guard then performs the same actions on the standby database. Likewise, you only perform actions such as stopping or restarting the database on the primary database.
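Besides the Console, Autonomous Data Guard can be enabled from the OCI CLI. The sketch below is only illustrative: the --is-data-guard-enabled flag name and the OCID are assumptions of mine, so verify them with "oci db autonomous-database update --help" for your CLI version.

oci db autonomous-database update --autonomous-database-id ocid1.autonomousdatabase.oc1..example --is-data-guard-enabled true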

Autonomous Data Guard Features

Autonomous Data Guard monitors the primary database and if the Autonomous Database instance goes down, then the standby instance assumes the role of the primary instance.
The standby database is created in the same region as the primary database. For better resilience, the standby database is provisioned as follows:
1. In regions with more than one availability domain, the standby database is provisioned automatically in a different availability domain than the primary database.
2. In regions with a single availability domain, the standby database is provisioned automatically on a different physical machine than the primary database.
All Autonomous Database features from the primary database are available when the standby instance becomes the primary after the system fails over or after you perform a switchover operation, including the following:
OML Notebooks: Notebooks and users created in the primary database are available in the standby.
APEX Data and Metadata: APEX information created in the primary database is copied to the standby.
ACLs: The Access Control List (ACL) of the primary database is duplicated for the standby.
Private Endpoint: The private endpoint from the primary database applies to the standby.
APIs or Scripts: Any APIs or scripts you use to manage the Autonomous Database continue to work without any changes after a failover operation or after you perform a switchover.
Client Application Connections: Client applications do not need to change their connection strings to connect to the database after a failover to the standby database or after you perform a switchover.
Wallet Based Connections: You can continue using your existing wallets to connect to the database after a failover to the standby database or after you perform a switchover.
Database Options: The OCPU Count, Storage, Display Name, Database Name, Auto Scaling, Tags, and Licensing options have the same values after a failover to the standby database or after you perform a switchover.
When Autonomous Data Guard is enabled the RTO and RPO numbers are as follows:
1.  Automatic Failover: the RTO is two (2) minutes and RPO is zero (0).
2. Manual Failover: the RTO is two (2) minutes and RPO is up to five (5) minutes.
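A manual switchover is also exposed through the CLI and is run against the primary database. Again, this is a hedged sketch: the switchover subcommand of "oci db autonomous-database" and the OCID are assumptions to be verified against your CLI version.

oci db autonomous-database switchover --autonomous-database-id ocid1.autonomousdatabase.oc1..example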
 

Notes for enabling Autonomous Data Guard:

 
To enable Autonomous Data Guard the database version must be Oracle Database 19c or higher.
 
Autonomous Database generates the Enable Autonomous Data Guard work request. To view the request, under Resources click Work Requests.
 
While Autonomous Data Guard is being enabled and the Peer State field shows Provisioning, the following actions are disabled for the database:
1. Move Resource
2. Stop
3. Restart
4. Restore
 
I hope this helps you!!!
Andre Luiz Dutra Ontalba

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed so that it remains generic and useful.”


Oracle Dedicated Region
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

Oracle Dedicated Region

Yesterday we had the official launch of the Oracle Dedicated Region. Let’s see a little bit about this service.

 

What is Oracle Dedicated Region Cloud@Customer?

 

Dedicated Region Cloud@Customer is a fully managed cloud region built with Oracle-designed high-performance infrastructure to help customers bring all second-generation cloud primitives and services closer to existing data and applications. Dedicated Region Cloud@Customer brings best-in-class price-performance and security to mission-critical workloads that are unlikely to move to the public cloud for several years.
Additionally, Oracle Dedicated Region Cloud@Customer is certified to seamlessly run Oracle Cloud applications, including Oracle Fusion Cloud Applications (Cloud ERP, Cloud HCM, Cloud SCM, and Cloud CX), making it a completely integrated cloud experience on-premises. Customers only pay for the services they consume, using the same predictable low pricing offered in Oracle’s public cloud regions.
 
In which countries is Oracle Dedicated Region Cloud@Customer available?

 

Oracle Dedicated Region Cloud@Customer is available in the following countries: Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, Colombia, Denmark, Finland, France, Germany, Greece, Hong Kong, India, Israel, Italy, Japan, Kenya, Mexico, Netherlands, New Zealand, Norway, Oman, Peru, Poland, Puerto Rico, Russia, Saudi Arabia, Singapore, South Korea, Spain, Sweden, Switzerland, Thailand, Turkey, United Arab Emirates, United Kingdom, United States.
 
The Oracle Dedicated Region Cloud@Customer requires a minimum commitment of $6M/year in consumption over a three-year period.

 

Highlights:

– Fully managed cloud region at Cloud@Customer.
– All capabilities of public cloud services.
– Strong isolation of customer data.
– More than 50 services available.
– No change in pricing & capabilities.
– Oracle-managed maintenance & operations.
– Physical & data security under customer control.
– Better than AWS Outposts, which provides a limited set of offerings.
– Mission-critical, latency-sensitive applications.
– All (100%) Gen 2 services available in the customer datacenter.
– Fusion, Autonomous DB & Cloud Applications on Cloud@Customer.
– Consumption-based model – pay for what you use.
– Cloud services, APIs, and SLAs as per the public cloud.
 

 

With the update to the consumption-based Cloud@Customer, Oracle looks to advance its position in a hybrid cloud market increasingly contested by hyper-scalers.

 

I hope this helps you!!!
Andre Luiz Dutra Ontalba

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed so that it remains generic and useful.”


LuxOUG – Virtual Tech Days
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0
First LuxOUG Virtual Tech Days event for the Oracle community.
 
We will hold our first online event, covering various technologies such as DevOps, Engineered Systems, Middleware, Cloud, and others.
 
 Event Schedule:

Day 1 – 22/06

Speaker: Toon Koppelaars
Session:  Database Core Performance Principles.
Speaker: Piet Visser
Session: Partitioning – Positives and Pitfalls. (database, development) – this one will include an SQL demo.
Speaker: Bruno Reis
Session: Beginner-friendly Python for Oracle DBAs.
Speaker: Heli Helskyaho
Session: Introduction to Machine Learning.

Day 2 – 23/06

Speaker: Kamran Agayev
Session: From DBA to Data Engineer – How to survive a career transition.
Speaker: Nikitas Xenakis
Session: Building a Highly Available & Scalable Logistics Platform with Oracle 19c & Goldengate 19c.
Speaker: Franky Weber
Session: Cheating your application code with Oracle Database.
Speaker: Rodrigo Jorge
Session: Scanning Oracle Database for Malicious Changes.

Day 3 – 24/06

Speaker: Sandesh Rao 
Session: Introduction to AutoML and Data Science using the Oracle Autonomous Database.
Speaker: Robert Marz
Session: Oracle Cloud Infrastructure – Network Setup for DBAs.
Speaker: Alex Zaballa
Session: Exploring All Options to Move your Oracle Databases to the Oracle Cloud.
Speaker: Mariami Kupatadze
Session: Main components, memory structures, physical and logical structures and more.

Day 4 – 25/06

 
Speaker: Erik Van Roon 
Session:  Handling errors during bulk DML operations.
Speaker: Mohamed Houri 
Session:  Cursor Optimization under Adaptive and Extended Cursor Sharing.
Speaker: Y V Ravi Kumar
Session:  Oracle Sharding Technical Deep Dive.
Speaker: Lonneke Dikmans
Session: Oracle Blockchain Platform – a case study.
===================================================================================
For registration and participation in the virtual event room, please CLICK HERE
 
Enrollment open until 20/06. 
 
After this period, the event will be broadcast on our YouTube channel – CLICK HERE
 
We have split the videos by speaker; this is the playlist for LuxOUG Virtual Tech Days: CLICK HERE
====================================================================================
 
Presentations for download :
 
Speaker: Heli Helskyaho – PDF
Speaker: Mariami Kupatadze – PDF
Speaker: Toon Koppelaars – PDF
Speaker: Y V Ravi Kumar – PDF
Speaker: Rodrigo Jorge – PDF
Speaker: Erik Van Roon – PDF and Scripts 
Speaker: Alex Zaballa – PDF
Speaker: Bruno Reis – PDF
Speaker: Robert Marz – PDF
Speaker: Mohamed Houri  – PDF
Speaker: Sandesh Rao – PDF
Speaker: Lonneke Dikmans – PDF
Speaker: Piet Visser – PDF
Speaker: Franky Weber – PDF
 
 
 
 
See you at the event
 
 
LuxOUG Board

How to upgrade ODA Patch: 18.8.0.0 for Virtualized System
Category: Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

How to upgrade ODA Patch: 18.8.0.0 for Virtualized System

Introduction

The goal of this document is to describe step by step how to upgrade ODA Virtualized System.

Prerequisites

For this upgrade, the ODA must be at version 18.3.

Oracle Database Appliance Documentation (check the latest version of the patch)

Start Upgrade

1 – Backup of ODA_BASE (both nodes): it can take up to two hours.

From DOM0:
As root user:
Node 1:

oakcli stop oda_base

mkdir -p /backup/odax58duts1/odax58duts1_$(date +"%Y%m%d")

nohup tar -cvzf /backup/odax58duts1/odax58duts1_$(date +"%Y%m%d")/oakDom1.odax58duts1_dom0.tar.gz /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1 &

After complete the backup:

oakcli start oda_base

Node 2:

oakcli stop oda_base

mkdir -p /backup/odax58duts2/odax58duts2_$(date +"%Y%m%d")

nohup tar -cvzf /backup/odax58duts2/odax58duts2_$(date +"%Y%m%d")/oakDom1.odax58duts2_dom0.tar.gz /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1 &

After complete the backup:

oakcli start oda_base

2 – Download patch in a shared directory or separated directory in both ODA servers:

Os User: root

mkdir -p /backup/patchODA2020

Download all required files to this directory:

/backup/patchODA2020

There are two .zip files to download for patch 30518438:

p30518438_188000_Linux-x86-64_1of2.zip

p30518438_188000_Linux-x86-64_2of2.zip

Note: We must guarantee a minimum amount of free space (20 GB) in the ODA_BASE file systems "/" and "/u01" (see the df check after the cleanup steps below).
– Purge old logs from ODA, with root user:

oakcli manage cleanrepo --ver 18.3.0.0.0

oakcli manage cleanrepo --ver 18.6.0.0.0

/usr/local/bin/purgeODALog -orcl 20 -tfa 10 -osw 10 -oak 10

– Clean up old patches from GRID_HOME, affects "/u01":

su - grid

. oraenv

+ASM1 or +ASM2

cd $ORACLE_HOME/OPatch

./opatch util cleanup

– Clean up old patches from ORACLE_HOME, affects "/u01":

su - oracle

. oraenv <SID>

cd $ORACLE_HOME/OPatch

./opatch util cleanup

Note:
– This cleanup must be performed in every ORACLE_HOME version that exists on the server.
– Look at "/home/oracle", "/home/grid" and "/tmp" to perform some cleanup and release some space in the "/" file system.
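After this cleanup, a quick check of the free space on both file systems (run as root on both nodes) confirms the 20 GB minimum mentioned above:

df -h / /u01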

3 – Unpack downloaded patch in both ODA nodes:

cd /backup/patchODA2020

oakcli unpack -package p30518438_188000_Linux-x86-64_1of2.zip

oakcli unpack -package p30518438_188000_Linux-x86-64_2of2.zip

4 – Verify and validate the upgrade of the ODA components (OS):

export EXTRA_OS_RPMS_LOC=/backup/patchODA2020

oakcli validate -c ospatch -ver 18.8.0.0.0

Note:
Resolving RPM conflicts for version 18.8.0.0.0.

Example output with errors and conflicts:

BEGIN OUTPUT:
NODE 1:

[root@odax58duts1 patchODA2020]# oakcli validate -c ospatch -ver 18.8.0.0.0

INFO: Validating the OS patch for the version 18.8.0.0.0

INFO: 2020-06-06 14:06:16: Performing a dry run for OS patching

ERROR: 2020-06-06 14:06:31: Unable to run the command : /usr/bin/yum --exclude=kmod-mpt2sas,ibutils-libs,dapl,libcxgb3,libipathverbs,libmthca,libnes,ofed-docs update --disablerepo=* --enablerepo=ODA_REPOS_LOC -y

ERROR: 2020-06-06 14:06:31: Loaded plugins: rhnplugin, ulninfo, versionlock

This system is not registered with ULN.

You can use uln_register to register.

ULN support will be disabled.

Repository ol6_latest is listed more than once in the configuration

Repository ol6_addons is listed more than once in the configuration

Repository ol6_UEK_latest is listed more than once in the configuration

Setting up Update Process

Resolving Dependencies

--> Running transaction check

---> Package cpupowerutils.x86_64 0:1.3-2.el6 will be updated

---> Package cpupowerutils.x86_64 0:1.3-2.0.1.el6 will be an update

---> Package cups-libs.x86_64 1:1.4.2-79.el6 will be updated

---> Package cups-libs.x86_64 1:1.4.2-81.el6_10 will be an update

---> Package dbus.x86_64 1:1.2.24-9.0.1.el6 will be updated

---> Package dbus.x86_64 1:1.2.24-11.0.1.el6_10 will be an update

---> Package dbus-libs.x86_64 1:1.2.24-9.0.1.el6 will be updated

---> Package dbus-libs.x86_64 1:1.2.24-11.0.1.el6_10 will be an update

---> Package dracut.noarch 0:004-411.0.3.el6 will be updated

---> Package dracut.noarch 0:004-411.0.4.el6 will be an update

---> Package dracut-kernel.noarch 0:004-411.0.3.el6 will be updated

---> Package dracut-kernel.noarch 0:004-411.0.4.el6 will be an update

---> Package glibc.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-common.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-common.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-devel.i686 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-devel.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-devel.i686 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-devel.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-headers.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-headers.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package initscripts.x86_64 0:9.03.61-1.0.3.el6 will be updated

---> Package initscripts.x86_64 0:9.03.61-1.0.6.el6 will be an update

---> Package kernel-headers.x86_64 0:2.6.32-754.11.1.el6 will be updated

---> Package kernel-headers.x86_64 0:2.6.32-754.18.2.el6 will be an update

---> Package kernel-uek.x86_64 0:4.1.12-124.33.4.el6uek will be installed

---> Package kernel-uek-firmware.noarch 0:4.1.12-124.33.4.el6uek will be installed

---> Package kexec-tools.x86_64 0:2.0.7-1.0.27.el6 will be updated

---> Package kexec-tools.x86_64 0:2.0.7-1.0.28.el6 will be an update

---> Package ksplice.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice.x86_64 0:1.0.43-1.el6 will be an update

---> Package ksplice-core0.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice-core0.x86_64 0:1.0.43-1.el6 will be an update

---> Package ksplice-offline.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice-offline.x86_64 0:1.0.43-1.el6 will be an update

---> Package ksplice-tools.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice-tools.x86_64 0:1.0.43-1.el6 will be an update

---> Package libgudev1.x86_64 0:147-2.73.0.1.el6_8.2 will be updated

---> Package libgudev1.x86_64 0:147-2.73.0.2.el6_8.2 will be an update

---> Package libudev.x86_64 0:147-2.73.0.1.el6_8.2 will be updated

---> Package libudev.x86_64 0:147-2.73.0.2.el6_8.2 will be an update

---> Package mailx.x86_64 0:12.4-8.el6_6 will be updated

---> Package mailx.x86_64 0:12.4-10.el6_10 will be an update

---> Package openssl.x86_64 0:1.0.1e-57.0.6.el6 will be updated

---> Package openssl.x86_64 0:1.0.1e-58.0.1.el6_10 will be an update

---> Package oracle-ofed-release.x86_64 0:1.0.0-50.el6 will be updated

---> Package oracle-ofed-release.x86_64 0:1.0.0-51.el6 will be an update

---> Package perf.x86_64 0:2.6.32-754.11.1.el6 will be updated

---> Package perf.x86_64 0:2.6.32-754.18.2.el6 will be an update

---> Package python.x86_64 0:2.6.6-66.0.1.el6_8 will be updated

---> Package python.x86_64 0:2.6.6-68.0.1.el6_10 will be an update

---> Package python-libs.x86_64 0:2.6.6-66.0.1.el6_8 will be updated

---> Package python-libs.x86_64 0:2.6.6-68.0.1.el6_10 will be an update

---> Package rdma.noarch 2:3.10-3.0.40.el6 will be updated

---> Package rdma.noarch 2:3.10-3.0.41.el6 will be an update

---> Package samba.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-client.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-client.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-common.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-common.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-winbind.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-winbind.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-winbind-clients.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-winbind-clients.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package sudo.x86_64 0:1.8.6p3-29.el6_9 will be updated

---> Package sudo.x86_64 0:1.8.6p3-29.0.1.el6_10.2 will be an update

---> Package tzdata.noarch 0:2019a-1.el6 will be updated

---> Package tzdata.noarch 0:2019c-1.el6 will be an update

---> Package tzdata-java.noarch 0:2018e-3.el6 will be updated

---> Package tzdata-java.noarch 0:2019c-1.el6 will be an update

---> Package udev.x86_64 0:147-2.73.0.1.el6_8.2 will be updated

---> Package udev.x86_64 0:147-2.73.0.2.el6_8.2 will be an update

---> Package vim-common.x86_64 2:7.4.629-5.el6_8.1 will be updated

---> Package vim-common.x86_64 2:7.4.629-5.el6_10.2 will be an update

---> Package vim-enhanced.x86_64 2:7.4.629-5.el6_8.1 will be updated

---> Package vim-enhanced.x86_64 2:7.4.629-5.el6_10.2 will be an update

---> Package vim-minimal.x86_64 2:7.4.629-5.el6_8.1 will be updated

---> Package vim-minimal.x86_64 2:7.4.629-5.el6_10.2 will be an update

---> Package xorg-x11-server-Xorg.x86_64 0:1.17.4-17.0.1.el6 will be updated

---> Package xorg-x11-server-Xorg.x86_64 0:1.17.4-17.0.2.el6 will be an update

---> Package xorg-x11-server-common.x86_64 0:1.17.4-17.0.1.el6 will be updated

---> Package xorg-x11-server-common.x86_64 0:1.17.4-17.0.2.el6 will be an update

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

 Package                 Arch    Version                   Repository      Size

================================================================================

Installing:

 kernel-uek              x86_64  4.1.12-124.33.4.el6uek    ODA_REPOS_LOC   42 M

 kernel-uek-firmware     noarch  4.1.12-124.33.4.el6uek    ODA_REPOS_LOC  2.6 M

Updating:

 cpupowerutils           x86_64  1.3-2.0.1.el6             ODA_REPOS_LOC   77 k

 cups-libs               x86_64  1:1.4.2-81.el6_10         ODA_REPOS_LOC  322 k

 dbus                    x86_64  1:1.2.24-11.0.1.el6_10    ODA_REPOS_LOC  211 k

 dbus-libs               x86_64  1:1.2.24-11.0.1.el6_10    ODA_REPOS_LOC  127 k

 dracut                  noarch  004-411.0.4.el6           ODA_REPOS_LOC  129 k

 dracut-kernel           noarch  004-411.0.4.el6           ODA_REPOS_LOC   29 k

 glibc                   x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  3.8 M

 glibc-common            x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC   14 M

 glibc-devel             i686    2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  992 k

 glibc-devel             x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  991 k

 glibc-headers           x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  620 k

 initscripts             x86_64  9.03.61-1.0.6.el6         ODA_REPOS_LOC  952 k

 kernel-headers          x86_64  2.6.32-754.18.2.el6       ODA_REPOS_LOC  4.6 M

 kexec-tools             x86_64  2.0.7-1.0.28.el6          ODA_REPOS_LOC  339 k

 ksplice                 x86_64  1.0.43-1.el6              ODA_REPOS_LOC  9.1 k

 ksplice-core0           x86_64  1.0.43-1.el6              ODA_REPOS_LOC  271 k

 ksplice-offline         x86_64  1.0.43-1.el6              ODA_REPOS_LOC  7.9 k

 ksplice-tools           x86_64  1.0.43-1.el6              ODA_REPOS_LOC   92 k

 libgudev1               x86_64  147-2.73.0.2.el6_8.2      ODA_REPOS_LOC   65 k

 libudev                 x86_64  147-2.73.0.2.el6_8.2      ODA_REPOS_LOC   78 k

 mailx                   x86_64  12.4-10.el6_10            ODA_REPOS_LOC  235 k

 openssl                 x86_64  1.0.1e-58.0.1.el6_10      ODA_REPOS_LOC  1.5 M

 oracle-ofed-release     x86_64  1.0.0-51.el6              ODA_REPOS_LOC   16 k

 perf                    x86_64  2.6.32-754.18.2.el6       ODA_REPOS_LOC  4.8 M

 python                  x86_64  2.6.6-68.0.1.el6_10       ODA_REPOS_LOC   76 k

 python-libs             x86_64  2.6.6-68.0.1.el6_10       ODA_REPOS_LOC  5.3 M

 rdma                    noarch  2:3.10-3.0.41.el6         ODA_REPOS_LOC   76 k

 samba                   x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC  5.1 M

 samba-client            x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC   11 M

 samba-common            x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC   10 M

 samba-winbind           x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC  2.2 M

 samba-winbind-clients   x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC  2.0 M

 sudo                    x86_64  1.8.6p3-29.0.1.el6_10.2   ODA_REPOS_LOC  712 k

 tzdata                  noarch  2019c-1.el6               ODA_REPOS_LOC  507 k

 tzdata-java             noarch  2019c-1.el6               ODA_REPOS_LOC  188 k

 udev                    x86_64  147-2.73.0.2.el6_8.2      ODA_REPOS_LOC  360 k

 vim-common              x86_64  2:7.4.629-5.el6_10.2      ODA_REPOS_LOC  6.7 M

 vim-enhanced            x86_64  2:7.4.629-5.el6_10.2      ODA_REPOS_LOC  1.0 M

 vim-minimal             x86_64  2:7.4.629-5.el6_10.2      ODA_REPOS_LOC  421 k

 xorg-x11-server-Xorg    x86_64  1.17.4-17.0.2.el6         ODA_REPOS_LOC  1.4 M

 xorg-x11-server-common  x86_64  1.17.4-17.0.2.el6         ODA_REPOS_LOC   51 k

Transaction Summary

================================================================================

Install       2 Package(s)

Upgrade      41 Package(s)

Total download size: 126 M

Downloading Packages:

--------------------------------------------------------------------------------

Total                                           190 MB/s | 126 MB     00:00

Running rpm_check_debug

Running Transaction Test

Transaction Check Error:

  file /usr/bin/ldd from install of glibc-common-2.12-1.212.0.3.el6_10.3.x86_64 conflicts with file from package glibc-common-2.12-1.212.0.3.el6_10.3.i686

  file /usr/lib/locale/locale-archive.tmpl from install of glibc-common-2.12-1.212.0.3.el6_10.3.x86_64 conflicts with file from package glibc-common-2.12-1.212.0.3.el6_10.3.i686

Error Summary

-------------

WARNING: 2020-06-06 14:06:31: OS Upgrade is not successful. Need to resolve conflicts

INFO: 2020-06-06 14:06:31: Copy the required RPMs to a location and set EXTRA_OS_RPMS_LOC to that location

Here we need to solve the dependency problem; in this case, we remove the conflicting package:

[root@odax58duts1 patchODA2020]# rpm -e glibc-common-2.12-1.212.0.3.el6_10.3.i686 --nodeps

You have new mail in /var/spool/mail/root

[root@odax58duts1 patchODA2020]# oakcli validate -c ospatch -ver 18.8.0.0.0

INFO: Validating the OS patch for the version 18.8.0.0.0

INFO: 2020-06-06 14:09:51: Performing a dry run for OS patching

INFO: 2020-06-06 14:10:09: No conflict detected during the OS update, dry run check.

NODE 2:

[root@odax58duts2 patchODA2020]# oakcli validate -c ospatch -ver 18.8.0.0.0

INFO: Validating the OS patch for the version 18.8.0.0.0

INFO: 2020-06-06 14:15:59: Performing a dry run for OS patching

ERROR: 2020-06-06 14:16:18: Unable to run the command : /usr/bin/yum --exclude=kmod-mpt2sas,ibutils-libs,dapl,libcxgb3,libipathverbs,libmthca,libnes,ofed-docs update --disablerepo=* --enablerepo=ODA_REPOS_LOC -y

ERROR: 2020-06-06 14:16:18: Loaded plugins: rhnplugin, ulninfo, versionlock

This system is not registered with ULN.

You can use uln_register to register.

ULN support will be disabled.

Repository ol6_latest is listed more than once in the configuration

Repository ol6_addons is listed more than once in the configuration

Repository ol6_UEK_latest is listed more than once in the configuration

Setting up Update Process

Resolving Dependencies

--> Running transaction check

---> Package cpupowerutils.x86_64 0:1.3-2.el6 will be updated

---> Package cpupowerutils.x86_64 0:1.3-2.0.1.el6 will be an update

---> Package cups-libs.x86_64 1:1.4.2-79.el6 will be updated

---> Package cups-libs.x86_64 1:1.4.2-81.el6_10 will be an update

---> Package dbus.x86_64 1:1.2.24-9.0.1.el6 will be updated

---> Package dbus.x86_64 1:1.2.24-11.0.1.el6_10 will be an update

---> Package dbus-libs.x86_64 1:1.2.24-9.0.1.el6 will be updated

---> Package dbus-libs.x86_64 1:1.2.24-11.0.1.el6_10 will be an update

---> Package dracut.noarch 0:004-411.0.3.el6 will be updated

---> Package dracut.noarch 0:004-411.0.4.el6 will be an update

---> Package dracut-kernel.noarch 0:004-411.0.3.el6 will be updated

---> Package dracut-kernel.noarch 0:004-411.0.4.el6 will be an update

---> Package glibc.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-devel.i686 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-devel.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-devel.i686 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-devel.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package glibc-headers.x86_64 0:2.12-1.212.0.2.el6 will be updated

---> Package glibc-headers.x86_64 0:2.12-1.212.0.3.el6_10.3 will be an update

---> Package initscripts.x86_64 0:9.03.61-1.0.3.el6 will be updated

---> Package initscripts.x86_64 0:9.03.61-1.0.6.el6 will be an update

---> Package kernel-headers.x86_64 0:2.6.32-754.11.1.el6 will be updated

---> Package kernel-headers.x86_64 0:2.6.32-754.18.2.el6 will be an update

---> Package kernel-uek.x86_64 0:4.1.12-124.33.4.el6uek will be installed

---> Package kernel-uek-firmware.noarch 0:4.1.12-124.33.4.el6uek will be installed

---> Package kexec-tools.x86_64 0:2.0.7-1.0.27.el6 will be updated

---> Package kexec-tools.x86_64 0:2.0.7-1.0.28.el6 will be an update

---> Package ksplice.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice.x86_64 0:1.0.43-1.el6 will be an update

---> Package ksplice-core0.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice-core0.x86_64 0:1.0.43-1.el6 will be an update

---> Package ksplice-offline.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice-offline.x86_64 0:1.0.43-1.el6 will be an update

---> Package ksplice-tools.x86_64 0:1.0.38-1.el6 will be updated

---> Package ksplice-tools.x86_64 0:1.0.43-1.el6 will be an update

---> Package libgudev1.x86_64 0:147-2.73.0.1.el6_8.2 will be updated

---> Package libgudev1.x86_64 0:147-2.73.0.2.el6_8.2 will be an update

---> Package libudev.x86_64 0:147-2.73.0.1.el6_8.2 will be updated

---> Package libudev.x86_64 0:147-2.73.0.2.el6_8.2 will be an update

---> Package mailx.x86_64 0:12.4-8.el6_6 will be updated

---> Package mailx.x86_64 0:12.4-10.el6_10 will be an update

---> Package openssl.x86_64 0:1.0.1e-57.0.6.el6 will be updated

---> Package openssl.x86_64 0:1.0.1e-58.0.1.el6_10 will be an update

---> Package oracle-ofed-release.x86_64 0:1.0.0-50.el6 will be updated

---> Package oracle-ofed-release.x86_64 0:1.0.0-51.el6 will be an update

---> Package perf.x86_64 0:2.6.32-754.11.1.el6 will be updated

---> Package perf.x86_64 0:2.6.32-754.18.2.el6 will be an update

---> Package python.x86_64 0:2.6.6-66.0.1.el6_8 will be updated

---> Package python.x86_64 0:2.6.6-68.0.1.el6_10 will be an update

---> Package python-libs.x86_64 0:2.6.6-66.0.1.el6_8 will be updated

---> Package python-libs.x86_64 0:2.6.6-68.0.1.el6_10 will be an update

---> Package rdma.noarch 2:3.10-3.0.40.el6 will be updated

---> Package rdma.noarch 2:3.10-3.0.41.el6 will be an update

---> Package samba.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-client.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-client.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-common.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-common.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-winbind.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-winbind.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package samba-winbind-clients.x86_64 0:3.6.23-51.0.1.el6 will be updated

---> Package samba-winbind-clients.x86_64 0:3.6.23-52.0.1.el6_10 will be an update

---> Package sudo.x86_64 0:1.8.6p3-29.el6_9 will be updated

---> Package sudo.x86_64 0:1.8.6p3-29.0.1.el6_10.2 will be an update

---> Package tzdata.noarch 0:2019a-1.el6 will be updated

---> Package tzdata.noarch 0:2019c-1.el6 will be an update

---> Package tzdata-java.noarch 0:2018e-3.el6 will be updated

---> Package tzdata-java.noarch 0:2019c-1.el6 will be an update

---> Package udev.x86_64 0:147-2.73.0.1.el6_8.2 will be updated

---> Package udev.x86_64 0:147-2.73.0.2.el6_8.2 will be an update

---> Package vim-common.x86_64 2:7.4.629-5.el6_8.1 will be updated

---> Package vim-common.x86_64 2:7.4.629-5.el6_10.2 will be an update

---> Package vim-enhanced.x86_64 2:7.4.629-5.el6_8.1 will be updated

---> Package vim-enhanced.x86_64 2:7.4.629-5.el6_10.2 will be an update

---> Package vim-minimal.x86_64 2:7.4.629-5.el6_8.1 will be updated

---> Package vim-minimal.x86_64 2:7.4.629-5.el6_10.2 will be an update

---> Package xorg-x11-server-Xorg.x86_64 0:1.17.4-17.0.1.el6 will be updated

---> Package xorg-x11-server-Xorg.x86_64 0:1.17.4-17.0.2.el6 will be an update

---> Package xorg-x11-server-common.x86_64 0:1.17.4-17.0.1.el6 will be updated

---> Package xorg-x11-server-common.x86_64 0:1.17.4-17.0.2.el6 will be an update

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================

 Package                 Arch    Version                   Repository      Size

================================================================================

Installing:

 kernel-uek              x86_64  4.1.12-124.33.4.el6uek    ODA_REPOS_LOC   42 M

 kernel-uek-firmware     noarch  4.1.12-124.33.4.el6uek    ODA_REPOS_LOC  2.6 M

Updating:

 cpupowerutils           x86_64  1.3-2.0.1.el6             ODA_REPOS_LOC   77 k

 cups-libs               x86_64  1:1.4.2-81.el6_10         ODA_REPOS_LOC  322 k

 dbus                    x86_64  1:1.2.24-11.0.1.el6_10    ODA_REPOS_LOC  211 k

 dbus-libs               x86_64  1:1.2.24-11.0.1.el6_10    ODA_REPOS_LOC  127 k

 dracut                  noarch  004-411.0.4.el6           ODA_REPOS_LOC  129 k

 dracut-kernel           noarch  004-411.0.4.el6           ODA_REPOS_LOC   29 k

 glibc                   x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  3.8 M

 glibc-devel             i686    2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  992 k

 glibc-devel             x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  991 k

 glibc-headers           x86_64  2.12-1.212.0.3.el6_10.3   ODA_REPOS_LOC  620 k

 initscripts             x86_64  9.03.61-1.0.6.el6         ODA_REPOS_LOC  952 k

 kernel-headers          x86_64  2.6.32-754.18.2.el6       ODA_REPOS_LOC  4.6 M

 kexec-tools             x86_64  2.0.7-1.0.28.el6          ODA_REPOS_LOC  339 k

 ksplice                 x86_64  1.0.43-1.el6              ODA_REPOS_LOC  9.1 k

 ksplice-core0           x86_64  1.0.43-1.el6              ODA_REPOS_LOC  271 k

 ksplice-offline         x86_64  1.0.43-1.el6              ODA_REPOS_LOC  7.9 k

 ksplice-tools           x86_64  1.0.43-1.el6              ODA_REPOS_LOC   92 k

 libgudev1               x86_64  147-2.73.0.2.el6_8.2      ODA_REPOS_LOC   65 k

 libudev                 x86_64  147-2.73.0.2.el6_8.2      ODA_REPOS_LOC   78 k

 mailx                   x86_64  12.4-10.el6_10            ODA_REPOS_LOC  235 k

 openssl                 x86_64  1.0.1e-58.0.1.el6_10      ODA_REPOS_LOC  1.5 M

 oracle-ofed-release     x86_64  1.0.0-51.el6              ODA_REPOS_LOC   16 k

 perf                    x86_64  2.6.32-754.18.2.el6       ODA_REPOS_LOC  4.8 M

 python                  x86_64  2.6.6-68.0.1.el6_10       ODA_REPOS_LOC   76 k

 python-libs             x86_64  2.6.6-68.0.1.el6_10       ODA_REPOS_LOC  5.3 M

 rdma                    noarch  2:3.10-3.0.41.el6         ODA_REPOS_LOC   76 k

 samba                   x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC  5.1 M

 samba-client            x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC   11 M

 samba-common            x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC   10 M

 samba-winbind           x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC  2.2 M

 samba-winbind-clients   x86_64  3.6.23-52.0.1.el6_10      ODA_REPOS_LOC  2.0 M

 sudo                    x86_64  1.8.6p3-29.0.1.el6_10.2   ODA_REPOS_LOC  712 k

 tzdata                  noarch  2019c-1.el6               ODA_REPOS_LOC  507 k

 tzdata-java             noarch  2019c-1.el6               ODA_REPOS_LOC  188 k

 udev                    x86_64  147-2.73.0.2.el6_8.2      ODA_REPOS_LOC  360 k

 vim-common              x86_64  2:7.4.629-5.el6_10.2      ODA_REPOS_LOC  6.7 M

 vim-enhanced            x86_64  2:7.4.629-5.el6_10.2      ODA_REPOS_LOC  1.0 M

 vim-minimal             x86_64  2:7.4.629-5.el6_10.2      ODA_REPOS_LOC  421 k

 xorg-x11-server-Xorg    x86_64  1.17.4-17.0.2.el6         ODA_REPOS_LOC  1.4 M

 xorg-x11-server-common  x86_64  1.17.4-17.0.2.el6         ODA_REPOS_LOC   51 k

Transaction Summary

================================================================================

Install       2 Package(s)

Upgrade      40 Package(s)

Total download size: 112 M

Downloading Packages:

--------------------------------------------------------------------------------

Total                                           187 MB/s | 112 MB     00:00

Running rpm_check_debug

ERROR with rpm_check_debug vs depsolve:

glibc = 2.12-1.212.0.1.el6 is needed by (installed) nscd-2.12-1.212.0.1.el6.x86_64

** Found 11 pre-existing rpmdb problem(s), 'yum check' output follows:

glibc-2.12-1.212.0.2.el6.x86_64 has missing requires of glibc-common = ('0', '2.12', '1.212.0.2.el6')

glibc-2.12-1.212.0.3.el6_10.3.i686 is a duplicate with glibc-2.12-1.212.0.2.el6.x86_64

libgcc-4.4.7-23.0.1.el6.x86_64 is a duplicate with libgcc-4.4.7-18.el6.i686

nscd-2.12-1.212.0.1.el6.x86_64 has missing requires of glibc = ('0', '2.12', '1.212.0.1.el6')

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of libnfsodm18.so()(64bit)

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of perl(GridDefParams)

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of perl(oakosdiskinfo)

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of perl(oaksharedstorageinfo)

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of perl(oakstoragetopology)

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of perl(ol5_to_ol6_upgrade)

oak-18.7.0.0.0_LINUX.X64_190915-1.x86_64 has missing requires of perl(s_GridSteps)

Your transaction was saved, rerun it with:

 yum load-transaction /tmp/yum_save_tx-2020-06-06-14-16PkHNuJ.yumtx

WARNING: 2020-06-06 14:16:18: OS Upgrade is not successful. Need to resolve conflicts

INFO: 2020-06-06 14:16:18: Copy the required RPMs to a location and set EXTRA_OS_RPMS_LOC to that location

Here we need to solve the dependency problem; in this case, we remove the conflicting package:

[root@odax58duts2 patchODA2020]# rpm -e nscd-2.12-1.212.0.1.el6 --nodeps

[root@odax58duts2 patchODA2020]# oakcli validate -c ospatch -ver 18.8.0.0.0

INFO: Validating the OS patch for the version 18.8.0.0.0

INFO: 2020-06-06 14:16:38: Performing a dry run for OS patching

INFO: 2020-06-06 14:16:55: No conflict detected during the OS update, dry run check.

Now the environment is validated for updating the operating system.

5 – Verify which components it will be required to update:

oakcli update -patch 18.8.0.0.0 --verify

e.g:

[root@odax58duts1 patchODA2020]#   oakcli update -patch 18.8.0.0.0 --verify

INFO: 2020-06-06 14:27:34: Reading the metadata file now...

                Component Name            Installed Version         Proposed Patch Version

                ---------------           ------------------        -----------------

                Controller_INT            11.05.03.00               Up-to-date

                Controller_EXT            11.05.03.00               Up-to-date

                Expander                  001E                      Up-to-date

                SSD_SHARED                944A                      Up-to-date

                HDD_LOCAL                 A7E0                      Up-to-date

                HDD_SHARED                A7E0                      Up-to-date

                ILOM                      4.0.4.40 r130079          5.0.0.20 r133445

                BIOS                      25080100                  Up-to-date

                IPMI                      1.8.15.0                  Up-to-date

                HMP                       2.4.5.0.1                 Up-to-date

                OAK                       18.7.0.0.0                18.8.0.0.0

                OL                        6.10                      Up-to-date

                OVM                       3.4.4                     Up-to-date

                GI_HOME                   18.7.0.0.190716           18.8.0.0.191015

                DB_HOME {

                [ OraDb12102_home1 ]      12.1.0.2.190716           12.1.0.2.191015

                [ OraDb11204_home1 ]      11.2.0.4.190716           11.2.0.4.191015

                [ OraDB18Home1 ]          18.7.1.0.191015           18.8.0.0.191015

                [ OraDb11203_home2 ]      11.2.0.3.15               No-update

                             }

[root@odax58duts2 patchODA2020]#   oakcli update -patch 18.8.0.0.0 --verify

INFO: 2020-06-06 14:27:39: Reading the metadata file now...

                Component Name            Installed Version         Proposed Patch Version

                ---------------           ------------------        -----------------

                Controller_INT            11.05.03.00               Up-to-date

                Controller_EXT            11.05.03.00               Up-to-date

                Expander                  001E                      Up-to-date

                SSD_SHARED                944A                      Up-to-date

                HDD_LOCAL                 A7E0                      Up-to-date

                HDD_SHARED                A7E0                      Up-to-date

                ILOM                      4.0.4.40 r130079          5.0.0.20 r133445

                BIOS                      25080100                  Up-to-date

                IPMI                      1.8.15.0                  Up-to-date

                HMP                       2.4.5.0.1                 Up-to-date

                OAK                       18.7.0.0.0                18.8.0.0.0

                OL                        6.10                      Up-to-date

                OVM                       3.4.4                     Up-to-date

                GI_HOME                   18.7.0.0.190716           18.8.0.0.191015

                DB_HOME {

                [ OraDb12102_home1 ]      12.1.0.2.190716           12.1.0.2.191015

                [ OraDb11204_home1 ]      11.2.0.4.190716           11.2.0.4.191015

                [ OraDB18Home1 ]          18.7.1.0.191015           18.8.0.0.191015

                [ OraDb11203_home2 ]      11.2.0.3.15               No-update

Note:

– Stop all databases and clusterware resources before patching ILOM on both servers; see the sketch below.
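As a sketch, stopping the stack can be done as root on each node with crsctl from the grid home; the path below is an example and may differ on your ODA:

/u01/app/18.0.0.0/grid/bin/crsctl stop crs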

6 – Update ILOM

I prefer to update ILOM first so that, if there is a problem during the patch, I can still access the environment via ILOM.
 
If you prefer to update ILOM together with the patch bundle, that is also a valid choice.
 
 
Note: upgrade ILOM on both nodes, but not at the same time.
 
Download ILOM Patch separately: Sun Server X5-8 (ILOM 5.0.0.20 133445)
Custom File Name: p30802633_300_Generic.zip
The firmware is inside this .zip file at the following path:
C:\Users\andre.ontalba\Downloads\p30802633_300_Generic\Oracle_Server_X5-8-3.0.0.91223-FIRMWARE_PACK\Firmware\service-processor\ILOM-5_0_0_20_r133445-Oracle_Server_X5-4_X5-8.pkg
You must copy this file to the following directory on the server used to transfer this package; in my case, dutsLinux1.

/patch/ILOM

This file must be owned by the "oracle" user and the "oinstall" group.
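A sketch of getting the package onto the gateway host with the right ownership (the local file location and the login users are examples; adjust them to your environment):

scp ILOM-5_0_0_20_r133445-Oracle_Server_X5-4_X5-8.pkg oracle@dutsLinux1:/patch/ILOM/

ssh root@dutsLinux1 chown oracle:oinstall /patch/ILOM/ILOM-5_0_0_20_r133445-Oracle_Server_X5-4_X5-8.pkg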
Connect to the gateway machine: dutsLinux1

ssh xxxx@dutsLinux1

Connect to the first ILOM from the gateway machine:

ssh root@odax58duts1-ilom

Enter remote user password: **********

Check current version:
e.g:
Type:

  -> version

SP firmware 4.0.4.40

SP firmware build number: 130079

SP firmware date: Thu May 07 09:54:31 CST 2019

SP filesystem version: 0.2.10

Stopping DOM0 and ODA_BASE: (Connect to the ILOM):

stop /SYS

Check status:

show /SYS

e.g.: section "Properties -> power_state = Off"

  Properties:

      type = Host System

      ipmi_name = /SYS

      product_name = SUN SERVER X5-8

      product_part_number = XXXXXXXXXXXXXXX

      product_serial_number = XXXXXXXXXXXXXXX

      product_manufacturer = Oracle Corporation

      fault_state = OK

      clear_fault_action = (none)

      power_state = Off

Load new ILOM image:

load -source scp://xxxx@dutsLinux1/patch/ILOM/ILOM-5_0_0_20_r133445-Oracle_Server_X5-4_X5-8.pkg

e.g:

load -source scp://root@dutsLinux1/patch/ILOM/ILOM-5_0_0_20_r133445-Oracle_Server_X5-4_X5-8.pkg

Enter remote user password: **********

NOTE: An upgrade takes several minutes to complete. ILOM

      will enter a special mode to load new firmware. No

      other tasks can be performed in ILOM until the

      firmware upgrade is complete and ILOM is reset.

    You can choose to postpone the server BIOS upgrade until the

    next server poweroff. If you do not do that, you should

    perform a clean shutdown of the server before continuing.

Are you sure you want to load the specified file (y/n)? y

Preserve existing SP configuration (y/n)? y

Preserve existing BIOS configuration (y/n)? y

Delay BIOS upgrade until next server poweroff or reset (y/n)? n

...

After the automatic reboot performed in the ILOM, you can validate the new firmware and BIOS version:

version

Hostname: odax58duts1-ilom

-> version

SP firmware 5.0.0.20

SP firmware build number: 133445

SP firmware date: Thr Feb 06 09:54:31 CST 2020

SP filesystem version: 0.2.10

show /SYS/MB/BIOS

/SYS/MB/BIOS

  Targets:

  Properties:

      type = BIOS

      ipmi_name = MB/BIOS

      fru_name = SYSTEM BIOS

      fru_manufacturer = AMERICAN MEGATRENDS

      fru_version = 25080100

      fru_part_number = APTIO

Start the DOM0 and ODA_BASE after upgrade ILOM:

start /SYS

exit

Note:  Repeat this procedure on the second node

7 – Upgrade ODA Servers

Note: run this only from the first server and make sure that it is the master node: oakcli show ismaster => "OAKD is in Master Mode"

oakcli update -patch 18.8.0.0.0 --server

e.g:

[root@odax58duts2 patchODA2020]# oakcli update -patch 18.8.0.0.0 --server

This procedure can take between 2 and 3 hours to execute on both nodes.

INFO: DB, ASM, Clusterware may be stopped during the patch if required

INFO: Both Nodes may get rebooted automatically during the patch if required

Do you want to continue: [Y/N]?: y

INFO: User has confirmed for the reboot

INFO: Patch bundle must be unpacked on the second Node also before applying the patch

Did you unpack the patch bundle on the second Node? : [Y/N]? : y

INFO: All the VMs except the ODABASE will be shutdown forcefully if needed

Do you want to continue : [Y/N]? : y

To monitor the patch application, check the logs in the directories below:
Log file directory node 1:   /opt/oracle/oak/log/odax58duts1/patch/18.8.0.0.0/
Log file directory node 2:   /opt/oracle/oak/log/odax58duts2/patch/18.8.0.0.0/
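
For example, to follow the patch progress from another session, list the log directory and tail the most recent file (the exact file name varies per run):

# Node 1 (use the node 2 directory when connected to the second node)
ls -ltr /opt/oracle/oak/log/odax58duts1/patch/18.8.0.0.0/
tail -f /opt/oracle/oak/log/odax58duts1/patch/18.8.0.0.0/<most_recent_log_file>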

Let’s check the status of the server update.
Node 1

[root@odax58duts1 patchODA2020]# oakcli show version -detail

INFO: 2020-06-06 17:17:34: Reading the metadata file now...

                Component Name            Installed Version         Proposed Patch Version

                ---------------           ------------------        -----------------

                Controller_INT            11.05.03.00               Up-to-date

                Controller_EXT            11.05.03.00               Up-to-date

                Expander                  001E                      Up-to-date

                SSD_SHARED                944A                      Up-to-date

                HDD_LOCAL                 A7E0                      Up-to-date

                HDD_SHARED                A7E0                      Up-to-date

                ILOM                      5.0.0.20 r133445          Up-to-date

                BIOS                      25080100                  Up-to-date

                IPMI                      1.8.15.0                  Up-to-date

                HMP                       2.4.5.0.1                 Up-to-date

                OAK                       18.8.0.0.0                Up-to-date

                OL                        6.10                      Up-to-date

                OVM                       3.4.4                     Up-to-date

                GI_HOME                   18.8.0.0.191015           Up-to-date

                DB_HOME {

                [ OraDb12102_home1 ]      12.1.0.2.190716           12.1.0.2.191015

                [ OraDb11204_home1 ]      11.2.0.4.190716           11.2.0.4.191015

                [ OraDB18Home1 ]          18.7.1.0.191015           18.8.0.0.191015

                [ OraDb11203_home2 ]      11.2.0.3.15               No-update

Node 2

[root@odax58duts2 patchODA2020]#   oakcli show version -detail

INFO: 2020-06-06 17:18:39: Reading the metadata file now...

                Component Name            Installed Version         Proposed Patch Version

                ---------------           ------------------        -----------------

                Controller_INT            11.05.03.00               Up-to-date

                Controller_EXT            11.05.03.00               Up-to-date

                Expander                  001E                      Up-to-date

                SSD_SHARED                944A                      Up-to-date

                HDD_LOCAL                 A7E0                      Up-to-date

                HDD_SHARED                A7E0                      Up-to-date

                ILOM                      5.0.0.20 r133445          Up-to-date

                BIOS                      25080100                  Up-to-date

                IPMI                      1.8.15.0                  Up-to-date

                HMP                       2.4.5.0.1                 Up-to-date

                OAK                       18.8.0.0.0                Up-to-date

                OL                        6.10                      Up-to-date

                OVM                       3.4.4                     Up-to-date

                GI_HOME                   18.8.0.0.191015           Up-to-date

                DB_HOME {

                [ OraDb12102_home1 ]      12.1.0.2.190716           12.1.0.2.191015

                [ OraDb11204_home1 ]      11.2.0.4.190716           11.2.0.4.191015

                [ OraDB18Home1 ]          18.7.1.0.191015           18.8.0.0.191015

                [ OraDb11203_home2 ]      11.2.0.3.15               No-update

8 – ODA Patch: Database Binaries

Now it is time to apply the patch to the Oracle Database binaries (11.2.0.4, 12.1, 12.2, and 18).
Get the list of databases and their Oracle Homes before patching:

oakcli show databases
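
If you also want to record the database homes and their current versions before patching, this quick check should work as well (optional, not required by the procedure):

oakcli show dbhomes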

The first step is to stop TFA (on both nodes, as root):

tfactl stop
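
To confirm TFA is really down on each node before continuing, you can check its status (depending on your PATH, tfactl may need to be invoked with its full path under the Grid Infrastructure home):

tfactl print status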

To apply the patch to the Oracle binaries (execute only from the first node):

oakcli update -patch 18.8.0.0.0 --database

[root@odax58duts1 18.8.0.0.0]# oakcli update -patch 18.8.0.0.0 --database

INFO: Running pre-install scripts

INFO: Running  prepatching on node 0

INFO: Running  prepatching on node 1

INFO: Completed pre-install scripts

...

...

INFO: 2020-06-06 18:51:24: ------------------Patching DB-------------------------

INFO: 2020-06-06 18:51:24: Getting all the possible Database Homes for patching

...

INFO: 2020-06-06 18:52:03: Patching 11.2.0.4 Database Homes on the Node odax58duts1

Found the following 11.2.0.4 homes possible for patching:

HOME_NAME                      HOME_LOCATION

---------                      -------------

OraDb11204_home1               /u01/app/oracle/product/11.2.0.4/dbhome_1

[Please note that few of the above Database Homes may be already up-to-date. They will be automatically ignored]

Would you like to patch all the above homes: Y | N ? : Y

INFO: 2020-06-06 18:52:15: Updating OPATCH

Verifying Opatch version for home:</u01/app/oracle/product/11.2.0.4/dbhome_1>.

Expecting version:<11.2.0.3.22>

Opatch version on node <odax58duts1> is <11.2.0.3.22>

Opatch version on node <odax58duts2> is <11.2.0.3.22>

INFO: 2020-06-06 18:53:41: Performing the conflict checks...

SUCCESS: 2020-06-06 18:53:53: Conflict checks passed for all the Homes

INFO: 2020-06-06 18:53:53: Checking if the patch is already applied on any of the Homes

INFO: 2020-06-06 18:53:58: Home is not Up-to-date

SUCCESS: 2020-06-06 18:53:59: Successfully stopped the Database consoles

SUCCESS: 2020-06-06 18:54:06: Successfully stopped the EM agents

INFO: 2020-06-06 18:54:11: Applying the patch on oracle home : /u01/app/oracle/product/11.2.0.4/dbhome_1 ...

SUCCESS: 2020-06-06 18:56:20: Successfully applied the patch on the Home : /u01/app/oracle/product/11.2.0.4/dbhome_1

SUCCESS: 2020-06-06 18:56:20: Successfully started the Database consoles

SUCCESS: 2020-06-06 18:56:20: Successfully started the EM Agents

INFO: 2020-06-06 18:56:23: Patching 11.2.0.4 Database Homes on the Node odax58duts2

...

INFO: 2020-06-06 19:00:02: Patching 12.1.0.2 Database Homes on the Node odax58duts1

Found the following 12.1.0.2 homes possible for patching:

HOME_NAME                      HOME_LOCATION

---------                      -------------

OraDb12102_home1               /u01/app/oracle/product/12.1.0.2/dbhome_1

[Please note that few of the above Database Homes may be already up-to-date. They will be automatically ignored]

Would you like to patch all the above homes: Y | N ? : Y

INFO: 2020-06-06 19:06:38: Updating OPATCH

Verifying Opatch version for home:</u01/app/oracle/product/12.1.0.2/dbhome_1>.

Expecting version:<12.2.0.1.18>

Opatch version on node <odax58duts1> is <12.2.0.1.18>

Opatch version on node <odax58duts2> is <12.2.0.1.18>

INFO: 2020-06-06 19:07:37: Rolling back patches on 12.1.0.2.x home if required...

INFO: 2020-06-06 19:07:43: Checking if any patches need to be rolled back on </u01/app/oracle/product/12.1.0.2/dbhome_1>

INFO: 2020-06-06 19:11:35: Performing the conflict checks...

SUCCESS: 2020-06-06 19:11:59: Conflict checks passed for all the Homes

INFO: 2020-06-06 19:11:59: Checking if the patch is already applied on any of the Homes

INFO: 2020-06-06 19:12:11: Home is not Up-to-date

SUCCESS: 2020-06-06 19:12:13: Successfully stopped the Database consoles

SUCCESS: 2020-06-06 19:12:19: Successfully stopped the EM agents

INFO: 2020-06-06 19:12:25: Applying patch on /u01/app/oracle/product/12.1.0.2/dbhome_1 Homes

INFO: 2020-06-06 19:12:25: It may take upto 15 mins. Please wait...

SUCCESS: 2020-06-06 19:17:19: Successfully applied the patch on the Home : /u01/app/oracle/product/12.1.0.2/dbhome_1

SUCCESS: 2020-06-06 19:17:19: Successfully started the Database consoles

SUCCESS: 2020-06-06 19:17:19: Successfully started the EM Agents

INFO: 2020-06-06 19:17:23: Patching 12.1.0.2 Database Homes on the Node odax58duts2

...

INFO: 2020-06-06 19:27:08: Patching 18.0.0.0 Database Homes on the Node odax58duts1

Found the following 18.0.0.0 homes possible for patching:

HOME_NAME                      HOME_LOCATION

---------                      -------------

OraDB18Home1                   /u01/app/oracle/product/18.0.0.0

[Please note that few of the above Database Homes may be already up-to-date. They will be automatically ignored]

Would you like to patch all the above homes: Y | N ? : Y

INFO: 2020-06-06 19:27:15: Updating OPATCH

Verifying Opatch version for home:</u01/app/oracle/product/18.0.0.0>.

Expecting version:<12.2.0.1.18>

Opatch version on node <odax58duts1> is <12.2.0.1.18>

Opatch version on node <odax58duts2> is <12.2.0.1.18>

INFO: 2020-06-06 19:27:26: Rolling back patches on 18.x home if required...

INFO: 2020-06-06 19:27:33: Checking if any patches need to be rolled back on </u01/app/oracle/product/18.0.0.0>

INFO: 2020-06-06 19:28:57: Performing the conflict checks...

SUCCESS: 2020-06-06 19:29:57: Conflict checks passed for all the Homes

INFO: 2020-06-06 19:29:57: Checking if the patch is already applied on any of the Homes

INFO: 2020-06-06 19:30:36: Home is not Up-to-date

SUCCESS: 2020-06-06 19:30:38: Successfully stopped the Database consoles

SUCCESS: 2020-06-06 19:30:44: Successfully stopped the EM agents

INFO: 2020-06-06 19:30:49: Applying patch on /u01/app/oracle/product/18.0.0.0 Homes

INFO: 2020-06-06 19:30:49: It may take up to 15 mins. Please wait...

SUCCESS: 2020-06-06 19:40:34: Successfully applied the patch on the Home : /u01/app/oracle/product/18.0.0.0

SUCCESS: 2020-06-06 19:40:34: Successfully started the Database consoles

SUCCESS: 2020-06-06 19:40:34: Successfully started the EM Agents

INFO: 2020-06-06 19:40:37: Patching 18.0.0.0 Database Homes on the Node odax58duts2

INFO: DB patching summary on node: odax58duts1

SUCCESS: 2020-06-06 19:52:28:  Successfully applied the patch on the Home /u01/app/oracle/product/11.2.0.4/dbhome_1

SUCCESS: 2020-06-06 19:52:28:  Successfully applied the patch on the Home /u01/app/oracle/product/12.1.0.2/dbhome_1

SUCCESS: 2020-06-06 19:52:28:  Successfully applied the patch on the Home /u01/app/oracle/product/18.0.0.0

INFO: DB patching summary on node: odax58duts2

SUCCESS: 2020-06-06 19:52:28:  Successfully applied the patch on the Home /u01/app/oracle/product/11.2.0.4/dbhome_1

SUCCESS: 2020-06-06 19:52:28:  Successfully applied the patch on the Home /u01/app/oracle/product/12.1.0.2/dbhome_1

SUCCESS: 2020-06-06 19:52:28:  Successfully applied the patch on the Home /u01/app/oracle/product/18.0.0.0

INFO: Executing /tmp/pending_actions on both nodes

You have new mail in /var/spool/mail/root

[root@odax58duts1 18.8.0.0.0]#

Let’s check the status of the database update.
Node 1

[root@odax58duts1 patchODA2020]# oakcli show version -detail

INFO: 2020-06-06 19:55:34: Reading the metadata file now...

                Component Name            Installed Version         Proposed Patch Version

                ---------------           ------------------        -----------------

                Controller_INT            11.05.03.00               Up-to-date

                Controller_EXT            11.05.03.00               Up-to-date

                Expander                  001E                      Up-to-date

                SSD_SHARED                944A                      Up-to-date

                HDD_LOCAL                 A7E0                      Up-to-date

                HDD_SHARED                A7E0                      Up-to-date

                ILOM                      5.0.0.20 r133445          Up-to-date

                BIOS                      25080100                  Up-to-date

                IPMI                      1.8.15.0                  Up-to-date

                HMP                       2.4.5.0.1                 Up-to-date

                OAK                       18.8.0.0.0                Up-to-date

                OL                        6.10                      Up-to-date

                OVM                       3.4.4                     Up-to-date

                GI_HOME                   18.8.0.0.191015           Up-to-date

                DB_HOME {

                [ OraDb12102_home1 ]      12.1.0.2.191015           Up-to-date

                [ OraDb11204_home1 ]      11.2.0.4.191015           Up-to-date

                [ OraDB18Home1 ]          18.8.0.0.191015           Up-to-date

                [ OraDb11203_home2 ]      11.2.0.3.15               No-update

                             }

Node 2

[root@odax58duts2 patchODA2020]#   oakcli show version -detail

INFO: 2020-06-06 19:56:39: Reading the metadata file now...

                Component Name            Installed Version         Proposed Patch Version

                ---------------           ------------------        -----------------

                Controller_INT            11.05.03.00               Up-to-date

                Controller_EXT            11.05.03.00               Up-to-date

                Expander                  001E                      Up-to-date

                SSD_SHARED                944A                      Up-to-date

                HDD_LOCAL                 A7E0                      Up-to-date

                HDD_SHARED                A7E0                      Up-to-date

                ILOM                      5.0.0.20 r133445          Up-to-date

                BIOS                      25080100                  Up-to-date

                IPMI                      1.8.15.0                  Up-to-date

                HMP                       2.4.5.0.1                 Up-to-date

                OAK                       18.8.0.0.0                Up-to-date

                OL                        6.10                      Up-to-date

                OVM                       3.4.4                     Up-to-date

                GI_HOME                   18.8.0.0.191015           Up-to-date

                DB_HOME {

                [ OraDb12102_home1 ]      12.1.0.2.191015           Up-to-date

                [ OraDb11204_home1 ]      11.2.0.4.191015           Up-to-date

                [ OraDB18Home1 ]          18.8.0.0.191015           Up-to-date

                [ OraDb11203_home2 ]      11.2.0.3.15               No-update

9 – Apply DATAPATCH/CATBUNDLE to the 11.2 / 12.1 / 12.2 and 18.8 databases

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

$ORACLE_HOME/OPatch/datapatch -verbose

Note: It is required to set the NLS_LANG variable to “AMERICAN_AMERICA.AL32UTF8” in order to avoid a bug during DATAPATCH in Oracle Database 12.1.
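
The exact commands depend on the database version; below is a minimal sketch for one 12.1 database and one 11.2.0.4 database, run as the oracle user on the node hosting the instance. The instance names MYDB1 and MYDB2 are hypothetical, and the home paths are the ones shown earlier in this post (for 18c, point ORACLE_HOME at /u01/app/oracle/product/18.0.0.0 and use the same datapatch step). Repeat for every database listed by oakcli show databases, adjusting ORACLE_HOME and ORACLE_SID each time.

# 12.1 example: run datapatch against the patched home
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=MYDB1                      # hypothetical instance name
export PATH=$ORACLE_HOME/bin:$PATH
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8    # avoids the 12.1 datapatch bug mentioned above
$ORACLE_HOME/OPatch/datapatch -verbose

# 11.2.0.4 example: datapatch does not exist in 11.2, so run catbundle instead
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
export ORACLE_SID=MYDB2                      # hypothetical instance name
export PATH=$ORACLE_HOME/bin:$PATH
cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba <<'EOF'
@catbundle.sql psu apply
EOF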

Reference Documents:

Oracle Database Appliance – 18.2, 12.X, and 2.X Supported ODA Versions & Known Issues (Doc ID 888888.1)
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/18.8/cmtxn/patching-oda.html#GUID-49F5F510-3A38-4E6A-B915-FCBCD36CDDDB

I hope I helped with this ODA upgrade procedure.
Andre Ontalba

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful.”


New Feature – Per-Second Billing for Compute and Autonomous Database.
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

New Feature - Per-Second Billing for Compute and Autonomous Database.

This month Oracle announced a new billing model for Compute instances and Autonomous Database. With it, Oracle Cloud offers a stronger platform both for legacy applications moving to the cloud and for a new family of cloud-native applications that rely on microservices and dynamic scaling.
 
Compute instances are now billed per second of usage, rather than per hour. This helps you reduce costs when using instances for short amounts of time. Virtual machine (VM) instances have a minimum charge of one minute. Bare metal instances have a minimum charge of one hour. After the first minute (for VMs) or the first hour (for bare metal instances), usage is billed in one-second increments.
 
With this billing model, usage of Compute and Autonomous Database is billed per second, while all prices continue to be quoted on an hourly basis.
Here are some details about this billing model:
 
  • All virtual machine (VM) Compute instances, including those with graphical processing unit (GPU) chips, are now billed per-second with a one-minute minimum.
 
  • All bare metal Compute instances, including those with GPU chips, are now billed per-second with a one-hour minimum.
 
  • Autonomous Data Warehouse and Autonomous Transaction Processing usage is now billed per-second with a one-minute minimum.
 
  • Windows OS images are now billed per-second with a one-minute minimum.
 
  • Microsoft SQL Server images available in the Oracle Cloud Marketplace are now billed per-second with a 744-hour (one month) minimum.
 
 
The workloads that will see the biggest impact from this change are those that stop and start frequently and those that run for short durations. For example, a VM instance that runs for 10 minutes and 20 seconds is billed for 620 seconds of usage, while one that runs for only 20 seconds is still billed for the one-minute minimum.
 
 
I believe that with this, Oracle continues to invest in giving its customers more resources and more ways to plan and prepare their environments for the cloud.
 
 
Reference: https://docs.cloud.oracle.com/

 

I hope this short article has helped.

 

Andre Ontalba

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”

 

