Blog
rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched
Category: Database,Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0


Yesterday, during an ODA upgrade, I came across the following error in the cluster upgrade process.

 

.
.
2020/03/03 14:34:00 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2020/03/03 14:34:04 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/03/03 14:34:32 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched.
CRS-4000: Command Start failed, or completed with errors.
2020/03/03 14:34:32 CLSRSC-511: failed to set Oracle Clusterware and ASM to rolling migration mode
Died at /u01/app/18.0.0.0/grid/crs/install/oraasm.pm line 1455.

 

Following Oracle's note "rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched (Doc ID 2494827.1)", I found the solution to the problem in the steps below.
 
Run the commands below to check the CRS active version, the release patch level, and the software patch level, and see whether there are any differences.

 

bash-4.3# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [2660242823].
bash-4.3#

bash-4.3# crsctl query crs releasepatch
Oracle Clusterware release patch level is [1953265745] and the complete list of patches [23600818 26839277 27001739 27105253 27128906 27144050 27335416 ] have been applied on the local node.
bash-4.3#

bash-4.3# crsctl query crs softwarepatch
Oracle Clusterware patch level on node odatest1 is [1953265745]

 

We can see that the cluster active patch level [2660242823] does not match the releasepatch and softwarepatch levels [1953265745].
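Optionally, before applying the fix, you can confirm that every node reports the same software patch level, since inconsistencies between nodes can also trip up rootupgrade.sh. A minimal sketch, assuming the same pre-upgrade 12.1.0.2 grid home used above:

bash-4.3# for node in $(/u01/app/12.1.0.2/grid/bin/olsnodes); do /u01/app/12.1.0.2/grid/bin/crsctl query crs softwarepatch $node; done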
 
Now let's fix the problem.
 
1 – Run crsctl stop rollingpatch as the root user, which will update the OCR with the correct values
<GRID_HOME>/bin/crsctl stop rollingpatch  

 

root@odatest1:~# /u01/app/12.1.0.2/grid/bin/crsctl stop rollingpatch
CRS-1161: The cluster was successfully patched to patch level [1953265745].
root@odatest1:~# 

 

2 – Verify software/release patch levels and retry rootupgrade.sh.

 

bash-4.3# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1953265745].
bash-4.3#

bash-4.3# crsctl query crs releasepatch
Oracle Clusterware release patch level is [1953265745] and the complete list of patches [23600818 26839277 27001739 27105253 27128906 27144050 27335416 ] have been applied on the local node.
bash-4.3#

bash-4.3# crsctl query crs softwarepatch
Oracle Clusterware patch level on node odatest1 is [1953265745]

 

root@odatest1:~# /u01/app/18.0.0.0/grid/rootupgrade.sh







.
.
2020/03/03 15:34:00 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2020/03/03 15:34:04 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/03/03 15:34:32 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2020/03/03 15:35:10 CLSRSC-482: Running command: '/u01/app/18.0.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0.2/grid -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2020/03/03 15:34:20 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

.
.

2020/03/03 15:54:00 CLSRSC-595: Executing upgrade step 8 of 19: 'UpgradeNode'.

2020/03/03 15:54:04 CLSRSC-474: Initiating upgrade of resource types

2020/03/03 15:56:20 CLSRSC-475: Upgrade of resource types successfully initiated.

2020/03/03 15:56:44 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.

2020/03/03 15:57:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

 

I hope this helps you!!!

 

Stay tuned, and follow me on Twitter @aontalba and on LinkedIn.
 
Andre Luiz Dutra Ontalba

 

Disclaimer: “The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifying details were removed.”


Feature – Auto-tune Detached Block Volumes
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0


Last month Oracle launched a great feature that helps with block volume costs. Let's take a quick look at it.
 
You can now tune the performance of your detached volumes to the Lower Cost setting automatically. With this new capability, while your volumes stay in a detached state you can achieve significant cost savings.
 
When you’re ready to use them for your workloads, simply attach them, and their performance and cost are automatically and instantaneously adjusted to the performance setting you originally configured. 
 
When you enable this feature for your volumes, the volume is monitored and changed to the Lower Cost performance option automatically when it is disconnected and this setting is maintained until you reconnect it. This feature now comes integrated with the storage platform. You can take advantage of this with a click on the console or using a command line option in the CLI for each of your volumes.
 
Auto-tune for detached volumes capability is generally available for all existing and new boot and block volumes in all regions globally, on the Console, and through CLI, SDK, API, and Terraform.
 

 

Enabling and Managing Auto-tune for Detached Volumes

Enabling the auto-tune feature for detached volumes is straightforward, with a single click in the Oracle Cloud Infrastructure Console.
To enable auto-tune for a volume, on the Block Volume Details screen of the Console, click Edit and slide the Auto-Tune Performance setting to On for the volume. The Edit dialog window has also been revised as part of this feature update.
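The same setting can also be toggled from the OCI CLI, which is handy for automation. A minimal sketch, assuming a recent CLI version and placeholder OCIDs (check oci bv volume update --help for the exact flag name in your version):

oci bv volume update --volume-id <volume_ocid> --is-auto-tune-enabled true
oci bv boot-volume update --boot-volume-id <boot_volume_ocid> --is-auto-tune-enabled true

Afterwards, oci bv volume get --volume-id <volume_ocid> should show the is-auto-tune-enabled field set to true in the output.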

 

When the Auto-Tune Performance is set to On for a detached volume, the auto-tuning takes effect after 24 hours. After that, if the volume is still detached, its performance and cost is lowered to the Lower Cost setting automatically.
[Screenshots: the volume performance setting after Auto-Tune is applied, and after 24 hours]
When the volume is attached again, its performance is set to the Default Performance setting immediately and automatically.

 

Highlights

 

  • Built-in feature – works for new volumes and existing ones
  • Applicable to block and boot volumes
  • Available in all regions
  • Reduces operational expenses
  • Once Auto-tune is enabled, it takes 24 hours to take effect
  • Managed via the Console, CLI, SDK, API, and Terraform
  • The volume is automatically switched to the Lower Cost performance option when detached from an instance
  • When attached again, it returns to its previously defined performance setting
  • Optional feature – you have to enable it explicitly for your volumes
 

 

I hope this helps you!!!
Andre Luiz Dutra Ontalba

 

 

 

Disclaimer: “The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifying details were removed.”


HOW TO REMOVE HAIP ON ODA 18.8.0.0.0
Category: Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0


I needed to remove HAIP from ODA after migrating to version 18.8.0.0.0 and decided to prepare this procedure.
 
This action plan should require only one clusterware restart, whereas patching can result in two or three clusterware restarts.
 
Let’s go to the procedure.
1. Backup gpnp profile

 

[grid@testoda1 peer]$ cd /u01/app/18.0.0.0/grid/gpnp/`hostname`/profiles/peer
[grid@testoda1 peer]$ cp -p profile.xml profile.xml.bkp
[grid@testoda2 peer]$ cd /u01/app/18.0.0.0/grid/gpnp/`hostname`/profiles/peer
[grid@testoda2 peer]$ cp -p profile.xml profile.xml.bkp
2. Get the cluster_interconnect interfaces (only on node0)
[grid@testoda1 ~]$ /u01/app/18.0.0.0/grid/bin/oifcfg getif

btbond1 10.32.16.0 global public
p1p1 192.168.16.0 global cluster_interconnect,asm
p1p2 192.168.17.0 global cluster_interconnect,asm

Please note that the private interface names might be different depending on the model and/or the ODA version that was used to deploy the machine.

For the rest of this note, we use p1p1/p1p2 as an example in the steps below.
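Before changing anything, it can also be worth recording the current state of the HAIP resource as a reference point. A minimal sketch, assuming the 18c grid home path used throughout this post:

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stat res ora.cluster_interconnect.haip -init -t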

 
3. Backup existing ifcfg- files
[root@testoda1 ~]# cd /etc/sysconfig/network-scripts
[root@testoda1 network-scripts]# mkdir -p backupifcfgFiles
[root@testoda1 network-scripts]# cp ifcfg-p1p1 backupifcfgFiles/ifcfg-p1p1.bak
[root@testoda1 network-scripts]# cp ifcfg-p1p2 backupifcfgFiles/ifcfg-p1p2.bak
[root@testoda2 ~]# cd /etc/sysconfig/network-scripts
[root@testoda2 network-scripts]# mkdir -p backupifcfgFiles
[root@testoda2 network-scripts]# cp ifcfg-p1p1 backupifcfgFiles/ifcfg-p1p1.bak
[root@testoda2 network-scripts]# cp ifcfg-p1p2 backupifcfgFiles/ifcfg-p1p2.bak
4. Create ifcfg-icbond0 and modify ifcfg- files
[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-icbond0

# This file is automatically created by the ODA software.

DEVICE=icbond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1"
IPADDR=192.168.16.24
NETMASK=255.255.255.0

[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p1

# This file is automatically created by the ODA software.

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p2


# This file is automatically created by the ODA software.

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-icbond0

# This file is automatically created by the ODA software.

DEVICE=icbond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1"
IPADDR=192.168.16.25
NETMASK=255.255.255.0

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p1

# This file is automatically created by the ODA software.

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p2

# This file is automatically created by the ODA software.

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

 
5. Create/replace the init.ora files for the APX instances
[grid@testoda1]$ echo "+APX1.cluster_interconnects='192.168.16.24'" > $ORACLE_HOME/dbs/init+APX1.ora

[grid@testoda2]$ echo "+APX2.cluster_interconnects='192.168.16.25'" > $ORACLE_HOME/dbs/init+APX2.ora

6. Stop the Clusterware on node2
[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
7. Set the new, bonded cluster_interconnect interface and remove p1p1/p1p2 interfaces from the configuration (only on node0)
[grid@testoda1 ~]$ oifcfg setif -global icbond0/192.168.16.0:cluster_interconnect,asm

[grid@testoda1 ~]$ oifcfg getif

btbond1 10.209.244.0 global public
p1p1 192.168.16.0 global cluster_interconnect,asm
p1p2 192.168.17.0 global cluster_interconnect,asm
icbond0 192.168.16.0 global cluster_interconnect,asm

[grid@testoda1 ~]$ oifcfg delif -global p1p1/192.168.16.0

[grid@testoda1 ~]$ oifcfg delif -global p1p2/192.168.17.0

[grid@testoda1 ~]$ oifcfg getif

btbond1 10.209.244.0 global public
icbond0 192.168.16.0 global cluster_interconnect,asm

8. Remove HAIP dependency in ora.asm
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.cluster_interconnect.haip -attr ENABLED=0 -init

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.cluster_interconnect.haip -attr ENABLED=0 -init

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd) pullup(ora.cssd,ora.ctssd) weak(ora.drivers.acfs)', STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd) pullup(ora.cssd,ora.ctssd) weak(ora.drivers.acfs)', STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init

9. Removing ora.cluster_interconnect.haip resource
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl delete resource ora.cluster_interconnect.haip -init -f

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl delete resource ora.cluster_interconnect.haip -init -f

10. Stop the Clusterware on node1
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
11. Restart the network
[root@testoda1 network-scripts]# service network restart

[root@testoda1 network-scripts]# ifconfig -a

[root@testoda1 network-scripts]# cat /proc/net/bonding/icbond0

[root@testoda2 network-scripts]# service network restart

[root@testoda2 network-scripts]# ifconfig -a

[root@testoda2 network-scripts]# cat /proc/net/bonding/icbond0

 
12. Restart the Clusterware
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl start crs

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl start crs

13. Restart dcs-agent to rediscover the interfaces automatically
[root@testoda1 ~]# /opt/oracle/dcs/bin/restartagent.sh

[root@testoda2 ~]# /opt/oracle/dcs/bin/restartagent.sh

14. Check the cluster status after removing HAIP

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
testoda1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
testoda2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
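As an optional extra check (not part of the original action plan), you can confirm from ASM that the interconnect now runs over icbond0 instead of the 169.254.x.x HAIP addresses; a minimal sketch as the grid user:

[grid@testoda1 ~]$ sqlplus / as sysasm
SQL> select inst_id, name, ip_address from gv$cluster_interconnects;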
I hope this procedure helps!
Andre Luiz Dutra Ontalba
 

Disclaimer: “The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifying details were removed.”


ZDLRA, Creating the Replication Server
Category: Engineer System Author: Fernando Simon (Board Member) Date: 4 years ago Comments: 0


ZDLRA replication can operate in several ways, from a single upstream/downstream config to a multiple-replication config, but all of them are set up with the same procedure. The process is not complicated, but there are some details you need to be aware of to avoid having to rebuild (or even losing) replicated data. In this post, I will show the details to create the replication config.
I covered the basics of how ZDLRA replication works in this post, and how to configure the replication network in this other post. That network configuration is needed only when you are adding replication after the ZDLRA has already been deployed; if you deployed with replication enabled, it is not needed. The official documentation about replication can be found here.

 

 

Replication Topology

The topology for ZDLRA replication can vary, but the basic layout is:

 

In summary:

 

1. One-Way: the data flows in one direction only; only one ZDLRA forwards the backups.

 

2. Bi-Directional: both ZDLRAs send backups to each other. In this case, the protected databases of each ZDLRA (usually one per separate datacenter) are replicated between them, since both operate as upstream and downstream.
 
3. Hub-Spoke: one ZDLRA receives backups from several ZDLRAs, and this "third" ZDLRA is responsible for archiving to tape.

 

Whatever type of replication you have, there are two roles:

 

1. Upstream: the ZDLRA that receives the backup and forwards it to another ZDLRA.

 

2. Downstream: the ZDLRA that receives the backup from another ZDLRA.

 

Scenario

 

In this post (and in others where I use replication) I will use the one-way config, with one upstream and one downstream. If you have another topology, you just need to follow the same procedure and take care of details like users, wallets, and credentials.

 

 

 

It will be:

 

1. Upstream: ZDLRAS1.
2. Downstream: ZDLRAS2.

 

Creating the Replication

 

ZDLRA replication operates differently from Oracle Data Guard: it is native replication that uses a procedure similar to ingesting a backup into the ZDLRA. I already wrote about this in my previous post (Replication and Index topic).
To configure the replication we use the procedure DBMS_RA.CREATE_REPLICATION_SERVER, but first we need to check some details. Replication is driven by the protection policy, so all the databases linked to that policy will have their backups replicated. I will write about that in another post; here I will show how to create the replication config.

 

A user at the downstream to receive the replication

 

ZDLRA replication requires a specific user to send the backups from the upstream to the downstream. This user is created only on the downstream ZDLRA and never needs to be used to connect with RMAN.
The best practice is to name the user REPUSER_FROM_[ZDLRA_UPSTREAM_DB_NAME]. This way you know the source of the connection (useful when your downstream receives backups from more than one upstream).
So, the first step is to create the user at the downstream:

 

[root@zdlras2n1 ~]# /opt/oracle.RecoveryAppliance/bin/racli add vpc_user --user_name=repusr_from_zdlras1

[repusr_from_zdlras1] New Password:

Mon Nov 25 23:34:50 2019: Start: Add vpc user repusr_from_zdlras1.

Mon Nov 25 23:34:51 2019:        Add vpc user repusr_from_zdlras1 successfully.

Mon Nov 25 23:34:51 2019: End:   Add vpc user repusr_from_zdlras1.

[root@zdlras2n1 ~]#

 

Wallet at upstream

 

To allow the upstream to connect to the downstream and send the backups, we need to create a wallet at the upstream ZDLRA with the credentials of the repuser created in the first step. The wallet can be stored in a shared filesystem so both nodes of the cluster can access it, or each node can store its own copy in a local folder (but the path needs to be the same on both).
The wallet needs to be ALO (auto-login), and you can reuse a shared wallet if you already have one. To create the wallet at the upstream:

 

[root@zdlras1n1 ~]# su - oracle

Last login: Mon Nov 25 23:43:26 CET 2019 on pts/3

[oracle@zdlras1n1 ~]$ mkdir /radump/wallrep

[oracle@zdlras1n1 ~]$

[oracle@zdlras1n1 ~]$ mkstore -wrl /radump/wallrep -createALO

Oracle Secret Store Tool Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.




[oracle@zdlras1n1 ~]$
 
After that, we create the credential with the username and password that were created at the downstream:

 

[oracle@zdlras1n1 ~]$ mkstore -wrl /radump/wallrep -createCredential zdlras2-rep.oralocal:1522/zdlras2 repusr_from_zdlras1 repuser

Oracle Secret Store Tool Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.




[oracle@zdlras1n1 ~]$

[oracle@zdlras1n1 ~]$ mkstore -wrl /radump/wallrep -listCredential

Oracle Secret Store Tool Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.




List credential (index: connect_string username)

1: zdlras2-rep.oralocal:1522/zdlras2 repusr_from_zdlras1

[oracle@zdlras1n1 ~]$

 

You can choose any credential name, but I usually follow the same pattern as an EZCONNECT string. That way, I immediately know where the credential points.
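Before creating the replication server, it can be worth a quick connectivity sanity check from the upstream to the downstream replication listener. A minimal sketch, assuming EZCONNECT naming is enabled (tnsping only validates name resolution and the listener, not the credentials):

[oracle@zdlras1n1 ~]$ tnsping zdlras2-rep.oralocal:1522/zdlras2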

 

DBMS_RA.CREATE_REPLICATION_SERVER

 

The third and last step is to call the procedure that creates the replication configuration at the upstream. This is done only at the upstream, and it uses the wallet created in step two.
So, we use DBMS_RA.CREATE_REPLICATION_SERVER with the following parameters:

 

. replication_server_name: the name for the downstream server. You can define any name you want.

. sbt_so_name: always "libra.so".

. catalog_user_name: the user that will connect using the wallet. Always RASYS.

. wallet_alias: the credential name that you defined in the wallet.

. wallet_path: where the wallet is located.

. max_streams: the maximum number of concurrent replication streams. The default value is 4.

 

The replication information can be checked in the RASYS.RA_REPLICATION_SERVER table, which stores all the information about the replication servers defined at your upstream.
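For example, once the configuration exists you can list it directly from that table. A minimal sketch (the column names below are assumed from the procedure parameters and may vary between ZDLRA releases):

SQL> select replication_server_name, max_streams from rasys.ra_replication_server;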

 

So, to create the replication configuration:

 

[oracle@zdlras1n1 ~]$ sqlplus rasys/change^Me2




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Dec 22 20:46:51 2019

Version 19.3.0.0.0




Copyright (c) 1982, 2019, Oracle.  All rights reserved.




Last Successful login time: Sun Dec 22 2019 20:33:15 +01:00




Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0




SQL> SELECT COUNT(*)  FROM RA_REPLICATION_SERVER;




  COUNT(*)

----------

         0




SQL>

SQL> BEGIN

  2  DBMS_RA.CREATE_REPLICATION_SERVER (

  3      replication_server_name => 'zdlras2_rep',

  4      sbt_so_name      => 'libra.so',

  5      catalog_user_name       => 'RASYS',

  6      wallet_alias            => 'zdlras2-rep.oralocal:1522/zdlras2',

  7      wallet_path             => 'file:/radump/wallrep');

  8  END;

  9  /




PL/SQL procedure successfully completed.




SQL> SELECT COUNT(*)  FROM RA_REPLICATION_SERVER;




  COUNT(*)

----------

         1




SQL>

 

One important point here is the max_streams parameter. It needs to be tuned: if you are replicating more databases, it may be worth increasing this value. You can check the queue by selecting from the rasys.ra_task table and verifying whether there are tasks waiting for replication. Of course, this also depends on the size of your files.
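A hedged example of such a check (the exact column names in rasys.ra_task may differ between ZDLRA releases, so adjust as needed):

SQL> select task_type, state, count(*) from rasys.ra_task group by task_type, state order by 1, 2;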

 

Replication

 

The steps described here are just a small part of the replication setup. We have created the replication server config at the upstream (wallet and server information) and at the downstream (user). But we have not yet finished the configuration of the replication workflow:

 

 

And if you check the workflow for the manual config, we still need to do some steps.
The missing part is the "logical" definition, like the policies that will be replicated and the databases linked to these policies. The basic configuration (the replication server config) was done in this post and in the previous ones.
In the next post I will show how to configure the backup policies and the details that you need to take care of to define them correctly. If you want to understand more about protection policies, you can check the post I wrote about them.
 

Disclaimer: “The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies or opinions. The information here was edited to be useful for a general audience; specific data and identifying details were removed so it remains useful for the community.”

