Full transportable tablespace: Oracle 12c
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 5 years ago Comments: 0


This is one of the coolest features in Oracle 12c, and it is an improvement of an older feature called transportable tablespaces. In this how-to article we will show you how to use this feature, introduced in 12cR1; you can use it to migrate data from 11g directly into a PDB with simple commands. In fact, with a database link you would not even need to create dump files, but in this article we will cover the method that uses dump files.
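For reference, the dump-file-free variant works by running the import with NETWORK_LINK over a database link, so impdp pulls the metadata directly from the source. A rough sketch is below; the link name (srclink), connect strings, and paths are hypothetical placeholders, and VERSION=12 applies when the source is an 11.2.0.3+ database:

```
-- Sketch only: on the destination, create a database link to the source
SQL> create database link srclink connect to system identified by password
     using 'source_host:1521/SOURCEDB';

-- Then import over the link from the OS prompt; no dump file is created
$ impdp system@vm2:1521/PDB2 full=y transportable=always network_link=srclink \
    version=12 transport_datafiles='/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf' \
    logfile=net_tts.log
```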
The first thing to do is connect to the source database and create the tablespaces that we will transport. In our case source and destination are in the same CDB for demo purposes, but this is most useful when you intend to migrate data between distinct databases.
On PDB1, create two tablespaces, a user, and one table in each tablespace:

 

SQL> alter session set container = pdb1;
 Session altered.

SQL> create tablespace ts_mufalani_01 datafile '/u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani01.dbf' size 10m ;
 Tablespace created.

SQL> create tablespace ts_mufalani_02 datafile '/u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani02.dbf' size 10m ;
 Tablespace created.

SQL> create user mufalani identified by oracle default tablespace ts_mufalani_01;
 User created.

SQL> alter user mufalani quota unlimited on ts_mufalani_01;
 User altered.

SQL> alter user mufalani quota unlimited on ts_mufalani_02;
 User altered.

SQL> create table mufalani.test1 tablespace ts_mufalani_01 as select * from dba_users;
 Table created.

SQL> create table mufalani.test2 tablespace ts_mufalani_02 as select * from v$session;
 Table created.

SQL> alter tablespace ts_mufalani_01 read only;
 Tablespace altered.

SQL> alter tablespace ts_mufalani_02 read only;
 Tablespace altered.

 

Now that the tablespaces are read only, check that the set is self-contained using dbms_tts.transport_set_check; that is, verify there are no dependencies on objects in other tablespaces.

 

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ts_mufalani_01,ts_mufalani_02',TRUE);
 PL/SQL procedure successfully completed.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
 no rows selected

 

"No rows selected" is the expected result for this query. Remember that the directory HOME will exist only inside PDB1 for now, so we use an EZConnect connection to PDB1 to run the export.

 

SQL> create or replace directory home as '/home/oracle' ;
 Directory created.

 SQL> grant read, write on directory home to system;
 Grant succeeded.

 

After creating the directory and granting access on it to the SYSTEM user, export the metadata using transportable=always and full=y; this combination instructs Oracle to use the full transportable tablespace feature.

 

SQL> !expdp system/oracle_4U@vm2:1521/PDB1 directory=home dumpfile=tts.dmp logfile=tts.log transportable=always full=y 
 Export: Release 12.1.0.2.0 - Production on Fri Sep 13 21:17:05 2019
 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/@vm2:1521/PDB1 directory=home dumpfile=tts.dmp logfile=tts.log transportable=always full=y 
 Estimate in progress using BLOCKS method…
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
 Total estimation using BLOCKS method: 4.265 MB
 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/TABLESPACE
 Processing object type DATABASE_EXPORT/PROFILE
 Processing object type DATABASE_EXPORT/SYS_USER/USER
 Processing object type DATABASE_EXPORT/SCHEMA/USER
 Processing object type DATABASE_EXPORT/RADM_FPTM
 Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
 Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
 Processing object type DATABASE_EXPORT/RESOURCE_COST
 Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
 Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
 Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
 Processing object type DATABASE_EXPORT/STATISTICS/MARKER
 Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
 Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
 . . exported "SYS"."KU$_USER_MAPPING_VIEW"               6.109 KB      38 rows
 . . exported "ORDDATA"."ORDDCM_DOCS"                     252.9 KB       9 rows
 . . exported "WMSYS"."WM$CONSTRAINTS_TABLE$"                 0 KB       0 rows
 . . exported "WMSYS"."WM$LOCKROWS_INFO$"                     0 KB       0 rows
 . . exported "WMSYS"."WM$UDTRIG_INFO$"                       0 KB       0 rows
 . . exported "LBACSYS"."OLS$AUDIT_ACTIONS"               5.757 KB       8 rows
 . . exported "LBACSYS"."OLS$DIP_EVENTS"                  5.539 KB       2 rows
 . . exported "LBACSYS"."OLS$INSTALLATIONS"               6.960 KB       2 rows
 . . exported "LBACSYS"."OLS$PROPS"                       6.234 KB       5 rows
 . . exported "SYS"."DAM_CONFIG_PARAM$"                   6.531 KB      14 rows
 . . exported "SYS"."TSDP_PARAMETER$"                     5.953 KB       1 rows
 . . exported "SYS"."TSDP_POLICY$"                        5.921 KB       1 rows
 . . exported "SYS"."TSDP_SUBPOL$"                        6.328 KB       1 rows
 . . exported "SYSTEM"."REDO_DB"                          25.59 KB       1 rows
 . . exported "WMSYS"."WM$BATCH_COMPRESSIBLE_TABLES$"         0 KB       0 rows
 . . exported "WMSYS"."WM$CONS_COLUMNS$"                      0 KB       0 rows
 . . exported "WMSYS"."WM$ENV_VARS$"                      6.015 KB       3 rows
 . . exported "WMSYS"."WM$EVENTS_INFO$"                   5.812 KB      12 rows
 . . exported "WMSYS"."WM$HINT_TABLE$"                    9.453 KB      75 rows
 . . exported "WMSYS"."WM$MODIFIED_TABLES$"                   0 KB       0 rows
 . . exported "WMSYS"."WM$MP_GRAPH_WORKSPACES_TABLE$"         0 KB       0 rows
 . . exported "WMSYS"."WM$MP_PARENT_WORKSPACES_TABLE$"        0 KB       0 rows
 . . exported "WMSYS"."WM$NESTED_COLUMNS_TABLE$"              0 KB       0 rows
 . . exported "WMSYS"."WM$NEXTVER_TABLE$"                 6.375 KB       1 rows
 . . exported "WMSYS"."WM$REMOVED_WORKSPACES_TABLE$"          0 KB       0 rows
 . . exported "WMSYS"."WM$RESOLVE_WORKSPACES_TABLE$"          0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_LOCKING_TABLE$"                 0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_TABLE$"                         0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_TRIGGERS_TABLE$"                0 KB       0 rows
 . . exported "WMSYS"."WM$UDTRIG_DISPATCH_PROCS$"             0 KB       0 rows
 . . exported "WMSYS"."WM$VERSION_HIERARCHY_TABLE$"       5.984 KB       1 rows
 . . exported "WMSYS"."WM$VERSION_TABLE$"                     0 KB       0 rows
 . . exported "WMSYS"."WM$VT_ERRORS_TABLE$"                   0 KB       0 rows
 . . exported "WMSYS"."WM$WORKSPACES_TABLE$"              12.10 KB       1 rows
 . . exported "WMSYS"."WM$WORKSPACE_PRIV_TABLE$"          7.054 KB      10 rows
 . . exported "WMSYS"."WM$WORKSPACE_SAVEPOINTS_TABLE$"        0 KB       0 rows
 . . exported "LBACSYS"."OLS$AUDIT"                           0 KB       0 rows
 . . exported "LBACSYS"."OLS$COMPARTMENTS"                    0 KB       0 rows
 . . exported "LBACSYS"."OLS$DIP_DEBUG"                       0 KB       0 rows
 . . exported "LBACSYS"."OLS$GROUPS"                          0 KB       0 rows
 . . exported "LBACSYS"."OLS$LAB"                             0 KB       0 rows
 . . exported "LBACSYS"."OLS$LEVELS"                          0 KB       0 rows
 . . exported "LBACSYS"."OLS$POL"                             0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLICY_ADMIN"                    0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLS"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLT"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROFILE"                         0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROFILES"                        0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROG"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$SESSINFO"                        0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_COMPARTMENTS"               0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_GROUPS"                     0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_LEVELS"                     0 KB       0 rows
 . . exported "SYS"."AUD$"                                    0 KB       0 rows
 . . exported "SYS"."DAM_CLEANUP_EVENTS$"                     0 KB       0 rows
 . . exported "SYS"."DAM_CLEANUP_JOBS$"                       0 KB       0 rows
 . . exported "SYS"."TSDP_ASSOCIATION$"                       0 KB       0 rows
 . . exported "SYS"."TSDP_CONDITION$"                         0 KB       0 rows
 . . exported "SYS"."TSDP_FEATURE_POLICY$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_PROTECTION$"                        0 KB       0 rows
 . . exported "SYS"."TSDP_SENSITIVE_DATA$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_SENSITIVE_TYPE$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_SOURCE$"                            0 KB       0 rows
 . . exported "SYSTEM"."REDO_LOG"                             0 KB       0 rows
 . . exported "SYSTEM"."SCHEDULER_JOB_ARGS"                   0 KB       0 rows
 . . exported "SYSTEM"."SCHEDULER_PROGRAM_ARGS"           9.515 KB      12 rows
 . . exported "SYS"."AUDTAB$TBS$FOR_EXPORT"               5.953 KB       2 rows
 . . exported "SYS"."DBA_SENSITIVE_DATA"                      0 KB       0 rows
 . . exported "SYS"."DBA_TSDP_POLICY_PROTECTION"              0 KB       0 rows
 . . exported "SYS"."FGA_LOG$FOR_EXPORT"                      0 KB       0 rows
 . . exported "SYS"."NACL$_ACE_EXP"                       9.929 KB       1 rows
 . . exported "SYS"."NACL$_HOST_EXP"                      6.914 KB       1 rows
 . . exported "SYS"."NACL$_WALLET_EXP"                        0 KB       0 rows
 . . exported "WMSYS"."WM$EXP_MAP"                        7.718 KB       3 rows
 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
 
 Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
   /home/oracle/tts.dmp
 
 Datafiles required for transportable tablespace TS_MUFALANI_01:
   /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani01.dbf
 Datafiles required for transportable tablespace TS_MUFALANI_02:
   /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani02.dbf
 Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Fri Sep 13 21:30:32 2019 elapsed 0 00:13:10

 

As we can see, the metadata and some data were exported, and at the end of the log Oracle lists the datafiles required to transport and plug into the other database.
It is time to copy the datafiles to the destination PDB; in our case it is in the same CDB, but it could be a different database.

 

SQL> !cp /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani* /u01/app/oracle/oradata/cdb1/pdb2/ 
On the destination PDB we need to create a directory pointing to the same path where we put the dump file, so we just run the same commands inside PDB2 that we ran on PDB1: create the directory HOME on the same path and grant it to the SYSTEM user.
 
SQL> alter session set container = PDB2;
 Session altered.

SQL> create or replace directory home as '/home/oracle' ;
 Directory created.

SQL> grant read, write on directory home to system;
 Grant succeeded.
As the setup is done on PDB2, we can import the metadata of the plugged datafiles. We must use the parameter transport_datafiles, with each file enclosed in single quotes (') and separated by commas (,), as we can see below. In the output we will see some errors related to the APEX component; please ignore them, they will not cause any issue in our procedure. Keep in mind this is a VM for test purposes; in a real environment you may want to remove APEX before executing this procedure. I have a blog post about that, please check it.
 
SQL> !impdp system/oracle_4U@vm2:1521/PDB2 full=Y dumpfile=tts.dmp directory=home transport_datafiles='/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf','/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf' logfile=import.log
 Import: Release 12.1.0.2.0 - Production on Fri Sep 13 21:34:50 2019
 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
 Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/@vm2:1521/PDB2 full=Y dumpfile=tts.dmp directory=home transport_datafiles=/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf,/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf logfile=import.log 
 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
 Processing object type DATABASE_EXPORT/TABLESPACE
 ORA-31684: Object type TABLESPACE:"TEMP" already exists
 Processing object type DATABASE_EXPORT/PROFILE
 Processing object type DATABASE_EXPORT/SYS_USER/USER
 Processing object type DATABASE_EXPORT/SCHEMA/USER
 Processing object type DATABASE_EXPORT/RADM_FPTM
 Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
 Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
 Processing object type DATABASE_EXPORT/RESOURCE_COST
 Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
 Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
 ORA-31684: Object type DIRECTORY:"HOME" already exists
 Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 . . imported "SYS"."KU$_EXPORT_USER_MAP"                 6.109 KB      38 rows
 Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
 . . imported "ORDDATA"."ORDDCM_DOCS_TRANSIENT"           252.9 KB       9 rows
 . . imported "WMSYS"."E$CONSTRAINTS_TABLE$"                  0 KB       0 rows
 . . imported "WMSYS"."E$LOCKROWS_INFO$"                      0 KB       0 rows
 . . imported "WMSYS"."E$UDTRIG_INFO$"                        0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CONFIG_PARAM$"           6.531 KB      14 rows
 . . imported "SYS"."DP$TSDP_PARAMETER$"                  5.953 KB       1 rows
 . . imported "SYS"."DP$TSDP_POLICY$"                     5.921 KB       1 rows
 . . imported "SYS"."DP$TSDP_SUBPOL$"                     6.328 KB       1 rows
 . . imported "SYSTEM"."REDO_DB_TMP"                      25.59 KB       1 rows
 . . imported "WMSYS"."E$BATCH_COMPRESSIBLE_TABLES$"          0 KB       0 rows
 . . imported "WMSYS"."E$CONS_COLUMNS$"                       0 KB       0 rows
 . . imported "WMSYS"."E$ENV_VARS$"                       6.015 KB       3 rows
 . . imported "WMSYS"."E$EVENTS_INFO$"                    5.812 KB      12 rows
 . . imported "WMSYS"."E$HINT_TABLE$"                     9.453 KB      75 rows
 . . imported "WMSYS"."E$MODIFIED_TABLES$"                    0 KB       0 rows
 . . imported "WMSYS"."E$MP_GRAPH_WORKSPACES_TABLE$"          0 KB       0 rows
 . . imported "WMSYS"."E$MP_PARENT_WORKSPACES_TABLE$"         0 KB       0 rows
 . . imported "WMSYS"."E$NESTED_COLUMNS_TABLE$"               0 KB       0 rows
 . . imported "WMSYS"."E$NEXTVER_TABLE$"                  6.375 KB       1 rows
 . . imported "WMSYS"."E$REMOVED_WORKSPACES_TABLE$"           0 KB       0 rows
 . . imported "WMSYS"."E$RESOLVE_WORKSPACES_TABLE$"           0 KB       0 rows
 . . imported "WMSYS"."E$RIC_LOCKING_TABLE$"                  0 KB       0 rows
 . . imported "WMSYS"."E$RIC_TABLE$"                          0 KB       0 rows
 . . imported "WMSYS"."E$RIC_TRIGGERS_TABLE$"                 0 KB       0 rows
 . . imported "WMSYS"."E$UDTRIG_DISPATCH_PROCS$"              0 KB       0 rows
 . . imported "WMSYS"."E$VERSION_HIERARCHY_TABLE$"        5.984 KB       1 rows
 . . imported "WMSYS"."E$VERSION_TABLE$"                      0 KB       0 rows
 . . imported "WMSYS"."E$VT_ERRORS_TABLE$"                    0 KB       0 rows
 . . imported "WMSYS"."E$WORKSPACES_TABLE$"               12.10 KB       1 rows
 . . imported "WMSYS"."E$WORKSPACE_PRIV_TABLE$"           7.054 KB      10 rows
 . . imported "WMSYS"."E$WORKSPACE_SAVEPOINTS_TABLE$"         0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$AUDIT"                    0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$COMPARTMENTS"             0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$GROUPS"                   0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$LAB"                      0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$LEVELS"                   0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POL"                      0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POLS"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POLT"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$PROFILE"                  0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$PROG"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$USER"                     0 KB       0 rows
 . . imported "SYS"."AMGT$DP$AUD$"                            0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CLEANUP_EVENTS$"             0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CLEANUP_JOBS$"               0 KB       0 rows
 . . imported "SYS"."DP$TSDP_ASSOCIATION$"                    0 KB       0 rows
 . . imported "SYS"."DP$TSDP_CONDITION$"                      0 KB       0 rows
 . . imported "SYS"."DP$TSDP_FEATURE_POLICY$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_PROTECTION$"                     0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SENSITIVE_DATA$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SENSITIVE_TYPE$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SOURCE$"                         0 KB       0 rows
 . . imported "SYSTEM"."REDO_LOG_TMP"                         0 KB       0 rows
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 . . imported "SYSTEM"."SCHEDULER_JOB_ARGS_TMP"               0 KB       0 rows
 . . imported "SYSTEM"."SCHEDULER_PROGRAM_ARGS_TMP"       9.515 KB      12 rows
 . . imported "SYS"."AMGT$DP$AUDTAB$TBS$FOR_EXPORT"       5.953 KB       2 rows
 . . imported "SYS"."DP$DBA_SENSITIVE_DATA"                   0 KB       0 rows
 . . imported "SYS"."DP$DBA_TSDP_POLICY_PROTECTION"           0 KB       0 rows
 . . imported "SYS"."AMGT$DP$FGA_LOG$FOR_EXPORT"              0 KB       0 rows
 . . imported "SYS"."NACL$_ACE_IMP"                       9.929 KB       1 rows
 . . imported "SYS"."NACL$_HOST_IMP"                      6.914 KB       1 rows
 . . imported "SYS"."NACL$_WALLET_IMP"                        0 KB       0 rows
 . . imported "WMSYS"."E$EXP_MAP"                         7.718 KB       3 rows
 Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
 Processing object type DATABASE_EXPORT/STATISTICS/MARKER
 Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
 Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_PURGE_SESSIONS" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_PURGE_SESSIONS" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_PURGE_SESSIONS"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_CACHE.PURGE_SESSIONS'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.179792000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=> 
 'FREQ
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_MAIL_QUEUE" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_MAIL_QUEUE" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_MAIL_QUEUE"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_MAIL.PUSH_QUEUE_IMMEDIATE'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.239709000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=> 
 'FREQ=MINUTE
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_WS_NOTIFICATIONS"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_WORKSHEET_API.DO_NOTIFY'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.244043000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_DAILY_MAINTENANCE" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_DAILY_MAINTENANCE" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_DAILY_MAINTENANCE"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_MAINT.DAILY_MAINTENANCE'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.248433000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_inter
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
 Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at Fri Sep 13 21:40:12 2019 elapsed 0 00:05:16
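About the ORA-39083/ORA-27477 errors on the APEX scheduler jobs above: on a real migration you would usually remove APEX from the source before the export, as mentioned earlier. A hedged sketch of the usual approach is to run the apxremov.sql script shipped under $ORACLE_HOME/apex; validate it against the APEX installation guide for your version first:

```
-- Sketch only: run as SYSDBA on the source database, before the expdp step
SQL> @?/apex/apxremov.sql
```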

After the import is done, we can check the data in the destination PDB, PDB2.

SQL> alter session set container =pdb2;
 Session altered.

SQL> select file_name from dba_data_files;

 FILE_NAME
--------------------------------------------------------
 /u01/app/oracle/oradata/cdb1/pdb2/sysaux01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/system01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf
 
SQL> select count(1) from mufalani.test1;

COUNT(1)
------------------------
 38

SQL> select count(1) from mufalani.test2;
 COUNT(1)
------------------------
42
 
 SQL> show con_name;

 CON_NAME
------------------------
 PDB2
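One last detail worth remembering (not shown in the listing above): the transported tablespaces arrive read only, exactly as they were exported. If the application will write to them, switch them back to read write on the destination (and on the source, if it will remain in use):

```
SQL> alter tablespace ts_mufalani_01 read write;
SQL> alter tablespace ts_mufalani_02 read write;
```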
 
So, this is one of the most used techniques to migrate databases to Oracle 12cR1. You can find more information in the Oracle documentation:
https://docs.oracle.com/database/121/ADMIN/transport.htm#GUID-1901E9C3-8FCE-4D4E-AB65-34D703474E52
and also on Mike Dietrich blog – https://mikedietrichde.com/tag/transportable-tablespaces/

 

This article was written by André Ontalba and me.
 

Disclaimer: "The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed."

 


Generating an export from a Data Guard database using NETWORK_LINK.
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0


This article explains how to export from a Data Guard physical standby database using NETWORK_LINK.
 
The Physical Standby database must be opened in “READ ONLY” mode.

Steps to execute to export from Physical Standby Database

Physical Standby Database

-- Connect to the Physical Standby database and check its status

 

 

[oracle@vm1 admin] sqlplus / as sysdba


SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
PHYSTBY MOUNTED

-- Cancel managed recovery and open database in "READ ONLY" mode.


SQL> alter database recover managed standby database cancel;



SQL> alter database open read only;




-- Verify database status

SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
VTABOLDG OPEN


SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY


 

“Non Standby” Database           

On the same machine I created a new instance just to serve as a bridge to perform the procedure.

-- create DB Link, Oracle Directory.


[oracle@vm1 admin] sqlplus / as sysdba


SQL> create database link exp connect to system identified by oracle using 'DG_VTABOL';

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
--------------------
DP

SQL> select db_unique_name from v$database@exp;

DB_UNIQUE_NAME
--------------------
VTABOL

SQL> create directory datapump as '/tmp';

 


Use NETWORK_LINK with the database link created above to connect to the Physical Standby database:

 

[oracle@vm1 admin] expdp DIRECTORY=DATAPUMP network_link=EXP DUMPFILE=DG_EXPDP.dmp SCHEMAS=DBA_DISCOVER,ITAMARATI,VENDASEXTERNAS,VTABOL,ABSTRATOS logfile=expdp.log PARALLEL=4



Export: Release 12.1.0.2.0 - Production on Fri Sep 6 17:32:54 2019

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

FLASHBACK automatically enabled to preserve database integrity.

Starting "SYS"."SYS_EXPORT_SCHEMA_02":  sys/******** AS SYSDBA DIRECTORY=DATAPUMP network_link=EXP DUMPFILE=AWS.dmp SCHEMAS=DBA_DISCOVER,ITAMARATI,VENDASEXTERNAS,VTABOL,ABSTRATOS logfile=expdp.log PARALLEL=4

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 84.29 GB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM

Processing object type SCHEMA_EXPORT/DB_LINK

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC

Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION

Processing object type SCHEMA_EXPORT/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC

Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION

Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW

Processing object type SCHEMA_EXPORT/JOB

Processing object type SCHEMA_EXPORT/REFRESH_GROUP

Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ




Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:

  /tmp/DG_EXPDP.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_02" completed with 0 error(s) at Fri Sep 6 19:55:35 2019 elapsed 0 02:22:29


 

On Physical Standby Database

[oracle@vm1 admin] sqlplus / as sysdba


SQL> shutdown immediate



SQL> startup mount




SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;




SQL> SELECT
       ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
       (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN
       (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
       (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN
       (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
  WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;


    Thread Last Sequence Received Last Sequence Applied Difference

---------- ---------------------- --------------------- ----------

         1                  72153                 72153          0

         2                  67021                 67021          0




SQL>

 

I hope this helps you!!!

 

Andre Luiz Dutra Ontalba 
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and be useful.”


Configuring TDE on a PDB with PLUG and UNPLUG Option
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Configuring TDE on a PDB with PLUG and UNPLUG Option

 
This article describes the process of configuring TDE on a CDB, unplugging PDB1 from the CDB1 instance, and plugging it into the CDB2 instance on the same machine under the new name PDB2.

 

1 – Configure TDE Source – CDB (cdb1)
A) Create directory

 

[root@vm1 ~]# mkdir -p /etc/ORACLE/WALLETS/cdb1
[root@vm1 ~]# mkdir -p /etc/ORACLE/WALLETS/cdb2
[root@vm1 ~]# chown -R oracle:oinstall /etc/ORACLE
[root@vm1 ~]# chmod -R 755 /etc/ORACLE
B) Edit sqlnet.ora and configure the Wallet

 

ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=FILE)
(METHOD_DATA=
(DIRECTORY=/etc/ORACLE/WALLETS/$ORACLE_SID/)))
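With $ORACLE_SID in the DIRECTORY clause, each instance resolves its own wallet path. After logging in, the resolved location and wallet status can be verified with a quick query (a sketch; the same view is used again later in this article):

```sql
-- Shows which wallet directory this instance resolved and its current status
SELECT wrl_parameter, status, wallet_type
  FROM v$encryption_wallet;
```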
C) Configure Database – cdb1

 

[oracle@vm1 ~]$ . oraenv <<< cdb1
ORACLE_SID = [db01] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vm1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 3 20:58:03 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
SQL>
SQL>
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO


SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/cdb1' IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle container=all;

keystore altered.


SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.


SQL> alter session set container=pdb1;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.

SQL> set linesize 500
SQL> select * from v$encryption_wallet;

WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_OR FULLY_BAC CON_ID
-------------------- -------------------------------------------------- ------------------------------ -------------------- --------- --------- ----------
FILE /etc/ORACLE/WALLETS/cdb1/ OPEN PASSWORD SINGLE NO 0


SQL> create tablespace TESTE
datafile '/u01/app/oracle/oradata/cdb1/pdb1/teste.dbf' size 10m
encryption using 'AES256'
default storage (encrypt);

Tablespace created.
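To confirm the new tablespace really is encrypted, a quick check against the 12c dictionary views could look like this (a sketch; run it inside PDB1):

```sql
-- Lists encrypted tablespaces and the algorithm in use (12c views)
SELECT t.name, e.encryptionalg, e.encryptedts
  FROM v$tablespace t
  JOIN v$encrypted_tablespaces e ON t.ts# = e.ts#;
```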

D) Export Key from Source – PDB(pdb1)

 


SQL>ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "oracle" TO '/home/oracle/export.p12' IDENTIFIED BY oracle;

keystore altered.

SQL> !
[oracle@vm1 ~]$ ls -ltr export.p12
-rw-r--r--. 1 oracle oinstall 2612 Sep 3 21:08 export.p12
[oracle@vm1 ~]$ exit
 
E) Unplug and DROP PDB(pdb1)

 

SQL> alter pluggable database close immediate;

Pluggable database altered.

SQL> conn / as sysdba
Connected.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1 keep datafiles;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

2 – Configure TDE Destination CDB(cdb2)

 

[oracle@vm1 ~]$ . oraenv <<< cdb2
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vm1 ~]$ ls /etc/ORACLE/WALLETS/
cdb1 cdb2
[oracle@vm1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 3 21:13:12 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1241513984 bytes
Fixed Size 2923872 bytes
Variable Size 452985504 bytes
Database Buffers 771751936 bytes
Redo Buffers 13852672 bytes
Database mounted.
Database opened.
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
SQL>
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/cdb2' IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.

SQL>
SQL>
F) PLUG PDB(pdb2) using Unplugged pdb1

 

SQL> create pluggable database pdb2 as clone using '/home/oracle/pdb1.xml'
2 file_name_convert=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

Pluggable database created.

SQL> alter pluggable database pdb2 open;

Warning: PDB altered with errors.
G) IMPORT KEY from Source PDB(pdb1) to Destination PDB(pdb2)

 

SQL> alter session set container=pdb2;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "oracle" FROM '/home/oracle/export.p12' IDENTIFIED BY oracle WITH BACKUP;

keystore altered.

SQL> shut immediate;
Pluggable Database closed.
SQL> startup
Pluggable Database opened.

 

H) Validate PDB is fully integrated with CDB2

 

 

SQL> conn / as sysdba
Connected.

SQL> select message,status from pdb_plug_in_violations where status <> 'RESOLVED';
SQL> /

no rows selected

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
4 PDB2 READ WRITE NO
SQL>

 

I hope this helps you!!!
 
This article was written by Andre Ontalba and Rodrigo Mufalani
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and be useful.”


Observer, Quorum
Category: Database Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

Observer, Quorum

This article closes the series on DG and Fast-Start Failover, in which I covered in more detail how a case of isolation can lead to the shutdown of your healthy, running primary database: the “ORA-16830: primary isolated from fast-start failover partners”.
In the first article, I wrote about how one simple detail dramatically impacts the reliability of your MAA environment. Where you place your Observer in a DG environment (when Fast-Start Failover is in use) plays a central role in case of outages, and you can face primary isolation and shutdown. Besides that, there is no clear documentation to help you weigh the pros and cons when choosing the right place for the Observer. You can read more in my article here.
In the second article, I wrote about a new feature that helps you stay more protected and cover more scenarios for Fast-Start Failover/DG. Using multiple observers you can remove the single point of failure and place one Observer on each side of your environment (primary, standby, and a third site). You can read more in my article here.
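For reference, multiple observers (a 12.2 feature) are started through DGMGRL; the extra observers from the example environment could be started like this (a sketch — the observer name, connect identifier, and file paths are illustrative and follow the example setup):

```text
DGMGRL> CONNECT sys@orclp
DGMGRL> START OBSERVER dbobsp IN BACKGROUND
        FILE IS /u01/app/oracle/fsfo_dbobsp.dat
        LOGFILE IS /u01/app/oracle/observer_dbobsp.log;
```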
In this last article I discuss how, even using all of these features, there is no perfect solution. Another point discussed here is how (maybe) Oracle could improve this. Below I show in more detail how, even with multiple observers, a healthy primary database still gets shut down. Unfortunately, it involves a lot of technical detail and long log output, but you can jump straight to the end for the discussion of how this could be improved.

 

Fast-Start Failover and Multiple Observers

Because of the design of Fast-Start Failover, the Broker, and DG, even with multiple observers the decision (whether or not to fail over the database) is still based on the report of just one observer. The others are merely backups for the master; what they see does not count when a failover scenario hits the environment. Even if the primary database can receive connections from the other two observers, if it cannot reach the master observer (and the standby) it decides to shut down because it is “isolated”.
Look at the example below, where I have three observers (one in each site and a third one in the “cloud”):

 

 

The image above translates into the following Broker configuration, where the master observer is “dbobss” (which resides in the standby datacenter):

 

DGMGRL> show fast_start failover;

Fast-Start Failover: ENABLED

  Threshold:          240 seconds
  Target:             orcls
  Observers:      (*) dbobss
                      dbobsp
                      dbobst
  Lag Limit:          30 seconds (not in use)
  Shutdown Primary:   TRUE
  Auto-reinstate:     TRUE
  Observer Reconnect: 10 seconds
  Observer Override:  FALSE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Write Errors          YES

  Oracle Error Conditions:
    (none)

DGMGRL>

 

Note that in this case I set the fast-start failover threshold to 240 seconds just to have more time to show the logs. But be aware that this parameter defines how long your system waits/freezes before proceeding with the failover in case of system isolation or loss of the primary.
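This threshold is an ordinary Broker configuration property, so it could be adjusted from DGMGRL like this (a sketch; the value is illustrative):

```text
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 240;
DGMGRL> SHOW CONFIGURATION FastStartFailoverThreshold;
```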
Here, to simulate side isolation and the same behavior as in the first article, I shut down the network from the standby (the link that talks to the primary database), from the master observer (just the network that talks to the primary database), and, for the other observers, the network for standby communication. The image below reflects this:

 

 

After that, the Broker log on the primary starts to report a lot of information, which I will discuss below. You can click here to check the full text. From this log you can see that the primary detected the loss of communication with the standby at 20:14:06.504 and in 4 minutes (240 seconds) would trigger the failover. But the most important part is marked below:

 

2019-05-05 20:14:36.506                      LGWR: still awaiting FSFO ack after 30 seconds

2019-05-05 20:14:41.968                      DMON: Creating process FSFP

2019-05-05 20:14:44.976                      FSFP: Process started

2019-05-05 20:14:44.976                      FSFP: The ping from current master observer (No. 1) to primary database timed out.

2019-05-05 20:14:44.976                      FSFP: Preparing to switch master observer from observer 1 to observer 2

2019-05-05 20:14:44.976                      FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:14:44.976                      DMON: FSFP successfully started

2019-05-05 20:14:44.977                      FSFP: persisting FSFO state flags=0x10040001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:14:44.979                      FSFP: Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

2019-05-05 20:14:44.980                      FSFP: NET Alias translated: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcls) (UR=A)))

2019-05-05 20:14:44.980                      FSFP: Net link using RFIUPI_DB_CDESC, site=2, inst=0

2019-05-05 20:14:44.981                      FSFP: Connect to member orcls using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbstb_dg)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(UR=A)(SERVICE_NAME=orcls_DGB)))

2019-05-05 20:14:44.981                      FSFP: 0 x 10 seconds network retry READY

2019-05-05 20:14:51.506                      LGWR: still awaiting FSFO ack after 45 seconds

2019-05-05 20:14:55.120                      FSFP: Failed to connect to remote database orcls. Error is ORA-12543

2019-05-05 20:14:55.120                      FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:14:55.120                      FSFP: persisting FSFO state flags=0x40001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:15:06.507                      LGWR: still awaiting FSFO ack after 60 seconds

2019-05-05 20:15:10.124                      FSFP: The ping from current master observer (No. 1) to primary database timed out.

2019-05-05 20:15:10.124                      FSFP: Preparing to switch master observer from observer 1 to observer 2

2019-05-05 20:15:10.124                      FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:15:10.124                      FSFP: persisting FSFO state flags=0x10040001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:15:10.128                      FSFP: Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

2019-05-05 20:15:10.128                      FSFP: NET Alias translated: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcls) (UR=A)))

2019-05-05 20:15:10.128                      FSFP: Net link using RFIUPI_DB_CDESC, site=2, inst=0

2019-05-05 20:15:10.128                      FSFP: Connect to member orcls using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbstb_dg)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(UR=A)(SERVICE_NAME=orcls_DGB)))

2019-05-05 20:15:10.128                      FSFP: 0 x 10 seconds network retry READY

2019-05-05 20:15:14.979                      DMON: A Fast-Start Failover target switch is necessary because the primary cannot reach the Fast-Start Failover target standby database

2019-05-05 20:15:14.980                      DMON: A target switch was not attempted because the observer has not pinging primary recently.

2019-05-05 20:15:21.246                      FSFP: Failed to connect to remote database orcls. Error is ORA-12543

2019-05-05 20:15:21.247                      FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:15:21.247                      FSFP: persisting FSFO state flags=0x40001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

 

Above you can see that the Broker on the primary detected that the master observer was down and tried to switch to another observer. Unfortunately, since it could not connect to the standby database, the switch was impossible. Note that the event “SET SWOB INPRG” was triggered but was followed by “CLR SWOB INPRG” because of “Failed to connect to remote database orcls. Error is ORA-12543”. If you compare with the log from the previous article (when I changed the master observer manually), here we don’t see the “SET OBID” or “OBSERVED” events.
So, basically, because the standby database was unreachable, the primary database could not swap the master observer (even though it could receive connections from the others). This behavior does not change even if you set big values for the “FastStartFailoverThreshold”, “OperationTimeout”, and “CommunicationTimeout” properties.
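The manual master-observer switch mentioned above (from the previous article) uses the 12.2 DGMGRL command below; it only succeeds while both primary and standby are reachable, which is exactly what fails in this scenario (a sketch; the observer name comes from the example environment):

```text
DGMGRL> SET MASTEROBSERVER TO dbobsp;
```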

Quorum

In a DG scenario with Fast-Start Failover enabled, you can hit a shutdown of your healthy primary database because it thinks it is isolated when it loses communication with the standby and the observer. Even when you add multiple observers, the behavior does not change.
By the current design of DG, this is 100% correct: it is impossible for either side to know whether it is isolated or not. The primary, having lost the connection to the master observer and the standby, shuts down because it can’t guarantee the transactions. And the standby (if alive and connected to the master observer) fails over to become the next primary. The problem is that even with multiple observers, where you could have an odd number of votes, you still face isolation if the minimal partnership (standby + master observer) votes for itself (even if it is the isolated side). As discussed in the first article, where you put your observer is very important, and you need to weigh the pros and cons of your decision.
Going deeper: when you use Fast-Start Failover, your DG starts to operate in sync (even in Max Availability), and at the first sign of communication failure the primary database freezes and does not accept more transactions. No records are stored, and this is why, even with multiple observers, the primary can’t switch to another one. The database itself is blocked from storing that change.
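You can watch this state from the primary itself: the FS_FAILOVER_* columns of V$DATABASE report the Fast-Start Failover status as the database sees it (a sketch):

```sql
-- Fast-Start Failover state as seen by the database itself
SELECT fs_failover_status,
       fs_failover_current_target,
       fs_failover_observer_present
  FROM v$database;
```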
One option that could close this gap is for the Broker to use a quorum (or voting system) to decide whether it is OK to proceed with the failover. In the scenario I showed before, where the primary still had connections from the other two observers, the shutdown of the healthy primary would not occur, because it has more votes/connections from observers (2) than the standby (just one). Unfortunately, there is no perfect solution: if an outage occurs on the primary side and you lose connection with most of the observers, the standby can’t fail over because it doesn’t know whether it has the votes to do so.
This could be a future, additional option for Fast-Start Failover environments. But the key, fundamental point remains: even a quorum will not work correctly if you still put all the observers on just one side. And this becomes even more critical now that you can have hybrid clouds, with one cloud for databases and another for applications.
The idea behind writing these articles was to show the problem and try to fill the gap in guidance about where to place your observer. There is no clear doc laying out the pros and cons of each option. Unfortunately, even with the new features, a gap still exists that needs to be covered to improve reliability in some scenarios. There is no perfect solution, but it is important to know what you can do to reach MAA at the highest level.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and be useful to the community.”

 

