Database
How to manually drop/recreate MGMTDB Database – Oracle Rac One Node 12.2
Category: Database Author: Leonardo Santos Lopes Date: 5 years ago Comments: 0

How to manually drop/recreate MGMTDB Database - Oracle Rac One Node 12.2

Hi everyone,
Today I would like to share my experience with dropping and recreating the MGMTDB in order to purge the repository and free some space in an ASM disk group.
I was about to migrate from an Oracle Database 12.2 single instance to an Oracle Database 12.2 RAC One Node when I found something bizarre: the "+DATA" diskgroup showed 59 GB of used space on a fresh installation (it had spent only a few weeks waiting for the "GO" to migrate, with no user database on it).
After performing the actions below, I was able to get back about 25 GB.
Documents used in this procedure:
  • How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
  • 12.2: How to Create GI Management Repository (Doc ID 2246123.1)
  • MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)
Steps:
[oracle@lab-dev-datad1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/base

[oracle@lab-dev-datad1 ~]$ ps -ef | grep pmon
oracle   19280     1  0 08:21 ?        00:00:00 asm_pmon_+ASM1
oracle   21769     1  0 08:21 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   22655 29118  0 11:05 pts/1    00:00:00 grep --color=auto pmon

[oracle@lab-dev-datad1 ~]$ asmcmd lsdg
[oracle@lab-dev-datad1 ~]$ asmcmd -p

ASMCMD [+] > ls -l
State    Type    Rebal  Name
MOUNTED  EXTERN  N      DATA/
MOUNTED  EXTERN  N      FRA/
ASMCMD [+] > du FRA
Used_MB      Mirror_used_MB
   2268                2268
ASMCMD [+] > du DATA
Used_MB      Mirror_used_MB
  59292               59292
ASMCMD [+] > cd DATA
ASMCMD [+DATA] > ls -l
Type      Redund  Striped  Time             Sys  Name
                                            Y    ASM/
                                            N    _mgmtdb/
                                            Y    dev-data-clu/
PASSWORD  UNPROT  COARSE   JUL 09 16:00:00  N    orapwasm => +DATA/ASM/PASSWORD/pwdasm.256.1013185265
PASSWORD  UNPROT  COARSE   JUL 09 16:00:00  N    orapwasm_backup => +DATA/ASM/PASSWORD/pwdasm.257.1013185629
ASMCMD [+DATA] > du ASM
Used_MB      Mirror_used_MB
      0                   0
ASMCMD [+DATA] > du _mgmtdb/
Used_MB      Mirror_used_MB
  57776               57776
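Those two `du` listings make the problem obvious. As a quick back-of-the-envelope check (a plain bash sketch using the values captured above):

```shell
#!/bin/bash
# Values taken from the "asmcmd du" output above, in MB
data_used_mb=59292     # total used in +DATA
mgmtdb_used_mb=57776   # used by the _mgmtdb directory alone

# Integer percentage of +DATA consumed by the GIMR repository
pct=$(( mgmtdb_used_mb * 100 / data_used_mb ))
echo "MGMTDB uses ${mgmtdb_used_mb} MB, i.e. ${pct}% of the space used in +DATA"
```

In other words, on this fresh cluster virtually all of the used space in "+DATA" was the GIMR repository.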
 
Stopping the required resources on both nodes:
As the root user, from the Grid Home:

[root@lab-dev-datad1 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'lab-dev-datad1'
CRS-2677: Stop of 'ora.crf' on 'lab-dev-datad1' succeeded

[root@lab-dev-datad2 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'lab-dev-datad2'
CRS-2677: Stop of 'ora.crf' on 'lab-dev-datad2' succeeded

[root@lab-dev-datad1 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init
[root@lab-dev-datad2 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init
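The commands above follow the same pattern on each node, so they can be scripted. A minimal dry-run sketch (the node names are the ones from this environment; CRSCTL defaults to echo, so nothing is executed until you point it at the real binary and run it as root on each node):

```shell
#!/bin/bash
# Dry-run sketch: print the crsctl commands needed to stop and disable
# ora.crf on every node. Replace CRSCTL with the real binary path
# (e.g. /u01/oracle/base/product/12.2.0/grid/bin/crsctl) and run it as
# root, node by node, to actually execute the commands.
CRSCTL="${CRSCTL:-echo crsctl}"
NODES="lab-dev-datad1 lab-dev-datad2"

for node in $NODES; do
  echo "# on ${node}:"
  $CRSCTL stop res ora.crf -init
  $CRSCTL modify res ora.crf -attr ENABLED=0 -init
done
```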
 
Validate the MGMTDB database status:

[root@lab-dev-datad1 ~]# /u01/oracle/base/product/12.2.0/grid/bin/srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node lab-dev-datad1
 
Deleting MGMTDB using DBCA in silent mode:
As the oracle user:
[oracle@lab-dev-datad1 ~]$ /u01/oracle/base/product/12.2.0/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
 Connecting to database
 4% complete
 9% complete
 14% complete
 19% complete
 23% complete
 28% complete
 47% complete
 Updating network configuration files
 52% complete
 Deleting instance and datafiles
 76% complete
 100% complete
 Look at the log file "/u01/base/cfgtoollogs/dbca/_mgmtdb.log" for further details.
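After the delete completes, it is worth confirming that the MGMTDB instance is really gone. A hedged sketch of the check (the process list here is a canned sample so the snippet runs standalone; in real life feed it with `ps -ef | grep '[p]mon'`, as at the top of the post):

```shell
#!/bin/bash
# Canned sample of the process list after the drop; only ASM's pmon remains.
PS_OUTPUT="oracle   19280     1  0 08:21 ?        00:00:00 asm_pmon_+ASM1"

# The mdb_pmon_-MGMTDB process must no longer appear.
if echo "$PS_OUTPUT" | grep -q 'mdb_pmon'; then
  result="MGMTDB instance still running"
else
  result="MGMTDB instance is gone"
fi
echo "$result"
```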
 
Validate the current space in the "+DATA" diskgroup:

[oracle@lab-dev-datad1 ~]$ asmcmd lsdg

In Oracle Database 12c Release 2 (12.2), recreating the MGMTDB requires a Perl script (mdbutil.pl), which you can download from the following note:
  • MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)
Recreating MGMTDB:
Listing the script options:
[oracle@lab-dev-datad1 ~]$ ./mdbutil.pl -h
 Usage:
      Create/Enable MGMTDB & CHM
        mdbutil.pl --addmdb --target=
      Move MGMTDB to another location
        mdbutil.pl --mvmgmtdb --target=
      Check MGMTDB status
        mdbutil.pl --status
mdbutil.pl OPTIONS
    --addmdb            Create MGMTDB/CHM and reconfigure related functions
    --mvmgmtdb          Migrate MGMTDB to another location
    --target='+DATA'    MGMTDB Disk Group location
    --status            Check the CHM & MGMTDB status
    --help              Display this help and exit
    --debug             Verbose commands output/trace
Example:
    Create/Enable MGMTDB:
      mdbutil.pl --addmdb --target=+DATA
    Move MGMTDB to another location:
      mdbutil.pl --mvmgmtdb --target=+REDO
    Check CHM:
      mdbutil.pl --status
 
Launch the MGMTDB creation:

[oracle@lab-dev-datad1 ~]$ ./mdbutil.pl --addmdb --target=+DATA
mdbutil.pl version : 1.98
2019-10-04 11:32:18: I Starting To Configure MGMTDB at +DATA…
2019-10-04 11:32:21: I Container database creation in progress… for GI 12.2.0.1.0
2019-10-04 11:42:13: I Plugable database creation in progress…
2019-10-04 11:47:20: I Executing "/tmp/mdbutil.pl --addchm" on lab-dev-datad1 as root to configure CHM.
root@lab-dev-datad1's password:
2019-10-04 11:49:50: I MGMTDB & CHM configuration done!
root@lab-dev-datad2's password:
2019-10-04 11:49:50: I MGMTDB & CHM configuration done!

Re-enabling and restarting the ora.crf resource on both nodes:
As the root user, from the Grid Home:

 

[root@lab-dev-datad1 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init

[root@lab-dev-datad1 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl start res ora.crf -init
 CRS-2672: Attempting to start 'ora.crf' on 'lab-dev-datad1'
 CRS-2676: Start of 'ora.crf' on 'lab-dev-datad1' succeeded

[root@lab-dev-datad2 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init

[root@lab-dev-datad2 ~]# /u01/oracle/base/product/12.2.0/grid/bin/crsctl start res ora.crf -init
 CRS-2672: Attempting to start 'ora.crf' on 'lab-dev-datad2'
 CRS-2676: Start of 'ora.crf' on 'lab-dev-datad2' succeeded

[oracle@lab-dev-datad1 ~]$ /u01/oracle/base/product/12.2.0/grid/bin/srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node lab-dev-datad1

 

Validate the diskgroup size after the MGMTDB database creation:

[oracle@lab-dev-datad1 ~]$ asmcmd lsdg

[oracle@lab-dev-datad1 ~]$ asmcmd -p
ASMCMD [+] > ls -l
State    Type    Rebal  Name
MOUNTED  EXTERN  N      DATA/
MOUNTED  EXTERN  N      FRA/
ASMCMD [+] > du DATA
Used_MB      Mirror_used_MB
  34256               34256
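Converting that figure to gigabytes (a one-liner, assuming 1 GB = 1024 MB; the starting value of 59,292 MB is from the first `du DATA` above):

```shell
#!/bin/bash
# "du DATA" after recreating MGMTDB reported 34256 MB used (was 59292 MB)
awk -v mb=34256 'BEGIN { printf "+DATA now uses %.1f GB (down from 57.9 GB)\n", mb / 1024 }'
```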

 

Note that an Oracle 12.2 Standalone Cluster requires at least 37.6 GB of storage just for the MGMTDB:

Oracle Clusterware Storage Space Requirements (12.2)

See you in the next post!

 

Leonardo Lopes

Great opportunity to upgrade your career
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Great opportunity to upgrade your career

Oracle University is offering 25% off Oracle Learning Subscriptions and career-enhancing Oracle Certification Vouchers.

 

This offer covers training on: 
  • Oracle Cloud (IaaS/PaaS/SaaS)
  • Oracle Database
  • Oracle Systems
  • MySQL
  • Java Programming
  • Oracle Middleware
  • Oracle Applications
  • Oracle Industries
To access this discount, click here.
 
We hope we helped !!!
 
Andre Ontalba and Rodrigo Mufalani

 

Disclaimer: "The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it could reach a generic audience and remain useful."

 


Oracle Open World 2019
Category: Cloud,Database,Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Oracle Open World 2019

 
 
First day of OOW19 and we had big announcements of new services and products.

Now, as Larry said, "Oracle Cloud is Always Free for Everyone".
 
Oracle’s Free Tier program has two components:
  • Always Free services, which provide access to Oracle Cloud services for an unlimited time
  • Free Trial, which provides $300 in credits for 30 days to try additional services and larger shapes
The new Always Free program includes the essentials users need to build and test applications in the cloud: Oracle Autonomous Database, Compute VMs, Block Volumes, Object and Archive Storage, and Load Balancer. Specifications include:
  • 2 Autonomous Databases (Autonomous Data Warehouse or Autonomous Transaction Processing), each with 1 OCPU and 20 GB storage
  • 2 Compute VMs, each with 1/8 OCPU and 1 GB memory
  • 2 Block Volumes, 100 GB total, with up to 5 free backups
  • 10 GB Object Storage, 10 GB Archive Storage, and 50,000 API requests per month
  • 1 Load Balancer, 10 Mbps bandwidth
  • 10 TB Outbound Data Transfer per month
  • 500 million ingestion datapoints and 1 billion datapoints for the Monitoring Service
  • 1 million Notification deliveries per month and 1,000 emails per month
More information: click here.
Another release was Oracle Autonomous Linux, the "World's First Autonomous Operating System".
Introducing Oracle OS Management Service
Oracle Autonomous Linux, in conjunction with Oracle OS Management Service, uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security and availability and frees up critical IT resources to tackle more strategic initiatives.
Eliminate manual OS management — World’s first autonomous operating system in the cloud to deliver automated patching, updates, and tuning without human intervention. Based on a preconfigured Oracle Linux image; automated daily package updates; enhanced OS parameter tuning and OS diagnostics gathering.
Deliver automatic, in-depth protection at all levels — 100 percent hands-off automatic security updates daily to the Linux kernel and key user space libraries. This requires no downtime along with protection from both external attacks and malicious internal users. Known Exploit Detection provides automated alerts if anyone attempts to exploit a vulnerability that has been patched by Oracle.
Provides always-on availability — Includes automated patching and upgrades while the system is running, eliminating unnecessary downtime for users and the system.
More information: click here.
 
Another announcement long awaited by customers: the new partnership with VMware.
 
For the first time, Oracle will officially support Oracle products running on VMware virtualized environments, and VMware will support running VMware workloads on Oracle Cloud. A WIN/WIN for customers!
 

 

 
Running VMware workloads in Oracle Cloud
With this announcement, Oracle becomes a partner in the VMware Cloud Provider Program and Oracle Cloud VMware Solution will be sold by Oracle and its partners. The solution will be based on VMware Cloud Foundation and will deliver a full software-defined data center (SDDC) stack including VMware vSphere, NSX, and vSAN. Through consistent infrastructure and operations, customers will be able to migrate and modernize applications, seamlessly moving workloads between on-premise environments and Oracle Cloud.
Customers will be able to easily use Oracle services, such as Oracle Autonomous Database, Oracle Exadata Cloud Service and Oracle Database Cloud, which run in the same cloud data centers, on the same networks, with a consistent portal and APIs. They will also be able to leverage Oracle's rapidly expanding footprint of global regions to scale globally without needing to establish their own data centers. Oracle will provide administrative access to the underlying physical servers, enabling a level of control previously only possible on premises, and customers will be able to use VMware vCenter to manage both their on-premises clusters and Oracle Cloud-based SDDCs through a single pane of glass. Oracle will also provide first-line technical support for this solution.
To learn more about the offering visit: https://www.oracle.com/cloud/VMware
 
We also had the Exadata X8M release.
The Exadata X8M uses Remote Direct Memory Access (RDMA) to let the database access the storage server memory directly. And the memory, in this case, is special: the X8M uses Intel Optane DC Persistent Memory modules (DIMM / NVDIMM, Non-Volatile DIMM, providing PMEM, Persistent Memory) attached directly to the storage servers, and these can be accessed directly from the database using RDMA through the RoCE network. Let's look at the details.
Our contributor Fernando Simon posted an article about the Exadata X8M a few hours ago; it can be viewed at http://www.fernandosimon.com/blog/exadata-x8m/.
Soon this article and more details about the new services and products will be available here on the Portal.
 
 Andre Luiz Dutra Ontalba
 

Disclaimer: "The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it could reach a generic audience and remain useful."


DML over Standby for Active Data Guard in 19c
Category: Database Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

DML over Standby for Active Data Guard in 19c

With the new 19c version, Data Guard received some attention, and now we can run DML on the standby and have it redirected to the primary database. It is not hard to implement, but unfortunately there is not much information about it in the docs.
As a training exercise, I tested this new feature and want to share some information about it. First, the environment that I used (which also reflects the requirements):
  • Primary and Standby databases running 19c.
  • Data Guard in Maximum Availability.
  • Active Data Guard enabled.
Remember that the idea of DML over the standby is to use it in cases where your reporting application needs to update a few tables and records (like audit logins) while processing data on the standby. The DML volume is (and must remain) low. At this point there is no attempt to provide multiple active-active datacenters/sites for your database. If you start to execute a lot of DML on the standby side, you can impact the primary database, on top of the fact that you can amplify lock and concurrency problems.

 

Theoretical info about DML redirection on the standby can be found here and here. The basic workflow is shown below:
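The workflow diagram from the original post is not reproduced here; as my own rough sketch of the same flow (based on Oracle's description of DML redirection, not Oracle's diagram):

```
  client session on the STANDBY issues DML
        |
        v
  standby (open READ ONLY WITH APPLY) transparently redirects the
  statement to the PRIMARY over an internal link (ADGREDIRECT)
        |
        v
  primary executes the DML and generates redo
        |
        v
  the session on the standby waits until that redo is shipped back
  and applied locally, so it can read its own change, then returns
```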

 

 

 

To create the DG configuration, you can follow the steps available from many sources on the internet; one good example is here. If you want to see what I did, you can check this file, which contains the output of the steps for my config: "Log-Create-PHY-STB-19c.txt".
Before continuing: since there is a lot of interaction between primary and standby here, in the listings below the primary is shown with a BLUE background and the standby with a GREEN one.
The first configuration step is to guarantee that everything is OK with the DG config. The main thing is to check that "Real Time Query" is ON:
 
DGMGRL> show configuration

Configuration - dg19b2

  Protection Mode: MaxAvailability
  Members:
  dg19  - Primary database
    dg19s - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 31 seconds ago)

DGMGRL> show database dg19s

Database - dg19s

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 2.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    dg19s

Database Status:
SUCCESS

DGMGRL>

After that, you enable the parameter ADG_REDIRECT_DML on both sides. First the primary:

[oracle@orcl19p ~]$ sqlplus sys/oracle@dg19 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 17:50:38 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL> show parameter adg_redirect_dml

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
adg_redirect_dml                     boolean     FALSE
SQL>
SQL> alter system set adg_redirect_dml = true scope = both sid = '*';

System altered.

SQL>

And on the standby:

[oracle@orcl19s ~]$ sqlplus sys/oracle@dg19s as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 17:51:19 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

SQL> show parameter adg_redirect_dml

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
adg_redirect_dml                     boolean     FALSE
SQL>
SQL> alter system set adg_redirect_dml = true scope = both sid = '*';

System altered.

SQL>

To test the DML redirection, I created a table with some data in the primary database. Here I am connected as SYS and without a PDB; later I will do the same with a PDB and a normal user:

 

SQL> CREATE TABLE testDML(c1 NUMBER, c2 VARCHAR2(50), c3 DATE) TABLESPACE users;

Table created.

SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') FROM testDML;

        C1 TO_CHAR(C3,'DD/M
---------- ----------------
         1 14/04/2019 17:58

SQL>

On the standby, you can see the table and run some DML against it:

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') FROM testDML;

        C1 TO_CHAR(C3,'DD/M
---------- ----------------
         1 14/04/2019 17:58

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PHYSICAL STANDBY

SQL> delete from testDML;

1 row deleted.

SQL> commit;

Commit complete.

SQL>
And in the primary, you can see that the data was deleted:

 

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') FROM testDML;

no rows selected

SQL> SELECT TO_CHAR(sysdate, 'DD/MM/RRRR HH24:MI') FROM dual;

TO_CHAR(SYSDATE,
----------------
14/04/2019 18:01

SQL>

 

Locks
But how does it work in the case of locks? Is there a new event? Anything different? To simulate this, I inserted the data in the primary again and verified on the standby that the data appeared in the table:

 

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') as c3 FROM testDML;

        C1 C3
---------- ----------------
         1 14/04/2019 18:04

SQL>
 
In the primary, I deleted all the data without committing, to keep the rows locked:

 

SQL> delete from testDML;

1 row deleted.

SQL>
After that, on the standby, I tried to update all the rows in the table. But, as expected, my session got stuck:

 

SQL> update testDML set c1 = 2;

 

At the same time, I went to the primary and checked for locks and events:

 

SQL> SELECT username, final_blocking_session, final_blocking_instance, event lockwait, status, machine, service_name , sql_id FROM gv$session WHERE username IS NOT NULL;

USERNAME             FINAL_BLOCKING_SESSION FINAL_BLOCKING_INSTANCE LOCKWAIT                                                         STATUS   MACHINE              SERVICE_NAME                             SQL_ID
-------------------- ---------------------- ----------------------- ---------------------------------------------------------------- -------- -------------------- ---------------------------------------- -------------
SYS                                                                 OFS idle                                                         ACTIVE   orcl19p.oralocal     SYS$BACKGROUND
SYSRAC                                                              SQL*Net message from client                                      INACTIVE orcl19p.oralocal     SYS$USERS
PUBLIC                                                              SQL*Net message from client                                      INACTIVE orcl19s.oralocal     SYS$USERS
SYS                                                                 SQL*Net message to client                                        ACTIVE   orcl19p.oralocal     SYS$USERS                                4dzzccufjscvp
SYS                                      86                       1 enq: TX - row lock contention                                    ACTIVE   orcl19s.oralocal     SYS$USERS                                49b39kubnb2d1

SQL>

SQL> col sql_text format a50
SQL> select sql_text from v$sql where sql_id = '49b39kubnb2d1';

SQL_TEXT
--------------------------------------------------
UPDATE "TESTDML" "A1" SET "C1" = 2

SQL>
As you can see, there is no new event type: the primary database reports a TX row-lock contention coming from a connection originating on the standby database, and you can even see the SQL. Normal behaviour.
But after some time in the lock state, the session on the standby reports a new error: ORA-02049: timeout: distributed transaction waiting for lock, together with ORA-02063: preceding line from ADGREDIRECT. These exist precisely to avoid an eternal wait on the standby.

 

SQL> update testDML set c1 = 2;
update testDML set c1 = 2
*
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock
ORA-02063: preceding line from ADGREDIRECT

SQL>
 
If you do it the opposite way, deleting the row on the standby and not committing, the waiting session on the primary never receives the ORA-02049. The same goes for the session on the standby: it holds the lock until you commit/rollback.
Over PDB
If you want to do the same using a PDB, you need to work a little with services to allow direct connections to the PDB in the physical standby. Basically, you need to create the service and set the role (for it to start) to physical_standby. The tricky part is that you need to run the start command at least once on the primary, because doing so registers the service in the service$ table. Since you are in a DG config running Active Data Guard (this is a requirement, because the standby is open with Real Time Query), you can then start the service on the standby too. I added this info here because, with DML over standby, this configuration will become common.
First, create and start the service in the primary database:

 

[oracle@orcl19p ~]$ srvctl add service -db dg19 -service dgpdb_dml -pdb dgpdb -role physical_standby
[oracle@orcl19p ~]$ srvctl start service -db dg19 -service dgpdb_dml -pdb dgpdb
[oracle@orcl19p ~]$
[oracle@orcl19p ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 18:48:28 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL> alter session set container = DGPDB;

Session altered.

SQL> set linesize 250
SQL> col NAME format a30
SQL> col NETWORK_NAME format a30
SQL> col pdb format a30
SQL> SELECT name, network_name, pdb FROM service$;

NAME                           NETWORK_NAME                   PDB
------------------------------ ------------------------------ ------------------------------
DGPDB                          DGPDB                          DGPDB
dgpdb_dml                      dgpdb_dml                      DGPDB

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0
[oracle@orcl19p ~]$
After that, you do the same on the standby.

 

[oracle@orcl19s ~]$ srvctl add service -db dg19s -service dgpdb_dml -pdb dgpdb -role physical_standby

[oracle@orcl19s ~]$ srvctl start service -db dg19s -service dgpdb_dml -pdb dgpdb

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$ srvctl stop database -d dg19s

[oracle@orcl19s ~]$ srvctl start database -d dg19s

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$ srvctl status service -d dg19s

Service dgpdb_dml is running

[oracle@orcl19s ~]$

 

One thing to add here: I don't know why, but for me it only worked (the service only appeared as available for connections) after I restarted the standby. Since this point is not the focus of this post, you can find good info here (original post by Ivica Arsov) and here.
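If you hit the same situation, a quick way to confirm whether the standby listener is actually publishing the new service is to look for it in `lsnrctl status`. A hedged sketch (the listener output here is a canned sample so the check can be shown offline; on a real system replace it with the output of `lsnrctl status`):

```shell
#!/bin/bash
# Canned sample of what "lsnrctl status" prints once the service registers.
LSNR_STATUS='Service "dgpdb_dml" has 1 instance(s).
  Instance "dg19s", status READY, has 1 handler(s) for this service...'

if echo "$LSNR_STATUS" | grep -q 'Service "dgpdb_dml"'; then
  svc_state="registered"
else
  svc_state="not registered"
fi
echo "dgpdb_dml is ${svc_state} with the listener"
```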
After that you can test the same as before (create table in PDB and populate):

 

[oracle@orcl19p ~]$ sqlplus simon/simon@orcl19p/DGPDB

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 21:37:36 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Sun Apr 14 2019 21:36:05 +02:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL> CREATE TABLE testDML(c1 NUMBER, c2 VARCHAR2(50), c3 DATE) TABLESPACE users;

Table created.

SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL>

 

After that, run some DML on the standby:

 

[oracle@orcl19s ~]$ sqlplus simon/simon@orcl19s/DGPDB

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 21:38:17 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Sun Apr 14 2019 21:37:36 +02:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL>
SQL> delete from testDML;

1 row deleted.

SQL>

 

And if you hit locks, the behaviour is the same as before, but now with the user and con_id information from the connection:

 

SQL> select USERNAME, FINAL_BLOCKING_SESSION, FINAL_BLOCKING_INSTANCE, EVENT LOCKWAIT, STATUS, machine, service_name , sql_id, con_id from gv$session where username is not null;

USERNAME             FINAL_BLOCKING_SESSION FINAL_BLOCKING_INSTANCE LOCKWAIT                                                         STATUS   MACHINE              SERVICE_NAME         SQL_ID            CON_ID
-------------------- ---------------------- ----------------------- ---------------------------------------------------------------- -------- -------------------- -------------------- ------------- ----------
SYS                                                                 OFS idle                                                         ACTIVE   orcl19p.oralocal     SYS$BACKGROUND                              0
SYSRAC                                                              SQL*Net message from client                                      INACTIVE orcl19p.oralocal     SYS$USERS                                   1
SYSRAC                                                              SQL*Net message from client                                      INACTIVE orcl19p.oralocal     SYS$USERS                                   1
PUBLIC                                                              SQL*Net message from client                                      INACTIVE orcl19s.oralocal     SYS$USERS                                   1
SIMON                                   102                       1 enq: TX - row lock contention                                    ACTIVE   orcl19p.oralocal     dgpdb                4sqjaugwcxar0          3
SYS                                                                 SQL*Net message to client                                        ACTIVE   orcl19p.oralocal     SYS$USERS            cssctts2u81n4          1
SIMON                                                               SQL*Net message from client                                      INACTIVE orcl19s.oralocal     SYS$USERS                                   3

7 rows selected.

SQL>
SQL> select username, machine, service_name, program from gv$session where inst_id = 1 and sid = 102;

USERNAME             MACHINE              SERVICE_NAME         PROGRAM
-------------------- -------------------- -------------------- ------------------------------------------------
SIMON                orcl19s.oralocal     SYS$USERS            [email protected] (TNS V1-V3)

SQL>

 

As you can see above, the connection appears in the primary as a normal connection; there is nothing complicated or special to handle. And in the listener log, the connection (coming from the standby when it receives the DML) looks normal too:

 

2019-04-14T22:12:57.977135+02:00

14-APR-2019 22:12:57 * (CONNECT_DATA=(SERVICE_NAME=8106653b19ca6636e053016410ac3c21)(CID=(PROGRAM=oracle)(HOST=orcl19s.oralocal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.16.200.1)(PORT=36782)) * establish * 8106653b19ca6636e053016410ac3c21 * 0

2019-04-14T22:14:02.121149+02:00

14-APR-2019 22:14:02 * (CONNECT_DATA=(SERVER=DEDICATED)(SID=dg19)(CID=(PROGRAM=oracle)(HOST=orcl19s.oralocal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.16.200.1)(PORT=36786)) * establish * dg19 * 0

 

The behaviour is so transparent that you will even see the cursors open in the primary (locking is per row):

 

SQL> create index ixtestDML on testDML(c1) tablespace users;

Index created.

SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

1 row created.

SQL> INSERT INTO testDML(c1, c3) VALUES (2, sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL>

    ###########
    In the standby database
    ###########

    SQL> delete from testDML where c1 = 1;

    1 row deleted.

    SQL>

SQL> update testDML set c1 = 3 where c1 = 2;

1 row updated.

SQL>
SQL> select sql_text, user_name from v$open_cursor where user_name = 'SIMON' and upper(sql_text) like '%TESTDML%';

SQL_TEXT                                                                                             USER_NAME
---------------------------------------------------------------------------------------------------- --------------------
DELETE FROM "TESTDML" "A1" WHERE "A1"."C1"=1                                                         SIMON
update testDML set c1 = 3 where c1 = 2                                                               SIMON

SQL>

 

ORA-16397
If you receive the error ORA-16397: statement redirection from Oracle Active Data Guard standby database to primary database failed, it is linked to the way you connected to the database. If you connect with "/ as sysdba", without username and password, you will hit this error when you try to run the DML. To solve it, connect using username, password and database service:

 

[oracle@orcl19s ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 19:34:45 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);
INSERT INTO testDML(c1, c3) VALUES (1, sysdate)
            *
ERROR at line 1:
ORA-16397: statement redirection from Oracle Active Data Guard standby database
to primary database failed

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0
[oracle@orcl19s ~]$ sqlplus sys/oracle@dg19s as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 19:34:55 2019
Version 19.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0

SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta
Version 19.2.0.0.0
[oracle@orcl19s ~]$

 

ORA-16000
If you disable ADG_REDIRECT_DML, DML attempts on the standby fail with the classic ORA-16000: database or pluggable database open for read-only access:

 

SQL> alter system set ADG_REDIRECT_DML=false scope = both sid = '*';




System altered.




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

INSERT INTO testDML(c1, c3) VALUES (1, sysdate)

            *

ERROR at line 1:

ORA-16000: database or pluggable database open for read-only access







SQL>

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed to reach a general audience and to be useful for the community.”


Full transportable tablespace: Oracle 12c
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 5 years ago Comments: 0

Full transportable tablespace: Oracle 12c

This is one of the coolest features in Oracle 12c, an evolution of the old transportable tablespaces feature. In this how-to article we will show how to use full transportable export/import, introduced in 12cR1, to migrate data from 11g directly into a PDB with simple commands. It can also be driven over a database link, with no dump files at all, but here we will cover the dump file method.
First, connect to the source database and create the tablespaces we will transport. For demo purposes the source and destination live in the same CDB, but the feature is most useful when migrating data between distinct databases.
On PDB1, create two tablespaces, a user, and one table in each tablespace:

 

SQL> alter session set container = pdb1;
 Session altered.

SQL> create tablespace ts_mufalani_01 datafile '/u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani01.dbf' size 10m ;
 Tablespace created.

SQL> create tablespace ts_mufalani_02 datafile '/u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani02.dbf' size 10m ;
 Tablespace created.

SQL> create user mufalani identified by oracle default tablespace ts_mufalani_01;
 User created.

SQL> alter user mufalani quota unlimited on ts_mufalani_01;
 User altered.

SQL> alter user mufalani quota unlimited on ts_mufalani_02;
 User altered.

SQL> create table mufalani.test1 tablespace ts_mufalani_01 as select * from dba_users;
 Table created.

SQL> create table mufalani.test2 tablespace ts_mufalani_02 as select * from v$session;
 Table created.

SQL> alter tablespace ts_mufalani_01 read only;
 Tablespace altered.

SQL> alter tablespace ts_mufalani_02 read only;
 Tablespace altered.

 

Now that the tablespaces are read only, check that the set is self-contained using dbms_tts.transport_set_check; that is, that no object inside it depends on objects in other tablespaces.

 

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ts_mufalani_01,ts_mufalani_02',TRUE);
 PL/SQL procedure successfully completed.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
 no rows selected

 

“no rows selected” is the expected result of this select. Next, create a directory object for the dump file; remember that it will exist only inside PDB1 so far, so we use EZConnect to point the export at PDB1.

 

SQL> create or replace directory home as '/home/oracle' ;
 Directory created.

 SQL> grant read, write on directory home to system;
 Grant succeeded.

 

After creating the directory and granting access on it to user SYSTEM, export using transportable=ALWAYS and full=y; this combination instructs Oracle to use the full transportable tablespace feature.

 

SQL> !expdp system/oracle_4U@vm2:1521/PDB1 directory=home dumpfile=tts.dmp logfile=tts.log transportable=always full=y 
 Export: Release 12.1.0.2.0 - Production on Fri Sep 13 21:17:05 2019
 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/@vm2:1521/PDB1 directory=home dumpfile=tts.dmp logfile=tts.log transportable=always full=y 
 Estimate in progress using BLOCKS method…
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
 Total estimation using BLOCKS method: 4.265 MB
 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/TABLESPACE
 Processing object type DATABASE_EXPORT/PROFILE
 Processing object type DATABASE_EXPORT/SYS_USER/USER
 Processing object type DATABASE_EXPORT/SCHEMA/USER
 Processing object type DATABASE_EXPORT/RADM_FPTM
 Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
 Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
 Processing object type DATABASE_EXPORT/RESOURCE_COST
 Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
 Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
 Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
 Processing object type DATABASE_EXPORT/STATISTICS/MARKER
 Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
 Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
 . . exported "SYS"."KU$_USER_MAPPING_VIEW"               6.109 KB      38 rows
 . . exported "ORDDATA"."ORDDCM_DOCS"                     252.9 KB       9 rows
 . . exported "WMSYS"."WM$CONSTRAINTS_TABLE$"                 0 KB       0 rows
 . . exported "WMSYS"."WM$LOCKROWS_INFO$"                     0 KB       0 rows
 . . exported "WMSYS"."WM$UDTRIG_INFO$"                       0 KB       0 rows
 . . exported "LBACSYS"."OLS$AUDIT_ACTIONS"               5.757 KB       8 rows
 . . exported "LBACSYS"."OLS$DIP_EVENTS"                  5.539 KB       2 rows
 . . exported "LBACSYS"."OLS$INSTALLATIONS"               6.960 KB       2 rows
 . . exported "LBACSYS"."OLS$PROPS"                       6.234 KB       5 rows
 . . exported "SYS"."DAM_CONFIG_PARAM$"                   6.531 KB      14 rows
 . . exported "SYS"."TSDP_PARAMETER$"                     5.953 KB       1 rows
 . . exported "SYS"."TSDP_POLICY$"                        5.921 KB       1 rows
 . . exported "SYS"."TSDP_SUBPOL$"                        6.328 KB       1 rows
 . . exported "SYSTEM"."REDO_DB"                          25.59 KB       1 rows
 . . exported "WMSYS"."WM$BATCH_COMPRESSIBLE_TABLES$"         0 KB       0 rows
 . . exported "WMSYS"."WM$CONS_COLUMNS$"                      0 KB       0 rows
 . . exported "WMSYS"."WM$ENV_VARS$"                      6.015 KB       3 rows
 . . exported "WMSYS"."WM$EVENTS_INFO$"                   5.812 KB      12 rows
 . . exported "WMSYS"."WM$HINT_TABLE$"                    9.453 KB      75 rows
 . . exported "WMSYS"."WM$MODIFIED_TABLES$"                   0 KB       0 rows
 . . exported "WMSYS"."WM$MP_GRAPH_WORKSPACES_TABLE$"         0 KB       0 rows
 . . exported "WMSYS"."WM$MP_PARENT_WORKSPACES_TABLE$"        0 KB       0 rows
 . . exported "WMSYS"."WM$NESTED_COLUMNS_TABLE$"              0 KB       0 rows
 . . exported "WMSYS"."WM$NEXTVER_TABLE$"                 6.375 KB       1 rows
 . . exported "WMSYS"."WM$REMOVED_WORKSPACES_TABLE$"          0 KB       0 rows
 . . exported "WMSYS"."WM$RESOLVE_WORKSPACES_TABLE$"          0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_LOCKING_TABLE$"                 0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_TABLE$"                         0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_TRIGGERS_TABLE$"                0 KB       0 rows
 . . exported "WMSYS"."WM$UDTRIG_DISPATCH_PROCS$"             0 KB       0 rows
 . . exported "WMSYS"."WM$VERSION_HIERARCHY_TABLE$"       5.984 KB       1 rows
 . . exported "WMSYS"."WM$VERSION_TABLE$"                     0 KB       0 rows
 . . exported "WMSYS"."WM$VT_ERRORS_TABLE$"                   0 KB       0 rows
 . . exported "WMSYS"."WM$WORKSPACES_TABLE$"              12.10 KB       1 rows
 . . exported "WMSYS"."WM$WORKSPACE_PRIV_TABLE$"          7.054 KB      10 rows
 . . exported "WMSYS"."WM$WORKSPACE_SAVEPOINTS_TABLE$"        0 KB       0 rows
 . . exported "LBACSYS"."OLS$AUDIT"                           0 KB       0 rows
 . . exported "LBACSYS"."OLS$COMPARTMENTS"                    0 KB       0 rows
 . . exported "LBACSYS"."OLS$DIP_DEBUG"                       0 KB       0 rows
 . . exported "LBACSYS"."OLS$GROUPS"                          0 KB       0 rows
 . . exported "LBACSYS"."OLS$LAB"                             0 KB       0 rows
 . . exported "LBACSYS"."OLS$LEVELS"                          0 KB       0 rows
 . . exported "LBACSYS"."OLS$POL"                             0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLICY_ADMIN"                    0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLS"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLT"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROFILE"                         0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROFILES"                        0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROG"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$SESSINFO"                        0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_COMPARTMENTS"               0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_GROUPS"                     0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_LEVELS"                     0 KB       0 rows
 . . exported "SYS"."AUD$"                                    0 KB       0 rows
 . . exported "SYS"."DAM_CLEANUP_EVENTS$"                     0 KB       0 rows
 . . exported "SYS"."DAM_CLEANUP_JOBS$"                       0 KB       0 rows
 . . exported "SYS"."TSDP_ASSOCIATION$"                       0 KB       0 rows
 . . exported "SYS"."TSDP_CONDITION$"                         0 KB       0 rows
 . . exported "SYS"."TSDP_FEATURE_POLICY$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_PROTECTION$"                        0 KB       0 rows
 . . exported "SYS"."TSDP_SENSITIVE_DATA$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_SENSITIVE_TYPE$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_SOURCE$"                            0 KB       0 rows
 . . exported "SYSTEM"."REDO_LOG"                             0 KB       0 rows
 . . exported "SYSTEM"."SCHEDULER_JOB_ARGS"                   0 KB       0 rows
 . . exported "SYSTEM"."SCHEDULER_PROGRAM_ARGS"           9.515 KB      12 rows
 . . exported "SYS"."AUDTAB$TBS$FOR_EXPORT"               5.953 KB       2 rows
 . . exported "SYS"."DBA_SENSITIVE_DATA"                      0 KB       0 rows
 . . exported "SYS"."DBA_TSDP_POLICY_PROTECTION"              0 KB       0 rows
 . . exported "SYS"."FGA_LOG$FOR_EXPORT"                      0 KB       0 rows
 . . exported "SYS"."NACL$_ACE_EXP"                       9.929 KB       1 rows
 . . exported "SYS"."NACL$_HOST_EXP"                      6.914 KB       1 rows
 . . exported "SYS"."NACL$_WALLET_EXP"                        0 KB       0 rows
 . . exported "WMSYS"."WM$EXP_MAP"                        7.718 KB       3 rows
 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
 
 Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
   /home/oracle/tts.dmp
 
 Datafiles required for transportable tablespace TS_MUFALANI_01:
   /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani01.dbf
 Datafiles required for transportable tablespace TS_MUFALANI_02:
   /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani02.dbf
 Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Fri Sep 13 21:30:32 2019 elapsed 0 00:13:10

 

As we can see, the metadata and some data were exported, and at the end of the log Oracle lists the datafiles required to transport and plug into the other database.
It’s time to copy the datafiles to the destination PDB; in our case it lives in the same CDB, but it could be a different database.

 

SQL> !cp /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani* /u01/app/oracle/oradata/cdb1/pdb2/
On the destination PDB we need a directory object pointing to the same path where we put the dump file, so we simply repeat inside PDB2 the same CREATE DIRECTORY and GRANT that we ran on PDB1.
 
SQL> alter session set container = PDB2;
 Session altered.

SQL> create or replace directory home as '/home/oracle' ;
 Directory created.

SQL> grant read, write on directory home to system;
 Grant succeeded.
With the setup done on PDB2, we can import the metadata and plug in the datafiles. We must use the transport_datafiles parameter, with each file enclosed in single quotes (') and separated by commas (,), as shown below. The output will show some errors related to the APEX component; you can ignore them, as they do not affect this procedure. Keep in mind this is a VM for test purposes; in a real environment you may want to remove APEX before executing this procedure (I have a blog note on that, please check it).
 
SQL> !impdp system/oracle_4U@vm2:1521/PDB2 full=Y dumpfile=tts.dmp directory=home transport_datafiles='/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf','/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf' logfile=import.log
 Import: Release 12.1.0.2.0 - Production on Fri Sep 13 21:34:50 2019
 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
 Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/@vm2:1521/PDB2 full=Y dumpfile=tts.dmp directory=home transport_datafiles=/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf,/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf logfile=import.log 
 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
 Processing object type DATABASE_EXPORT/TABLESPACE
 ORA-31684: Object type TABLESPACE:"TEMP" already exists
 Processing object type DATABASE_EXPORT/PROFILE
 Processing object type DATABASE_EXPORT/SYS_USER/USER
 Processing object type DATABASE_EXPORT/SCHEMA/USER
 Processing object type DATABASE_EXPORT/RADM_FPTM
 Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
 Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
 Processing object type DATABASE_EXPORT/RESOURCE_COST
 Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
 Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
 ORA-31684: Object type DIRECTORY:"HOME" already exists
 Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 . . imported "SYS"."KU$_EXPORT_USER_MAP"                 6.109 KB      38 rows
 Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
 . . imported "ORDDATA"."ORDDCM_DOCS_TRANSIENT"           252.9 KB       9 rows
 . . imported "WMSYS"."E$CONSTRAINTS_TABLE$"                  0 KB       0 rows
 . . imported "WMSYS"."E$LOCKROWS_INFO$"                      0 KB       0 rows
 . . imported "WMSYS"."E$UDTRIG_INFO$"                        0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CONFIG_PARAM$"           6.531 KB      14 rows
 . . imported "SYS"."DP$TSDP_PARAMETER$"                  5.953 KB       1 rows
 . . imported "SYS"."DP$TSDP_POLICY$"                     5.921 KB       1 rows
 . . imported "SYS"."DP$TSDP_SUBPOL$"                     6.328 KB       1 rows
 . . imported "SYSTEM"."REDO_DB_TMP"                      25.59 KB       1 rows
 . . imported "WMSYS"."E$BATCH_COMPRESSIBLE_TABLES$"          0 KB       0 rows
 . . imported "WMSYS"."E$CONS_COLUMNS$"                       0 KB       0 rows
 . . imported "WMSYS"."E$ENV_VARS$"                       6.015 KB       3 rows
 . . imported "WMSYS"."E$EVENTS_INFO$"                    5.812 KB      12 rows
 . . imported "WMSYS"."E$HINT_TABLE$"                     9.453 KB      75 rows
 . . imported "WMSYS"."E$MODIFIED_TABLES$"                    0 KB       0 rows
 . . imported "WMSYS"."E$MP_GRAPH_WORKSPACES_TABLE$"          0 KB       0 rows
 . . imported "WMSYS"."E$MP_PARENT_WORKSPACES_TABLE$"         0 KB       0 rows
 . . imported "WMSYS"."E$NESTED_COLUMNS_TABLE$"               0 KB       0 rows
 . . imported "WMSYS"."E$NEXTVER_TABLE$"                  6.375 KB       1 rows
 . . imported "WMSYS"."E$REMOVED_WORKSPACES_TABLE$"           0 KB       0 rows
 . . imported "WMSYS"."E$RESOLVE_WORKSPACES_TABLE$"           0 KB       0 rows
 . . imported "WMSYS"."E$RIC_LOCKING_TABLE$"                  0 KB       0 rows
 . . imported "WMSYS"."E$RIC_TABLE$"                          0 KB       0 rows
 . . imported "WMSYS"."E$RIC_TRIGGERS_TABLE$"                 0 KB       0 rows
 . . imported "WMSYS"."E$UDTRIG_DISPATCH_PROCS$"              0 KB       0 rows
 . . imported "WMSYS"."E$VERSION_HIERARCHY_TABLE$"        5.984 KB       1 rows
 . . imported "WMSYS"."E$VERSION_TABLE$"                      0 KB       0 rows
 . . imported "WMSYS"."E$VT_ERRORS_TABLE$"                    0 KB       0 rows
 . . imported "WMSYS"."E$WORKSPACES_TABLE$"               12.10 KB       1 rows
 . . imported "WMSYS"."E$WORKSPACE_PRIV_TABLE$"           7.054 KB      10 rows
 . . imported "WMSYS"."E$WORKSPACE_SAVEPOINTS_TABLE$"         0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$AUDIT"                    0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$COMPARTMENTS"             0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$GROUPS"                   0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$LAB"                      0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$LEVELS"                   0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POL"                      0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POLS"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POLT"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$PROFILE"                  0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$PROG"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$USER"                     0 KB       0 rows
 . . imported "SYS"."AMGT$DP$AUD$"                            0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CLEANUP_EVENTS$"             0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CLEANUP_JOBS$"               0 KB       0 rows
 . . imported "SYS"."DP$TSDP_ASSOCIATION$"                    0 KB       0 rows
 . . imported "SYS"."DP$TSDP_CONDITION$"                      0 KB       0 rows
 . . imported "SYS"."DP$TSDP_FEATURE_POLICY$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_PROTECTION$"                     0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SENSITIVE_DATA$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SENSITIVE_TYPE$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SOURCE$"                         0 KB       0 rows
 . . imported "SYSTEM"."REDO_LOG_TMP"                         0 KB       0 rows
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 . . imported "SYSTEM"."SCHEDULER_JOB_ARGS_TMP"               0 KB       0 rows
 . . imported "SYSTEM"."SCHEDULER_PROGRAM_ARGS_TMP"       9.515 KB      12 rows
 . . imported "SYS"."AMGT$DP$AUDTAB$TBS$FOR_EXPORT"       5.953 KB       2 rows
 . . imported "SYS"."DP$DBA_SENSITIVE_DATA"                   0 KB       0 rows
 . . imported "SYS"."DP$DBA_TSDP_POLICY_PROTECTION"           0 KB       0 rows
 . . imported "SYS"."AMGT$DP$FGA_LOG$FOR_EXPORT"              0 KB       0 rows
 . . imported "SYS"."NACL$_ACE_IMP"                       9.929 KB       1 rows
 . . imported "SYS"."NACL$_HOST_IMP"                      6.914 KB       1 rows
 . . imported "SYS"."NACL$_WALLET_IMP"                        0 KB       0 rows
 . . imported "WMSYS"."E$EXP_MAP"                         7.718 KB       3 rows
 Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
 Processing object type DATABASE_EXPORT/STATISTICS/MARKER
 Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
 Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_PURGE_SESSIONS" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_PURGE_SESSIONS" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_PURGE_SESSIONS"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_CACHE.PURGE_SESSIONS'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.179792000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=> 
 'FREQ
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_MAIL_QUEUE" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_MAIL_QUEUE" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_MAIL_QUEUE"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_MAIL.PUSH_QUEUE_IMMEDIATE'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.239709000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=> 
 'FREQ=MINUTE
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_WS_NOTIFICATIONS"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_WORKSHEET_API.DO_NOTIFY'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.244043000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_DAILY_MAINTENANCE" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_DAILY_MAINTENANCE" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_DAILY_MAINTENANCE"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_MAINT.DAILY_MAINTENANCE'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.248433000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_inter
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
 Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at Fri Sep 13 21:40:12 2019 elapsed 0 00:05:16

After the import is done, we can check the data on the destination PDB, PDB2.

SQL> alter session set container =pdb2;
 Session altered.

SQL> select file_name from dba_data_files;

 FILE_NAME
--------------------------------------------------------
 /u01/app/oracle/oradata/cdb1/pdb2/sysaux01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/system01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf
 
SQL> select count(1) from mufalani.test1;

COUNT(1)
------------------------
 38

SQL> select count(1) from mufalani.test2;
 COUNT(1)
------------------------
42
 
 SQL> show con_name;

 CON_NAME
------------------------
 PDB2
 
So, this is one of the most used techniques to migrate databases to Oracle 12c. You can find more details in the Oracle documentation:
https://docs.oracle.com/database/121/ADMIN/transport.htm#GUID-1901E9C3-8FCE-4D4E-AB65-34D703474E52
and also on Mike Dietrich blog – https://mikedietrichde.com/tag/transportable-tablespaces/

 

This article was written by André Ontalba and me.
 


 


Generating an export from a Data Guard database using NETWORK_LINK.
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Generating an export from a Data Guard database using NETWORK_LINK

This article explains how to export data from a Data Guard physical standby database using the Data Pump NETWORK_LINK parameter.
 
The Physical Standby database must be opened in “READ ONLY” mode.

Steps to execute to export from Physical Standby Database

Physical Standby Database

— Connect to Physical Standby database and check its status

 

 

[oracle@vm1 admin] sqlplus / as sysdba


SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
PHYSTBY MOUNTED

-- Cancel managed recovery and open database in "READ ONLY" mode.


SQL> alter database recover managed standby database cancel;



SQL> alter database open read only;




-- Verify database status

SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
VTABOLDG OPEN


SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY


 

“Non Standby” Database           

On the same machine I created a new instance to serve as a bridge: Data Pump needs a read-write database in which to create its master table, so the export runs on this instance and pulls the data from the read-only standby over a database link.

-- create DB Link, Oracle Directory.


[oracle@vm1 admin] sqlplus / as sysdba


SQL> create database link exp connect to system identified by oracle using 'DG_VTABOL';

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
--------------------
DP

SQL> select db_unique_name from v$database@exp;

DB_UNIQUE_NAME
--------------------
VTABOL

SQL> create directory datapump as '/tmp';

 


Use the NETWORK_LINK parameter with the database link created above so Data Pump reads from the Physical Standby database:

 

[oracle@vm1 admin] expdp DIRECTORY=DATAPUMP network_link=EXP DUMPFILE=DG_EXPDP.dmp SCHEMAS=DBA_DISCOVER,ITAMARATI,VENDASEXTERNAS,VTABOL,ABSTRATOS logfile=expdp.log PARALLEL=4



Export: Release 12.1.0.2.0 - Production on Fri Sep 6 17:32:54 2019

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

FLASHBACK automatically enabled to preserve database integrity.

Starting "SYS"."SYS_EXPORT_SCHEMA_02":  sys/******** AS SYSDBA DIRECTORY=DATAPUMP network_link=EXP DUMPFILE=AWS.dmp SCHEMAS=DBA_DISCOVER,ITAMARATI,VENDASEXTERNAS,VTABOL,ABSTRATOS logfile=expdp.log PARALLEL=4

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 84.29 GB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM

Processing object type SCHEMA_EXPORT/DB_LINK

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC

Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION

Processing object type SCHEMA_EXPORT/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC

Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION

Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW

Processing object type SCHEMA_EXPORT/JOB

Processing object type SCHEMA_EXPORT/REFRESH_GROUP

Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ




Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:

  /tmp/DG_EXPDP.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_02" completed with 0 error(s) at Fri Sep 6 19:55:35 2019 elapsed 0 02:22:29


 

On the Physical Standby Database

[oracle@vm1 admin]$ sqlplus / as sysdba


SQL> shutdown immediate



SQL> startup mount




SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;




SQL> SELECT
       ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
       (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN
       (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
       (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN
       (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
  WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;


    Thread Last Sequence Received Last Sequence Applied Difference

---------- ---------------------- --------------------- ----------

         1                  72153                 72153          0

         2                  67021                 67021          0




SQL>
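Beyond comparing archived and applied sequences, the lag can also be cross-checked directly. A minimal sketch, run on the standby (`V$DATAGUARD_STATS` is available in this release; exact metric freshness depends on log transport):

```sql
-- Optional cross-check: transport and apply lag as reported by the standby.
-- A value of +00 00:00:00 for both confirms the standby is fully current.
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```

If both lags are near zero and the sequence difference above is 0, the standby is in sync and safe to use as a migration target.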

 

I hope this helps you!!!

 

Andre Luiz Dutra Ontalba 
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my employer’s positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed.”


Configuring TDE on a PDB with PLUG and UNPLUG Option
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Configuring TDE on a PDB with PLUG and UNPLUG Option

 
This article describes how to configure TDE on a CDB, then unplug PDB1 from the CDB1 instance and plug it into the CDB2 instance on the same machine under the new name PDB2.

 

1 – Configure TDE Source – CDB (cdb1)
A) Create directory

 

[root@vm1 ~]# mkdir -p /etc/ORACLE/WALLETS/cdb1
[root@vm1 ~]# mkdir -p /etc/ORACLE/WALLETS/cdb2
[root@vm1 ~]# chown -R oracle:oinstall /etc/ORACLE
[root@vm1 ~]# chmod -R 755 /etc/ORACLE
B) Edit sqlnet.ora and configure the Wallet

 

ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=FILE)
(METHOD_DATA=
(DIRECTORY=/etc/ORACLE/WALLETS/$ORACLE_SID/)))
C) Configure Database – cdb1

 

[oracle@vm1 ~]$ . oraenv <<< cdb1
ORACLE_SID = [db01] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vm1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 3 20:58:03 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
SQL>
SQL>
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO


SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/cdb1' IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle container=all;

keystore altered.


SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.


SQL> alter session set container=pdb1;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.

SQL> set linesize 500
SQL> select * from v$encryption_wallet;

WRL_TYPE   WRL_PARAMETER              STATUS   WALLET_TYPE WALLET_OR FULLY_BAC     CON_ID
---------- -------------------------- -------- ----------- --------- --------- ----------
FILE       /etc/ORACLE/WALLETS/cdb1/  OPEN     PASSWORD    SINGLE    NO                 0


SQL> create tablespace TESTE
datafile '/u01/app/oracle/oradata/cdb1/pdb1/teste.dbf' size 10m
encryption using 'AES256'
default storage (encrypt);

Tablespace created.
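To double-check that the new tablespace was really created encrypted, it can be verified against `V$ENCRYPTED_TABLESPACES`. A minimal sketch (column names as in this release; `TS#` joins back to `V$TABLESPACE`):

```sql
-- ENCRYPTIONALG should report AES256 for the TESTE tablespace created above.
SELECT t.name, e.encryptionalg, e.encryptedts
FROM   v$encrypted_tablespaces e
       JOIN v$tablespace t ON t.ts# = e.ts#;
```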

D) Export Key from Source – PDB(pdb1)

 


SQL>ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "oracle" TO '/home/oracle/export.p12' IDENTIFIED BY oracle;

keystore altered.

SQL> !
[oracle@vm1 ~]$ ls -ltr export.p12
-rw-r--r--. 1 oracle oinstall 2612 Sep 3 21:08 export.p12
[oracle@vm1 ~]$ exit
 
E) Unplug and DROP PDB(pdb1)

 

SQL> alter pluggable database close immediate;

Pluggable database altered.

SQL> conn / as sysdba
Connected.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1 keep datafiles;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

2 – Configure TDE Destination CDB(cdb2)

 

[oracle@vm1 ~]$ . oraenv <<< cdb2
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vm1 ~]$ ls /etc/ORACLE/WALLETS/
cdb1 cdb2
[oracle@vm1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 3 21:13:12 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1241513984 bytes
Fixed Size 2923872 bytes
Variable Size 452985504 bytes
Database Buffers 771751936 bytes
Redo Buffers 13852672 bytes
Database mounted.
Database opened.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
SQL>
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/cdb2' IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.

SQL>
SQL>
F) PLUG PDB(pdb2) using Unplugged pdb1

 

SQL> create pluggable database pdb2 as clone using '/home/oracle/pdb1.xml'
2 file_name_convert=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

Pluggable database created.

SQL> alter pluggable database pdb2 open;

Warning: PDB altered with errors.
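The "PDB altered with errors" warning is expected at this point: the master key from pdb1 has not been imported into the cdb2 keystore yet. The pending violation can be inspected before the import, along these lines:

```sql
-- Until the key import in the next step, an unresolved violation about the
-- missing TDE master key is reported for the newly plugged-in PDB.
SELECT name, cause, message, status
FROM   pdb_plug_in_violations
WHERE  status <> 'RESOLVED';
```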
G) IMPORT KEY from Source PDB(pdb1) to Destination PDB(pdb2)

 

SQL> alter session set container=pdb2;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "oracle" FROM '/home/oracle/export.p12' IDENTIFIED BY oracle WITH BACKUP;

keystore altered.

SQL> shut immediate;
Pluggable Database closed.
SQL> startup
Pluggable Database opened.

 

H) Validate that the PDB is fully integrated with CDB2

 

 

SQL> conn / as sysdba
Connected.

SQL> select message,status from pdb_plug_in_violations where status <> 'RESOLVED';
SQL> /

no rows selected

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           READ WRITE NO
SQL>
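As a final sanity check (a sketch, assuming the TESTE tablespace created earlier), it is worth confirming inside pdb2 that the encrypted tablespace is online and still flagged as encrypted now that the key has been imported:

```sql
ALTER SESSION SET CONTAINER = pdb2;

-- STATUS should be ONLINE and ENCRYPTED should be YES after the key import.
SELECT tablespace_name, status, encrypted
FROM   dba_tablespaces
WHERE  tablespace_name = 'TESTE';
```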

 

I hope this helps you!!!
 
This article was written by Andre Ontalba and Rodrigo Mufalani
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my employer’s positions, strategies, or opinions. The information here was edited to be useful for a general audience; specific data and identifiers were removed.”

