Do you NID to rename an Oracle DB on ASM?

Hello, my friends, it’s been a while since my last post. The end of 2020 brought some very exciting days, with new projects and new challenges, and I also had some time off dedicated to my family and to myself. As 2021 is just beginning, I would like to wish you, the reader, a Happy New Year, in case I haven’t done so yet through my social media.
Well, let’s get into the subject of this post! Sometimes we create a database, deliver it to the customer, and people start to use it immediately; after some days or months of usage, they discover the database must have a different name and request the change. This is a normal request, and that’s why Oracle provides the NID utility. The title of this post was not a typo.
Using the NID utility is pretty simple and straightforward: basically, we put the database in MOUNT mode and run the utility, connecting to the database whose name we intend to change and passing the new name in the DBNAME parameter. Just to get more familiar with NID, you can display its help as I did below:

 

[oracle@oel82 ~]$ nid help=y




DBNEWID: Release 19.0.0.0.0 - Production on Wed Jan 20 20:13:31 2021




Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.




Keyword     Description                    (Default)

----------------------------------------------------

TARGET      Username/Password              (NONE)

DBNAME      New database name              (NONE)

LOGFILE     Output Log                     (NONE)

REVERT      Revert failed change           NO

SETNAME     Set a new database name only   NO

APPEND      Append to output log           NO

HELP        Displays these messages        NO
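
Putting those keywords together, the simplest form of the call looks like this (placeholder credentials and name here; the real run against my database appears later in the post):

nid target=sys/password dbname=NEWNAME logfile=newname.log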

 

In the next steps of this blog post, we will see how to use NID and also how to handle this situation when your database is running on ASM. There are two approaches we can use if the database was created with DBCA and the directories were not created on ASM beforehand. Note: if a directory is created by the system, you cannot rename it!
I checked the 19c documentation and this is still true. For those who have already read the post Renaming directory on ASM: my advice is to create the directories on ASM manually, named after the database, as I will do later here; that allows you to rename the directory later, which would be the third approach for this post. So, let’s start with the case where we did not create the directory manually and DBCA flagged the directories on ASM as system created (Sys = Y).
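
If you want to check the SYSTEM_CREATED flag without a GUI, one way is to query the v$asm_alias view on the ASM instance (a minimal sketch; directories carry the flag in the SYSTEM_CREATED column):

SQL> select name, system_created from v$asm_alias
  2  where group_number = (select group_number from v$asm_diskgroup where name = 'DATA')
  3  and alias_directory = 'Y';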

 

 

I have created a database named orcl and we will rename it to prdorcl (don’t judge me, I was running out of good names). So the database name will change from orcl to prdorcl.

 

As I mentioned before, the database must be in MOUNT mode to use NID.

 

[oracle@oel82 ~]$ sqlplus "/as sysdba"




SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 20 20:26:25 2021

Version 19.8.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.




Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.8.0.0.0




SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL>

SQL> startup mount

ORACLE instance started.




Total System Global Area 2415917880 bytes

Fixed Size                  8899384 bytes

Variable Size             520093696 bytes

Database Buffers         1879048192 bytes

Redo Buffers                7876608 bytes

Database mounted.

SQL>

 

With the database mounted, we can run NID using the following command line:

 

[oracle@oel82 ~]$ nid target=/ dbname=prdorcl logfile=prdorcl.log







[oracle@oel82 ~]$ cat prdorcl.log




DBNEWID: Release 19.0.0.0.0 - Production on Wed Jan 20 20:29:57 2021




Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.




Connected to database ORCL (DBID=1590110414)




Connected to server version 19.8.0




Control Files in database:

    +DATA/ORCL/CONTROLFILE/current.261.1062308079

    +DATA/ORCL/CONTROLFILE/current.260.1062308079




Changing database ID from 1590110414 to 2783918790

Changing database name from ORCL to PRDORCL

    Control File +DATA/ORCL/CONTROLFILE/current.261.1062308079 - modified

    Control File +DATA/ORCL/CONTROLFILE/current.260.1062308079 - modified

    Datafile +DATA/ORCL/DATAFILE/system.256.106230789 - dbid changed, wrote new name

    Datafile +DATA/ORCL/DATAFILE/sysaux.257.106230794 - dbid changed, wrote new name

    Datafile +DATA/ORCL/DATAFILE/undotbs1.258.106230797 - dbid changed, wrote new name

    Datafile +DATA/ORCL/DATAFILE/users.259.106230798 - dbid changed, wrote new name

    Datafile +DATA/ORCL/TEMPFILE/temp.268.106230810 - dbid changed, wrote new name

    Control File +DATA/ORCL/CONTROLFILE/current.261.1062308079 - dbid changed, wrote new name

    Control File +DATA/ORCL/CONTROLFILE/current.260.1062308079 - dbid changed, wrote new name

    Instance shut down




Database name changed to PRDORCL.

Modify parameter file and generate a new password file before restarting.

Database ID for database PRDORCL changed to 2783918790.

All previous backups and archived redo logs for this database are unusable.

Database is not aware of previous backups and archived logs in Recovery Area.

Database has been shutdown, open database with RESETLOGS option.

Succesfully changed database name and ID.

DBNEWID - Completed succesfully.

 

As we saw, the process was easy and quick to apply. Don’t be afraid, but the NID utility will shut down your database, and later on we will need to open it with RESETLOGS.
Since I have not yet created a pfile or an spfile with the new name, the instance will fail to start:

 

[oracle@oel82 ~]$ export ORACLE_SID=prdorcl

[oracle@oel82 ~]$ sqlplus "/as sysdba"




SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 20 20:32:09 2021

Version 19.8.0.0.0




Copyright (c) 1982, 2020, Oracle.  All rights reserved.




Connected to an idle instance.




SQL> startup mount

ORA-01078: failure in processing system parameters

LRM-00109: could not open parameter file '/u01/app/oracle/product/19.8.0/db_1/dbs/initprdorcl.ora'

 

This orcl database was created using DBCA, so the spfile is stored on ASM by default:

 

[oracle@oel82 dbs]$ srvctl config database -d orcl

Database unique name: orcl

Database name: orcl

Oracle home: /u01/app/oracle/product/19.8.0/db_1

Oracle user: oracle

Spfile: +DATA/ORCL/PARAMETERFILE/spfile.269.1062310177

Password file:

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Disk Groups: DATA

Services:

OSDBA group: oinstall

OSOPER group: oinstall

Database instance: orcl

 

We identified the spfile path from the command above, so now I can create a pfile, edit it, and start my new instance with the name prdorcl.

 

[oracle@oel82 dbs]$ sqlplus "/as sysdba"




SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 20 20:34:22 2021

Version 19.8.0.0.0




Copyright (c) 1982, 2020, Oracle.  All rights reserved.




Connected to an idle instance.




SQL> create pfile='/u01/app/oracle/product/19.8.0/db_1/dbs/initprdorcl.ora' from spfile='+DATA/ORCL/PARAMETERFILE/spfile.269.1062310177';




File created.
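
NID also reminded us to generate a new password file before restarting. A minimal sketch with orapwd (the file name follows the default $ORACLE_HOME/dbs naming convention; you will be prompted for the SYS password):

[oracle@oel82 dbs]$ orapwd file=$ORACLE_HOME/dbs/orapwprdorcl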

 

So, just as a reference, the original pfile looks like the listing below (I only removed the dynamic memory parameters from the top of the file). There are a few parameters to change: audit_file_dest, control_files, db_name, dispatchers and local_listener. We will see the new file later on (wait for it).

 

*.audit_file_dest='/u01/app/oracle/admin/orcl/adump'

*.audit_trail='db'

*.compatible='19.0.0'

*.control_files='+DATA/ORCL/CONTROLFILE/current.261.1062308079','+DATA/ORCL/CONTROLFILE/current.260.1062308079'

*.db_block_size=8192

*.db_create_file_dest='+DATA'

*.db_name='orcl'

*.db_recovery_file_dest='+DATA'

*.db_recovery_file_dest_size=8256m

*.diagnostic_dest='/u01/app/oracle'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'

*.local_listener='LISTENER_ORCL'

*.open_cursors=300

*.pga_aggregate_target=767m

*.processes=300

*.remote_login_passwordfile='EXCLUSIVE'

*.sga_target=2301m

*.undo_tablespace='UNDOTBS1'

 

As I mentioned at the beginning of the post, it is strongly recommended to create the path on ASM manually:

 

[oracle@oel82 dbs]$ . oraenv <<< +ASM

ORACLE_SID = [+ASM] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel82 dbs]$




[oracle@oel82 dbs]$ asmcmd

ASMCMD> cd DATA

mkdir PRDORCL

mkdir PRDORCL/CONTROLFILE

mkdir PRDORCL/DATAFILE

mkdir PRDORCL/PARAMETERFILE

mkdir PRDORCL/ONLINELOG

mkdir PRDORCL/TEMPFILE

 

Now the new path exists, created by me and not flagged as “SYSTEM”.
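
Since the screenshot is not essential here, the same check can be done from asmcmd; for the directory we just created, the Sys column should show N (command only, output omitted):

ASMCMD> ls -l +DATA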

 

 

It’s time to edit my new pfile to reflect the new database name and configuration:

 

*.audit_file_dest='/u01/app/oracle/admin/prdorcl/adump'

*.audit_trail='db'

*.compatible='19.0.0'

*.control_files='+DATA/PRDORCL/CONTROLFILE/control01.ctl','+DATA/PRDORCL/CONTROLFILE/control02.ctl'

*.db_block_size=8192

*.db_create_file_dest='+DATA'

*.db_name='prdorcl'

*.db_recovery_file_dest='+DATA'

*.db_recovery_file_dest_size=8256m

*.diagnostic_dest='/u01/app/oracle'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=prdorclXDB)'

*.local_listener='LISTENER_PRDORCL'

*.open_cursors=300

*.pga_aggregate_target=767m

*.processes=300

*.remote_login_passwordfile='EXCLUSIVE'

*.sga_target=2301m

*.undo_tablespace='UNDOTBS1'

 

Create the directory to store audit files on the OS:

 

[oracle@oel82 dbs]$ mkdir -p /u01/app/oracle/admin/prdorcl/adump

 

Change your tnsnames.ora to reflect the new names:

 

from:




LISTENER_ORCL =

  (ADDRESS = (PROTOCOL = TCP)(HOST = oel82.fritz.box)(PORT = 1521))




ORCL =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = oel82.fritz.box)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = orcl)

    )

  )




to:




LISTENER_PRDORCL =

  (ADDRESS = (PROTOCOL = TCP)(HOST = oel82.fritz.box)(PORT = 1521))







PRDORCL =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = oel82.fritz.box)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = prdorcl)

    )

  )

 

Edit the /etc/oratab file to reflect the new name

 

+ASM:/u01/app/19.8.0/grid:N             # line added by Agent

prdorcl:/u01/app/oracle/product/19.8.0/db_1:N           # line added by Agent

 

I’m about to start the instance, restore the controlfiles to the destination I set in the control_files parameter, and then open the database with RESETLOGS.

 

SQL> show parameter control_files




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

control_files                        string      +DATA/PRDORCL/CONTROLFILE/cont

                                                 rol01.ctl, +DATA/PRDORCL/CONTR

                                                 OLFILE/control02.ctl

 

Exit from SQL*Plus and log in with RMAN to restore the controlfiles and open the database:

 

[oracle@oel82 dbs]$ export ORACLE_SID=prdorcl

[oracle@oel82 dbs]$ rman target /




Recovery Manager: Release 19.0.0.0.0 - Production on Wed Jan 20 20:57:02 2021

Version 19.8.0.0.0




Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.




connected to target database (not started)




RMAN> startup nomount




Oracle instance started




Total System Global Area    2415917880 bytes




Fixed Size                     8899384 bytes

Variable Size                520093696 bytes

Database Buffers            1879048192 bytes

Redo Buffers                   7876608 bytes




RMAN> restore controlfile from '+DATA/ORCL/CONTROLFILE/current.261.1062308079';




Starting restore at 20-JAN-21

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=261 device type=DISK




channel ORA_DISK_1: copied control file copy

output file name=+DATA/PRDORCL/CONTROLFILE/control01.ctl

output file name=+DATA/PRDORCL/CONTROLFILE/control02.ctl

Finished restore at 20-JAN-21




RMAN> alter database open resetlogs;




Statement processed

 

The controlfiles were restored where we wanted, and we can double-check it with asmcmd as well:

 

[oracle@oel82 dbs]$ . oraenv <<< +ASM

ORACLE_SID = [prdorcl] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel82 dbs]$

[oracle@oel82 dbs]$ asmcmd

ASMCMD> cd data

ASMCMD> ls

ORCL/

PRDORCL/

ASMCMD> cd PRDORCL

ASMCMD> cd CONTROLFILE

ASMCMD> ls -l

Type         Redund  Striped  Time             Sys  Name

CONTROLFILE  UNPROT  FINE     JAN 20 06:00:00  N    control01.ctl => +DATA/PRDORCL/CONTROLFILE/current.270.1062312847

CONTROLFILE  UNPROT  FINE     JAN 20 06:00:00  N    control02.ctl => +DATA/PRDORCL/CONTROLFILE/current.271.1062313075

CONTROLFILE  UNPROT  FINE     JAN 20 06:00:00  Y    current.270.1062312847

CONTROLFILE  UNPROT  FINE     JAN 20 06:00:00  Y    current.271.1062313075

 

And you are probably asking yourself now: OK, good, the controlfiles are there, but what about the datafiles?

 

ASMCMD> cd ..

ASMCMD> cd datafile

ASMCMD> ls

ASMCMD>

ASMCMD>

ASMCMD> pwd

+data/PRDORCL/datafile

ASMCMD> ls -l

ASMCMD>

 

Approach Number #1 – Aliases
Nothing is in the DATAFILE folder. Do you remember when I said we could use two approaches in this case? Well, as the first approach I will use the method documented in Doc Id 564993.1, using aliases. In this approach the files stay in the same place, but we create aliases and then we can safely rename the existing datafiles to the folder with the correct name (PRDORCL). This way is recommended when you don’t mind having multiple folders on ASM and you cannot afford a long downtime.
I don’t need to remind you that we cannot remove the original folder without deleting all your datafiles.
Let’s take a look at the situation now. I have my controlfiles on the correct path and also the redo logs; when I issued OPEN RESETLOGS it already created new logs for me in the correct path.

 

[oracle@oel82 dbs]$ . oraenv <<< prdorcl

ORACLE_SID = [prdorcl] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel82 dbs]$

[oracle@oel82 dbs]$ sqlplus "/as sysdba"




SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 20 21:05:48 2021

Version 19.8.0.0.0




Copyright (c) 1982, 2020, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.8.0.0.0




SQL> select name from v$datafile;




NAME

--------------------------------------------------------------------------------

+DATA/ORCL/DATAFILE/system.256.1062307897

+DATA/ORCL/DATAFILE/sysaux.257.1062307943

+DATA/ORCL/DATAFILE/undotbs1.258.1062307979

+DATA/ORCL/DATAFILE/users.259.1062307981




SQL> select name from v$tempfile;




NAME

--------------------------------------------------------------------------------

+DATA/ORCL/TEMPFILE/temp.268.1062308105







SQL> select member from v$logfile;




MEMBER

--------------------------------------------------------------------------------

+DATA/PRDORCL/ONLINELOG/group_3.267.1062313129

+DATA/PRDORCL/ONLINELOG/group_3.265.1062313131

+DATA/PRDORCL/ONLINELOG/group_2.266.1062313127

+DATA/PRDORCL/ONLINELOG/group_2.264.1062313129

+DATA/PRDORCL/ONLINELOG/group_1.263.1062313125

+DATA/PRDORCL/ONLINELOG/group_1.262.1062313127




6 rows selected.




SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

 

I brought the database down to rename the datafiles, but first we need to create the aliases, connected to the ASM instance via sqlplus “/ as sysasm”:

 

[oracle@oel82 dbs]$ . oraenv <<< +ASM

ORACLE_SID = [prdorcl] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@oel82 dbs]$

[oracle@oel82 dbs]$ sqlplus "/as sysasm"




SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 20 21:49:27 2021

Version 19.8.0.0.0




Copyright (c) 1982, 2020, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.8.0.0.0




SQL> alter diskgroup DATA add alias '+DATA/PRDORCL/DATAFILE/system01.dbf' for  '+DATA/ORCL/DATAFILE/system.256.1062307897';




Diskgroup altered.




SQL> alter diskgroup DATA add alias '+DATA/PRDORCL/DATAFILE/sysaux01.dbf' for  '+DATA/ORCL/DATAFILE/sysaux.257.1062307943';




Diskgroup altered.




SQL> alter diskgroup DATA add alias '+DATA/PRDORCL/DATAFILE/undotbs1.dbf' for  '+DATA/ORCL/DATAFILE/undotbs1.258.1062307979';




Diskgroup altered.




SQL> alter diskgroup DATA add alias '+DATA/PRDORCL/DATAFILE/users01.dbf'  for  '+DATA/ORCL/DATAFILE/users.259.1062307981';




Diskgroup altered.




SQL> alter diskgroup DATA add alias '+DATA/PRDORCL/TEMPFILE/temp01.dbf' for '+DATA/ORCL/TEMPFILE/temp.268.1062308105';




Diskgroup altered.

 

Once the aliases are created, go back to the database instance in MOUNT mode and rename all the datafiles to point to the aliases we just created.
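
The restart itself is not in my capture; assuming the environment still points at the new SID, it is simply:

[oracle@oel82 dbs]$ export ORACLE_SID=prdorcl
[oracle@oel82 dbs]$ sqlplus "/as sysdba"
SQL> startup mount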

 

SQL> alter database rename file '+DATA/ORCL/DATAFILE/system.256.1062307897'   to '+DATA/PRDORCL/DATAFILE/system01.dbf' ;




Database altered.




SQL> alter database rename file '+DATA/ORCL/DATAFILE/sysaux.257.1062307943'   to '+DATA/PRDORCL/DATAFILE/sysaux01.dbf';




Database altered.




SQL> alter database rename file '+DATA/ORCL/DATAFILE/undotbs1.258.1062307979' to '+DATA/PRDORCL/DATAFILE/undotbs1.dbf' ;




Database altered.




SQL> alter database rename file '+DATA/ORCL/DATAFILE/users.259.1062307981'    to '+DATA/PRDORCL/DATAFILE/users01.dbf';




Database altered.




SQL> alter database rename file '+DATA/ORCL/TEMPFILE/temp.268.1062308105' to '+DATA/PRDORCL/TEMPFILE/temp01.dbf';




Database altered.




SQL> alter database open;




Database altered.

 

Putting the spfile back on ASM in the correct path:

 

[oracle@oel82 ~]$ sqlplus "/as sysdba"




SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 20 22:38:02 2021

Version 19.8.0.0.0




Copyright (c) 1982, 2020, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.8.0.0.0




SQL> create spfile='+DATA/PRDORCL/PARAMETERFILE/spfileprdorcl.ora' from pfile='/u01/app/oracle/product/19.8.0/db_1/dbs/initprdorcl.ora';




File created.
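
If you also want manual SQL*Plus startups to pick up this spfile, the usual trick is to reduce the local pfile to a single pointer line. A sketch of what dbs/initprdorcl.ora would then contain:

SPFILE='+DATA/PRDORCL/PARAMETERFILE/spfileprdorcl.ora'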

 

Adding the new database to Oracle Restart:

 

[oracle@oel82 ~]$ srvctl add database -db prdorcl -oraclehome /u01/app/oracle/product/19.8.0/db_1 -spfile '+DATA/PRDORCL/PARAMETERFILE/spfileprdorcl.ora'

[oracle@oel82 ~]$ srvctl start database -db prdorcl

[oracle@oel82 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.DATA.dg

               ONLINE  ONLINE       oel82                    STABLE

ora.LISTENER.lsnr

               ONLINE  ONLINE       oel82                    STABLE

ora.asm

               ONLINE  ONLINE       oel82                    Started,STABLE

ora.ons

               OFFLINE OFFLINE      oel82                    STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.cssd

      1        ONLINE  ONLINE       oel82                    STABLE

ora.diskmon

      1        OFFLINE OFFLINE                               STABLE

ora.evmd

      1        ONLINE  ONLINE       oel82                    STABLE

ora.orcl.db

      1        OFFLINE OFFLINE                               STABLE

ora.prdorcl.db

      1        ONLINE  ONLINE       oel82                    Open,HOME=/u01/app/o

                                                             racle/product/19.8.0

                                                             /db_1,STABLE

--------------------------------------------------------------------------------

 

And finally, removing the reference to the old database from Oracle Restart:

 

[oracle@oel82 ~]$ srvctl remove database -db orcl

Remove the database orcl? (y/[n]) y

[oracle@oel82 ~]$

 

Adjusting the listener in the Grid home to register my new database name:

 

[oracle@oel82 ~]$ cd $ORACLE_HOME

[oracle@oel82 grid]$ cd network/admin/

[oracle@oel82 admin]$ vi listener.ora




[oracle@oel82 admin]$ cat listener.ora

# listener.ora Network Configuration File: /u01/app/19.8.0/grid/network/admin/listener.ora

# Generated by Oracle configuration tools.




#Backup file is  /u01/app/oracle/crsdata/oel82/output/listener.ora.bak.oel82.oracle line added by Agent




SID_LIST_LISTENER =

  (SID_LIST =

    (SID_DESC =

      (GLOBAL_DBNAME = prdorcl)

      (ORACLE_HOME = /u01/app/oracle/product/19.8.0/db_1)

      (SID_NAME = prdorcl)

    )

  )




VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON




LISTENER =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = oel82.fritz.box)(PORT = 1521))

  )




ADR_BASE_LISTENER = /u01/app/oracle




ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

 

Then restart the listener

 

[oracle@oel82 admin]$ srvctl stop listener

[oracle@oel82 admin]$ srvctl start listener

[oracle@oel82 admin]$ lsnrctl status




LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 20-JAN-2021 22:49:46




Copyright (c) 1991, 2020, Oracle.  All rights reserved.




Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oel82.fritz.box)(PORT=1521)))

STATUS of the LISTENER

------------------------

Alias                     LISTENER

Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production

Start Date                20-JAN-2021 08:49:20

Uptime                    0 days 0 hr. 0 min. 25 sec

Trace Level               off

Security                  ON: Local OS Authentication

SNMP                      OFF

Listener Parameter File   /u01/app/19.8.0/grid/network/admin/listener.ora

Listener Log File         /u01/app/oracle/diag/tnslsnr/oel82/listener/alert/log.xml

Listening Endpoints Summary...

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oel82)(PORT=1521)))

Services Summary...

Service "prdorcl" has 1 instance(s).

  Instance "prdorcl", status UNKNOWN, has 1 handler(s) for this service...

The command completed successfully

[oracle@oel82 admin]$
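
Optionally, a quick connectivity check against the new service name (commands only, output omitted; the entries come from the tnsnames.ora changed earlier):

[oracle@oel82 admin]$ tnsping PRDORCL
[oracle@oel82 admin]$ sqlplus system@PRDORCL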

 

Approach Number #2 – Copy
In this approach you have more downtime, because you need to copy the datafiles from one folder to another. This physically copies the data into the destination folder, so we can remove the source folder once we have finished. If downtime is not so critical and your database is not too big, you might have enough space to do it with BACKUP AS COPY from RMAN, or you could do it tablespace by tablespace; in that case you don’t need double your database size in storage to perform this procedure.
I’m going to show you how to use the COPY command. I’m not using archivelog mode for this test database, so I will run the commands offline, copying datafile by datafile to the destination folder with the RMAN COPY command.
Note that my commands point to the aliases because I tested approach #1 before executing this one.
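
For reference, the all-at-once variant mentioned above (instead of going datafile by datafile) would look roughly like this from RMAN, with the database mounted:

RMAN> backup as copy database format '+DATA';
RMAN> switch database to copy;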

 

RMAN> copy datafile '+DATA/PRDORCL/DATAFILE/system01.dbf' to '+DATA';




Starting backup at 20-JAN-21

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile copy

input datafile file number=00001 name=+DATA/PRDORCL/DATAFILE/system01.dbf

output file name=+DATA/PRDORCL/DATAFILE/system.272.1062318091 tag=TAG20210120T142128 RECID=1 STAMP=1062339697

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16

Finished backup at 20-JAN-21




Starting Control File Autobackup at 20-JAN-21

piece handle=+DATA/PRDORCL/AUTOBACKUP/2021_01_20/n_1062339305.273.1062318107 comment=NONE

Finished Control File Autobackup at 20-JAN-21




RMAN> copy datafile '+DATA/PRDORCL/DATAFILE/sysaux01.dbf' to '+DATA';




Starting backup at 20-JAN-21

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile copy

input datafile file number=00003 name=+DATA/PRDORCL/DATAFILE/sysaux01.dbf

output file name=+DATA/PRDORCL/DATAFILE/sysaux.274.1062318281 tag=TAG20210120T142439 RECID=2 STAMP=1062339885

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

Finished backup at 20-JAN-21




Starting Control File Autobackup at 20-JAN-21

piece handle=+DATA/PRDORCL/AUTOBACKUP/2021_01_20/n_1062339305.275.1062318289 comment=NONE

Finished Control File Autobackup at 20-JAN-21




RMAN> copy datafile '+DATA/PRDORCL/DATAFILE/undotbs1.dbf' to '+DATA';




Starting backup at 20-JAN-21

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile copy

input datafile file number=00004 name=+DATA/PRDORCL/DATAFILE/undotbs1.dbf

output file name=+DATA/PRDORCL/DATAFILE/undotbs1.276.1062318293 tag=TAG20210120T142453 RECID=3 STAMP=1062339899

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:08

Finished backup at 20-JAN-21




Starting Control File Autobackup at 20-JAN-21

piece handle=+DATA/PRDORCL/AUTOBACKUP/2021_01_20/n_1062339305.277.1062318305 comment=NONE

Finished Control File Autobackup at 20-JAN-21




RMAN> copy datafile '+DATA/PRDORCL/DATAFILE/users01.dbf' to '+DATA';




Starting backup at 20-JAN-21

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile copy

input datafile file number=00007 name=+DATA/PRDORCL/DATAFILE/users01.dbf

output file name=+DATA/PRDORCL/DATAFILE/users.278.1062318311 tag=TAG20210120T142509 RECID=4 STAMP=1062339910

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01

Finished backup at 20-JAN-21




Starting Control File Autobackup at 20-JAN-21

piece handle=+DATA/PRDORCL/AUTOBACKUP/2021_01_20/n_1062339305.279.1062318313 comment=NONE

Finished Control File Autobackup at 20-JAN-21

 

The important step is to “rename” the files again in your controlfile; we can do that easily from RMAN:

 

RMAN> switch database to copy;




datafile 1 switched to datafile copy "+DATA/PRDORCL/DATAFILE/system.272.1062318091"

datafile 3 switched to datafile copy "+DATA/PRDORCL/DATAFILE/sysaux.274.1062318281"

datafile 4 switched to datafile copy "+DATA/PRDORCL/DATAFILE/undotbs1.276.1062318293"

datafile 7 switched to datafile copy "+DATA/PRDORCL/DATAFILE/users.278.1062318311"

 

 

As all the files are now stored in the correct folder, in approach #2 we can safely remove the source folder to avoid space issues in the future.

 

ASMCMD> rm -rf ORCL

ASMCMD>

ASMCMD>

ASMCMD> pwd

+DATA

ASMCMD> ls -l

Type  Redund  Striped  Time  Sys  Name

                             N    PRDORCL/

ASMCMD>

 

One thing about the temporary tablespace: Oracle will recreate it for you when you open the database.
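If for any reason the tempfile does not come back automatically, adding one manually is a one-liner (a sketch; the size is just an example):

SQL> alter tablespace TEMP add tempfile '+DATA' size 1g autoextend on;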
Summary: keep in mind that we saw two different approaches here to solve the same issue, and if you created the ASM folder manually you can also apply a third approach, described in this blog post: Renaming directory on ASM.
 
I hope you liked this post. Keep posted on my database adventures by following me on Twitter @mufalani and on LinkedIn.
 
Best Regards,

 

Rodrigo Mufalani

 


Full transportable tablespace: Oracle 12c

This is one of the coolest features in Oracle 12c, and it is an improvement of an old feature called transportable tablespaces. In this how-to article we will show you how to use this feature, introduced in 12cR1; you can use it to migrate data from 11g directly into a PDB with simple commands. In fact, you can use it with a database link and avoid creating dump files altogether, but we will cover the method using dump files in this article.
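
Just for reference, the database link variant mentioned above skips the dump file entirely. A rough sketch of what it could look like, run on the destination (the link name SOURCE_PDB1 is hypothetical, the file paths follow the example built below, and version=12 is needed when the source is 11g):

impdp system/oracle_4U@vm2:1521/PDB2 network_link=SOURCE_PDB1 full=y transportable=always transport_datafiles='/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf','/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf' logfile=import_nl.log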
The first thing to do is connect to the source database and create the tablespaces we will transport. In our case this will be within the same CDB for demo purposes, but the feature is more useful when you intend to migrate data between distinct databases, during migrations and so on.
On PDB1, create two tablespaces, one user and one table in each tablespace:

 

SQL> alter session set container = pdb1;
 Session altered.

SQL> create tablespace ts_mufalani_01 datafile '/u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani01.dbf' size 10m ;
 Tablespace created.

SQL> create tablespace ts_mufalani_02 datafile '/u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani02.dbf' size 10m ;
 Tablespace created.

SQL> create user mufalani identified by oracle default tablespace ts_mufalani_01;
 User created.

SQL> alter user mufalani quota unlimited on ts_mufalani_01;
 User altered.

SQL> alter user mufalani quota unlimited on ts_mufalani_02;
 User altered.

SQL> create table mufalani.test1 tablespace ts_mufalani_01 as select * from dba_users;
 Table created.

SQL> create table mufalani.test2 tablespace ts_mufalani_02 as select * from v$session;
 Table created.

SQL> alter tablespace ts_mufalani_01 read only;
 Tablespace altered.

SQL> alter tablespace ts_mufalani_02 read only;
 Tablespace altered.

 

Now that the tablespaces are read only, check whether they are self-contained using dbms_tts.transport_set_check; that is, that they have no dependency on any other tablespace.

 

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('ts_mufalani_01,ts_mufalani_02',TRUE);
 PL/SQL procedure successfully completed.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
 no rows selected

 

No rows selected is the expected result for this query. Remember that the directory HOME only exists in pdb1 so far, so use EZConnect to export the data from PDB1.

 

SQL> create or replace directory home as '/home/oracle' ;
 Directory created.

 SQL> grant read, write on directory home to system;
 Grant succeeded.

 

After creating the directory and granting access on it to the SYSTEM user, export the metadata using transportable=ALWAYS and full=y; this instructs Oracle to use the full transportable tablespace feature.

 

SQL> !expdp system/oracle_4U@vm2:1521/PDB1 directory=home dumpfile=tts.dmp logfile=tts.log transportable=always full=y 
 Export: Release 12.1.0.2.0 - Production on Fri Sep 13 21:17:05 2019
 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/@vm2:1521/PDB1 directory=home dumpfile=tts.dmp logfile=tts.log transportable=always full=y 
 Estimate in progress using BLOCKS method…
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
 Total estimation using BLOCKS method: 4.265 MB
 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/TABLESPACE
 Processing object type DATABASE_EXPORT/PROFILE
 Processing object type DATABASE_EXPORT/SYS_USER/USER
 Processing object type DATABASE_EXPORT/SCHEMA/USER
 Processing object type DATABASE_EXPORT/RADM_FPTM
 Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
 Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
 Processing object type DATABASE_EXPORT/RESOURCE_COST
 Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
 Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
 Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
 Processing object type DATABASE_EXPORT/STATISTICS/MARKER
 Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
 Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
 . . exported "SYS"."KU$_USER_MAPPING_VIEW"               6.109 KB      38 rows
 . . exported "ORDDATA"."ORDDCM_DOCS"                     252.9 KB       9 rows
 . . exported "WMSYS"."WM$CONSTRAINTS_TABLE$"                 0 KB       0 rows
 . . exported "WMSYS"."WM$LOCKROWS_INFO$"                     0 KB       0 rows
 . . exported "WMSYS"."WM$UDTRIG_INFO$"                       0 KB       0 rows
 . . exported "LBACSYS"."OLS$AUDIT_ACTIONS"               5.757 KB       8 rows
 . . exported "LBACSYS"."OLS$DIP_EVENTS"                  5.539 KB       2 rows
 . . exported "LBACSYS"."OLS$INSTALLATIONS"               6.960 KB       2 rows
 . . exported "LBACSYS"."OLS$PROPS"                       6.234 KB       5 rows
 . . exported "SYS"."DAM_CONFIG_PARAM$"                   6.531 KB      14 rows
 . . exported "SYS"."TSDP_PARAMETER$"                     5.953 KB       1 rows
 . . exported "SYS"."TSDP_POLICY$"                        5.921 KB       1 rows
 . . exported "SYS"."TSDP_SUBPOL$"                        6.328 KB       1 rows
 . . exported "SYSTEM"."REDO_DB"                          25.59 KB       1 rows
 . . exported "WMSYS"."WM$BATCH_COMPRESSIBLE_TABLES$"         0 KB       0 rows
 . . exported "WMSYS"."WM$CONS_COLUMNS$"                      0 KB       0 rows
 . . exported "WMSYS"."WM$ENV_VARS$"                      6.015 KB       3 rows
 . . exported "WMSYS"."WM$EVENTS_INFO$"                   5.812 KB      12 rows
 . . exported "WMSYS"."WM$HINT_TABLE$"                    9.453 KB      75 rows
 . . exported "WMSYS"."WM$MODIFIED_TABLES$"                   0 KB       0 rows
 . . exported "WMSYS"."WM$MP_GRAPH_WORKSPACES_TABLE$"         0 KB       0 rows
 . . exported "WMSYS"."WM$MP_PARENT_WORKSPACES_TABLE$"        0 KB       0 rows
 . . exported "WMSYS"."WM$NESTED_COLUMNS_TABLE$"              0 KB       0 rows
 . . exported "WMSYS"."WM$NEXTVER_TABLE$"                 6.375 KB       1 rows
 . . exported "WMSYS"."WM$REMOVED_WORKSPACES_TABLE$"          0 KB       0 rows
 . . exported "WMSYS"."WM$RESOLVE_WORKSPACES_TABLE$"          0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_LOCKING_TABLE$"                 0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_TABLE$"                         0 KB       0 rows
 . . exported "WMSYS"."WM$RIC_TRIGGERS_TABLE$"                0 KB       0 rows
 . . exported "WMSYS"."WM$UDTRIG_DISPATCH_PROCS$"             0 KB       0 rows
 . . exported "WMSYS"."WM$VERSION_HIERARCHY_TABLE$"       5.984 KB       1 rows
 . . exported "WMSYS"."WM$VERSION_TABLE$"                     0 KB       0 rows
 . . exported "WMSYS"."WM$VT_ERRORS_TABLE$"                   0 KB       0 rows
 . . exported "WMSYS"."WM$WORKSPACES_TABLE$"              12.10 KB       1 rows
 . . exported "WMSYS"."WM$WORKSPACE_PRIV_TABLE$"          7.054 KB      10 rows
 . . exported "WMSYS"."WM$WORKSPACE_SAVEPOINTS_TABLE$"        0 KB       0 rows
 . . exported "LBACSYS"."OLS$AUDIT"                           0 KB       0 rows
 . . exported "LBACSYS"."OLS$COMPARTMENTS"                    0 KB       0 rows
 . . exported "LBACSYS"."OLS$DIP_DEBUG"                       0 KB       0 rows
 . . exported "LBACSYS"."OLS$GROUPS"                          0 KB       0 rows
 . . exported "LBACSYS"."OLS$LAB"                             0 KB       0 rows
 . . exported "LBACSYS"."OLS$LEVELS"                          0 KB       0 rows
 . . exported "LBACSYS"."OLS$POL"                             0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLICY_ADMIN"                    0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLS"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$POLT"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROFILE"                         0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROFILES"                        0 KB       0 rows
 . . exported "LBACSYS"."OLS$PROG"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$SESSINFO"                        0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER"                            0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_COMPARTMENTS"               0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_GROUPS"                     0 KB       0 rows
 . . exported "LBACSYS"."OLS$USER_LEVELS"                     0 KB       0 rows
 . . exported "SYS"."AUD$"                                    0 KB       0 rows
 . . exported "SYS"."DAM_CLEANUP_EVENTS$"                     0 KB       0 rows
 . . exported "SYS"."DAM_CLEANUP_JOBS$"                       0 KB       0 rows
 . . exported "SYS"."TSDP_ASSOCIATION$"                       0 KB       0 rows
 . . exported "SYS"."TSDP_CONDITION$"                         0 KB       0 rows
 . . exported "SYS"."TSDP_FEATURE_POLICY$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_PROTECTION$"                        0 KB       0 rows
 . . exported "SYS"."TSDP_SENSITIVE_DATA$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_SENSITIVE_TYPE$"                    0 KB       0 rows
 . . exported "SYS"."TSDP_SOURCE$"                            0 KB       0 rows
 . . exported "SYSTEM"."REDO_LOG"                             0 KB       0 rows
 . . exported "SYSTEM"."SCHEDULER_JOB_ARGS"                   0 KB       0 rows
 . . exported "SYSTEM"."SCHEDULER_PROGRAM_ARGS"           9.515 KB      12 rows
 . . exported "SYS"."AUDTAB$TBS$FOR_EXPORT"               5.953 KB       2 rows
 . . exported "SYS"."DBA_SENSITIVE_DATA"                      0 KB       0 rows
 . . exported "SYS"."DBA_TSDP_POLICY_PROTECTION"              0 KB       0 rows
 . . exported "SYS"."FGA_LOG$FOR_EXPORT"                      0 KB       0 rows
 . . exported "SYS"."NACL$_ACE_EXP"                       9.929 KB       1 rows
 . . exported "SYS"."NACL$_HOST_EXP"                      6.914 KB       1 rows
 . . exported "SYS"."NACL$_WALLET_EXP"                        0 KB       0 rows
 . . exported "WMSYS"."WM$EXP_MAP"                        7.718 KB       3 rows
 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
 
 Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
   /home/oracle/tts.dmp
 
 Datafiles required for transportable tablespace TS_MUFALANI_01:
   /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani01.dbf
 Datafiles required for transportable tablespace TS_MUFALANI_02:
   /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani02.dbf
 Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Fri Sep 13 21:30:32 2019 elapsed 0 00:13:10

 

As we can see, the metadata and some data were exported, and at the end of the log Oracle also listed the datafiles required to transport and plug into the other database.
It’s time to copy the datafiles to the PDB in the destination database; in our case it is the same CDB, but it could be a different database if you want.

 

SQL> !cp /u01/app/oracle/oradata/cdb1/pdb1/ts_mufalani* /u01/app/oracle/oradata/cdb1/pdb2/

On the destination PDB, we need to create a directory pointing to the same path where we put the dump files, so we just use the same command to create the HOME directory inside PDB2 as we used on PDB1, with the same path, and grant it to the SYSTEM user.
 
SQL> alter session set container = PDB2;
 Session altered.

SQL> create or replace directory home as '/home/oracle' ;
 Directory created.

SQL> grant read, write on directory home to system;
 Grant succeeded.
As the setup is done on pdb2, we can import the metadata of the plugged datafiles. We must use the parameter transport_datafiles, with the files enclosed in single quotes (') and separated by commas (,), as shown below. In the output we will see some errors related to the APEX component; please ignore them, they will not cause any issue in our procedure. Keep in mind this is a VM for test purposes; in a real environment you may want to remove APEX before executing this procedure. I have a blog note about it, please check it.
 
SQL> !impdp system/oracle_4U@vm2:1521/PDB2 full=Y dumpfile=tts.dmp directory=home transport_datafiles='/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf','/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf' logfile=import.log
 Import: Release 12.1.0.2.0 - Production on Fri Sep 13 21:34:50 2019
 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
 Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/@vm2:1521/PDB2 full=Y dumpfile=tts.dmp directory=home transport_datafiles=/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf,/u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf logfile=import.log 
 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
 Processing object type DATABASE_EXPORT/TABLESPACE
 ORA-31684: Object type TABLESPACE:"TEMP" already exists
 Processing object type DATABASE_EXPORT/PROFILE
 Processing object type DATABASE_EXPORT/SYS_USER/USER
 Processing object type DATABASE_EXPORT/SCHEMA/USER
 Processing object type DATABASE_EXPORT/RADM_FPTM
 Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
 Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
 Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
 Processing object type DATABASE_EXPORT/RESOURCE_COST
 Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
 Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
 ORA-31684: Object type DIRECTORY:"HOME" already exists
 Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 . . imported "SYS"."KU$_EXPORT_USER_MAP"                 6.109 KB      38 rows
 Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
 . . imported "ORDDATA"."ORDDCM_DOCS_TRANSIENT"           252.9 KB       9 rows
 . . imported "WMSYS"."E$CONSTRAINTS_TABLE$"                  0 KB       0 rows
 . . imported "WMSYS"."E$LOCKROWS_INFO$"                      0 KB       0 rows
 . . imported "WMSYS"."E$UDTRIG_INFO$"                        0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CONFIG_PARAM$"           6.531 KB      14 rows
 . . imported "SYS"."DP$TSDP_PARAMETER$"                  5.953 KB       1 rows
 . . imported "SYS"."DP$TSDP_POLICY$"                     5.921 KB       1 rows
 . . imported "SYS"."DP$TSDP_SUBPOL$"                     6.328 KB       1 rows
 . . imported "SYSTEM"."REDO_DB_TMP"                      25.59 KB       1 rows
 . . imported "WMSYS"."E$BATCH_COMPRESSIBLE_TABLES$"          0 KB       0 rows
 . . imported "WMSYS"."E$CONS_COLUMNS$"                       0 KB       0 rows
 . . imported "WMSYS"."E$ENV_VARS$"                       6.015 KB       3 rows
 . . imported "WMSYS"."E$EVENTS_INFO$"                    5.812 KB      12 rows
 . . imported "WMSYS"."E$HINT_TABLE$"                     9.453 KB      75 rows
 . . imported "WMSYS"."E$MODIFIED_TABLES$"                    0 KB       0 rows
 . . imported "WMSYS"."E$MP_GRAPH_WORKSPACES_TABLE$"          0 KB       0 rows
 . . imported "WMSYS"."E$MP_PARENT_WORKSPACES_TABLE$"         0 KB       0 rows
 . . imported "WMSYS"."E$NESTED_COLUMNS_TABLE$"               0 KB       0 rows
 . . imported "WMSYS"."E$NEXTVER_TABLE$"                  6.375 KB       1 rows
 . . imported "WMSYS"."E$REMOVED_WORKSPACES_TABLE$"           0 KB       0 rows
 . . imported "WMSYS"."E$RESOLVE_WORKSPACES_TABLE$"           0 KB       0 rows
 . . imported "WMSYS"."E$RIC_LOCKING_TABLE$"                  0 KB       0 rows
 . . imported "WMSYS"."E$RIC_TABLE$"                          0 KB       0 rows
 . . imported "WMSYS"."E$RIC_TRIGGERS_TABLE$"                 0 KB       0 rows
 . . imported "WMSYS"."E$UDTRIG_DISPATCH_PROCS$"              0 KB       0 rows
 . . imported "WMSYS"."E$VERSION_HIERARCHY_TABLE$"        5.984 KB       1 rows
 . . imported "WMSYS"."E$VERSION_TABLE$"                      0 KB       0 rows
 . . imported "WMSYS"."E$VT_ERRORS_TABLE$"                    0 KB       0 rows
 . . imported "WMSYS"."E$WORKSPACES_TABLE$"               12.10 KB       1 rows
 . . imported "WMSYS"."E$WORKSPACE_PRIV_TABLE$"           7.054 KB      10 rows
 . . imported "WMSYS"."E$WORKSPACE_SAVEPOINTS_TABLE$"         0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$AUDIT"                    0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$COMPARTMENTS"             0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$GROUPS"                   0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$LAB"                      0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$LEVELS"                   0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POL"                      0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POLS"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$POLT"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$PROFILE"                  0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$PROG"                     0 KB       0 rows
 . . imported "LBACSYS"."OLS_DP$OLS$USER"                     0 KB       0 rows
 . . imported "SYS"."AMGT$DP$AUD$"                            0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CLEANUP_EVENTS$"             0 KB       0 rows
 . . imported "SYS"."AMGT$DP$DAM_CLEANUP_JOBS$"               0 KB       0 rows
 . . imported "SYS"."DP$TSDP_ASSOCIATION$"                    0 KB       0 rows
 . . imported "SYS"."DP$TSDP_CONDITION$"                      0 KB       0 rows
 . . imported "SYS"."DP$TSDP_FEATURE_POLICY$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_PROTECTION$"                     0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SENSITIVE_DATA$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SENSITIVE_TYPE$"                 0 KB       0 rows
 . . imported "SYS"."DP$TSDP_SOURCE$"                         0 KB       0 rows
 . . imported "SYSTEM"."REDO_LOG_TMP"                         0 KB       0 rows
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
 . . imported "SYSTEM"."SCHEDULER_JOB_ARGS_TMP"               0 KB       0 rows
 . . imported "SYSTEM"."SCHEDULER_PROGRAM_ARGS_TMP"       9.515 KB      12 rows
 . . imported "SYS"."AMGT$DP$AUDTAB$TBS$FOR_EXPORT"       5.953 KB       2 rows
 . . imported "SYS"."DP$DBA_SENSITIVE_DATA"                   0 KB       0 rows
 . . imported "SYS"."DP$DBA_TSDP_POLICY_PROTECTION"           0 KB       0 rows
 . . imported "SYS"."AMGT$DP$FGA_LOG$FOR_EXPORT"              0 KB       0 rows
 . . imported "SYS"."NACL$_ACE_IMP"                       9.929 KB       1 rows
 . . imported "SYS"."NACL$_HOST_IMP"                      6.914 KB       1 rows
 . . imported "SYS"."NACL$_WALLET_IMP"                        0 KB       0 rows
 . . imported "WMSYS"."E$EXP_MAP"                         7.718 KB       3 rows
 Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
 Processing object type DATABASE_EXPORT/STATISTICS/MARKER
 Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
 Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_PURGE_SESSIONS" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_PURGE_SESSIONS" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_PURGE_SESSIONS"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_CACHE.PURGE_SESSIONS'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.179792000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=> 
 'FREQ
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_MAIL_QUEUE" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_MAIL_QUEUE" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_MAIL_QUEUE"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_MAIL.PUSH_QUEUE_IMMEDIATE'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.239709000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=> 
 'FREQ=MINUTE
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_WS_NOTIFICATIONS"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_WORKSHEET_API.DO_NOTIFY'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.244043000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval
 ORA-39083: Object type PROCOBJ:"APEX_040200"."ORACLE_APEX_DAILY_MAINTENANCE" failed to create with error:
 ORA-27477: "APEX_040200"."ORACLE_APEX_DAILY_MAINTENANCE" already exists
 Failing sql is:
 BEGIN 
 dbms_scheduler.create_job('"ORACLE_APEX_DAILY_MAINTENANCE"',
 job_type=>'STORED_PROCEDURE', job_action=>
 'WWV_FLOW_MAINT.DAILY_MAINTENANCE'
 , number_of_arguments=>0,
 start_date=>TO_TIMESTAMP_TZ('03-SEP-2019 10.15.30.248433000 PM +02:00','DD-MON-RRRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_inter
 Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
 Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
 Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 6 error(s) at Fri Sep 13 21:40:12 2019 elapsed 0 00:05:16

After the import is done, we can check the data on the destination PDB, PDB2.

SQL> alter session set container =pdb2;
 Session altered.

SQL> select file_name from dba_data_files;

 FILE_NAME
--------------------------------------------------------
 /u01/app/oracle/oradata/cdb1/pdb2/sysaux01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/system01.dbf
 /u01/app/oracle/oradata/cdb1/pdb2/ts_mufalani02.dbf
 
SQL> select count(1) from mufalani.test1;

COUNT(1)
------------------------
 38

SQL> select count(1) from mufalani.test2;
 COUNT(1)
------------------------
42
 
 SQL> show con_name;

 CON_NAME
------------------------
 PDB2
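
One follow-up worth remembering: transported tablespaces are plugged in read only, so once you are happy with the result you will probably want to make them writable again on PDB2:

SQL> alter tablespace ts_mufalani_01 read write;
SQL> alter tablespace ts_mufalani_02 read write;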
 
So, this is one of the most used techniques to migrate databases to Oracle 12cR1. You can find more info in the Oracle documentation:
https://docs.oracle.com/database/121/ADMIN/transport.htm#GUID-1901E9C3-8FCE-4D4E-AB65-34D703474E52
and also on Mike Dietrich’s blog – https://mikedietrichde.com/tag/transportable-tablespaces/

 

This article was written by André Ontalba and me.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and remain useful.”

 


Exadata – Advanced Intrusion Detection Environment (AIDE)

A few days ago we upgraded an Exadata environment to the 19c version. One of the new features, named AIDE – Advanced Intrusion Detection Environment, was introduced in this release of the Exadata software and comes pre-configured. This feature is really awesome and caught my attention, because it tracks whether system files were modified and helps the DMA (Database Machine Administrator) find issues and security breaches in their environments. AIDE runs daily crontab jobs and monitors whether files change in specific directories. If some files change, it raises an alert and writes logs to /var/log/aide/aide.log.
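
A quick way to see whether the daily job is active on a given node is the status option of the utility described next (output omitted here):

[root@myexa01 ~]# /opt/oracle.SupportTools/exadataAIDE -s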

 

From the Oracle documentation:
exadataAIDE Syntax
The utility is located at /opt/oracle.SupportTools/exadataAIDE.
exadataAIDE [-s|-status] [-e|enable] [-d|disable] [-u|-update] [-h|help]
Description of syntax options:
  • -s[tatus] : Print the current status of the AIDE daily cron job
  • -e[nable] : Enable the AIDE daily cron job
  • -d[isable] : Disable the AIDE daily cron job
  • -u[pdate] : Update the AIDE database metadata and run the daily scan
  • -h[elp] : Print the command syntax and help information
Usage examples:
  • Get the current status of the AIDE cron job: exadataAIDE -status
  • Disable the daily AIDE scan: exadataAIDE -disable
  • Enable the daily AIDE scan: exadataAIDE -enable
  • Update the AIDE database after making changes to the system: exadataAIDE -update
Let’s take a look at the “warning file” generated by AIDE.
 [root@myexa01 ~]# cat /var/log/aide/aide.log
 AIDE 0.15.1 found differences between database and filesystem!!
 Start timestamp: 2019-07-21 19:57:20
 Summary:
   Total number of files:        55788
   Added files:                  0
   Removed files:                0
   Changed files:                2
 
 Changed files:
 changed: /home/oracle/monitor/check_compliance
 changed: /home/oracle/monitor/check_oracle
 
 Detailed information about changes:
 File: /home/oracle/monitor/check_listener
  SHA256   : AnuG4ldrP0yB15r/3nOGN+nrnW18391+ , wqPj5hnOIl1HGK7f85/E+0tWGYvEKbH5
 File: /home/oracle/monitor/check_oracle
  SHA256   : t6zk+jufuBfia/YCNHS1WqJt/GPW8xSX , Xk3ZkzP/YpT2jbKvjORsRF3TfoCteY8Z 
After that alert, for our environment we needed to add an exception for a specific directory at the end of the file:
[root@myexa01 ~]# vi /etc/aide.conf

#Add one Exception for monitoring directory
!/home/oracle/monitor/
Then we need to rebuild the AIDE database; the DB is located at /var/lib/aide/aide.db.gz
[root@myexa01 ~]# /opt/oracle.SupportTools/exadataAIDE -u
 AIDE: database update request accepted.
Then we need to clear the alerts raised. To clean them up, we use the dbmcli utility:
[root@myexa01 ~]# dbmcli
 DBMCLI> LIST ALERTHISTORY
          1_1     2019-07-20T18:24:29+02:00       warning         "Advanced Intrusion Detection Environment (AIDE) detected potential changes to software on this system. The changes are in /var/log/aide/aide.log "
          1_2     2019-07-22T09:14:55+02:00       clear           "Advanced Intrusion Detection Environment (AIDE) violation has been cleared."
 
DBMCLI> drop alerthistory 1_1, 1_2
 Alert 1_1 successfully dropped
 Alert 1_2 successfully dropped
To learn more about this security framework on Exadata, please refer to:
https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmsq/exadata-security-practices.html#GUID-74FF1D0C-59F2-4F02-B648-34048AC02C31

 

 

 

All the Best,
Rodrigo Mufalani

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful to the community.”


Tuning impdp, generate less redo with transform=disable_archive_logging:Y
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 5 years ago Comments: 0

Tuning impdp, generate less redo with transform=disable_archive_logging:Y

Every single DBA in this world knows how boring it is to load huge amounts of data into databases using dump files; it is more boring than watching grass grow. A huge part of this processing time is due to redo/archivelog generation.
 
In Oracle 12cR1, Oracle introduced this very nice feature to allow DBAs to import data bypassing redo log generation (nologging mode). Note that if your database is running with the force logging option it will not take effect, and it is strongly recommended to take a physical backup when the import of the database is finished.
 
Let’s set up our test environment. In this case I use 18c, but you can use 12cR1 as well to execute this procedure.
SQL> create directory home as '/home/oracle' ;

Directory created.

SQL> create user c##mufalani identified by welcome1;

User created.

SQL> grant dba to c##mufalani;

Grant succeeded.
Now, log in as this user, create a table and load some test data into it.
-- Create some data into one table
SQL> conn c##mufalani/welcome1
Connected.
SQL>
SQL>
SQL> create table test as select * from cdb_objects;

Table created.

SQL> insert into test (select * from test);

72897 rows created.

SQL> /

145794 rows created.

SQL> /

291588 rows created.

SQL> /

583176 rows created.

SQL> /

1166352 rows created.

SQL> commit;

Commit complete.

SQL>
-- Let’s check the size of the table (segment)
SQL> col SEGMENT_NAME form a12
SQL> select bytes/1024/1024 mb, segment_name, segment_type from user_segments;

        MB SEGMENT_NAME SEGMENT_TYPE
---------- ------------ ------------------
       375 TEST         TABLE
-- Now, we are generating the dumpfile to load and measure the redo generation
[oracle@ora18c3 ~]$ expdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=mufalani.log schemas="C##MUFALANI"

Export: Release 18.0.0.0.0 - Production on Sun Jul 7 11:52:38 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Starting "SYS"."SYS_EXPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=mufalani.log schemas=C##MUFALANI
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/mufalani.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Jul 7 11:53:32 2019 elapsed 0 00:00:49
-- Verifying the size of the dumpfile
[oracle@ora18c3 ~]$ ls -ltrh mufalani.*
-rw-r-----. 1 oracle dba 325M Jul 7 11:53 mufalani.dmp
-rw-r--r--. 1 oracle dba 1.3K Jul 7 11:53 mufalani.log
-- So, now we will check the current value of redo generation in v$sysstat before the import, to compare with the value after the import
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME MB
---------------- ----------
redo size 837.06
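As a side note, instead of writing the baseline down and subtracting by hand at the end, SQL*Plus substitution variables can capture it; a small sketch of that idea (my own addition, not part of the original test):

SQL> column mb new_value redo_before
SQL> select value/1024/1024 mb from v$sysstat where name = 'redo size';
-- ... run the impdp job from the OS prompt at this point ...
SQL> select value/1024/1024 - &redo_before as redo_generated_mb
  2  from v$sysstat where name = 'redo size';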
-- Then run the import in the regular fashion, and see how much redo will be generated
[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas="C##MUFALANI"

Import: Release 18.0.0.0.0 - Production on Sun Jul 7 12:01:12 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas=C##MUFALANI
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at Sun Jul 7 12:01:49 2019 elapsed 0 00:00:36
-- Check the redo generated again
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME MB
---------------- ----------
redo size 1215.03
As we can see, to import this table in regular mode, Oracle generated about 378 MB of redo. Let’s drop the table and repeat the import using this feature to cut redo generation.
SQL> drop table c##mufalani.test purge;

Table dropped.
 
Then, I will run the import again using the parameter transform=disable_archive_logging:Y and check the redo generation afterwards.
[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas="C##MUFALANI" transform=disable_archive_logging:Y

Import: Release 18.0.0.0.0 - Production on Sun Jul 7 12:14:56 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas=C##MUFALANI transform=disable_archive_logging:Y
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at Sun Jul 7 12:15:06 2019 elapsed 0 00:00:09
-- Check the redo size generation again
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;
NAME MB
---------------- ----------
redo size 1218.01
As you can verify, the import time dropped from 36s to 9s and the redo generated was just about 3 MB, very cool. Just to reinforce the idea: my database is not running in force logging mode. Note that the database is in archivelog mode, but not in force logging; this reduction in redo log generation happens because Oracle executes the import in nologging mode. With force logging, Oracle will not bypass redo log generation, so the time and effort to execute the import will increase.

 

 

SQL> select log_mode, FORCE_LOGGING, name from v$database;

LOG_MODE     FORCE_LOGGING                           NAME
------------ --------------------------------------- ---------
ARCHIVELOG   NO                                      DB01
-- Let’s check the logging mode of the table after the import is finished. As you can see, it is LOGGING, which is the normal mode (generates redo).
SQL> COL OWNER FORM A12
SQL> COL TABLE_NAME FORM A12
SQL> COL LOGGING FORM A10
SQL> select owner, table_name, logging from dba_tables where owner='C##MUFALANI' ;

OWNER        TABLE_NAME   LOGGING
------------ ------------ ----------
C##MUFALANI  TEST         YES
This is a very useful feature to load huge amounts of data into the database, but importing data with this transform parameter leads to unrecoverable transactions, and you must execute a full backup right after the import finishes to avoid corrupt blocks in case of recovery. I will discuss this situation in another article soon.
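Since the rows loaded this way are not protected by redo until they are backed up, a minimal RMAN sketch right after the import could be as simple as the following (a full database backup is the simplest option; backing up only the tablespaces touched by the import would also remove the exposure):

[oracle@ora18c3 ~]$ rman target /

RMAN> backup database plus archivelog;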
I hope this feature, introduced in 12.1, can help you to speed up your imports.
All the best,
Rodrigo Mufalani
 
 
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful to the community.”

Login failed due to ORA-28030: ORA-28030: Server encountered problems accessing LDAP directory service
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 5 years ago Comments: 0

Login failed due to ORA-28030: ORA-28030: Server encountered problems accessing LDAP directory service

 
 
Today I noticed a lot of ORA-28030 messages in the alert.log of an 18c database in my environment, previously configured to authenticate against OID (Oracle Internet Directory). Even after the removal of that configuration, one parameter was still set, and that was the reason for the messages.
Also, I checked the listener.log and no failures had occurred there.
 
I tried to find useful information regarding this error, but only found issues in the opposite direction, from people configuring authentication on OID; our issue was caused by removing the OID configuration, but not completely.
 
So, I decided to write this small post about the issue and how to solve it.
 
One of the steps to configure OID is setting the parameter LDAP_DIRECTORY_ACCESS = PASSWORD, so one of my colleagues had done that earlier:
 
alter system set LDAP_DIRECTORY_ACCESS = 'PASSWORD' scope=both sid='*';
-- Note that this parameter is dynamic
 
To solve the issue, I set the parameter back to its default:
alter system set LDAP_DIRECTORY_ACCESS = 'NONE' scope=both sid='*';
Immediately after the command, there were no more “Login failed due to ORA-28030: ORA-28030: Server encountered problems accessing LDAP directory service” messages.
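If you want to confirm the value that is currently in effect before and after the change, a quick check from SQL*Plus is enough:

SQL> show parameter ldap_directory_access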
 
I hope this helps you to avoid these messages when you are removing OID authentication from a database.
 
You can also find more useful information on
 
Deinstall
https://docs.oracle.com/en/middleware/lifecycle/12.2.1.3/inoim/uninstalling-or-reinstalling-product.html#GUID-3FC6A4E1-4684-4BAC-9760-DA308A114FE7
Install
https://docs.oracle.com/en/middleware/lifecycle/12.2.1.3/inoim/installing-product-software.html#GUID-5D47117E-3983-4BED-BADB-CB7DDD63DB34
 
All the Best,
Rodrigo Mufalani

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful to the community.”

 


Backing up and restoring your ORACLE HOME
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 6 years ago Comments: 0

Backing up and restoring your ORACLE HOME

I wrote this small how-to with the purpose of copying a patched Oracle Home between servers. But you can also use it as a “best practice” and keep a regular backup of your Oracle Home in case you, or someone else on your team, accidentally removes part of, or all of, its files and folders.

I will reproduce the steps on my virtual machine. I hope it is never your case on a production database server, but nobody knows when accidents happen, and you may need this procedure to recover and bring your database back online afterwards.

 

As you can see, my database is up and running in my Oracle 18c home.
[oracle@ora18c3 ~]$ sqlplus "/as sysdba"




SQL*Plus: Release 18.0.0.0.0 - Production on Thu Jun 13 18:44:55 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB2 READ WRITE NO
SQL>
SQL>
The location of my ORACLE HOME is listed for this database on my file /etc/oratab

 

[oracle@ora18c3 ~]$ cat /etc/oratab
#Backup file is /u01/app/18.0.0/grid/srvm/admin/oratab.bak.ora18c3 line added by Agent
#
# This file is used by ORACLE utilities. It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.

# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
db02:/u01/app/oracle/product/18.0.0/db_1:N # line added by Agent
+ASM:/u01/app/18.0.0/grid:N # line added by Agent
db01:/u01/app/oracle/product/18.0.0/db_1:N # line added by Agent
Now, let’s take a backup of my Oracle Home; do this procedure as the root user. You can back up the OH online, no problem at all on the database side with this step.

 

[root@ora18c3 db_1]# tar -cf /media/bkp/dbhome18c_installed.tar .
[root@ora18c3 db_1]#

[root@ora18c3 db_1]# ls -ltrh /media/bkp/*home18c*
-rwxrwx---. 1 root vboxsf 9.3G Jun 13 18:58 /media/bkp/dbhome18c_installed.tar
[root@ora18c3 db_1]#
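Before relying on that file, it is a good idea to confirm the archive is readable and contains what you expect; a quick check could be something like this (output omitted):

[root@ora18c3 db_1]# tar -tvf /media/bkp/dbhome18c_installed.tar | head -5
[root@ora18c3 db_1]# tar -tvf /media/bkp/dbhome18c_installed.tar | wc -l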
Now, I will cause some damage to my Oracle Home so that we have a reason to restore the backup of it.
[oracle@ora18c3 db_1]$ ps -ef | grep smon
oracle 11124 1 0 18:29 ? 00:00:00 asm_smon_+ASM
oracle 20849 1 0 18:41 ? 00:00:00 ora_smon_db01
oracle 31126 19386 0 18:59 pts/0 00:00:00 grep --color=auto smon
I will remove all files in my Oracle Home…

 

[oracle@ora18c3 db_1]$ pwd
/u01/app/oracle/product/18.0.0/db_1
[oracle@ora18c3 db_1]$

[oracle@ora18c3 db_1]$ du -sh .
9.4G .
[oracle@ora18c3 db_1]$

[oracle@ora18c3 db_1]$ rm -rf *
[oracle@ora18c3 db_1]$
[oracle@ora18c3 db_1]$

[oracle@ora18c3 db_1]$ ls -ltrh
total 0
[oracle@ora18c3 db_1]$ pwd
/u01/app/oracle/product/18.0.0/db_1
[oracle@ora18c3 db_1]$
It can happen to anyone: just work in the wrong directory and issue the command rm -rf…

 

[oracle@ora18c3 db_1]$ sqlplus "/as sysdba"
-bash: /u01/app/oracle/product/18.0.0/db_1/bin/sqlplus: No such file or directory
[oracle@ora18c3 db_1]$
Now my sqlplus and all utilities were gone!!!

 

[oracle@ora18c3 db_1]$ ps -ef | grep smon
oracle 11124 1 0 18:29 ? 00:00:00 asm_smon_+ASM
oracle 32033 19386 0 19:01 pts/0 00:00:00 grep --color=auto smon
[oracle@ora18c3 db_1]$
[oracle@ora18c3 db_1]$
As you can see, the database is down… In the alert log I can see the messages below:

 

2019-06-13T19:01:18.456507+02:00
Errors in file /u01/app/oracle/diag/rdbms/db01/db01/trace/db01_psp0_20809.trc:
ORA-07274: spdcr: access error, access to oracle denied.
Linux-x86_64 Error: 2: No such file or directory
Errors in file /u01/app/oracle/diag/rdbms/db01/db01/trace/db01_psp0_20809.trc (incident=45633) (PDBNAME=CDB$ROOT):
ORA-7274 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/db01/db01/incident/incdir_45633/db01_psp0_20809_i45633.trc
2019-06-13T19:01:19.845668+02:00
USER (ospid: 20809): terminating the instance due to ORA error 7274
2019-06-13T19:01:19.889661+02:00
System state dump requested by (instance=1, osid=20809 (PSP0)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/db01/db01/trace/db01_diag_20825_20190613130119.trc
2019-06-13T19:01:26.111518+02:00
Instance terminated by USER, pid = 20809
As the root user again, let’s restore our backed-up Oracle Home to the correct path using the commands below.

 

[root@ora18c3 ~]# cd /u01/app/oracle/product/18.0.0/db_1/
[root@ora18c3 db_1]#
[root@ora18c3 db_1]#
[root@ora18c3 db_1]# tar -xf /media/bkp/dbhome18c_installed.tar .
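Since the extraction runs as root, it is worth confirming afterwards that the ownership and the permission bits on the restored files came back from the archive as expected (GNU tar preserves them when run as root); for example:

[root@ora18c3 db_1]# ls -ld /u01/app/oracle/product/18.0.0/db_1
[root@ora18c3 db_1]# ls -l /u01/app/oracle/product/18.0.0/db_1/bin/oracle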
 
Now, as the oracle user, let’s check the size of the folder.

 

[oracle@ora18c3 ~]$ cd /u01/app/oracle/product/18.0.0/db_1/
[oracle@ora18c3 db_1]$ du -sh .
9.4G .
As you can see, it is the same size as the original one, 9.4 GB, which is good for us. To guarantee that everything is fine, I will relink the Oracle binaries before trying to open my database again.

 

[oracle@ora18c3 bin]$ pwd
/u01/app/oracle/product/18.0.0/db_1/bin
[oracle@ora18c3 bin]$ relink all
writing relink log to: /u01/app/oracle/product/18.0.0/db_1/install/relink_2019-06-13_19-17-53PM.log


[oracle@ora18c3 bin]$ cat /u01/app/oracle/product/18.0.0/db_1/install/relink_2019-06-13_19-17-53PM.log | grep -i "Code 1\|Fatal\|Error"
[oracle@ora18c3 bin]$
[oracle@ora18c3 bin]$
Really awesome!!! No errors in the relink log file, which means we successfully restored our original Oracle Home, and now we can start up our database again.

 

[oracle@ora18c3 ~]$ . oraenv
ORACLE_SID = [db01] ? db01
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora18c3 ~]$ sqlplus "/as sysdba"




SQL*Plus: Release 18.0.0.0.0 - Production on Thu Jun 13 19:25:41 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 2432694552 bytes
Fixed Size 8898840 bytes
Variable Size 654311424 bytes
Database Buffers 1761607680 bytes
Redo Buffers 7876608 bytes
Database mounted.
Database opened.
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB2 READ WRITE NO
SQL>


I hope this article helps you to recover from the situation I described in the beginning, and, please, include a backup of your Oracle Home in your backup schedules; otherwise you will need to reinstall Oracle and maybe reapply all the patches that were already applied to your Oracle Home before the crash.
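If you want to automate it, a simple /etc/crontab entry like the sketch below could take a weekly copy (the schedule, destination and naming here are just assumptions, adapt them to your environment and clean up old archives as needed):

# weekly Oracle Home backup, Sundays at 02:00
0 2 * * 0 root cd /u01/app/oracle/product/18.0.0/db_1 && tar -cf /media/bkp/dbhome18c_$(date +\%Y\%m\%d).tar .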

 

All the Best,
Rodrigo Mufalani

 

 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful to the community.”


Query the last patch applied on database 11g, 12cR1, 12cR2 and 18c
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 6 years ago Comments: 0

Query the last patch applied on database 11g, 12cR1, 12cR2 and 18c

 
 
Part of the DBA routine is to patch databases quarterly. What does that mean?
 
It means that Oracle is software and, as hard as it is to hear, the Oracle database has bugs and security vulnerabilities, so it is a good idea to patch your production databases regularly with RUs and RURs. You can find more information about this change in nomenclature, which happened in 12cR2, at this link on the master’s blog: https://mikedietrichde.com/2017/11/07/ru-rur-recommendations-facts/
So, a patch usually comes in two different parts. The first part is the binaries (libraries, programs, etc.), and you must be familiar with the opatch utility to apply it to the Oracle Home. The second part is the data dictionary corrections: since Oracle 12cR1 we use datapatch for this, a tool I really like to use. The datapatch utility is part of OPatch and its purpose is to apply scripts against the Oracle data dictionary.
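Just to illustrate the two parts, the typical sequence looks roughly like this (a sketch only; the patch directory /tmp/28090523 is just an example, using the RU number that appears in the output later in this post):

[oracle@ora18c3 ~]$ cd /tmp/28090523
[oracle@ora18c3 28090523]$ $ORACLE_HOME/OPatch/opatch apply
# then, with the database (and all PDBs) open, apply the dictionary part:
[oracle@ora18c3 ~]$ cd $ORACLE_HOME/OPatch
[oracle@ora18c3 OPatch]$ ./datapatch -verbose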
In this article I will show you how to query the correct views on the database to find this information for 11g and for the 12c family of databases.
 
So, on version 11g, after applying a PSU patch to the database and running the script catbundle.sql, you can query this view with the SYS account to check information about the PSUs applied in the past:

— 11g version

set lin 1000
col action form a12
col version form a40
col comments form a70
col action_date form a20

select comments, action, to_char(action_time,'DD/MM/RR HH24:MI:SS') action_date, version
from registry$history
order by action_date;

COMMENTS                                    ACTION       ACTION_DATE          VERSION
------------------------------------------- ------------ -------------------- ---------------------
Patch applied from 18.1.0.0.0 to 18.3.0.0.0 RU_APPLY     20/03/19 14:36:26    18.0.0.0.0
OJVM RU post-install                        APPLY        20/03/19 14:37:42    18.3.0.0.180717OJVMRU
RAN jvmpsu.sql                              jvmpsu.sql   20/03/19 14:37:42    18.3.0.0.180717OJVMRU
RDBMS_18.3.0.0.0DBRU_LINUX.X64_180627       BOOTSTRAP                         18

 

For version 12cR1 and onwards you will need to use another view and, depending on which version you are running, you can filter by the words PSU or RELEASE in the description column:
 

— For 12c and 18c
set lin 1000
col action form a12
col version form a40
col description form a85
col action_date form a20

select description, action, to_char(action_time,'DD/MM/RR HH24:MI:SS') action_date, ' ' version
from dba_registry_sqlpatch;

DESCRIPTION                                           ACTION       ACTION_DATE          VERSION
----------------------------------------------------- ------------ -------------------- ----------
Database Release Update : 18.3.0.0.180717 (28090523)  APPLY        20/03/19 14:37:46
OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)       APPLY        20/03/19 14:37:46

 

Note: there is no longer a VERSION column; I keep it in the query just to maintain compatibility with some old scripts.
 

To filter (a complete example follows below):
-- 12cR1
where upper(description) like 'DATABASE PSU%';

-- 12cR2 onwards
where upper(description) like 'DATABASE RELEASE%';
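Putting the fragments together for 12cR2/18c, the complete query could look like this (a sketch; ordering by ACTION_TIME makes the most recent patch come last):

select description, action,
       to_char(action_time,'DD/MM/RR HH24:MI:SS') action_date
  from dba_registry_sqlpatch
 where upper(description) like 'DATABASE RELEASE%'
 order by action_time;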

 

I hope this brief note helps you to query and check whether the patch version applied to the binaries matches the patch level of the database.
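For the binary side of that comparison, OPatch can list what is installed in the Oracle Home (output omitted here):

[oracle@ora18c3 ~]$ $ORACLE_HOME/OPatch/opatch lspatches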
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful to the community.”

All the best,
Rodrigo Mufalani
