Great opportunity to upgrade your career
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Great opportunity to upgrade your career

Oracle University is offering 25% off Oracle Learning Subscriptions and career-enhancing Oracle Certification Vouchers.

 

This offer covers training on: 
  • Oracle Cloud (IaaS/PaaS/SaaS)
  • Oracle Database
  • Oracle Systems
  • MySQL
  • Java Programming
  • Oracle Middleware
  • Oracle Applications
  • Oracle Industries
To access this discount, click here.
 
We hope we helped!!!
 
Andre Ontalba and Rodrigo Mufalani

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and remain useful.”

 


Oracle Open World 2019
Category: Cloud,Database,Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Oracle Open World 2019

 
 
On the first day of OOW19 we had big announcements of new services and products.
 
Now, as Larry said, “Oracle Cloud is Always Free for Everyone”.
 
Oracle’s Free Tier program has two components:
  • Always Free services, which provide access to Oracle Cloud services for an unlimited time
  • Free Trial, which provides $300 in credits for 30 days to try additional services and larger shapes
The new Always Free program includes the essentials users need to build and test applications in the cloud: Oracle Autonomous Database, Compute VMs, Block Volumes, Object and Archive Storage, and Load Balancer. Specifications include:
  • 2 Autonomous Databases (Autonomous Data Warehouse or Autonomous Transaction Processing), each with 1 OCPU and 20 GB storage
  • 2 Compute VMs, each with 1/8 OCPU and 1 GB memory
  • 2 Block Volumes, 100 GB total, with up to 5 free backups
  • 10 GB Object Storage, 10 GB Archive Storage, and 50,000 API requests per month
  • 1 Load Balancer, 10 Mbps bandwidth
  • 10 TB Outbound Data Transfer per month
  • 500 million ingestion datapoints and 1 billion datapoints for the Monitoring Service
  • 1 million Notification deliveries per month and 1,000 emails per month
For more information, click here.
Another release was Oracle Autonomous Linux, the “World’s First Autonomous Operating System”.
Introducing Oracle OS Management Service
Oracle Autonomous Linux, in conjunction with Oracle OS Management Service, uses advanced machine learning and autonomous capabilities to deliver unprecedented cost savings, security and availability and frees up critical IT resources to tackle more strategic initiatives.
Eliminate manual OS management — World’s first autonomous operating system in the cloud to deliver automated patching, updates, and tuning without human intervention. Based on a preconfigured Oracle Linux image; automated daily package updates; enhanced OS parameter tuning and OS diagnostics gathering.
Deliver automatic, in-depth protection at all levels — 100 percent hands-off automatic security updates daily to the Linux kernel and key user space libraries. This requires no downtime along with protection from both external attacks and malicious internal users. Known Exploit Detection provides automated alerts if anyone attempts to exploit a vulnerability that has been patched by Oracle.
Provide always-on availability — Includes automated patching and upgrades while the system is running, eliminating unnecessary downtime for users and the system.
For more information, click here.
 
Another announcement eagerly awaited by customers: the new partnership with VMware.
 
For the first time, Oracle will officially support Oracle products running on VMware virtualized environments, and VMware will support VMware workloads running on Oracle Cloud. A win/win for customers!
 

 

 
Running VMware workloads in Oracle Cloud
With this announcement, Oracle becomes a partner in the VMware Cloud Provider Program and Oracle Cloud VMware Solution will be sold by Oracle and its partners. The solution will be based on VMware Cloud Foundation and will deliver a full software-defined data center (SDDC) stack including VMware vSphere, NSX, and vSAN. Through consistent infrastructure and operations, customers will be able to migrate and modernize applications, seamlessly moving workloads between on-premises environments and Oracle Cloud.
Customers will be able to easily use Oracle services, such as Oracle Autonomous Database, Oracle Exadata Cloud Service and Oracle Database Cloud, which run in the same cloud data centers, on the same networks, with a consistent portal and APIs. They will also be able to leverage Oracle’s rapidly expanding footprint of global regions to scale globally without needing to establish their own data centers. Oracle will provide administrative access to the underlying physical servers, enabling a level of control previously only possible on premises, and customers will be able to use VMware vCenter to manage both their on-premises clusters and Oracle Cloud-based SDDCs through a single pane of glass. Oracle will also provide first-line technical support for this solution.
To learn more about the offering visit: https://www.oracle.com/cloud/VMware
 
We also had the Exadata X8M release.
The Exadata X8M uses Remote Direct Memory Access (RDMA) to let the database access storage server memory directly. And the memory in this case is special: X8M uses Intel Optane DC Persistent Memory modules (NVDIMM, Non-Volatile DIMM, providing PMEM, Persistent Memory) attached directly to the storage servers, and these can be accessed directly from the database using RDMA through the RoCE network. Let’s check the details to see what it is.
Our contributor Fernando Simon posted an article about Exadata X8M a few hours ago, which can be viewed at http://www.fernandosimon.com/blog/exadata-x8m/.
Soon, more details about these new services and products will be available here on the Portal.
 
 Andre Luiz Dutra Ontalba
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and remain useful.”


Generating an export from a Data Guard database using NETWORK_LINK.
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Generating an export from a Data Guard database using NETWORK_LINK

This article explains how we can export from a Data Guard physical standby database using NETWORK_LINK.
 
The Physical Standby database must be opened in “READ ONLY” mode.

Steps to execute to export from Physical Standby Database

Physical Standby Database

— Connect to Physical Standby database and check its status

 

 

[oracle@vm1 admin] sqlplus / as sysdba


SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
PHYSTBY MOUNTED

-- Cancel managed recovery and open database in "READ ONLY" mode.


SQL> alter database recover managed standby database cancel;



SQL> alter database open read only;




-- Verify database status

SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
---------------- ------------
VTABOLDG OPEN


SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY


 

“Non Standby” Database           

On the same machine I created a new instance just to serve as a bridge to perform the procedure.

-- create DB Link, Oracle Directory.


[oracle@vm1 admin] sqlplus / as sysdba


SQL> create database link exp connect to system identified by oracle using 'DG_VTABOL';

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
--------------------
DP

SQL> select db_unique_name from v$database@exp;

DB_UNIQUE_NAME
--------------------
VTABOL

SQL> create directory datapump as '/tmp';
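The 'DG_VTABOL' alias used by the database link must resolve to the Physical Standby through local naming. A minimal tnsnames.ora sketch; the host and service names are assumptions, adjust them to your environment:

```
DG_VTABOL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vm1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = VTABOL))
  )
```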

 


Use the NETWORK_LINK parameter with the database link created above to connect to the Physical Standby database:

 

[oracle@vm1 admin] expdp DIRECTORY=DATAPUMP network_link=EXP DUMPFILE=DG_EXPDP.dmp SCHEMAS=DBA_DISCOVER,ITAMARATI,VENDASEXTERNAS,VTABOL,ABSTRATOS logfile=expdp.log PARALLEL=4



Export: Release 12.1.0.2.0 - Production on Fri Sep 6 17:32:54 2019

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

FLASHBACK automatically enabled to preserve database integrity.

Starting "SYS"."SYS_EXPORT_SCHEMA_02":  sys/******** AS SYSDBA DIRECTORY=DATAPUMP network_link=EXP DUMPFILE=AWS.dmp SCHEMAS=DBA_DISCOVER,ITAMARATI,VENDASEXTERNAS,VTABOL,ABSTRATOS logfile=expdp.log PARALLEL=4

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 84.29 GB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM

Processing object type SCHEMA_EXPORT/DB_LINK

Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE

Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/TABLE/COMMENT

Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC

Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION

Processing object type SCHEMA_EXPORT/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE

Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC

Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION

Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS

Processing object type SCHEMA_EXPORT/VIEW/VIEW

Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT

Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY

Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Processing object type SCHEMA_EXPORT/TABLE/TRIGGER

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW

Processing object type SCHEMA_EXPORT/JOB

Processing object type SCHEMA_EXPORT/REFRESH_GROUP

Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ




Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:

  /tmp/DG_EXPDP.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_02" completed with 0 error(s) at Fri Sep 6 19:55:35 2019 elapsed 0 02:22:29


 

On Physical Standby Database

[oracle@vm1 admin] sqlplus / as sysdba


SQL> shutdown immediate



SQL> startup mount




SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;




SQL> SELECT
       ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
       (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN
       (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
       (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN
       (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
  WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;


    Thread Last Sequence Received Last Sequence Applied Difference

---------- ---------------------- --------------------- ----------

         1                  72153                 72153          0

         2                  67021                 67021          0




SQL>
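As a quicker alternative to the query above, the apply and transport lag can also be read directly from v$dataguard_stats on the standby; a small sketch, not part of the original procedure:

```
SQL> select name, value, time_computed
     from v$dataguard_stats
     where name in ('apply lag', 'transport lag');
```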

 

I hope this helps you!!!

 

Andre Luiz Dutra Ontalba 
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and remain useful.”


Configuring TDE on a PDB with PLUG and UNPLUG Option
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Configuring TDE on a PDB with PLUG and UNPLUG Option

 
This article describes the process of configuring TDE on a CDB, then unplugging PDB1 from the CDB1 instance and plugging it into the CDB2 instance on the same machine with the new name PDB2.

 

1 – Configure TDE Source – CDB (cdb1)
A) Create directory

 

[root@vm1 ~]# mkdir -p /etc/ORACLE/WALLETS/cdb1
[root@vm1 ~]# mkdir -p /etc/ORACLE/WALLETS/cdb2
[root@vm1 ~]# chown -R oracle:oinstall /etc/ORACLE
[root@vm1 ~]# chmod -R 755 /etc/ORACLE
B) Edit sqlnet.ora and configure the Wallet

 

ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=FILE)
(METHOD_DATA=
(DIRECTORY=/etc/ORACLE/WALLETS/$ORACLE_SID/)))
C) Configure Database – cdb1

 

[oracle@vm1 ~]$ . oraenv <<< cdb1
ORACLE_SID = [db01] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vm1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 3 20:58:03 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
SQL>
SQL>
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO


SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/cdb1' IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle container=all;

keystore altered.


SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.


SQL> alter session set container=pdb1;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.

SQL> set linesize 500
SQL> select * from v$encryption_wallet;

WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_OR FULLY_BAC CON_ID
-------------------- -------------------------------------------------- ------------------------------ -------------------- --------- --------- ----------
FILE /etc/ORACLE/WALLETS/cdb1/ OPEN PASSWORD SINGLE NO 0


SQL> create tablespace TESTE
datafile '/u01/app/oracle/oradata/cdb1/pdb1/teste.dbf' size 10m
encryption using 'AES256'
default storage (encrypt);

Tablespace created.
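To confirm that the new tablespace is really encrypted, v$encrypted_tablespaces can be joined to v$tablespace; a quick check, not part of the original procedure:

```
SQL> select t.name, e.encryptionalg, e.encryptedts
     from v$encrypted_tablespaces e, v$tablespace t
     where e.ts# = t.ts#;
```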

D) Export Key from Source – PDB(pdb1)

 


SQL>ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET "oracle" TO '/home/oracle/export.p12' IDENTIFIED BY oracle;

keystore altered.

SQL> !
[oracle@vm1 ~]$ ls -ltr export.p12
-rw-r--r--. 1 oracle oinstall 2612 Sep 3 21:08 export.p12
[oracle@vm1 ~]$ exit
 
E) Unplug and DROP PDB(pdb1)

 

SQL> alter pluggable database close immediate;

Pluggable database altered.

SQL> conn / as sysdba
Connected.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1 keep datafiles;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

2 – Configure TDE Destination CDB(cdb2)

 

[oracle@vm1 ~]$ . oraenv <<< cdb2
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@vm1 ~]$ ls /etc/ORACLE/WALLETS/
cdb1 cdb2
[oracle@vm1 ~]$ sqlplus

SQL*Plus: Release 12.1.0.2.0 Production on Tue Sep 3 21:13:12 2019

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1241513984 bytes
Fixed Size 2923872 bytes
Variable Size 452985504 bytes
Database Buffers 771751936 bytes
Redo Buffers 13852672 bytes
Database mounted.
Database opened.
SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
SQL>
SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/etc/ORACLE/WALLETS/cdb2' IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY oracle WITH BACKUP USING 'emp_key_backup';

keystore altered.

SQL>
SQL>
F) PLUG PDB(pdb2) using Unplugged pdb1

 

SQL> create pluggable database pdb2 as clone using '/home/oracle/pdb1.xml'
2 file_name_convert=('/u01/app/oracle/oradata/cdb1/pdb1/','/u01/app/oracle/oradata/cdb2/pdb2/');

Pluggable database created.

SQL> alter pluggable database pdb2 open;

Warning: PDB altered with errors.
G) IMPORT KEY from Source PDB(pdb1) to Destination PDB(pdb2)

 

SQL> alter session set container=pdb2;

Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY oracle;

keystore altered.

SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET "oracle" FROM '/home/oracle/export.p12' IDENTIFIED BY oracle WITH BACKUP;

keystore altered.

SQL> shut immediate;
Pluggable Database closed.
SQL> startup
Pluggable Database opened.

 

H) Validate PDB is fully integrated with CDB2

 

 

SQL> conn / as sysdba
Connected.

SQL> select message,status from pdb_plug_in_violations where status <> 'RESOLVED';
SQL> /

no rows selected

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 MOUNTED
4 PDB2 READ WRITE NO
SQL>
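It is also worth verifying that the imported key is usable from inside the plugged-in PDB itself; a small sketch of that check:

```
SQL> alter session set container=pdb2;

SQL> select wrl_type, status, wallet_type from v$encryption_wallet;
```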

 

I hope this helps you!!!
 
This article was written by Andre Ontalba and Rodrigo Mufalani
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed so it can reach a generic audience and remain useful.”


Migrate On-Premises Database to Autonomous Database Cloud
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Migrate On-Premises Database to Autonomous Database Cloud

Today we will learn how to migrate an Oracle database (On-Premises) to Autonomous Database Cloud.
 
This environment will showcase a new tool called MV2ADB. After you complete its configuration file, this tool can execute all the steps to export, transport, and import a database to the Oracle Autonomous Database Cloud.

 

INSTALL THE MV2ADB SCRIPT

 

The script that allows easy migration to ADB can be downloaded from My Oracle Support through note 2463574.1.
 
INSTALL THE MV2ADB TOOL AS ROOT USER

 

[root@hol /]# cd binaries/

[root@hol binaries]# ll

total 52532

-rw-r--r--. 1 oracle dba    31216 Jul 13 19:08 mv2adb-2.0.1-40.noarch.rpm

-rw-r--r--. 1 oracle dba 53758240 Jul 13 19:08 oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64.rpm

[root@hol binaries]# rpm -ivh mv2adb-2.0.1-40.noarch.rpm

warning: mv2adb-2.0.1-40.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e7004b4d: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...

   1:mv2adb-2.0.1-40                  ################################# [100%]




MV2ADB has been installed on /opt/mv2adb succesfully!




[root@hol binaries]#

 

Please note that the install script shows the location where the tool has been installed, in this case /opt/mv2adb. We will need this later.

 

INSTALL THE ORACLE INSTANT CLIENT

 

We have already downloaded and unzipped the required files for the Oracle Instant Client. Into the directory /opt/instantclient_19_3 we have unzipped the base Instant Client, the SQL*Plus zipfile and the Tools zipfile. All have been downloaded from OTN.
https://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html
instantclient-basic-linux.x64-19.3.0.0.0dbru.zip
instantclient-tools-linux.x64-19.3.0.0.0dbru.zip
instantclient-sqlplus-linux.x64-19.3.0.0.0dbru.zip
 

 

[root@hol opt]# unzip /binaries/instantclient-basic-linux.x64-19.3.0.0.0dbru.zip
Archive:  /binaries/instantclient-basic-linux.x64-19.3.0.0.0dbru.zip

  inflating: instantclient_19_3/adrci

  inflating: instantclient_19_3/BASIC_LICENSE

  inflating: instantclient_19_3/BASIC_README

  inflating: instantclient_19_3/genezi

  inflating: instantclient_19_3/libclntshcore.so.19.1

    linking: instantclient_19_3/libclntsh.so  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.10.1  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.11.1  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.12.1  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.18.1  -> libclntsh.so.19.1

  inflating: instantclient_19_3/libclntsh.so.19.1

  inflating: instantclient_19_3/libipc1.so

  inflating: instantclient_19_3/libmql1.so

  inflating: instantclient_19_3/libnnz19.so

    linking: instantclient_19_3/libocci.so  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.10.1  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.11.1  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.12.1  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.18.1  -> libocci.so.19.1

  inflating: instantclient_19_3/libocci.so.19.1

  inflating: instantclient_19_3/libociei.so

  inflating: instantclient_19_3/libocijdbc19.so

  inflating: instantclient_19_3/liboramysql19.so

   creating: instantclient_19_3/network/

  inflating: instantclient_19_3/ojdbc8.jar

  inflating: instantclient_19_3/ucp.jar

  inflating: instantclient_19_3/uidrvci

  inflating: instantclient_19_3/xstreams.jar

   creating: instantclient_19_3/network/admin/

  inflating: instantclient_19_3/network/admin/README

finishing deferred symbolic links:

  instantclient_19_3/libclntsh.so -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.10.1 -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.11.1 -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.12.1 -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.18.1 -> libclntsh.so.19.1

  instantclient_19_3/libocci.so -> libocci.so.19.1

  instantclient_19_3/libocci.so.10.1 -> libocci.so.19.1

  instantclient_19_3/libocci.so.11.1 -> libocci.so.19.1

  instantclient_19_3/libocci.so.12.1 -> libocci.so.19.1

  instantclient_19_3/libocci.so.18.1 -> libocci.so.19.1

[root@hol opt]# unzip /binaries/instantclient-tools-linux.x64-19.3.0.0.0dbru.zip

Archive:  /binaries/instantclient-tools-linux.x64-19.3.0.0.0dbru.zip

  inflating: instantclient_19_3/exp

  inflating: instantclient_19_3/expdp

  inflating: instantclient_19_3/imp

  inflating: instantclient_19_3/impdp

  inflating: instantclient_19_3/libnfsodm19.so

  inflating: instantclient_19_3/sqlldr

  inflating: instantclient_19_3/TOOLS_LICENSE

  inflating: instantclient_19_3/TOOLS_README

  inflating: instantclient_19_3/wrc

[root@hol opt]# unzip /binaries/instantclient-sqlplus-linux.x64-19.3.0.0.0dbru.zip

Archive:  /binaries/instantclient-sqlplus-linux.x64-19.3.0.0.0dbru.zip

  inflating: instantclient_19_3/glogin.sql

  inflating: instantclient_19_3/libsqlplusic.so

  inflating: instantclient_19_3/libsqlplus.so

  inflating: instantclient_19_3/sqlplus

  inflating: instantclient_19_3/SQLPLUS_LICENSE

  inflating: instantclient_19_3/SQLPLUS_README

[root@hol opt]#
Change the owner and group of the instantclient folder
[root@hol opt]# chown oracle:dba instantclient_19_3/
[root@hol opt]# ll -l instantclient_19_3/

total 240144

-rwxr-xr-x. 1 root root     41840 Apr 17 08:38 adrci

-r-xr-xr-x. 1 root root      5780 Apr 17 08:38 BASIC_LICENSE

-rw-r--r--. 1 root root      1632 Apr 17 08:38 BASIC_README

-rwxr-xr-x. 1 root root   1018968 Apr 17 08:39 exp

-rwxr-xr-x. 1 root root    220640 Apr 17 08:39 expdp

-rwxr-xr-x. 1 root root     59296 Apr 17 08:38 genezi

-rw-r--r--. 1 root root       342 Apr 17 08:39 glogin.sql

-rwxr-xr-x. 1 root root    502536 Apr 17 08:39 imp

-rwxr-xr-x. 1 root root    232768 Apr 17 08:39 impdp

-rwxr-xr-x. 1 root root   8041608 Apr 17 08:38 libclntshcore.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.10.1 -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.11.1 -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.12.1 -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.18.1 -> libclntsh.so.19.1

-rwxr-xr-x. 1 root root  79961792 Apr 17 08:38 libclntsh.so.19.1

-r-xr-xr-x. 1 root root   3609536 Apr 17 08:38 libipc1.so

-r-xr-xr-x. 1 root root    478432 Apr 17 08:38 libmql1.so

-rwxr-xr-x. 1 root root     60024 Apr 17 08:39 libnfsodm19.so

-rwxr-xr-x. 1 root root   6587832 Apr 17 08:38 libnnz19.so

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.10.1 -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.11.1 -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.12.1 -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.18.1 -> libocci.so.19.1

-rwxr-xr-x. 1 root root   2339896 Apr 17 08:38 libocci.so.19.1

-rwxr-xr-x. 1 root root 130515320 Apr 17 08:38 libociei.so

-r-xr-xr-x. 1 root root    153624 Apr 17 08:38 libocijdbc19.so

-rwxr-xr-x. 1 root root    115976 Apr 17 08:38 liboramysql19.so

-rwxr-xr-x. 1 root root   1660752 Apr 17 08:39 libsqlplusic.so

-rwxr-xr-x. 1 root root   1573416 Apr 17 08:39 libsqlplus.so

drwxr-xr-x. 3 root root        19 Apr 17 08:38 network

-rw-r--r--. 1 root root   4210517 Apr 17 08:38 ojdbc8.jar

-rwxr-xr-x. 1 root root   1660560 Apr 17 08:39 sqlldr

-rwxr-xr-x. 1 root root     24792 Apr 17 08:39 sqlplus

-r-xr-xr-x. 1 root root      5780 Apr 17 08:39 SQLPLUS_LICENSE

-rw-r--r--. 1 root root      1640 Apr 17 08:39 SQLPLUS_README

-r-xr-xr-x. 1 root root      5780 Apr 17 08:39 TOOLS_LICENSE

-rw-r--r--. 1 root root      1632 Apr 17 08:39 TOOLS_README

-rw-r--r--. 1 root root   1680074 Apr 17 08:38 ucp.jar

-rwxr-xr-x. 1 root root    236960 Apr 17 08:38 uidrvci

-rwxr-xr-x. 1 root root    751072 Apr 17 08:39 wrc

-rw-r--r--. 1 root root     74263 Apr 17 08:38 xstreams.jar

[root@hol opt]#
CREATING A NEW AUTONOMOUS ATP ENVIRONMENT
Log into Oracle Cloud and click the “Create Instance” link.
Click on the “Create” button in the Autonomous Transaction Processing section.

 

 

Select the compartment you want to build the service in, then click the “Create Autonomous Database” button.

 

Enter the details of the service you want to create. The default sizes are 1 CPU core and 1TB of storage. Remember to select the appropriate licensing model. Click the “Create Autonomous Database” button.
 
Compartment : <keep value>

Display Name : MIGRATE_DBON_DBOCI

Database Name : MIGRATE

Workload Type : Transaction Processing

Choose a deployment type:  Serverless

CPU Core Count : 1

Storage (TB) : 1

Password : OracleMIG#2019

License Type : My Organization Already owns Oracle Database (etc..)

 

 

This process will take between 5 and 10 minutes to complete.
CREATE AN OBJECT STORE BUCKET

 

As we need to upload the export to the OCI environment, we need to create a location for it. The MV2ADB script could create a new location, but this would require setup of the OCI command-line tools. Since this environment uses a more generic approach, we pre-create the storage location (called a bucket).

 

NAVIGATE TO OBJECT STORAGE IN THE OCI CONSOLE

 

 

Select the compartment you want to build the service in, then click the “Create Bucket” button.
Enter the details of the service you want to create.  Click the “Create Bucket” button.
Write down the name of the bucket as we will need it in our configuration file. The name of the bucket in the configuration file is case-sensitive.
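If you prefer the command line, the bucket can also be pre-created with the OCI CLI instead of the console; a sketch, where the bucket name and the compartment OCID are placeholders for your own values:

```
oci os bucket create --name MIGRATE_BUCKET --compartment-id ocid1.compartment.oc1..aaaa...
```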

 

 

CHECK SOURCE SCHEMAS FOR COMPATIBILITY

 

 

Not everything is supported in the Autonomous Database Cloud environment. To make sure you do not run into any issues, a tool called ADB Schema Advisor has been created. This PL/SQL Package can generate a report to show you any issues you might encounter before you actually execute the migration.

 

PACKAGE SOURCE ALREADY DOWNLOADED

 

The ADB Schema advisor can be downloaded from MOS note 2462677.1 (Oracle Autonomous Database Schema Advisor).

 

We have already downloaded the latest version to the /source directory in your client image.
 
Please note: in a regular environment, this package does not need to be installed as the SYS or SYSTEM user. When installing it into a non-SYS, non-SYSTEM user, please check the manual for the exact installation steps. Very important are the GRANTs needed to give the package access to the information it requires.

 

[oracle@hol ~]$ . oraenv

ORACLE_SID = [oracle] ? UPGR

ORACLE_BASE environment variable is not being set since this

information is not available for the current user ID oracle.

You can set ORACLE_BASE manually if it is required.

Resetting ORACLE_BASE to its previous value or ORACLE_HOME

The Oracle base has been set to /u01/app/oracle/product/11.2.0.4

[oracle@hol ~]$ sqlplus




SQL*Plus: Release 11.2.0.4.0 Production on Sat Jul 13 23:17:26 2019




Copyright (c) 1982, 2013, Oracle.  All rights reserved.




Enter user-name: / as sysdba




Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options




SQL> @/home/oracle/adb_advisor.plb




Package created.







Package body created.




SQL>

SQL>
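With the package in place, the advisor report can be generated for the schemas to be migrated. The exact procedure name and parameters are documented in MOS note 2462677.1 and may vary between advisor versions, so treat the call below as a sketch (CARLOCA is the schema used later in this lab):

```
SQL> set serveroutput on
SQL> exec ADB_ADVISOR.REPORT(schemas => 'CARLOCA', adb_type => 'ATP');
```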

 

 

 

As you can see, there are some directory objects that cannot be migrated as ADB does not support access to the local filesystem (besides the DP_DUMP_DEST directory).
 
A second issue is one table that apparently needs changes before it can be imported. A little further down in the report, the issues are explained:
 

 

  • NOLOGGING options will be automatically changed to LOGGING options
  • Index Organized Tables (IOT) are not supported. You need a special option for IMPDP to change this during import.
GATHERING REQUIRED DETAILS

 

The configuration file for MV2ADB needs a series of parameters for export, upload of the dumpfile and import of the dumpfile. A full file with hints can be found in the /opt/mv2adb/conf directory. For this lab we will use only the parameters needed for a simple migration.

 

CREATE A NEW FILE FOR THE CONFIGURATION

 

sudo vi /opt/mv2adb/conf/ATP.mv2adb.conf

 

Cut-and-paste the parameters below into this new document so that we can start entering the required data. At this moment, only copy-and-paste them; we will make changes to the values in the following sections.

 

# DB Parameters




DB_CONSTRIG=//hol:1521/UPGR

SYSTEM_DB_PASSWORD=D5E036E7E7A09843561DCB5AEA05BCD2

SCHEMAS=CARLOCA

DUMPFILES=/tmp/DB112-UPGR.dmp

OHOME=/u01/app/oracle/product/11.2.0.4

ICHOME=/opt/instantclient_19_3

# Expdp/Impdp Parameters

ENC_PASSWORD=D5E036E7E7A09843561DCB5AEA05BCD2

ENC_TYPE=AES256

# Object Store Properties

BMC_HOST=

BMC_TENNANT=

BMC_BUCKET=

BMC_ID=

BMC_PASSWORD=

# ADB Parameters

ADB_NAME=

ADB_PASSWORD=

CFILE=

 

GATHERING (SOURCE) DB PARAMETERS

 

The initial section concerns the source database. Please enter the following information for the source environment. Since this is a lab environment, we have pre-entered most of the DB parameters for you. Here is some information on where you can find the details:

 

DB_CONSTRIG

Connect string from the local system (where mv2adb is running) to the database instance to be migrated

SYSTEM_DB_PASSWORD

Password for SYSTEM user for this source database

SCHEMAS

Schemas to be exported; only transport the schemas you need. Do not include default schemas such as HR, OE, SYSTEM or SYS, as they already exist in ADB and might cause errors (or get ignored)

DUMPFILES

File system location of the dumpfiles. If you want parallelism during export and import, specify as many files as the degree of parallelism you want. Make sure the file names are unique in the source as well as in the destination (ATP) environment.

OHOME

The source database Oracle Home

ICHOME

The installed Oracle Instant Client home (with the Basic, SQL*Plus and Tools packages unzipped)
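For example, to run the export and import with a degree of parallelism of 4, you could list four distinct files for DUMPFILES. The paths below are hypothetical, and the comma separator follows Data Pump's DUMPFILE convention; check the example config in /opt/mv2adb/conf if your version expects a different format:

```
DUMPFILES=/tmp/DB112-UPGR_01.dmp,/tmp/DB112-UPGR_02.dmp,/tmp/DB112-UPGR_03.dmp,/tmp/DB112-UPGR_04.dmp
```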

 

GATHERING EXPDP/IMPDP PARAMETERS

 

In this section you specify the encryption password and the encryption type for your export. To ensure your data cannot be compromised during export and upload, the script requires a password.

 

ENC_PASSWORD

A password that will encrypt your Data Pump exports. It has nothing to do with any existing user or password. Note that this password cannot be stored in plain text; it needs to be encrypted using the mv2adb binary on your system

ENC_TYPE

Type of encryption for your data. The higher the number, the stronger the encryption, but also the slower the export/import. Options are AES128, AES192 and AES256

 

 

ENCRYPT THE ENCRYPTION PASSWORD AND PUT IT IN THE FILE

 

The password we will use for this environment is oracle

 

[oracle@hol ~]$ /opt/mv2adb/mv2adb encpass

Please enter the password :  oracle

Please re-enter the password : oracle

D5E036E7E7A09843561DCB5AEA05BCD2

 

Make sure your encrypted password is entered in your ATP.mv2adb.conf config file.

 

 

GATHERING OBJECT STORE PROPERTIES

 

The Autonomous Database can only use dumpfiles uploaded to Swift-compatible storage. The following parameters specify where the dumpfiles should be uploaded after the export. This is also where the logfiles will be stored. Instead of the SWIFT details below, you can also install the OCI client locally and use that setup; see the example config for more information.

 

 

 

BMC_HOST

This is the Swift object storage URL for your environment.

BMC_TENNANT

Name of your tenancy. Be aware that for SWIFT only lower case is used

BMC_BUCKET

The name of a new bucket that will be created for this migration.

BMC_ID

Your username in OCI

BMC_PASSWORD

The Swift Authentication Token encrypted using the mv2adb password encoder.
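As a quick sanity check, the Swift endpoint and the object path can be assembled from the region, the tenancy namespace and the bucket name. The values below are placeholders, not the lab's real tenancy; the /v1/&lt;tenancy&gt;/&lt;bucket&gt; path is the standard Swift API layout:

```shell
#!/bin/sh
# Placeholders -- substitute your own region, tenancy namespace and bucket.
REGION="eu-frankfurt-1"
TENANCY="mytenancy"        # lower case only, as noted above
BUCKET="migration_bucket"

# BMC_HOST is only the host part; uploaded objects live under /v1/<tenancy>/<bucket>
BMC_HOST="https://swiftobjectstorage.${REGION}.oraclecloud.com"
echo "${BMC_HOST}/v1/${TENANCY}/${BUCKET}"
```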

 

 

GENERATE TOKEN FOR SWIFT API
 
Click on Identity and select Users

 

 

Select the user that will own the authentication token.
Click on Generate Token.
Enter a description for this token and click Generate Token.
Copy this token; it will be used to connect the mv2adb tool to the Object Storage.

 

LOCATE THE SWIFT AUTHENTICATION PASSWORD AND ENCRYPT USING THE MV2ADB TOOL

 

 

[oracle@hol ~]$ cat /home/oracle/auth_token.txt

cn#Wi]6xv4hO(:j0l0SX

[oracle@hol ~]$ /opt/mv2adb/mv2adb encpass

Please enter the password : <cut-and-paste auth_key>

Please re-enter the password : <cut-and-paste auth_key>

E54C941DA0DBA8EB467DCC7F0C04(...)ED747D3AF6B6184BC173B78DE426CEBE4

FILL IN ALL OF THE DETAILS FOR THE OBJECT STORE SETTINGS

 

# Object Store Properties

BMC_HOST=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com

BMC_TENNANT=andreontalba

BMC_BUCKET=BC_MIGRATE

BMC_ID=andre.ontalba

BMC_PASSWORD=E54C941DA0DBA8EB467DCC7F0C04(...)ED747D3AF6B6184BC173B78DE426CEBE4


GATHERING ADB PARAMETERS

 

During the gathering of the other parameters, your ADB environment should have been created. As a last step, we now gather the information needed for the final section.

 

ADB_NAME

Name of your ADB instance

ADB_PASSWORD

Database Admin password

CFILE

Zipfile containing database credentials

 

The first parameter requires the name of the ADB environment you created. Navigate to the ADB Console and find the name of your database. My database is MIGRATE

 

 

 

 

 

The second parameter is the password you entered while creating the Autonomous environment. If you used the suggested password, it is OracleMIG#2019. If you chose another password, you need to remember it.
 
ENCRYPT YOUR DATABASE PASSWORD USING THE MV2ADB ENCRYPT OPTION

 

[oracle@hol ~]$ /opt/mv2adb/mv2adb encpass 
Please enter the password :  

Please re-enter the password :

DE3D105A8E6F6A4D5E86EXSW6BC1D3BA


For the third parameter we need to download something from the OCI console: the so-called wallet file.


CLICK ON THE BUTTON 'DB CONNECTION'



The following screen will be displayed:

CLICK ON THE BUTTON 'DOWNLOAD' TO DOWNLOAD THE WALLET ZIP

In the following screen a password is requested. This is the password that protects the keystore inside the zipfile. For this exercise we will not be using the keystore, so enter any random password twice.

ENTER PASSWORD AND PRESS ‘DOWNLOAD’

 

Your zipfile will be downloaded to the default location /home/oracle/Downloads. Note the name of the wallet zip and enter it in your parameters.
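The wallet zip bundles, among other files, a tnsnames.ora that defines the service aliases you will pick later in SQL*Developer (for ATP these follow the pattern &lt;dbname&gt;_tp, _tpurgent, _high, _medium, _low). A quick way to list them is to grep the alias names. The file below is a trimmed, illustrative stand-in for the real one, not its actual contents:

```shell
#!/bin/sh
# Illustrative stand-in for the tnsnames.ora inside the wallet zip;
# real entries carry full TCPS connect descriptors.
cat > /tmp/tnsnames_sample.ora <<'EOF'
migrate_high = (description=(address=(protocol=tcps)(port=1522)(host=example.oraclecloud.com)))
migrate_tp = (description=(address=(protocol=tcps)(port=1522)(host=example.oraclecloud.com)))
EOF

# Print just the alias names (the text before the first space)
grep -oE '^[a-z_]+' /tmp/tnsnames_sample.ora
```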

 

# ADB Parameters

ADB_NAME=MIGRATE

ADB_PASSWORD=DE3D105A8E6F6A4D5E8XXXSW6BC1D3BA

CFILE=/home/oracle/Downloads/wallet.zip

 

MAKE SURE ALL PARAMETERS ARE ENTERED AND SAVE THE FILE TO /opt/mv2adb/conf/ATP.mv2adb.conf

 

 

START THE MIGRATION USING THE MV2ADB SCRIPT

 

START THE MV2ADB SCRIPT USING THE CONFIGURATION FILE YOU JUST CREATED.

 

[oracle@hol ~]$ sudo /opt/mv2adb/mv2adb auto -conf /opt/mv2adb/conf/ATP.mv2adb.conf 

INFO: 2019-08-31 14:08:27: Please check the logfile '/opt/mv2adb/out/log/mv2adb_12765.log' for more details

--------------------------------------------------------

mv2adb - Move data to Oracle Autonomous Database

Author: Ruggero Citton <[email protected]>

RAC Pack, Cloud Innovation and Solution Engineering Team

Copyright (c) 1982-2019 Oracle and/or its affiliates.




Version: 2.0.1-29

--------------------------------------------------------

INFO: 2019-08-31 14:08:27: Reading the configuration file '/opt/mv2adb/conf/ATP.mv2adb.conf'

INFO: 2019-08-31 14:08:28: Checking schemas on source DB

...

INFO: 2019-08-31 14:08:54: ...loading '/tmp/DB112-UPGR.dmp' into bucket 'UPGRADEBUCKET-RP'

SUCCESS: 2019-08-31 14:09:27: ...file '/tmp/DB112-UPGR.dmp' uploaded on 'UPGRADEBUCKET-RP' successfully

SUCCESS: 2019-08-31 14:09:27: Upload of '1' dumps over Oracle Object Store complete successfully

INFO: 2019-08-31 14:09:27: Performing impdp into ADB...

INFO: 2019-08-31 14:09:27: Step1 - ...drop Object Store Credential

INFO: 2019-08-31 14:09:29: Step2 - ...creating Object Store Credential

INFO: 2019-08-31 14:09:36: Step3 - ...executing import datapump to ADB

INFO: 2019-08-31 14:12:42: Moving impdp log 'mv2adb_impdp_20190831-140936.log' to Object Store

SUCCESS: 2019-08-31 14:12:43: Impdp to ADB 'MIGRATE' executed successfully


After about 10 minutes, all steps should have completed successfully. If you encounter any error, check the logfile whose path was displayed right after you started the script. It contains all of the individual steps, the commands used and their output.
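When something does fail, grepping the log for the step markers is faster than reading it end to end. The sample log below is fabricated just to show the idea; point the grep at the real path printed when the script starts:

```shell
#!/bin/sh
# Fabricated sample of an mv2adb log; the real file lives under /opt/mv2adb/out/log/
cat > /tmp/mv2adb_sample.log <<'EOF'
INFO: 2019-08-31 14:08:28: Checking schemas on source DB
ERROR: 2019-08-31 14:08:54: ORA-01017: invalid username/password
SUCCESS: 2019-08-31 14:09:27: Upload of '1' dumps over Oracle Object Store complete successfully
EOF

# Show only failures and completed milestones
grep -E '^(ERROR|SUCCESS):' /tmp/mv2adb_sample.log
```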

 

LOGIN AND CHECK THE MIGRATED DATABASE
 
USE SQL*DEVELOPER TO CHECK IF THE CARLOCA USER HAS BEEN MIGRATED TO ATP

 

On your Desktop, you can see SQL*Developer. Start this application

 

CREATE A NEW CONNECTION TO ATP BY CLICKING ON THE GREEN + SIGN IN THE CONNECTIONS PANE
 
Connection Name : MYATP 

Username : admin

Password : OracleMIG#2019 (or any other password you have used)

Connection Type : Cloud Wallet

Configuration File :

<select the wallet you downloaded in /home/oracle/Downloads>



Service : migrate_tp
ENTER THE REQUIRED DETAILS AND PRESS CONNECT  


After connecting, a new SQL Window will be displayed. Here you can execute queries on the ATP environment.
ENTER THE QUERY AND PRESS THE GREEN ARROW TO EXECUTE IT  


In the Query Result window, the result of the query will be displayed:
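The lab screenshots do not reproduce the query text. A simple check, assuming the CARLOCA schema from the export, is to count the imported objects per type. The block below just writes such a script to a file; you would run its contents through the MYATP connection in SQL*Developer:

```shell
#!/bin/sh
# Write a verification query to a file; run it in SQL*Developer (MYATP)
# or, for example, via: sqlplus admin@migrate_tp @/tmp/verify_migration.sql
cat > /tmp/verify_migration.sql <<'EOF'
SELECT object_type, COUNT(*)
FROM   dba_objects
WHERE  owner = 'CARLOCA'
GROUP  BY object_type;
EOF
cat /tmp/verify_migration.sql
```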
I hope this helps you!!!
 
This article was written by Andre Ontalba and Rodrigo Mufalani
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for a general audience; specific data and identifying details were removed.”


Webinar – Oracle Golden Gate Microservices Overview (With Demo)
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Webinar - Oracle Golden Gate Microservices Overview (With Demo)

Hi guys !!

Today we had our Webinar and we are posting the webinar here for you.

 

 

Follow the link in the PDF presentation: Click here

 

See you next time.

André Ontalba – Board Member


Webinar – Flashback in PDB
Category: Database Author: Andre Luiz Dutra Ontalba (Board Member) Date: 5 years ago Comments: 0

Webinar - Flashback PDB in Oracle Database

Hi guys !!

Today we had our Webinar and we are posting the webinar here for you.

 
 

Follow the link in the PDF presentation: Click here

 

See you next time.

André Ontalba – Board Member

