Oracle Autonomous Database and DevOps: A Simple Usage Approach – Part 3
Category: Cloud Author: BRUNO REIS Date: 5 years ago Comments: 0


This article is the third part of the series of articles on Oracle Autonomous Database and DevOps. For a better understanding, I recommend reading the articles “Oracle Autonomous Database and DevOps: A Simple Usage Approach – Part 1” and “Oracle Autonomous Database and DevOps: A Simple Usage Approach – Part 2” before proceeding with the steps of this article.
The previous article covered provisioning an Oracle Autonomous Database 18c instance in the Oracle Cloud. In this article we will use the wallet file created during provisioning to make a secure connection to the instance, and then run the first tests with Oracle SQL Developer. Before starting, let's review some concepts:
What is Oracle SQL Developer?

 

According to Oracle Corp., it is "a free, integrated development environment that simplifies the development and management of Oracle Database in both traditional and Cloud deployments. SQL Developer offers complete end-to-end development of your PL/SQL applications, a worksheet for running queries and scripts, a DBA console for managing the database, a report interface, a complete data modeling solution, and a migration platform for moving your 3rd party databases to Oracle."

 

Now that you have a brief understanding of what Oracle SQL Developer is, let's use the wallet file from the provisioned database to establish a secure connection. To do this, follow these steps:
 
– Using the wallet file of the provisioned database: In the Oracle Cloud menu, click the Autonomous Transaction Processing option (last option in the figure below):
In the next menu, click the name of the provisioned instance. As mentioned in the previous article, the provisioned instance was called techdatabasketinstance:
After that you can download the wallet file by clicking the “Download” button:
Before starting the download you will be asked to create a password to access the wallet file in the future:
 After the download, create a directory to store the wallet file. As mentioned in the previous article, the directory “/tmp/wallet” was created as an example:
[brunors@techdatabasketblog Downloads]$ pwd

/home/brunors/Downloads

[brunors@techdatabasketblog Downloads]$ cp Wallet_techdatabasket.zip /tmp/wallet

[brunors@techdatabasketblog Downloads]$ ls -ltr /tmp/wallet

total 20

-rw-rw-r--. 1 brunors brunors 19841 Apr 22 16:39 Wallet_techdatabasket.zip

[brunors@techdatabasketblog Downloads]$
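If you want to be sure the copy was not corrupted, comparing checksums is a quick sanity check. The sketch below demonstrates the pattern on a placeholder file it creates itself; the demo paths are made up for illustration (in practice, substitute Wallet_techdatabasket.zip and /tmp/wallet):

```shell
# Generic copy-verification pattern, shown on a placeholder file.
# In practice: src=Wallet_techdatabasket.zip, dst_dir=/tmp/wallet
src=/tmp/demo_wallet.zip
dst_dir=/tmp/wallet_demo
printf 'placeholder wallet payload' > "$src"   # stand-in for the real download
mkdir -p "$dst_dir"
cp "$src" "$dst_dir/"
# The checksums must match if the copy succeeded
a=$(sha256sum "$src" | awk '{print $1}')
b=$(sha256sum "$dst_dir/$(basename "$src")" | awk '{print $1}')
[ "$a" = "$b" ] && echo "copy verified"
```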

 

 

Now that we have the wallet file in the desired location, we will proceed with the instance connection using Oracle SQL Developer:




- Oracle SQL Developer installation: If you do not have Oracle SQL Developer installed on your machine, first check the basic requirements, such as having Oracle JDK 1.8, update 121 or later, installed.
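That requirement can be checked mechanically. The sketch below parses a version string of the form 1.8.0_NNN and compares the update number; the string is hard-coded as a sample (in practice capture it from `java -version`), and the check itself is my own helper, not an Oracle tool:

```shell
# Rough check for "Oracle JDK 1.8, update 121 or later".
# In practice: ver=$(java -version 2>&1 | awk -F'"' '/version/ {print $2}')
ver="1.8.0_211"     # sample value matching the install shown below
base=${ver%_*}      # -> 1.8.0
update=${ver##*_}   # -> 211
if [ "$base" = "1.8.0" ] && [ "$update" -ge 121 ]; then
  echo "JDK $ver meets the SQL Developer requirement"
else
  echo "JDK $ver is too old"
fi
```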




 

 

[root@brunors Downloads]# yum install jdk-8u211-linux-i586.rpm

Loaded plugins: downloadkvmonly-background, techdatabasketblog-check-lotus-updates, techdatabasketblog-check-upgrade, techdatabasketblog-check-xorg-updates, techdatabasketblog-repository, langpacks, post-transaction-actions, refresh-packagekit, versionlock

Examining jdk-8u211-linux-i586.rpm: 2000:jdk1.8-1.8.0_211-fcs.i586

Marking jdk-8u211-linux-i586.rpm to be installed

Resolving Dependencies

--> Running transaction check

---> Package jdk1.8.i586 2000:1.8.0_211-fcs will be installed

--> Finished Dependency Resolution

 

Dependencies Resolved

 

============================================================================================================================================================================================================

 Package                                    Arch                                     Version                                                Repository                                                 Size

============================================================================================================================================================================================================

Installing:

 jdk1.8                                     i586                                     2000:1.8.0_211-fcs                                     /jdk-8u211-linux-i586                                     298 M

 

Transaction Summary

============================================================================================================================================================================================================

Install  1 Package

 

Total size: 298 M

Installed size: 298 M

Is this ok [y/d/N]: y

Downloading packages:

Running transaction check

Running transaction test

Transaction test succeeded

Running transaction

  Installing : 2000:jdk1.8-1.8.0_211-fcs.i586                                                                                                                                                           1/1

Unpacking JAR files...

            tools.jar...

            plugin.jar...

            javaws.jar...

            deploy.jar...

            rt.jar...

            jsse.jar...

            charsets.jar...

            localedata.jar...

  Verifying  : 2000:jdk1.8-1.8.0_211-fcs.i586                                                                                                                                                           1/1

 

Installed:

  jdk1.8.i586 2000:1.8.0_211-fcs                                                                                                                                                                           

 

Complete!

[root@brunors Downloads]#

 

 

Find out where the JDK was installed with the "find . -name 'javac'" command (run from /). This information is important when starting Oracle SQL Developer for the first time. In the example below, the JDK was installed in /usr/java/jdk1.8.0_211-i586/bin.

 

 

[root@brunors /]# find . -name 'javac'

./etc/alternatives/javac

./usr/java/jdk1.8.0_211-i586/bin/javac

./usr/bin/javac

./var/lib/alternatives/javac

 

 

– Connecting to the ATP instance with Oracle SQL Developer: Now that the JDK is installed, we will continue with the secure connection between Oracle SQL Developer and the ATP instance created in the Oracle Cloud. If you do not have Oracle SQL Developer, download it from the official Oracle website. Start Oracle SQL Developer and provide the JDK installation path:

 

[root@brunors sqldeveloper]# ./sqldeveloper.sh

 

 Oracle SQL Developer

 Copyright (c) 2005, 2018, Oracle and/or its affiliates. All rights reserved.

 

Type the full pathname of a JDK installation (or Ctrl-C to quit), the path will be stored in /root/.sqldeveloper/18.3.0/product.conf

/usr/java/jdk1.8.0_211-i586/   
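As an aside, the prompt can be avoided by presetting the JDK path in product.conf before the first start; SQL Developer reads the SetJavaHome directive from that file. The sketch below writes it (the 18.3.0 directory and JDK path are the ones from this article; adjust to your versions):

```shell
# Preset the JDK path so SQL Developer does not prompt on first start
conf_dir="$HOME/.sqldeveloper/18.3.0"
mkdir -p "$conf_dir"
echo 'SetJavaHome /usr/java/jdk1.8.0_211-i586' > "$conf_dir/product.conf"
cat "$conf_dir/product.conf"
```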

 
After this Oracle SQL Developer starts:
Click Add Connection with the “+” button:
On the next screen you will be asked for the connection information of the ATP instance. Provide the following information:

 

Connection Name: It's just a name to identify the connection to the instance.

 

Username: admin

 

Password: ****************** (database creation password)

 

Connection Type: Cloud Wallet

 

Role: Default




Configuration File: Click "Browse" and choose the directory where the wallet file is located.




Service: Here you can choose between <instance_name>_low, <instance_name>_medium and <instance_name>_high. In this article we will use the high service level, so the chosen option is techdatabasket_high.
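These three service aliases come from the tnsnames.ora file packed inside the wallet. The sketch below lists them from a trimmed sample tnsnames.ora written by the script itself; the host and service_name values are placeholders, not real wallet contents:

```shell
# Build a sample tnsnames.ora shaped like the one inside an ATP wallet
# (entries trimmed to one line each; hosts/services are placeholders)
cat > /tmp/tnsnames.ora <<'EOF'
techdatabasket_low = (description=(address=(protocol=tcps)(port=1522)(host=example.oraclecloud.com))(connect_data=(service_name=example_low.atp.oraclecloud.com)))
techdatabasket_medium = (description=(address=(protocol=tcps)(port=1522)(host=example.oraclecloud.com))(connect_data=(service_name=example_medium.atp.oraclecloud.com)))
techdatabasket_high = (description=(address=(protocol=tcps)(port=1522)(host=example.oraclecloud.com))(connect_data=(service_name=example_high.atp.oraclecloud.com)))
EOF
# List the service aliases the wallet offers
grep -o '^[a-z_]*' /tmp/tnsnames.ora
```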

 
Finally, we have a secure connection from Oracle SQL Developer to the ATP instance using the wallet file of the provisioned instance. See the example in the following figure:
This concludes the third part of the series of articles about Oracle Autonomous Database in Oracle Cloud for DevOps. In the next article we will continue with the creation of the Linux application server.

 

References:
 
https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/connect-download-wallet.html#GUID-B06202D2-0597-41AA-9481-3B174F75D4B1
 
https://www.oracle.com/database/technologies/appdev/sql-developer.html
 
https://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/sqldev-install-linux-1969676.html
 

Bruno Reis da Silva is a Database Cloud Support Engineer and professionally certified Oracle Database Administrator who has worked in South America and is now working in Europe. He was awarded the first Oracle ACE Associate of Hungary in 2017. His interests are RDBMS, mainly Oracle, operating systems (Linux, AIX, HP-UX and Solaris) and High Availability solutions. In his free time he enjoys playing sports, going to the gym and traveling. His blog www.techdatabasket.com is dedicated to his sister Laura Vitoria, and the main reason for blogging is his belief in the quote "giving back to get back". He also enjoys continuous learning and the interaction with others who share his interests.

 

 

 

Carlos Magno de Andrade Júnior is a Database Architect at eProseed Europe, with more than 15 years of experience with Oracle Database on complex projects in countries such as Brazil, India, the Netherlands, Luxembourg, France and Italy, having worked at companies such as Oracle Corporation, IBM and HSBC, among others. He also shares information on his blog ezdba.wordpress.com. Certifications: OCM 12c; OCP 10g, 11g and 12c; OCE RAC; Exadata; ITIL; and OEM 12c Cloud Control Implementation Specialist.


MDS-00054: Undeploying SOA Composite when Soa-Infra is Down
Category: Middleware Author: Cassio Mariani Date: 5 years ago Comments: 0


 
 
Goal
 
Undeploying SOA Composite while soa-infra is down. 

 

This task is mainly required when, on SOA server startup, the soa-infra application won't come up because a corrupt composite prevents soa-infra from reaching the active state. The solution also applies when the server itself fails to come up or stay up due to a bad deployment.

 

Fix

 

Check the SOA logs to determine which composite is causing the problem, then follow the process below to undeploy the SOA composite by editing deployed-composites.xml:

 

1. Download and copy the ShareSoaInfraPartition.ear file to $MIDDLEWARE_HOME/oracle_common/common/bin

 

2. cd to $MIDDLEWARE_HOME/oracle_common/common/bin and run wlst.sh

3. Run connect() and provide the username, password and server URL

4. Run the command below to deploy ShareSoaInfraPartition.ear:

deploy('ShareSoaInfraPartition','ShareSoaInfraPartition.ear',upload='true')

5. Now run the command below, changing "toLocation" ('/fmw12c/fmw12213/middleware' is a path on the SOA machine):

exportMetadata(application='ShareSoaInfraPartition',server='AdminServer',toLocation='/fmw12c/fmw12213/middleware',docs='/deployed-composites/deployed-composites.xml')

6. A deployed-composites folder will be created at the "toLocation" path with deployed-composites.xml in it

7. Delete the composite which is causing the problem and save the file

For example, I am removing the MediatorTest composite, shown below:

<composite-series name="default/MediatorTest" default="default/MediatorTest!1.0">
<composite-revision dn="default/MediatorTest!1.0" state="on" mode="active" location="dc/soa_58b98be8-9ec8-41af-bb83-590f6004d1aa">
<composite dn="default/MediatorTest!1.0*soa_58b98be8-9ec8-41af-bb83-590f6004d1aa" deployedTime="2019-03-17T09:01:54.750+05:30"/>
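The deletion in step 7 can also be scripted instead of done in an editor. The sketch below is illustrative only: it builds a small, well-formed sample deployed-composites.xml (the entries are hypothetical) and removes the broken composite's <composite-series> block with sed; against a real file, keep a backup copy first:

```shell
# Illustrative sample file; a real deployed-composites.xml has more content
cat > /tmp/deployed-composites.xml <<'EOF'
<deployed-composites>
  <composite-series name="default/MediatorTest" default="default/MediatorTest!1.0">
    <composite-revision dn="default/MediatorTest!1.0" state="on" mode="active"/>
  </composite-series>
  <composite-series name="default/GoodComposite" default="default/GoodComposite!1.0">
    <composite-revision dn="default/GoodComposite!1.0" state="on" mode="active"/>
  </composite-series>
</deployed-composites>
EOF
# Delete the whole <composite-series> block of the broken composite
sed -i '/<composite-series name="default\/MediatorTest"/,/<\/composite-series>/d' \
    /tmp/deployed-composites.xml
grep -q MediatorTest /tmp/deployed-composites.xml || echo "MediatorTest removed"
```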

8. Now run the command below, changing "fromLocation" (this should be the same location as before):

importMetadata(application='ShareSoaInfraPartition',server='AdminServer',fromLocation='/fmw12c/fmw12213/middleware',docs='/deployed-composites/deployed-composites.xml')

9. Now bounce your server; the composite will not be deployed by SOA when it comes up, which should bring your soa-infra up.

 

Download ShareSoaInfraPartition.ear from Oracle (a valid Oracle account is needed).
 
All the best,
Cassio Mariani

 

 

INCREASING THE PERFORMANCE OF EM CONSOLE IN ORACLE SOA SUITE 12C
Category: Middleware Author: Cassio Mariani Date: 5 years ago Comments: 0


The EM console can be very slow when viewing composite/component details or instance details.
We can improve its performance a little by following the steps below.
 
  1. Login to the Oracle SOA Suite 12c EM Console
  2. Right Click on SOA-INFRA and go to SOA Administration->Common properties
  3. Set the Data Display Options as shown below.
 
Select the option “Disable fetching of instance and fault count metrics. Each metric can still be retrieved on demand”.

 

Set the Duration Details to 1 hour (based on your requirement) as shown below.

 

 

This causes the instance and fault count metrics to be fetched on demand, and the default search criteria will display only the last hour of data; both changes improve the performance of the EM console.
Enable fmwctl discovery cache:
Logging into Enterprise Manager Fusion Middleware Control 12c (fmwctl) can take a long time, sometimes as long as 45-60 seconds. This is often perceived as slow response time, a performance problem or a hang; fmwctl discovery is always performed as part of login.
 
For installations with thousands of targets, fmwctl discovery may take 45-60 seconds. This delay is expected because EM discovery mbeans need to be invoked for every target.
The solution is to cache the discovery results in the servlet context and use them for subsequent logins. The cached discovery results are shared by all fmwctl users. The entire discovery still has to be performed at least once.
 
Follow My Oracle Support (Metalink) note 1423893.1 to enable discovery caching.
 
If caching is enabled, fmwctl will use the cached discovery results and login to the EM console will be fast. The default setting is to not use the cached results.

 

All the best,
Cassio Mariani
 
 

Tuning impdp, generate less redo with transform=disable_archive_logging:Y
Category: Database Author: Rodrigo Mufalani (Board Member) Date: 5 years ago Comments: 0


Every DBA knows how tedious it is to load huge amounts of data into databases using dump files; it is like watching grass grow. A huge part of that processing time is due to redo/archivelog generation.
 
In Oracle 12cR1, Oracle introduced a very nice feature that allows DBAs to import data bypassing redo log generation (nologging mode). If your database is running with force logging enabled, the feature has no effect; and it is strongly recommended to take a physical backup as soon as the import finishes.
 
Let's set up our test environment. In this case I use 18c, but you can use 12cR1 as well to execute this procedure.
SQL> create directory home as '/home/oracle' ;

Directory created.

SQL> create user c##mufalani identified by welcome1;

User created.

SQL> grant dba to c##mufalani;

Grant succeeded.
Now, log in as this user, create a table and load some test data into it.
-- Create some data in one table
SQL> conn c##mufalani/welcome1
Connected.
SQL>
SQL>
SQL> create table test as select * from cdb_objects;

Table created.

SQL> insert into test (select * from test);

72897 rows created.

SQL> /

145794 rows created.

SQL> /

291588 rows created.

SQL> /

583176 rows created.

SQL> /

1166352 rows created.

SQL> commit;

Commit complete.

SQL>
-- Let's check the size of the table (segment)
SQL> col SEGMENT_NAME form a12
SQL> select bytes/1024/1024 mb, segment_name, segment_type from user_segments;

        MB SEGMENT_NAME SEGMENT_TYPE
---------- ------------ ------------------
       375 TEST         TABLE
-- Now we generate the dump file to load, so we can measure the redo generation
[oracle@ora18c3 ~]$ expdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=mufalani.log schemas="C##MUFALANI"

Export: Release 18.0.0.0.0 - Production on Sun Jul 7 11:52:38 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Starting "SYS"."SYS_EXPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=mufalani.log schemas=C##MUFALANI
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/mufalani.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Jul 7 11:53:32 2019 elapsed 0 00:00:49
-- Verifying the size of the dumpfile
[oracle@ora18c3 ~]$ ls -ltrh mufalani.*
-rw-r-----. 1 oracle dba 325M Jul 7 11:53 mufalani.dmp
-rw-r--r--. 1 oracle dba 1.3K Jul 7 11:53 mufalani.log
-- Now check the current 'redo size' value in v$sysstat before the import, to compare afterwards
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME                     MB
---------------- ----------
redo size            837.06
-- Then run the import the regular way and see how much redo is generated
[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas="C##MUFALANI"

Import: Release 18.0.0.0.0 - Production on Sun Jul 7 12:01:12 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas=C##MUFALANI
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at Sun Jul 7 12:01:49 2019 elapsed 0 00:00:36
-- Check the redo generated again
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME                     MB
---------------- ----------
redo size           1215.03
As we can see, importing this table in regular mode generated about 378 MB of redo. Let's drop the table and repeat using the feature to cut redo generation.
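The ~378 MB figure is simply the delta between the two 'redo size' samples from v$sysstat. A one-line sketch of that arithmetic, with the values captured above:

```shell
# Redo attributable to the regular import = after - before (values in MB,
# taken from the two v$sysstat queries shown in this article)
before=837.06
after=1215.03
awk -v b="$before" -v a="$after" 'BEGIN { printf "redo generated: %.2f MB\n", a - b }'
```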
SQL> drop table c##mufalani.test purge;

Table dropped.
 
Then I will run the import again with the parameter transform=disable_archive_logging:Y and check the redo generation afterwards.
[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas="C##MUFALANI" transform=disable_archive_logging:Y

Import: Release 18.0.0.0.0 - Production on Sun Jul 7 12:14:56 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas=C##MUFALANI transform=disable_archive_logging:Y
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at Sun Jul 7 12:15:06 2019 elapsed 0 00:00:09
-- Check the redo size generation again
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;
NAME                     MB
---------------- ----------
redo size           1218.01
As you can see, the import time dropped from 36s to 9s, and only about 3 MB of redo was generated. Very cool. To reinforce the idea: my database is not running in force logging mode. Note that the database is in archivelog mode, but not force logging; the reduction in redo generation happens because Oracle executes the import in nologging mode. With force logging, Oracle will not bypass redo generation, so the time and effort to execute the import increase.

 

 

SQL> select log_mode, FORCE_LOGGING, name from v$database;

LOG_MODE     FORCE_LOGGING                           NAME
------------ --------------------------------------- ---------
ARCHIVELOG   NO                                      DB01
-- Let's check the table's logging mode now that the import has finished. As you can see, it is LOGGING, the normal mode (generates redo).
SQL> COL OWNER FORM A12
SQL> COL TABLE_NAME FORM A12
SQL> COL LOGGING FORM A10
SQL> select owner, table_name, logging from dba_tables where owner='C##MUFALANI' ;

OWNER        TABLE_NAME   LOGGING
------------ ------------ ----------
C##MUFALANI  TEST         YES
This is a very useful feature for loading huge amounts of data into the database. However, importing data with this transform parameter leads to unrecoverable transactions, and you must execute a full backup right after the import finishes to avoid corrupt blocks in case of a recovery. I will discuss this situation in another article soon.
I hope this feature, introduced in 12.1, can help you speed up your imports.
All the best,
Rodrigo Mufalani
 
 
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”
