ZDLRA Internals, Tables and Storage

The ZDLRA internal tables are owned by the RASYS user and are an extension of the RMAN recovery catalog views. Only a few internal tables were added, but they are important to understand how the tasks work (mainly INDEX_BACKUP). In this post, I will show tables like PLANS, PLANS_DETAILS, BLOCKS, and CHUNKS, and give a little glimpse of how the files are stored.
Extra tables
Officially, the RMAN recovery catalog already includes some ZDLRA columns; the column VB_KEY, for example, identifies the virtual backup key inside ZDLRA and exists in RC_BACKUP_PIECE. The extra tables that were added are linked with SBT tasks (clone to tape or cloud), replication (to another ZDLRA), storage (chunks), and virtual full backups (VBDF and PLANS).

 

The most important tables are listed below.

 

RASYS.VBDF: This table stores the references for the virtual full backups created for each datafile. It has the information about the backup piece used as the source (base) for the virtual full backup and the reference to the new backup piece that was generated at virtual full creation. The most important columns are:

 

  • VB_KEY: The unique identifier of the virtual backup.
  • DF_KEY: The datafile key referenced by the virtual full backup.
  • CKP_ID and CKP_SCN: Checkpoint information (id/SCN) for this virtual full backup.
  • SRCBP_KEY: The reference to the original (source) backup piece that was used to generate the virtual full backup. It links to the BP table.
  • VCBP_KEY: The reference to the derived backup piece that was created while indexing the original (source) backup piece.

 

RASYS.PLANS: This table stores the plans that link the datafile and the virtual full backup. Basically, it is the reference “index” and links the virtual backup, the datafile, and the number of chunks that store it. Important columns:

 

  • VB_KEY: The unique identifier of the virtual backup.
  • DF_KEY: The datafile key referenced by the virtual full backup.
  • NUMCHUNKS: The number of chunks used to store this virtual full backup.

 

RASYS.PLANS_DETAILS: IOT table that stores the detailed information for the plan, the “index” itself. It is partitioned over the DF_KEY column and links the virtual full backup, the datafile, the block number, and the chunk identification. It tells you in which chunk each block that is part of the virtual full backup is stored. Important columns:

 

  • VB_KEY: The unique identifier of the virtual backup.
  • DF_KEY: The datafile key referenced by the virtual full backup.
  • BLOCKNO: The block number inside the datafile.
  • NUMBLKS: The number of sequential blocks, starting from BLOCKNO, covered by this entry.
  • CHUNKNO: The chunk id that stores the blocks from BLOCKNO to BLOCKNO + NUMBLKS.

 

RASYS.BLOCKS: IOT table that stores the info for each block that belongs to the datafile. It is partitioned over the DF_KEY column. There is no link to any backup, but it stores each version of a single block that is needed to restore any backup (the block number comes from PLANS_DETAILS and the SCN from other tables). Important columns:

 

  • DF_KEY: The datafile key referenced by the virtual full backup.
  • BLOCKNO: The block number inside the datafile.
  • SCN and CKP_ID: The SCN and checkpoint id for this specific version of the block.

 

RASYS.CHUNK: IOT table that has the information about the chunks that were used to store the backups for the datafile. Important columns:

 

  • DF_KEY: The datafile key referenced by the virtual full backup.
  • CHUNKNO: The chunk id.
  • FILESIZE: The size of this chunk.
  • FILENAME: The derived file name that stores the chunk (it does not represent the actual file inside ASM).

 

These are the most important tables for the internal part of ZDLRA. They are not needed for daily activities, but I think they are useful if you want to understand how things work internally. The list above is just a glimpse of the internal tables, but based on it you can easily identify which datafile blocks belong to a specific virtual backup, where the blocks are stored (which chunk), and how the virtual full backup was built (the plan that contains all the blocks).
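To make this concrete, here is a minimal sketch of how these tables can be combined, assuming only the column names described above; the VB_KEY value 12345 is just a placeholder, and the chunk table may be named CHUNK or CHUNKS depending on the version:

-- Hedged sketch: for one virtual full backup, list each block range, the chunk
-- that stores it, and the derived file name behind that chunk.
SELECT pd.df_key, pd.blockno, pd.numblks, pd.chunkno, ch.filename, ch.filesize
  FROM rasys.plans_details pd
  JOIN rasys.chunks ch
    ON ch.df_key = pd.df_key
   AND ch.chunkno = pd.chunkno
 WHERE pd.vb_key = 12345
 ORDER BY pd.blockno;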

 

 

Storage

 

As you saw above, the blocks are stored in chunks, and in the CHUNKS table we have the filename information. But this is not where the data is really stored; other places are used too. All of these tables reside in the RA_POOL tablespace but do not store the backup itself. The structure that does this is the “CONTAINERS”, which represent files inside the ASM diskgroup.
In the ZDLRA doc (see the image below) this is called the Delta Pools (CONTAINERS and CHUNKS) and is stored inside the “Delta Store” (ASM diskgroup). For ASM diskgroups you can see CATALOG and DELTA. The CATALOG, as I wrote in this post, stores the metadata information about the backups (mostly the RMAN catalog views) and the DELTA (the Delta Store) has the information related to the backups themselves.
For tablespaces, there are two important ones: RA_POOL and RA_CATALOG. The tables listed before (in the “Extra tables” topic) reside in the RA_POOL tablespace. But the backup data itself is stored in CONTAINER files in the DELTA diskgroup, and you can check them in the table SYS.AMCONT$. Another option is to go to the ASM diskgroup and list the files under DELTA/CONTAINER (each file has a default size of 2 TB):
ASMCMD> pwd

+DELTA/ZDLRA/CONTAINER

ASMCMD> ls -s

Block_Size      Blocks          Bytes          Space  Name

       512  4294959104  2199019061248  4398059094016  con.561.935405781

       512  4294959104  2199019061248  4398059094016  con.562.935405789

       512  4294959104  2199019061248  4398059094016  con.563.935405797

       512  4294959104  2199019061248  4398059094016  con.564.935405805





       512  4294959104  2199019061248  4398059094016  con.656.935406489

       512   308051968   157722607616   315692679168  con.657.935406495

ASMCMD>
You can see more info by checking the SYS.AM* tables and views (SYS.AMAU$, SYS.AMCONT$, and SYS.AM$CONTAINER are examples). The container files are not even listed in the ZDLRA database (as datafiles in DBA_DATA_FILES); they are created directly by the ZDLRA library.
You can even test the performance of creating these files (before starting to use ZDLRA) with rastat.pl: https://docs.oracle.com/cd/E88198_01/AMAGD/amagd_monitor.htm#AMAGD1503. This is the only info in the docs about CONTAINER files.
One last piece of info about the ZDLRA database and storage: the database operates in NOARCHIVELOG mode. Its backups are daily exports of the metadata information, and in case of corruption/loss you can reconstruct the information by reading the con files.
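As a side note, a hedged way to get a similar listing from SQL instead of ASMCMD is to query the standard ASM views on the ASM instance; this is just a sketch, and the 'con%' filter only assumes the file names shown in the ASMCMD output above:

-- Sketch: list the container files of the DELTA diskgroup with their sizes.
SELECT g.name AS diskgroup, a.name AS file_alias, f.bytes, f.space, f.type
  FROM v$asm_alias a
  JOIN v$asm_file f
    ON f.group_number = a.group_number
   AND f.file_number = a.file_number
  JOIN v$asm_diskgroup g
    ON g.group_number = a.group_number
 WHERE g.name = 'DELTA'
   AND a.name LIKE 'con%'
 ORDER BY a.name;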
Conclusion
Understanding the tables involved in ZDLRA is important to see what is happening internally, how the index is stored, and how the virtual full backup works. The tables listed above are not used on a daily basis for ZDLRA; you usually go to the RMAN catalog views (RC_*), where everything is available.
But PLANS, PLANS_DETAILS, and VBDF are the base for a task like INDEX_BACKUP and for sustaining virtual full backups. I needed to go over these tables before explaining the virtual full creation and the INDEX_BACKUP task in more detail.
The storage part, containers and the ASM diskgroup, is needed just to help identify some bottleneck or problem. You don’t need to use it, but I think it is important to show you, at least superficially, how it works.

 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


Understanding ZDLRA

Zero Data Loss Recovery Appliance (ZDLRA) delivers the capacity to improve the reliability of your environment in more than one way. You can improve the RPO (Recovery Point Objective) for your databases until you reach zero, zero data loss. Besides that, it adds a lot of new cool features to the way you do your backups (virtual backups, incremental forever) and to your backup strategy. And on top of that, it takes MAA to the highest level that you can reach.
But this is just marketing, right? No, it really works pretty well! My history with ZDLRA starts at Oracle Open World 2014, when ZDLRA was released and I watched the session/presentation. At that moment I figured out how good the solution was; it hit exactly the problem that I was suffering from with my databases: deduplication (bad dedup). One year later, at OOW 2015, I presented a big project that I coordinated (from definition to implementation and usage) with 2 sites + 2 ZDLRA + N Exadatas + zero RPO and RTO + DG + replication. And at the end of 2017 I moved to a new continent, but I am still involved with MAA today.
This post is just a little starting point about ZDLRA. I will do a quick review of some key points, but I will write about each one (with examples) in several other dedicated posts. I will not cover the bureaucratic steps to build a project like that (POC, scope definition, key turning points, and budget); I will talk technically about ZDLRA.

ZDLRA

ZDLRA is an Engineered System built over Exadata. It contains Exadata storage and database nodes, an InfiniBand network, and SAS connectivity (for tape connection). So, a good machine with redundant components. Look at the base config:

 

 

But like Exadata, it is not just hardware, it is software too (and a lot of it in this case). ZDLRA solves problems in more than one area; it is not just a backup appliance where you redirect your backups, it is an appliance that provides zero data loss. To do that, the key features are:
  • Virtual backups: Based on the incremental backups that you send, it generates a virtual full backup for you.
  • Real-Time Redo: ZDLRA can be an archivelog destination for your database, giving RPO zero.
  • Replication: You can replicate backups between ZDLRAs, eliminating a single point of failure.
  • Tape integration: ZDLRA uses OSB to copy backups to tape.
  • RMAN: Totally compatible with RMAN commands, and it delivers a self-managed RMAN catalog.
  • Database and MAA integration: Fully integrated with MAA environments, including DG.
Since it is built over Exadata, the updates follow the same procedure: IB switches, storage, database nodes, and after that the ZDLRA software. But in this case, you don’t need to worry about integration or tuning for Exadata; everything is done by Oracle. For storage, ZDLRA uses two diskgroups: DELTA (to store the backups) and CATALOG (to store the RMAN catalog tables).
The integration between ZDLRA and your databases is simple. You have an SBT_TAPE library to use and the RMAN catalog database. The library communicates with the ZDLRA appliance and sends the blocks, and the RMAN catalog stores the metadata.
Talking about the RMAN catalog: with ZDLRA you need to use the RMAN catalog, and the catalog database that you connect to is the ZDLRA itself. The catalog is managed by the ZDLRA software; this means that when you send the backup to the ZDLRA tape driver and ZDLRA processes it, the catalog reflects all the new virtual full backups that are available. You don’t need to manage the backups; everything is done by the ZDLRA software (delete backup, crosscheck, validation, etc.).
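Just to illustrate the client side, the sketch below shows the usual shape of the RMAN configuration that points the SBT channel to the ZDLRA library and connects to its catalog. Every value here (library path, wallet location, credential alias, VPC catalog user) is a placeholder and is not taken from this post:

$ rman target / catalog vpcuser@zdlra
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libra.so, ENV=(RA_WALLET=location=file:/u01/app/oracle/dbs/ra_wallet credential_alias=zdlra-scan:1521/zdlra:dedicated)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;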

 

Virtual backups

This is probably the best-known feature of ZDLRA. In the most basic description, it “simply” creates a new level 0 backup for you using only incremental level 1 backups. The functionality is simple:
  1. Ingest a first RMAN level 0 backup.
  2. Ingest the subsequent RMAN level 1 backups.
  3. A new “virtual level 0” appears available to you.
But how does it work? Basically, ZDLRA opens the RMAN block and reads the datafile blocks that are inside it. After that, using the already existing backup (stored inside ZDLRA) for that datafile, it generates a new level 0 backup for you. This is called the “incremental forever strategy”, where (after the first level 0) you just need to do incremental backups.
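A minimal sketch of what that looks like from the client side, assuming an SBT channel already configured for the ZDLRA library as shown earlier:

# One initial level 0 to seed the delta store, then only level 1 forever.
RMAN> BACKUP DEVICE TYPE SBT INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP DEVICE TYPE SBT CUMULATIVE INCREMENTAL LEVEL 1 DATABASE;

After each level 1 is ingested and indexed, a new virtual level 0 appears in the catalog as if you had taken a full backup.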
If you are thinking about deduplication at this moment, you are almost right. But it is really different from other vendors that just index the RMAN block (by hash, MD5, whatever); ZDLRA is the only one that opens the block and looks at what is inside. It picks up the content of the RMAN block, the data blocks of each datafile that are inside it, and indexes them.
But why is it better than deduplication? Because it is sensitive to the context, and in this case, it is the Oracle context. It also works better because it can handle every database block size and any kind of FILESPERSET or chunk size that you specify. You don’t need “black magic” to get minimally good dedup ratios, as with DataDomain for example: https://community.emc.com/docs/DOC-24118 and https://community.emc.com/docs/DOC-18775. You can argue that DataDomain can do the same, since it opens the RMAN block (just 8k and 16k blocks), but that was done by reverse engineering the sent block: https://www.emc.com/collateral/white-papers/h15389-data-domain-oracle-optimized-deduplication.pdf. But what if Oracle changes this pattern? It can also happen that RMAN reads the same datafile blocks (not touched by DBWR) but, because of SGA/PGA buffers, creates a different RMAN block.
A little analogy. Imagine that you work in a library and every day I deliver to you one box with books of an encyclopedia. In the first case, you store the box but can’t open it to store the books that are inside; you can only decide that you already have it based on its size, weight, or whatever else you choose. In the ZDLRA case, you receive the same box, but you open it and check whether you already have each book in your collection. One day I come back and ask you for the full encyclopedia. So, which solution do you think will be better for Oracle backups?
The first is normal deduplication. It works very well for a lot of block types (Word, Excel, txt) because you “maybe” have the key to open the box, but not so well for binary block types (like RMAN). ZDLRA is Oracle, and it has the key to open the box (the RMAN block) and check the content to identify the books (the datafile blocks). Clever, no?
There are a lot of other details to cover; I will do that in a dedicated post about virtual backups.

Real-Time Redo

The “zero data loss” name comes from this feature, because it guarantees zero RPO for your database. In a simple way, ZDLRA can be an archivelog destination for your database. It works the same way as a DG configuration, but more simply, because you just need to configure a log_archive_dest_n parameter.
You can think that it is similar to, or the same as, what you can do with DG Far Sync: https://www.oracle.com/technetwork/database/availability/8395-data-guard-far-sync-1967244.pdf. But with ZDLRA you don’t need a standby database, and every transaction will still be protected.
It is important to say that it is more than just an archivelog destination, because it can receive the redo from the log buffer of your database; there is no need to wait for the archivelog to be generated to ship it. It can work sync or async, and if your database has an outage (a storage failure, for example), ZDLRA generates a partial archivelog for you. Another thing is that there is no need to do archivelog backups (if you use Real-Time Redo), because ZDLRA detects the log switch and generates the archivelog for you (inside the catalog).
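The destination itself has the same shape as any remote redo destination; a rough sketch only, where the service string, the db_unique_name, and the transport user are placeholders and the full documented steps also involve the ZDLRA wallet/credentials:

-- Redo is shipped by the user registered in the ZDLRA catalog (placeholder name).
ALTER SYSTEM SET redo_transport_user = RAVPC1 SCOPE = BOTH SID = '*';
ALTER SYSTEM SET log_archive_dest_3 = 'SERVICE="zdlra-scan:1521/zdlra" ASYNC NOAFFIRM VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdlra' SCOPE = BOTH SID = '*';
ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE SCOPE = BOTH SID = '*';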
Besides that, if you use replicated ZDLRAs, you can have additional protection by sending the data to another site. I will talk about that later, but using replicated ZDLRAs you can have RPO zero for DG databases and for databases not protected by DG.
Oracle calls Virtual Backups + Real-Time Redo “Delta Push”, because you send to ZDLRA just the minimum data needed to be protected, just the delta difference. This reduces a lot of the load over the system and the network, and avoids reading unchanged/repeated data from the source database and inside ZDLRA too.

 

 

Replication

ZDLRA can replicate backups between two ZDLRAs. This replication is done automatically, using a dedicated network, and can be:
  • One-way: You have the master ZDLRA (called upstream) and the destination (called downstream).
  • Bi-directional: Two ZDLRAs replicate backups between each other. Both are upstream and downstream.
  • Hub-and-spoke: You have one downstream ZDLRA that receives the backups from more than one upstream.
The detail here is that replication is an additional feature of ZDLRA usage; even a downstream ZDLRA can protect a site (like the standby site), receiving backups from databases as a normal ZDLRA. The downstream ZDLRA is not locked in as just a destination for the upstream. And since it is a replicated environment, you can restore your backupset from the downstream replica (while connected to the upstream ZDLRA).
Because of the ZDLRA design, the RMAN catalog is updated too. This means that after the backup is replicated to the downstream, another copy of your backupset automatically appears inside the catalog. And you can have different policies between ZDLRAs. So, the upstream can have a policy with a retention of 30 days (because maybe you need the space for more backups) and the downstream a retention of 90 days.

 

Tape and Cloud integration

ZDLRA can send backups to a media manager (tapes); the recommended way to do that is to use the pre-installed OSB (Oracle Secure Backup) as the media manager library. ZDLRA manages everything, using the dedicated SAN network to send the backups and managing the tape drives. Another option is to sync ZDLRA with Oracle Cloud object storage.
But you can use third-party software as the tape media manager; you just need to install the agent on the database nodes (of ZDLRA) and configure it as the media manager library. It is not trivial, but you can do it and integrate your tape library with ZDLRA. In this case all the communication (copy from ZDLRA to tape) will be done over the LAN.
Again, the RMAN catalog will be automatically managed by ZDLRA. When the backupset is copied to tape, the copy appears as available for restore in your RMAN catalog.
To manage the tape backups you use the same policies that you use for backups. You can specify the retention period, number of copies, and more. If you use OSB, ZDLRA will manage everything (freeing tapes, for example).

 

 

For the cloud, the idea is the same. You have a plugin that appears in your ZDLRA as a tape library, and the jobs clone from ZDLRA to the cloud (in the same way as to tape). But since Oracle Cloud is fully encrypted, there are some steps to manage the keys to integrate the systems. This can be one way to have a third site to store your backups outside of your environment.

 

RMAN

As told before, ZDLRA has two main parts: the software itself (that creates the virtual backups) and the database, which contains the RMAN catalog that you connect to. ZDLRA manages the RMAN catalog database automatically; this means that every backup that is sent appears in the catalog. And everything that happens with this backup (the derived virtual full backup, clone to tape, replication, deletion because of retention expiration) is reflected automatically in the RMAN catalog. You don’t need to manage or catalog the backupsets.
But the most important thing is that everything that you already did with RMAN works 100% with ZDLRA. You don’t need to learn new commands or new backup types. Everything that you need is “backup incremental level”.
Since ZDLRA will be a central part of your environment, it is important to know how to search for data in the catalog (using SQL). I will show some examples of how to understand the RC_* tables/views that will allow you to create some reports from your backups.
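As a small taste of that (a sketch only, using standard recovery catalog views that exist in any RMAN catalog), the query below lists the backup pieces completed in the last day per database:

SELECT d.name, bp.bs_key, bp.bp_key, bp.handle, bp.media, bp.status, bp.completion_time
  FROM rc_backup_piece bp
  JOIN rc_database d
    ON d.db_key = bp.db_key
 WHERE bp.completion_time > SYSDATE - 1
 ORDER BY d.name, bp.completion_time;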

 

Database and MAA integration

Since it is really part of the MAA family, the integration between ZDLRA and databases is quite easy, covering all the points for protection. And protection here means not just backup, but raising the bar for transaction protection to reach zero data loss in any kind of scenario (DG and non-DG databases).
ZDLRA is not just a backup appliance; it allows you to reach RPO zero even without using DG. But if you are using DG you need to pay attention to some details. The idea from the MAA team is that each component does its own job; this means that if you need to protect a database from a site outage, DG does that. ZDLRA is not a full standby database.
That said, when you have a full DG environment, you can reach an RTO (Recovery Time Objective) of zero too. ZDLRA alone can provide you just RPO zero, but it does not give you zero downtime (or unavailability). You don’t lose transactions with RPO zero, but you will need some time to recover the database. The only way to have both RTO and RPO zero is using DG. In this scenario, each database does its backup to the ZDLRA in the same site, and DG does the sync between the sites (for DBs with DG).
If you have more than one site and want to protect everything, the best solution is to have a ZDLRA in both sites. You can still have just one ZDLRA and protect everything, but remember that you need to choose one site (or a third one) to put your ZDLRA in.
But the good part is that you can mix everything. There are a lot of cases where some databases are not protected by DG. For these, you can use Real-Time Redo to have RPO zero, and replicate to another ZDLRA in another site to avoid a single point of failure.
This raises the bar for MAA, because it is the highest level of protection that you can reach. Even if you cut the communication between the sites, you still have RPO zero because of the ZDLRA protection. And if you have a complete site or server failure, the RTO will be zero because of DG.
This was the project that I designed, and you can see it in the presentation from OOW 2015: https://www.oracle.com/technetwork/database/availability/con8830-zdlradeepdive-2811109.pdf.

Conclusion

ZDLRA is not just a backup appliance, it is more than that. It allows you to reach RPO zero for any kind of environment, and together with DG, you can reach zero RPO and RTO.
Besides that, the ZDLRA software delivers a lot of features that help you in daily activities. The virtual backups deliver a validated level 0 backupset for each datafile. You get a self-managed RMAN catalog; you don’t need to catalog (or uncatalog) pieces for the clones to tape, deletions, or replications that you do.
And there is no need to learn complex new features; you continue to do the backups in the same way that you did previously: “backup incremental level 1” and just that.
In the next posts, I will try to cover feature by feature in a lot of detail: virtual backups, real-time redo, DG integration, internal tasks, updates. Stay tuned.

 

References

All the images and base info for this post came from these links:
https://www.oracle.com/technetwork/database/availability/con8830-zdlradeepdive-2811109.pdf
https://www.oracle.com/technetwork/database/availability/oow14-con7684-zdlra-techoverview-2332009.pdf
https://www.oracle.com/technetwork/database/availability/8395-data-guard-far-sync-1967244.pdf
https://docs.oracle.com/cd/E55822_01/AMOGD/E37432-22.pdf
https://docs.oracle.com/cd/E55822_01/AMAGD/E37431-29.pdf
http://www.oracle.com/us/products/engineered-systems/recovery-appliance-twp-2313693.pdf
https://www.oracle.com/technetwork/database/availability/recoveryapplianceperformance-2636208.pdf
https://www.oracle.com/technetwork/database/availability/recovery-appliance-businesscase-2574428.pdf
https://www.oracle.com/technetwork/database/availability/recovery-appliance-data-guard-2767512.pdf
https://www.oracle.com/technetwork/database/features/availability/ow-papers-089200.html

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


INDEX_BACKUP task for ZDLRA, Percentage done

A quick post about how to check the percentage done for the INDEX_BACKUP task in ZDLRA. Just to contextualize, in a simple way: INDEX_BACKUP is a ZDLRA task that (after you ingest the backup of a datafile) generates an index of the blocks and creates the virtual backup for you.
Here I start a new series about ZDLRA with some hints based on my usage experience (practically since the release in 2014). Today’s post just scratches the surface of ZDLRA internals; I will extend it in future posts, stay tuned.

INDEX_BACKUP

As told before, this task is one of the most important for ZDLRA, but it can take a lot of time to index the backup if the file is huge (many TBs). Remember that ZDLRA needs to index all the blocks of the file, and for the first level 0 this takes some time.
Unfortunately, officially there is no way to check how far you are from the end and how much of the index creation was already done. The table that shows the info for running tasks doesn’t contain this information. Because of that, I created a little SQL that you can use to check the task and try to discover the percentage already done.

 

 

SQL

In the SQL I will use two tables from the RASYS ZDLRA schema, RA_TASK and SESSIONS, plus GV$SESSION from the database side.
The SQL:

 

 

SELECT s.ba_session_id, s.instance_id, s.sid, s.serial#, s.job_name

       , rt.task_id, rt.DB_KEY, rt.db_unique_name, rt.task_type, rt.state, rt.waiting_on

       , rt.elapsed_seconds

       , gs.module, gs.sql_id, gs.action, gs.event

       , rt.BP_KEY, rt.bs_key, rt.df_key, rt.vb_key

FROM sessions s

JOIN ra_task rt

ON rt.task_id = s.current_task

JOIN gv$session gs

ON gs.inst_id = s.instance_id

AND gs.sid = s.sid

AND gs.serial# = s.serial#

ORDER BY rt.LAST_EXECUTE_TIME DESC

 

This query is a join between RA_TASK, SESSIONS (from ZDLRA), and GV$SESSION from the DB side. As you can see, the link between RA_TASK and SESSIONS is the current task. And once you reach the SESSIONS info from ZDLRA, you can link it with the sid, serial#, and inst_id running the task. Fortunately, GV$SESSION has the desired information. This query returns all running tasks; you can filter by task type if desired.
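For example, adding the predicate below (the value is the TASK_TYPE shown in the output that follows) restricts it to the indexing tasks only:

AND rt.task_type = 'INDEX_BACKUP'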
Look at the result from one execution:
SQL> col JOB_NAME format a30

SQL> col STATE format a10

SQL> col DB_UNIQUE_NAME format a20

SQL> col MODULE format a30

SQL> col ACTION format a20

SQL> col EVENT format a35

SQL> col TASK_TYPE format a15

SQL> set linesize 400

SQL> l

  1  SELECT s.ba_session_id, s.instance_id, s.sid, s.serial#, s.job_name

  2         , rt.task_id, rt.DB_KEY, rt.db_unique_name, rt.task_type, rt.state, rt.waiting_on

  3         , rt.elapsed_seconds

  4         , gs.module, gs.sql_id, gs.action, gs.event

  5         , rt.BP_KEY, rt.bs_key, rt.df_key, rt.vb_key

  6  FROM sessions s

  7  JOIN ra_task rt

  8  ON rt.task_id = s.current_task

  9  JOIN gv$session gs

 10  ON gs.inst_id = s.instance_id

 11  AND gs.sid = s.sid

 12  AND gs.serial# = s.serial#

 13* ORDER BY rt.LAST_EXECUTE_TIME DESC

SQL> /




BA_SESSION_ID INSTANCE_ID        SID    SERIAL# JOB_NAME                          TASK_ID     DB_KEY DB_UNIQUE_NAME       TASK_TYPE       STATE      WAITING_ON ELAPSED_SECONDS MODULE                         SQL_ID        ACTION               EVENT                                   BP_KEY     BS_KEY     DF_KEY     VB_KEY

------------- ----------- ---------- ---------- ------------------------------ ---------- ---------- -------------------- --------------- ---------- ---------- --------------- ------------------------------ ------------- -------------------- ----------------------------------- ---------- ---------- ---------- ----------

     93729765           2       4453      42235 RA$_EXEC_93730430                93676068     477602 SIMON_DB1            INDEX_BACKUP    RUNNING                    28558.1086 q_restore_fast                 brt6uuhzacdnu plan 12% done        cell single block physical read       41032043




SQL>

 

Important columns here:
  • ACTION: From GV$SESSION, shows the percentage already done for this INDEX_BACKUP task.
  • ELAPSED_SECONDS: From RA_TASK, shows how many seconds this task has been running. You can use it together with the percentage to estimate the time to reach 100%.
  • BP_KEY/DF_KEY/BS_KEY/VB_KEY: From RA_TASK, show the base info about the input to the task. In this case, and usually for INDEX_BACKUP tasks, the only populated column will be BP_KEY (the backup piece key) because you are indexing a new backup that just entered the ZDLRA.
As you can see, the INDEX_BACKUP task 93676068 for the backup piece 41032043 has been running for 28558 seconds, is not waiting on anything, and has processed 12% of the backup piece. We can dig a little more to identify and validate some values from the blocks processed, but I will cover that in the second part.
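Just as a rough, back-of-the-envelope estimate from those two values (it assumes a constant indexing rate, which is not guaranteed):

SELECT ROUND(28558 / 0.12) AS est_total_sec,
       ROUND(28558 / 0.12) - 28558 AS est_remaining_sec
  FROM dual;

This gives roughly 237,983 seconds in total, so about 209,425 seconds (close to 58 hours) still to go at the observed rate.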
Conclusion
As you can see, with this simple query you can check the percentage done for an INDEX_BACKUP task. It was necessary to query some internal tables of the database, but the desired result was easy to find. I tested this query with an old version of ZDLRA (12.2.1.1.1-201805 from August 2018) and with the latest version (12.2.1.1.2-201907 from 11/July) and the result was the same, and always accurate.
I will post more about ZDLRA, from usage experience to internal details.
You can check my presentation at OOW 2015 about the MAA project that used ZDLRA to protect the whole environment: https://www.oracle.com/technetwork/database/availability/con8830-zdlradeepdive-2811109.pdf
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


ZDLRA, since 2014

At Oracle Open World 2014 the Zero Data Loss Recovery Appliance (ZDLRA) was released, and it changed MAA in many ways, but mainly in two: protection and backup. I watched the ZDLRA presentation and saw that it matched the needs that I had at that time.
After OOW 2014 I started the project (all phases, from conception and requirements to deployment and usage) that became (in 2015) the first ZDLRA installation in Brazil, and one of the first in the world to use replicated ZDLRAs to protect both sites (primary and standby) and many tiers of databases (PROD, TST, DEV). Oracle MAA at its finest was amazing: ZDLRA + Exadata + DG, everything integrated to protect both sites.
Because of the high-level design of the project, it was chosen to be one of the main ZDLRA presentations at Oracle Open World 2015; you can find the link to the presentation that I made together with the ZDLRA dev team here. As told before, this project integrated two ZDLRAs, two Exadatas, and DG to reach zero Recovery Point Objective (RPO) and Recovery Time Objective (RTO) and, besides that, reduce the backup time. You can see the presentation to check the scope and other details.
I will start to write more about ZDLRA, from the technical part to project details: how you use it and how it impacts your MAA projects. Just remember that ZDLRA is not just a backup-oriented appliance; it is designed to protect mission-critical environments where the goal is zero data loss, but it also helps you improve your backup and recovery environment. Of course, you can use ZDLRA to protect all of your Oracle databases: SPARC, RISC, and x86; from DEV to PROD; from single instance to RAC and DG environments.
As usual, I always recommend following the MAA page to read the “Best Practices” and “Presentations”.

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


DML over Standby for Active Data Guard in 19c

With the new 19c version, Data Guard received some attention and now we can run DML over the standby and it will be redirected to the primary database. It is not hard to implement, but unfortunately there is not much information about it in the docs.
As a training exercise I tested this new feature and want to share some information about it. First, the environment that I used (which also reflects the requirements):
  • Primary and Standby databases running 19c.
  • Data Guard in Maximum Availability.
  • Active Data Guard enabled.
Remember that the idea of DML over the standby is to use it in cases where your reporting application needs to update a few tables and a few records (like audit logins) while processing the data in the standby. The volume of DML is (and must remain) low. At this point there is no effort to allow, or create, multiple active-active datacenters/sites for your database. If you start to execute a lot of DML on the standby side you can impact the primary database, besides maximizing problems with locks and concurrency.

 

Theoretical info about DML over the standby can be found here and here. The basic workflow can be seen below:

 

 

 

To create the DG configuration you can follow the steps that are available on the internet from many sources; one good example is here. If you want to see what I did, you can check this file that contains the output of the steps for my config: “Log-Create-PHY-STB-19c.txt”.
Before continuing: since here we have a lot of interaction between primary and standby, the primary has a BLUE background and the standby a GREEN background in the code below.
The first configuration step is to guarantee that everything is OK with the DG config. The main thing is to check that “Real Time Query” is ON:
 
DGMGRL> show configuration




Configuration - dg19b2




  Protection Mode: MaxAvailability

  Members:

  dg19  - Primary database

    dg19s - Physical standby database




Fast-Start Failover:  Disabled




Configuration Status:

SUCCESS   (status updated 31 seconds ago)




DGMGRL> show database dg19s




Database - dg19s




  Role:               PHYSICAL STANDBY

  Intended State:     APPLY-ON

  Transport Lag:      0 seconds (computed 0 seconds ago)

  Apply Lag:          0 seconds (computed 0 seconds ago)

  Average Apply Rate: 2.00 KByte/s

  Real Time Query:    ON

  Instance(s):

    dg19s




Database Status:

SUCCESS




DGMGRL>




After that you enable the parameter ADG_REDIRECT_DML on both sides. Primary:

[oracle@orcl19p ~]$ sqlplus sys/oracle@dg19 as sysdba




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 17:50:38 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL> show parameter adg_redirect_dml




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

adg_redirect_dml                     boolean     FALSE

SQL>

SQL> alter system set adg_redirect_dml = true scope = both sid = '*';




System altered.




SQL>




And in the standby:

[oracle@orcl19s ~]$ sqlplus sys/oracle@dg19s as sysdba




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 17:51:19 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL> select open_mode from v$database;




OPEN_MODE

--------------------

READ ONLY WITH APPLY




SQL> show parameter adg_redirect_dml




NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

adg_redirect_dml                     boolean     FALSE

SQL>

SQL> alter system set adg_redirect_dml = true scope = both sid = '*';




System altered.




SQL>
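A side note worth verifying in your own environment: 19c also exposes this as a session-level setting, so instead of changing the instance-wide parameter you should be able to enable the redirection only for the session that needs it:

SQL> ALTER SESSION ENABLE ADG_REDIRECT_DML;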

To test the DML redirection I created one table with some data in the primary database. Here I am connected as SYS and without a PDB; later I will do the same with a PDB and a normal user:

 

SQL> CREATE TABLE testDML(c1 NUMBER, c2 VARCHAR2(50), c3 DATE) TABLESPACE users;




Table created.




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);




1 row created.




SQL> commit;




Commit complete.




SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') FROM testDML;




        C1 TO_CHAR(C3,'DD/M

---------- ----------------

         1 14/04/2019 17:58




SQL>

In the standby you can see the table and do some DML over it:

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') FROM testDML;




        C1 TO_CHAR(C3,'DD/M

---------- ----------------

         1 14/04/2019 17:58




SQL> select database_role from v$database;




DATABASE_ROLE

----------------

PHYSICAL STANDBY




SQL> delete from testDML;




1 row deleted.




SQL> commit;




Commit complete.




SQL>
And in the primary, you can see that the data was deleted:

 

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') FROM testDML;




no rows selected




SQL> SELECT TO_CHAR(sysdate, 'DD/MM/RRRR HH24:MI') FROM dual;




TO_CHAR(SYSDATE,

----------------

14/04/2019 18:01




SQL>

 

Locks
But in the case of locks, how does it work? Is there some new event? Something different? To simulate this, I inserted the data in the primary again and checked in the standby that the data appeared in the table:

 

SQL> select c1, TO_CHAR(c3, 'DD/MM/RRRR HH24:MI') as c3 FROM testDML;




        C1 C3

---------- ----------------

         1 14/04/2019 18:04




SQL>
 
In the primary I deleted all the data, but without commit, to lock the whole table:

 

SQL> delete from testDML;




1 row deleted.




SQL>
After that, in the standby, I tried to update all the rows in the table. But, as expected, my session got stuck:

 

SQL> update testDML set c1 = 2;

 

At the same time I went to the primary and checked for locks and events:

 

SQL> SELECT username, final_blocking_session, final_blocking_instance, event lockwait, status, machine, service_name , sql_id FROM gv$session WHERE username IS NOT NULL;




USERNAME             FINAL_BLOCKING_SESSION FINAL_BLOCKING_INSTANCE LOCKWAIT                                                         STATUS   MACHINE              SERVICE_NAME                             SQL_ID

-------------------- ---------------------- ----------------------- ---------------------------------------------------------------- -------- -------------------- ---------------------------------------- -------------

SYS                                                                 OFS idle                                                         ACTIVE   orcl19p.oralocal     SYS$BACKGROUND

SYSRAC                                                              SQL*Net message from client                                      INACTIVE orcl19p.oralocal     SYS$USERS

PUBLIC                                                              SQL*Net message from client                                      INACTIVE orcl19s.oralocal     SYS$USERS

SYS                                                                 SQL*Net message to client                                        ACTIVE   orcl19p.oralocal     SYS$USERS                                4dzzccufjscvp

SYS                                      86                       1 enq: TX - row lock contention                                    ACTIVE   orcl19s.oralocal     SYS$USERS                                49b39kubnb2d1




SQL>

SQL> col sql_text format a50

SQL> select sql_text from v$sql where sql_id = '49b39kubnb2d1';




SQL_TEXT

--------------------------------------------------

UPDATE "TESTDML" "A1" SET "C1" = 2




SQL>
As you can see, there is no different event. The primary database reports the TX row lock contention coming from a connection of the standby database, and you can even see the SQL. Normal behaviour.
But, after some time in the lock state, the session in the standby reported a new error: ORA-02049: timeout: distributed transaction waiting for lock and ORA-02063: preceding line from ADGREDIRECT. These exist just to avoid an eternal wait in the standby.

 

SQL> update testDML set c1 = 2;

update testDML set c1 = 2

*

ERROR at line 1:

ORA-02049: timeout: distributed transaction waiting for lock

ORA-02063: preceding line from ADGREDIRECT


SQL>
 
If you do it from the opposite side, deleting the row from the standby and not committing, the session in the primary that is waiting never receives the ORA-02049. And the session in the standby holds the lock until you commit/rollback.
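A hedged observation about that timeout: ORA-02049 is the generic distributed-transaction timeout, normally governed by the DISTRIBUTED_LOCK_TIMEOUT parameter (60 seconds by default), so that value is presumably what bounds how long the redirected DML waits before failing; you can check it with:

SQL> show parameter distributed_lock_timeout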
Over PDB
If you want to do the same using a PDB, you need to work a little with services to allow direct connections to the physical standby. Basically, you need to create the service and set the role (to start it) as physical_standby. The tricky part here is that you need to, at least, call the start command on the primary, because doing this registers it in the service$ table. Since you are in the DG config and running Active Data Guard (this is a requirement, because the standby is open with Real Time Query), you can start it in the standby too. I added this info here because with DML over standby this config starts to be recurrent.
First, create and start the service in the primary database:

 

[oracle@orcl19p ~]$ srvctl add service -db dg19 -service dgpdb_dml -pdb dgpdb -role physical_standby

[oracle@orcl19p ~]$ srvctl start service -db dg19 -service dgpdb_dml -pdb dgpdb

[oracle@orcl19p ~]$

[oracle@orcl19p ~]$ sqlplus / as sysdba




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 18:48:28 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL> alter session set container = DGPDB;




Session altered.




SQL> set linesize 250

SQL> col NAME format a30

SQL> col NETWORK_NAME format a30

SQL> col pdb format a30

SQL> SELECT name, network_name, pdb FROM service$;




NAME                           NETWORK_NAME                   PDB

------------------------------ ------------------------------ ------------------------------

DGPDB                          DGPDB                          DGPDB

dgpdb_dml                      dgpdb_dml                      DGPDB




SQL> exit

Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0

[oracle@orcl19p ~]$
After that you do the same in the standby.

 

[oracle@orcl19s ~]$ srvctl add service -db dg19s -service dgpdb_dml -pdb dgpdb -role physical_standby

[oracle@orcl19s ~]$ srvctl start service -db dg19s -service dgpdb_dml -pdb dgpdb

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$ srvctl stop database -d dg19s

[oracle@orcl19s ~]$ srvctl start database -d dg19s

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$

[oracle@orcl19s ~]$ srvctl status service -d dg19s

Service dgpdb_dml is running

[oracle@orcl19s ~]$

 

Here is something to add: I don’t know why, but for me it only worked (the service appeared as enabled to connect) after I restarted the standby. Since this point is not the focus of this post, you can find good info here (original post from Ivica Arsov) and here.
After that you can test the same as before (create a table in the PDB and populate it):

 

[oracle@orcl19p ~]$ sqlplus simon/simon@orcl19p/DGPDB




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 21:37:36 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.




Last Successful login time: Sun Apr 14 2019 21:36:05 +02:00




Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL> CREATE TABLE testDML(c1 NUMBER, c2 VARCHAR2(50), c3 DATE) TABLESPACE users;




Table created.




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);




1 row created.




SQL> commit;




Commit complete.




SQL>

 

After that, do some DML over the standby:

 

[oracle@orcl19s ~]$ sqlplus simon/simon@orcl19s/DGPDB




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 21:38:17 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.




Last Successful login time: Sun Apr 14 2019 21:37:36 +02:00




Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL>

SQL> delete from testDML;




1 row deleted.




SQL>

 

And if you have some locks, the behaviour is the same as before, but now with the username and con_id information from the connection:

 

SQL> select USERNAME, FINAL_BLOCKING_SESSION, FINAL_BLOCKING_INSTANCE, EVENT LOCKWAIT, STATUS, machine, service_name , sql_id, con_id from gv$session where username is not null;




USERNAME             FINAL_BLOCKING_SESSION FINAL_BLOCKING_INSTANCE LOCKWAIT                                                         STATUS   MACHINE              SERVICE_NAME         SQL_ID            CON_ID

-------------------- ---------------------- ----------------------- ---------------------------------------------------------------- -------- -------------------- -------------------- ------------- ----------

SYS                                                                 OFS idle                                                         ACTIVE   orcl19p.oralocal     SYS$BACKGROUND                              0

SYSRAC                                                              SQL*Net message from client                                      INACTIVE orcl19p.oralocal     SYS$USERS                                   1

SYSRAC                                                              SQL*Net message from client                                      INACTIVE orcl19p.oralocal     SYS$USERS                                   1

PUBLIC                                                              SQL*Net message from client                                      INACTIVE orcl19s.oralocal     SYS$USERS                                   1

SIMON                                   102                       1 enq: TX - row lock contention                                    ACTIVE   orcl19p.oralocal     dgpdb                4sqjaugwcxar0          3

SYS                                                                 SQL*Net message to client                                        ACTIVE   orcl19p.oralocal     SYS$USERS            cssctts2u81n4          1

SIMON                                                               SQL*Net message from client                                      INACTIVE orcl19s.oralocal     SYS$USERS                                   3




7 rows selected.




SQL>

SQL> select username, machine, service_name, program from gv$session where inst_id = 1 and sid = 102;




USERNAME             MACHINE              SERVICE_NAME         PROGRAM

-------------------- -------------------- -------------------- ------------------------------------------------

SIMON                orcl19s.oralocal     SYS$USERS            [email protected] (TNS V1-V3)




SQL>

 

As you can see above, the connection appears in the primary as a normal connection; nothing complicated or special to handle. And in the listener log the connection (coming from the standby when receiving the DML) is normal:

 

2019-04-14T22:12:57.977135+02:00

14-APR-2019 22:12:57 * (CONNECT_DATA=(SERVICE_NAME=8106653b19ca6636e053016410ac3c21)(CID=(PROGRAM=oracle)(HOST=orcl19s.oralocal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.16.200.1)(PORT=36782)) * establish * 8106653b19ca6636e053016410ac3c21 * 0

2019-04-14T22:14:02.121149+02:00

14-APR-2019 22:14:02 * (CONNECT_DATA=(SERVER=DEDICATED)(SID=dg19)(CID=(PROGRAM=oracle)(HOST=orcl19s.oralocal)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=172.16.200.1)(PORT=36786)) * establish * dg19 * 0

 

The behaviour is so normal that you will see the cursors open in the primary (the locking is per row):

 

SQL> create index ixtestDML on testDML(c1) tablespace users;




Index created.




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);




1 row created.




SQL> INSERT INTO testDML(c1, c3) VALUES (2, sysdate);




1 row created.




SQL> commit;




Commit complete.




SQL>




    ###########

    In the standby database

    ###########

    SQL> delete from testDML where c1 = 1;




    1 row deleted.




    SQL>




SQL> update testDML set c1 = 3 where c1 = 2;




1 row updated.




SQL>

SQL> select  sql_text, user_name from v$open_cursor where user_name = 'SIMON' and upper(sql_text) like '%TESTDML%';




SQL_TEXT                                                                                             USER_NAME

---------------------------------------------------------------------------------------------------- --------------------

DELETE FROM "TESTDML" "A1" WHERE "A1"."C1"=1                                                         SIMON

update testDML set c1 = 3 where c1 = 2                                                               SIMON




SQL>

 

ORA-16397
If you receive the error ORA-16397: statement redirection from Oracle Active Data Guard standby database to primary database failed, this is linked to the way that you connected to the database. If you use “/ as sysdba”, without username and password, you hit this error when you try to do the DML. To solve it, connect using username, password, and database:

 

[oracle@orcl19s ~]$ sqlplus / as sysdba




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 19:34:45 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

INSERT INTO testDML(c1, c3) VALUES (1, sysdate)

            *

ERROR at line 1:

ORA-16397: statement redirection from Oracle Active Data Guard standby database

to primary database failed







SQL> exit

Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0

[oracle@orcl19s ~]$ sqlplus sys/oracle@dg19s as sysdba




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Apr 14 19:34:55 2019

Version 19.2.0.0.0




Copyright (c) 1982, 2018, Oracle.  All rights reserved.







Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);




1 row created.




SQL> commit;




Commit complete.




SQL> exit

Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Beta

Version 19.2.0.0.0

[oracle@orcl19s ~]$

 

ORA-16000
If you disable ADG_REDIRECT_DML, you will receive the error ORA-16000: database or pluggable database open for read-only access when you try to execute the DML:

 

SQL> alter system set ADG_REDIRECT_DML=false scope = both sid = '*';




System altered.




SQL> INSERT INTO testDML(c1, c3) VALUES (1, sysdate);

INSERT INTO testDML(c1, c3) VALUES (1, sysdate)

            *

ERROR at line 1:

ORA-16000: database or pluggable database open for read-only access







SQL>

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


Observer, Quorum

This article closes the series about DG and Fast-Start Failover, where I covered in more detail the case where isolation can lead to the shutdown of your healthy/running primary database: “ORA-16830: primary isolated from fast-start failover partners”.
In the first article, I wrote about how one simple detail dramatically impacts the reliability of your MAA environment. Where you put your observer in a DG environment (when Fast-Start Failover is in use) plays a core role in case of outages, and you can face primary isolation and shutdown. Besides that, there is no clear documentation about the “pros and cons” to define the correct place for the observer. You can read more in my article here.
In the second article, I wrote about a new feature that can help you be more protected and cover more scenarios for Fast-Start Failover/DG. Using multiple observers you can remove the single point of failure and put one observer on each side of your environment (primary, standby, and a third site). You can read more in my article here.
In this last article I discuss how, even using all the features, there is no perfect solution. Another point discussed here is how (maybe) Oracle could improve that. Below I will show in more detail that even with multiple observers a healthy primary database is still shut down. Unfortunately, it is a lot of tech info and log output, but you can jump directly to the end to see the discussion about how this could be improved.

 

Fast-Start Failover and Multiple Observers

Because of the design of Fast-Start Failover, the Broker, and DG, even when using multiple observers the decision (to fail over the database or not) is still based on the report of just one observer. The others are just backups for the master, and what they see does not count when the failover scenario hits the environment. Even if the primary database can receive connections from the other two observers, if it does not receive the connection from the master observer (and the standby), it decides to shut down because it is “isolated”.
Look at the example below, where I have three observers (one in each site and a third one in the “cloud”):

 

 

The image above can be translated into this Broker config, where the master observer is “dbobss” (which resides in the standby datacenter):

 

DGMGRL> show fast_start failover;




Fast-Start Failover: ENABLED




  Threshold:          240 seconds

  Target:             orcls

  Observers:      (*) dbobss

                      dbobsp

                      dbobst

  Lag Limit:          30 seconds (not in use)

  Shutdown Primary:   TRUE

  Auto-reinstate:     TRUE

  Observer Reconnect: 10 seconds

  Observer Override:  FALSE




Configurable Failover Conditions

  Health Conditions:

    Corrupted Controlfile          YES

    Corrupted Dictionary           YES

    Inaccessible Logfile            NO

    Stuck Archiver                  NO

    Datafile Write Errors          YES




  Oracle Error Conditions:

    (none)




DGMGRL>

 

Note that in this case I set the Fast-Start Failover threshold to 240 seconds just to have more time to show the logs. But be aware that this property defines the time that your system waits/freezes before proceeding with the failover in case of isolation or loss of the primary.
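For reference, both the threshold and Fast-Start Failover itself are plain Broker settings; a minimal sketch of how such a value is set (the property and commands are standard DGMGRL, only the value 240 comes from this config):

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 240;
DGMGRL> ENABLE FAST_START FAILOVER;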
Here, to simulate site isolation and the same behavior as in the first article, I shut down the network from the standby (that talks with the primary database), from the master observer (just the network that talks with the primary database), and, for the other observers, the network for standby communication. The image below reflects this:

 

 

After that, the Broker log on the primary starts to report a lot of information, which I will discuss below. You can click here to check the full text. From this log you can see that the primary detected that it lost communication with the standby at 20:14:06.504 and that in 4 minutes (240 seconds) it would trigger the failover. But the most important part is marked below:

 

2019-05-05 20:14:36.506                      LGWR: still awaiting FSFO ack after 30 seconds

2019-05-05 20:14:41.968                      DMON: Creating process FSFP

2019-05-05 20:14:44.976                      FSFP: Process started

2019-05-05 20:14:44.976                      FSFP: The ping from current master observer (No. 1) to primary database timed out.

2019-05-05 20:14:44.976                      FSFP: Preparing to switch master observer from observer 1 to observer 2

2019-05-05 20:14:44.976                      FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:14:44.976                      DMON: FSFP successfully started

2019-05-05 20:14:44.977                      FSFP: persisting FSFO state flags=0x10040001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:14:44.979                      FSFP: Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

2019-05-05 20:14:44.980                      FSFP: NET Alias translated: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcls) (UR=A)))

2019-05-05 20:14:44.980                      FSFP: Net link using RFIUPI_DB_CDESC, site=2, inst=0

2019-05-05 20:14:44.981                      FSFP: Connect to member orcls using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbstb_dg)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(UR=A)(SERVICE_NAME=orcls_DGB)))

2019-05-05 20:14:44.981                      FSFP: 0 x 10 seconds network retry READY

2019-05-05 20:14:51.506                      LGWR: still awaiting FSFO ack after 45 seconds

2019-05-05 20:14:55.120                      FSFP: Failed to connect to remote database orcls. Error is ORA-12543

2019-05-05 20:14:55.120                      FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:14:55.120                      FSFP: persisting FSFO state flags=0x40001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:15:06.507                      LGWR: still awaiting FSFO ack after 60 seconds

2019-05-05 20:15:10.124                      FSFP: The ping from current master observer (No. 1) to primary database timed out.

2019-05-05 20:15:10.124                      FSFP: Preparing to switch master observer from observer 1 to observer 2

2019-05-05 20:15:10.124                      FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:15:10.124                      FSFP: persisting FSFO state flags=0x10040001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:15:10.128                      FSFP: Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

2019-05-05 20:15:10.128                      FSFP: NET Alias translated: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcls) (UR=A)))

2019-05-05 20:15:10.128                      FSFP: Net link using RFIUPI_DB_CDESC, site=2, inst=0

2019-05-05 20:15:10.128                      FSFP: Connect to member orcls using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbstb_dg)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(UR=A)(SERVICE_NAME=orcls_DGB)))

2019-05-05 20:15:10.128                      FSFP: 0 x 10 seconds network retry READY

2019-05-05 20:15:14.979                      DMON: A Fast-Start Failover target switch is necessary because the primary cannot reach the Fast-Start Failover target standby database

2019-05-05 20:15:14.980                      DMON: A target switch was not attempted because the observer has not pinging primary recently.

2019-05-05 20:15:21.246                      FSFP: Failed to connect to remote database orcls. Error is ORA-12543

2019-05-05 20:15:21.247                      FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:15:21.247                      FSFP: persisting FSFO state flags=0x40001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

 

Above you can see that the Broker on the primary detected that the master observer is down and tried to switch to another observer. Unfortunately, since it was not possible to connect to the standby database, the switch could not be completed. Notice that the “SET SWOB INPRG” event was triggered, but the attempt was aborted (“CLR SWOB INPRG”) because “Failed to connect to remote database orcls. Error is ORA-12543”. If you compare with the log from the previous article (when I changed the master observer manually), here we do not see the “SET OBID” or even “OBSERVED” events.
So, basically, because the standby database was unreachable, the primary database could not swap the master observer (even though it still had connections from the other observers). This behavior does not change even if you set large values for the “FastStartFailoverThreshold”, “OperationTimeout”, and “CommunicationTimeout” properties; a sketch of how the last two are set is shown below.
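
For reference, a minimal sketch of how these two Broker properties can be changed in DGMGRL (the threshold was shown earlier, and the values below are only illustrative):

DGMGRL> EDIT CONFIGURATION SET PROPERTY OperationTimeout = 120;

DGMGRL> EDIT CONFIGURATION SET PROPERTY CommunicationTimeout = 180;
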

Quorum

In a DG scenario with Fast-Start Failover enabled, you can hit a shutdown of your healthy primary database because it thinks it is isolated when it loses communication with the standby and the master observer. Even when you add multiple observers, the behavior does not change.
By the current design of DG, this is 100% correct: it is impossible for each side to know whether it is isolated or not. The primary, when it loses the connection to the master observer and the standby, shuts down because it cannot guarantee the transactions. And the standby (if alive and connected to the master observer) fails over to become the next primary. The problem is that, even using multiple observers where you could have an odd number of votes, you still face isolation if the smaller partition (standby + master observer) votes by itself (even if it is the isolated one). As discussed in the first article, where you put your observer is very important, and you need to check the pros and cons of your decision.
Going deeper, when you use Fast-Start Failover your DG starts to operate in sync (even in Max Availability), and at the first sign of communication failure the primary database freezes and does not accept more transactions. No records are stored, and this is why, even with multiple observers, the primary cannot switch to another one: the database itself is blocked from persisting that change.
One option that could close this gap is for the Broker to start using a quorum (or voting system) to decide whether it is ok to proceed with the failover or not. In the scenario that I showed before, where the primary still has connections from the other two observers, the shutdown of the healthy primary would not occur because it has more votes/connections from observers (2) than the standby (which has just one). Unfortunately, there is no perfect solution: if an outage occurs on the primary side and you lose connection with most of the observers, the standby cannot fail over because it does not know whether it has enough votes to do that.
This could be a future, additional option for Fast-Start Failover environments. But the key and fundamental point is still there: even a quorum will not work correctly if you put all the observers on just one side. And it becomes even more critical now that you can have hybrid clouds, with one cloud for databases and another for applications.
The idea behind writing these articles was to show the problem and try to fill the gap about where to place your observer. There is no clear doc covering the “pros and cons” of each placement option. Unfortunately, even with the new features, there is still a gap that needs to be covered to improve reliability in some scenarios. There is no perfect solution, but it is important to know what you can do to reach MAA at the highest level.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for community.”

 


OBSERVERS, MORE THAN ONE
Category: Database Author: Fernando Simon (Board Member) Date: 5 years ago Comments: 0

OBSERVERS, MORE THAN ONE

Recently I made a post about a little issue that I hit with Oracle Data Guard. In that scenario, because of an outage in the standby datacenter, a healthy primary database shut down with the error “ORA-16830: primary isolated…”. Just to remember: the database was running with Maximum Availability, Fast-Start Failover enabled, and (the most important detail) the observer was running in the standby datacenter too.
The point of my post (which you can read here) was to show that there is no doc that provides full details about the “pros” and “cons” of where to put your observer. Whatever the place, in the primary datacenter or in the standby, there are small details to check. Even the best (ideal) scenario, with a third datacenter, can be tough to sustain.
Here I will try to show one option that can help you improve the reliability of your environment. At least, you will have more options to decide how to protect your database. Below I show some details about how to configure and use multiple observers, but if you want to see a small concern about this you can go directly to the end of the post.

 

 

More than one

Basically, to do that, you add more than one observer to protect your DG environment. It is simple to configure, it has been available since 12.2, and you can have up to three of them. To configure one, you just need to (in the simplest way):
  1. Install the default Oracle Client infrastructure.
  2. Add TNS entries for the primary and standby databases (see the sketch after this list).
  3. Open DGMGRL.
  4. Call the “start observer” command.
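
As an illustration, here is a minimal sketch of the TNS entries that step 2 refers to. The standby host (dbstb_dg) and the service names come from this lab; the primary host name (dbprm_dg) is just an assumption, so adjust everything to your environment:

# tnsnames.ora on the observer host (illustrative only)
orcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbprm_dg)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )

orcls =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcls))
  )
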
Check how easy it is:

 

[oracle@dbobss ~]$ dgmgrl sys/oracle@orcls

DGMGRL for Linux: Release 12.2.0.1.0 - Production on Sun May 5 16:30:58 2019




Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.




Welcome to DGMGRL, type "help" for information.

Connected to "orcls"

Connected as SYSDBA.

DGMGRL> start observer

[W000 05/05 16:31:40.34] FSFO target standby is orcls

[W000 05/05 16:31:42.53] Observer trace level is set to USER

[W000 05/05 16:31:42.53] Try to connect to the primary.

[W000 05/05 16:31:42.53] Try to connect to the primary orcl.

[W000 05/05 16:31:42.54] The standby orcls is ready to be a FSFO target

[W000 05/05 16:31:42.54] Reconnect interval expired, create new connection to primary database.

[W000 05/05 16:31:42.54] Try to connect to the primary.

[W000 05/05 16:31:43.68] Connection to the primary restored!

[W000 05/05 16:31:44.68] Disconnecting from database orcl.

 

When using multiple observers you can have up to 3 observers at the same time. There is only one master observer, and it is the one responsible for fast-start failover and for protecting the system. If you lose the master observer, the Broker/primary/standby decide which one will be the next master observer. Up to and including 19c they do not work with a quorum (or anything like a voting system) to decide the role switch and protect the DG.
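
For reference, since 12.2 you can also give each observer a name and run it in the background. A minimal sketch, where the file paths are only an assumption from my side:

DGMGRL> START OBSERVER dbobsp IN BACKGROUND FILE IS '/home/oracle/fsfo_dbobsp.dat' LOGFILE IS '/home/oracle/observer_dbobsp.log' CONNECT IDENTIFIER IS orcls;
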
The interesting part about multiple observers is that they give you another way to customize your environment. Remember that in my first post I reported the complexity (based on pros and cons) of choosing the best place to put the observer. Now, with multiple observers, you can put one in each datacenter and switch between them when you want to protect one side or the other.
Now, my example environment is two databases and three observers:

 

Note that I have one in each datacenter and one in an external place. And inside the Broker you can see:

 

DGMGRL> show observer




Configuration - dgconfig




  Primary:            orcl

  Target:             orcls




Observer "dbobss" - Master




  Host Name:                    dbobss

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          1 second ago




Observer "dbobsp" - Backup




  Host Name:                    dbobsp

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          0 seconds ago




Observer "dbobst" - Backup




  Host Name:                    dbobst

  Last Ping to Primary:         2 seconds ago

  Last Ping to Target:          2 seconds ago


 

In case of failure, the Broker/primary/standby decide which one will be the next master observer. That decision occurs after 30 seconds and needs to be coordinated/communicated and agreed by both primary and standby. Unfortunately, there is no way to reduce this 30-second check.
In my environment, I shut down the machine running the master observer (dbobss), and this is the Broker log on the primary:

 

05/05/2019 17:15:34

FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

FSFP: FSFO SetState(st=12 "SET OBID", fl=0x0 "", ob=0x32cc2ad6, tgt=0, v=0)

Succeeded in switching master observer from observer 'dbobss' to 'dbobsp'

FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

FSFP: FSFO SetState(st=16 "UNOBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

Master observer begins pinging this instance

Fore: FSFO SetState(st=15 "OBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

 

And in the Broker log on the standby:

 

05/05/2019 17:15:34

drcx: FSFO SetState(st=16 "UNOBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

drcx: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

05/05/2019 17:15:37

drcx: FSFO SetState(st=15 "OBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

drcx: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

drcx: FSFO SetState(st=12 "SET OBID", fl=0x0 "", ob=0x32cc2ad6, tgt=0, v=0)

05/05/2019 17:15:39

Master observer begins pinging this instance

 

Look in the logs that both (primary and standby) agreed with the change. After the failure you see the events SET SWOB INPRG (switch observer in progress), SET OBID (set observer ID), and CLR SWOB INPRG (clear switch observer in progress), confirming that the UNOBSERVED state was detected. You can see here the output when you set the Broker trace level to support. An interesting note is that, inside the Broker, the faulty observer does not disappear after the failure:

 

DGMGRL> show observer




Configuration - dgconfig




  Primary:            orcl

  Target:             orcls




Observer "dbobsp" - Master




  Host Name:                    dbobsp

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          3 seconds ago




Observer "dbobss" - Backup




  Host Name:                    dbobss

  Last Ping to Primary:         256 seconds ago

  Last Ping to Target:          221 seconds ago




Observer "dbobst" - Backup




  Host Name:                    dbobst

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          5 seconds ago




DGMGRL>

 

After you reinstate your observer and it comes back, you can simply set the master observer to the desired one:

 

DGMGRL> set masterobserver to dbobss;

Sent the proposed master observer to the data guard broker configuration. Please run SHOW OBSERVER to see if master observer switch actually happens.

DGMGRL>

 

Hierarchy

When you use multiple observers you have more control over how to protect your DG: you can have one observer in each site and choose the side that you want to protect. You can write a script that checks the database role from the observer side and switches the master observer to protect the desired role, as sketched below.
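
As an illustration only, a minimal sketch of such a script. It assumes the aliases and credentials from this lab, that it runs on an observer host in the standby datacenter, and that you want the master observer to follow the standby role; the logic is deliberately simplified:

#!/bin/bash
# Hypothetical sketch: keep the master observer on the site that hosts the standby.
LOCAL_OBSERVER=dbobss   # observer running in this datacenter (assumption)
LOCAL_DB=orcls          # database that lives in this datacenter (assumption)

# Ask the Broker which member is currently the primary database.
PRIMARY=$(dgmgrl -silent sys/oracle@orcls "show configuration" | awk '/Primary database/ {print $1}')

# If the local database is not the primary, this is the standby side,
# so propose the local observer as the new master observer.
if [ "$PRIMARY" != "$LOCAL_DB" ]; then
  dgmgrl -silent sys/oracle@orcls "set masterobserver to $LOCAL_OBSERVER"
fi

You could schedule something like this on each observer host; the point is only to show that switching the master observer can be automated based on the current role.
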
Remember my previous post: if you choose to protect the primary (with the observer in the same datacenter) and your entire datacenter fails, FSFO does not occur because the standby does not decide alone. If you choose to protect the standby (with the observer in the same datacenter), a datacenter/network failure on the standby side can lead to a complete shutdown of a healthy primary database because it becomes “isolated”.
Since multiple observers continue to use a hierarchical decision, the decision still rests on only one observer. Even if you have multiple observers, 3 for example with one on each side, if you put the master observer in the same site as the standby and they become isolated, they still decide alone, and because of FSFO the primary still shuts down because it thinks it is isolated, even though it continues to receive connections from the other two observers.
Because of the current design, even if you set “FastStartFailoverThreshold” to 240, the automatic switch of the master observer does not occur because the standby side cannot be reached to confirm the change. Maybe in the next versions (20, 21…) we will see a change in this design, and when you use multiple observers a voting/quorum method will be used to decide the role change for FSFO. Of course, even a quorum approach can lead to problems if you put two observers in the same datacenter, but it can mitigate problems in some cases.
In my next post I will dig more into this, with some examples and log/trace analysis. You will see some details of what happens when the standby is isolated and you use multiple observers.

 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”

