DBA Blogs

Basic Replication -- 8 : REFRESH_MODE ON COMMIT

Hemant K Chitale - 4 hours 16 min ago
So far, in previous posts in this series, I have demonstrated Materialized Views that are set to REFRESH ON DEMAND.

You can also define a Materialized View that is set to REFRESH ON COMMIT -- i.e. every time DML against the Source Table is committed, the MV is also immediately updated.  Such an MV must be in the same database (you cannot define an ON COMMIT Refresh across two databases -- to replicate across databases in near real time, you would have to build your own replication code, possibly using Database Triggers or external methods of 2-phase commit).

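(Not in the original post : a minimal sketch of what the trigger-based approach might look like, assuming a database link named remote_db and a remote table remote_copy_tbl with the same columns -- both names are hypothetical. Note that DML over a database link inside a trigger makes the local COMMIT a distributed, 2-phase commit coordinated by Oracle.)

create or replace trigger trg_push_to_remote
after insert or update or delete on hemant_source_tbl
for each row
begin
  -- push each row change to the (hypothetical) remote copy over the db link
  if inserting then
    insert into remote_copy_tbl@remote_db (id_col, data_col)
    values (:new.id_col, :new.data_col);
  elsif updating then
    update remote_copy_tbl@remote_db
       set data_col = :new.data_col
     where id_col = :old.id_col;
  else
    delete from remote_copy_tbl@remote_db
     where id_col = :old.id_col;
  end if;
end;
/
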
Here is a quick demonstration, starting with a Source Table in the HEMANT schema and then building a FAST REFRESH MV in the HR schema.

SQL> show user
USER is "HEMANT"
SQL> create table hemant_source_tbl (id_col number not null primary key, data_col varchar2(30));

Table created.

SQL> grant select on hemant_source_tbl to hr;

Grant succeeded.

SQL> create materialized view log on hemant_source_tbl;

Materialized view log created.

SQL> grant select on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant create materialized view to hr;

Grant succeeded.

SQL> grant on commit refresh on hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant on commit refresh on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>


Note : I had to grant the CREATE MATERIALIZED VIEW privilege to HR for this test case. Also, because the MV is to Refresh ON COMMIT across schemas, the two additional ON COMMIT REFRESH object-level grants on the Source Table and the Materialized View Log are required.

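(An optional check, not in the original post : the object-level grants can be verified from the dictionary, e.g. by a DBA.)

select grantee, table_name, privilege
from dba_tab_privs
where owner = 'HEMANT'
and table_name in ('HEMANT_SOURCE_TBL','MLOG$_HEMANT_SOURCE_TBL')
order by table_name, privilege
/
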
SQL> connect hr/HR@orclpdb1
Connected.
SQL> create materialized view hr_mv_on_commit
2 refresh fast on commit
3 as select id_col as primary_key_col, data_col as value_column
4 from hemant.hemant_source_tbl;

Materialized view created.

SQL>


Now that the Materialized View is created successfully, I will test DML against the table and check that an explicit REFRESH call (e.g. DBMS_MVIEW.REFRESH or DBMS_REFRESH.REFRESH) is not required.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into hemant_source_tbl values (1,'First');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hr.hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL> connect hr/HR@orclpdb1
Connected.
SQL> select * from hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL>


The Materialized View in the HR schema was refreshed immediately, without an explicit REFRESH call.

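(A quick confirmation, not shown in the original post : the refresh metadata in USER_MVIEWS can be checked after the COMMIT. REFRESH_MODE should show COMMIT, and LAST_REFRESH_DATE should match the time of the last COMMIT against the Source Table.)

select mview_name, refresh_mode, last_refresh_type,
       to_char(last_refresh_date,'DD-MON-RR HH24:MI:SS') as last_refresh
from user_mviews
where mview_name = 'HR_MV_ON_COMMIT'
/
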
Remember : An MV that is to REFRESH ON COMMIT must be in the same database as the Source Table.




Categories: DBA Blogs

Funny Gamertags for Xbox That You Don’t Want to Miss

VitalSoftTech - Thu, 2019-10-17 09:49

For a regular Xbox gamer, some cool and funny Gamertags are very easy to come by. In the gaming universe of Xbox, the two essential things you need are 90% smartness and 10% luck, and you will come out the conqueror of the game. However, apart from skill and fortune, you need one […]

The post Funny Gamertags for Xbox That You Don’t Want to Miss appeared first on VitalSoftTech.

Categories: DBA Blogs

Excel Roundup Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-10-16 10:28

Let’s face it; it’s all fun and games when you are playing with the numbers, but as soon as the decimals start to make an appearance, things start to take a turn. But it doesn’t necessarily have to be that way. With applications like Microsoft Excel at your disposal, handling the decimals has never been […]

The post Excel Roundup Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Marks & Spencer Sparks, a Walkthrough

VitalSoftTech - Tue, 2019-10-15 10:02

As North Americans have their Walmart & Targets, the British have their own one-stop shop called Marks & Spencer established over one hundred years ago. It is a retail store for all home goods. It is a household name in Britain that caters to all kinds of domestic needs. A business as large as this […]

The post Marks & Spencer Sparks, a Walkthrough appeared first on VitalSoftTech.

Categories: DBA Blogs

Basic Replication -- 7 : Refresh Groups

Hemant K Chitale - Fri, 2019-10-11 23:24
So far, all my blog posts in this series cover "single" Materialized Views (even if I have created two MVs, they are independent of each other and can be refreshed at different schedules).

A Refresh Group is what you would define if you want multiple MVs to be refreshed to the same point in time.  This allows for
(a) data from transactions that touch multiple tables
or
(b) views of multiple tables
to be consistent in the target MVs.

For example, if you have SALES_ORDER and LINE_ITEMS tables and the MVs on these are refreshed at different times, you might see the ORDER (Header) without the LINE_ITEMs (or, worse, in the absence of Referential Integrity constraints, LINE_ITEMs without the ORDER (Header) !).

Here's a demo, using the HR DEPARTMENTS and EMPLOYEES tables with corresponding MVs built in the HEMANT schema.

SQL> show user
USER is "HR"
SQL> select count(*) from departments;

COUNT(*)
----------
27

SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL>
SQL> grant select on departments to hemant;

Grant succeeded.

SQL> grant select on employees to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on departments;

Materialized view log created.

SQL> grant select, delete on mlog$_departments to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on employees;

Materialized view log created.

SQL> grant select, delete on mlog$_employees to hemant;

Grant succeeded.

SQL>
SQL>


Having created the source MV Logs, note that I have to grant privileges to the account (HEMANT) that will be reading from and deleting from the MV Logs.

Next, I setup the MVs and the Refresh Group

SQL> show user
USER is "HEMANT"
SQL>
SQL> select count(*) from hr.departments;

COUNT(*)
----------
27

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL>
SQL>
SQL> create materialized view mv_dept
2 refresh fast on demand
3 as select department_id as dept_id, department_name as dept_name
4 from hr.departments
5 /

Materialized view created.

SQL>
SQL> create materialized view mv_emp
2 refresh fast on demand
3 as select department_id as dept_id, employee_id as emp_id,
4 first_name, last_name, hire_date
5 from hr.employees
6 /

Materialized view created.

SQL>
SQL> select count(*) from mv_dept;

COUNT(*)
----------
27

SQL> select count(*) from mv_emp;

COUNT(*)
----------
107

SQL>
SQL> execute dbms_refresh.make(-
> name=>'HR_MVs',-
> list=>'MV_DEPT,MV_EMP',-
> next_date=>sysdate+0.5,-
> interval=>'sysdate+1');

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>


Here, I have built two MVs and then a Refresh Group called "HR_MVS".  The first refresh will be 12 hours from now and every subsequent refresh will be 24 hours after the previous one.  (The Refresh Interval must be set larger than the time it takes to execute the actual Refresh.)

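(Not shown in the post : the group definition and its membership can be inspected from the dictionary views USER_REFRESH and USER_REFRESH_CHILDREN.)

select rname, next_date, interval
from user_refresh
where rname = 'HR_MVS'
/

select rname, name, type
from user_refresh_children
where rname = 'HR_MVS'
/
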
However, I can manually execute the Refresh after new rows are populated into the source tables. First, I insert new rows

SQL> show user
USER is "HR"
SQL> insert into departments (department_id, department_name)
2 values
3 (departments_seq.nextval, 'New Department');

1 row created.

SQL> select department_id
2 from departments
3 where department_name = 'New Department';

DEPARTMENT_ID
-------------
280

SQL> insert into employees(employee_id, first_name, last_name, email, hire_date, job_id, department_id)
2 values
3 (employees_seq.nextval, 'Hemant', 'Chitale', 'hkc@myenterprise.com', sysdate, 'AD_VP', 280);

1 row created.

SQL> select employee_id
2 from employees
3 where first_name = 'Hemant';

EMPLOYEE_ID
-----------
208

SQL> commit;

Commit complete.

SQL>


Now that there are new rows, the target MVs must be refreshed together.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> execute dbms_refresh.refresh('HR_MVS');

PL/SQL procedure successfully completed.

SQL> select count(*) from mv_dept;

COUNT(*)
----------
28

SQL> select count(*) from mv_emp;

COUNT(*)
----------
108

SQL>
SQL> select * from mv_dept
2 where dept_id=280;

DEPT_ID DEPT_NAME
---------- ------------------------------
280 New Department

SQL> select * from mv_emp
2 where emp_id=208;

DEPT_ID EMP_ID FIRST_NAME LAST_NAME HIRE_DATE
---------- ---------- -------------------- ------------------------- ---------
280 208 Hemant Chitale 12-OCT-19

SQL>


Both MVs have been Refresh'd together as an ATOMIC Transaction.  If either of the two MVs had failed to refresh (e.g. unable to allocate an extent to grow the MV), both the INSERTs would be rolled back.  (Note : It is not necessary that both source tables have new / updated rows; the Refresh Group works even if only one of the two tables has new / updated rows.)

Note : I have used DBMS_REFRESH.REFRESH (instead of DBMS_MVIEW.REFRESH) to execute the Refresh.

You can build multiple Refresh Groups, each consisting of MVs on *multiple* Source Tables from the same source database.
You would define each Refresh Group to maintain consistency of data across multiple MVs (sourced from different tables).
Besides the Refresh Group on two HR tables, I could have, within the HEMANT schema, more Refresh Groups on FINANCE schema tables as well.

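(Not covered in the post : group membership can be changed later with DBMS_REFRESH.ADD and DBMS_REFRESH.SUBTRACT -- a minimal sketch, where MV_NEW is a hypothetical additional MV. A group that is no longer needed can be dropped with DBMS_REFRESH.DESTROY, which leaves the member MVs themselves intact.)

SQL> execute dbms_refresh.add(name=>'HR_MVS', list=>'MV_NEW');
SQL> execute dbms_refresh.subtract(name=>'HR_MVS', list=>'MV_EMP');
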
(Can you have a Refresh Group sourcing from tables from different schemas ?  Try that out !)


What's the downside of Refresh Groups ?    
Undo and Redo !  Every Refresh consists of INSERT/UPDATE/DELETE operations on the MVs.  And if any one of the MVs fails to Refresh, the entire set of DMLs (across all the MVs in the Refresh Group) has to *Rollback* !


Categories: DBA Blogs

100 Good Tumblr Usernames

VitalSoftTech - Thu, 2019-10-10 09:38

Shakespeare once said, “What is in a name?”. Our boy Shakespeare never had a Tumblr blog because, to have a hit blog, one must think of a catchy and cool Tumblr username. He would be pulling his hair out in despair had he tried to get his Tumblr blog off the ground with all the […]

The post 100 Good Tumblr Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

Funny Twitter Names

VitalSoftTech - Tue, 2019-10-08 09:45

If there is anyone who knows about the importance of having a funny Twitter name, it is the people who already use Twitter. Twitter is one of the most popular social media platforms. This microblogging website averages 330 million active users every month across the globe. Fun Fact: Jack Dorsey, […]

The post Funny Twitter Names appeared first on VitalSoftTech.

Categories: DBA Blogs

Urdu in AWS

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:24
Urdu is arguably one of the most beautiful and poetic languages on the planet. AWS Translate now supports Urdu along with 31 other languages, which is awesome.



AWS Translate is growing by leaps and bounds and has matured quite a lot over the last few months. There are now hundreds of language pairs, and it is now available in all regions.


Amazon Translate is a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. You can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages.


Amazon Translate provides translation between a source language (the input language) and a target language (the output language). A source language-target language combination is known as a language pair.


As with other AWS products, there are no contracts or minimum commitments for using Amazon Translate.



Categories: DBA Blogs

How to do a quick health check of AWS RDS database

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:16
Just because the database is on AWS RDS, it doesn't mean that it won't run slow or get stuck. So when your users complain about the slowness of your RDS database, do the following quick health check:
1- From the AWS console, in the RDS section, go to your database and then to the Logs and Events tab. From Logs: for Oracle, check the alert log; for SQL Server, check the Error log; for PostgreSQL, check the postgres log; and for MySQL, check the error log. Check for any errors or warnings and proceed accordingly per that database engine.


2- If you don't see any errors or warnings, or if you want to check further, then first check which database instance type you are using. For example, for one of my test Oracle databases, it is db.r4.4xlarge.


Go to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html and check specifications of this instance type.

For instance, for db.r4.4xlarge, it is :


Instance Class:          db.r4.4xlarge
vCPU:                    16
ECU:                     53
Memory (GiB):            122
VPC Only:                Yes
EBS Optimized:           Yes
Max. Bandwidth (Mbps):   3,500
Network Performance:     Up to 10 Gbps


So this db.r4.4xlarge has a max bandwidth (throughput) of 437.5 MB/s (3,500 Mbps / 8 = 437.5 MB/s). The throughput limit is separate for read and write, which means you'll get 437.5 MB/s for read and 437.5 MB/s for write.


3- Now go to the Monitoring tab of this RDS instance in your console and check Read Throughput and Write Throughput to see if your instance is touching the above threshold (in this case, 437.5 MB/s). If yes, then you know that IO is the issue and you may need to either tune the SQLs responsible or increase the instance size.


4- Similarly, from the same Monitoring tab, check CPU usage, free memory available, and free storage space to make sure no other threshold is being reached.
5- Also check the Disk Queue Depth, i.e. the number of IO requests waiting to be serviced. Time spent waiting in the queue is a component of latency and service time. Ideally a disk queue depth of 15 or less is good, but if you notice latency greater than 10 milliseconds accompanied by a high disk queue depth, that could cause performance issues.


6- Last but not least, reviewing and tuning your SQLs offers the biggest optimization gain you can achieve, whether your database is in RDS or not. (A quick sketch for the Oracle engine follows below.)

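(A minimal sketch for the Oracle engine only -- an assumption on my part, not from the original post; the standard dynamic performance views work the same on RDS. This lists the top 10 SQL statements by elapsed time, 12c+ syntax :)

select sql_id, executions,
       round(elapsed_time/1e6,1) as elapsed_secs,
       substr(sql_text,1,60) as sql_text
from v$sql
order by elapsed_time desc
fetch first 10 rows only
/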

Hope it helps.
Categories: DBA Blogs

How Can you Use Audience Targeting to Show your Ads?

VitalSoftTech - Thu, 2019-10-03 10:10

Audience targeting has been a crucial tool in an advertiser’s arsenal for a long time. You can use audience targeting to show your ads to the exact people you’re targeting. What’s more, you can show it to them at the exact time you want. As advertisers, we believe that this is one of the core […]

The post How Can you Use Audience Targeting to Show your Ads? appeared first on VitalSoftTech.

Categories: DBA Blogs

The future of work

RDBMS Insight - Wed, 2019-10-02 10:48

The people behind the American tech industry blog TechDirt have put together an anthology of stories about the future of work called (appropriately) Working Futures, and I’m excited to have a story in it.

My story, “The Auditor and the Exorcist,” was heavily influenced by the years I spent working remotely for Oracle. It tells a near-future story of working in a world with a social credit system that depends on weak AI. Of course, poor security, hacking, and bugs are all present in the future, too…

Quick summary: Pat is stuck in a soul-deadening job as a social credit auditor. Her thoroughly modern home suddenly shows every sign of being haunted. Pat doesn’t believe in ghosts, but the only thing that seems to restore her home to normalcy is the help of an online exorcist. Is Pat’s house really haunted… or is something more sinister going on?

The book is now available in both ebook and paperback format at Amazon, and if you have a Kindle Unlimited subscription, you can read it for free. Plus, you can repost any of the stories in it (non-commercially): The stories in the anthology are all released under CC license. My story is released as CC-BY-NC-ND, meaning anyone’s free to recopy it non-commercially, but not to re-release it commercially or to remix without permission.

Image credit: Remixed from a public domain image found on publicdomainimages.net. Feel free to reuse/remix further.

Categories: DBA Blogs

Volume Icon Missing? Get it Back! (Windows 7, 8, 10)

VitalSoftTech - Tue, 2019-10-01 10:01

So you’re using your Windows PC, and you want to turn the volume up to better hear the characters in the movie you’re watching, only to discover the missing volume icon. Where did it go? Before you start stressing out about a potential virus or some malware infecting your PC, take a look at this […]

The post Volume Icon Missing? Get it Back! (Windows 7, 8, 10) appeared first on VitalSoftTech.

Categories: DBA Blogs

How to enlarge an #Exasol database by adding a node

The Oracle Instructor - Mon, 2019-09-30 08:46

Adding a cluster node will not only increase the available storage capacity but also the total compute power of your cluster. This scale-out is quite a common operation for Exasol customers to do.

My example shows how to change an existing 2+1 cluster into a 3+0 cluster. Before you can enlarge the database with an active node, this node has to be a reserve node first. See here how to add a reserve to a 2+0 cluster. Of course you can add another reserve node to change from 3+0 to 3+1 afterwards. See here if you wonder why you may want to have a reserve node at all.

Initial state – reserve node is present

I start with a 2+1 cluster – 2 active nodes and 1 reserve node:

For later comparison, let’s look at the distribution of rows of one of my tables:

The rows are roughly evenly distributed across the two active nodes.

Before you continue, it would be a good idea to take a backup on a remote archive volume now – just in case.

Shutdown database before volume modification

A data volume used by a database cannot be modified while that database is up, so shut it down first:

After going to the Storage branch in EXAoperation, click on the data volume:

Then click on Edit:

Decrease volume redundancy to 1

Change the redundancy from 2 to 1, then click Apply:

Why is the redundancy reduced from 2 to 1 here? Let’s try to explain that. Initially, I had 2 active nodes with a volume using redundancy 2:

A and B are master segments while A’ and B’ are mirrored segments. If I could add a node to this volume keeping the existing segments, it would look like this:

Of course this would be a bad idea. The redundancy is reduced to 1 before the new node is added to the volume:

Only distributed master segments with no mirrors at first. Then the redundancy is again increased to 2:

This way, every master segment can be mirrored on a neighbor node. That’s why the redundancy needs to be reduced to 1.

Add new node to volume

After having decreased the volume redundancy to 1, click Edit on the volume detail page again and add n13 as a new master node to the volume and click Apply:

Increase redundancy to 2

Now click Edit again and increase the redundancy to 2:

The state of the volume now shows as RECOVERING – don’t worry, it just means that mirrored segments are now being created.

Enlarge the database

Now click on the database link on the EXASolution screen:

Select the Action Enlarge and click Submit:

Enter 1 and click Apply:

The database detail page looks like this now:

Reorganize

Technically, this is a 3+0 cluster now – but the third node doesn’t contain any data yet. If we look at the same table as before, we see that no rows are on the new node:

To change that, a REORGANIZE needs to be done, either at the database layer, the schema layer, or the table layer. The easiest to perform is REORGANIZE DATABASE:

Took me about 10 minutes on my tiny database. That command re-distributes every table across all cluster nodes and can be time-consuming with high data volumes. While a table is being reorganized, that table is locked against DML. You can monitor the ongoing reorganization by selecting from EXA_DBA_PROFILE_RUNNING in another session.

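(For reference, the statements involved -- Exasol SQL, as described above. REORGANIZE can also be limited to a single schema or table, e.g. REORGANIZE TABLE my_table, where my_table is a hypothetical name. Run the monitoring query from another session :)

REORGANIZE DATABASE;

SELECT * FROM EXA_DBA_PROFILE_RUNNING;
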
Final state

Let’s check the distribution of the previous table again:

As you can see above, now there are rows on the added node. Also EXAoperation confirms that the new node is not empty any more:

On a larger database, you would see that the volume usage per node is less than before and that every node holds roughly the same amount of data. For failsafety, you could add another reserve node now.

Summary of steps
  1. Add a reserve node (if not yet existing)
  2. Take a backup on a remote archive volume
  3. Shutdown database
  4. Decrease volume redundancy to 1
  5. Add former reserve node as new master node to the volume
  6. Increase redundancy to 2
  7. Enlarge database by 1 active node
  8. Reorganize
  9. Add another reserve node (optional)
Categories: DBA Blogs

Basic Replication -- 6 : COMPLETE and ATOMIC_REFRESH

Hemant K Chitale - Sun, 2019-09-29 09:36
Till 9i, if you did a COMPLETE Refresh of a Single Materialized View, Oracle would do a TRUNCATE followed by an INSERT.
If you did a COMPLETE Refresh of a *group* of Materialized Views, Oracle would execute DELETE and INSERT so that all the MVs would be consistent to the same point in time.  Thus, if one of the MVs failed to refresh (e.g. the SELECT on the Source Table failed or the INSERT failed), Oracle would be able to do a ROLLBACK of all the MVs to revert them to the status (i.e. all rows that were present) as of the time before the Refresh began.  This would also allow all MVs to be available for queries with the rows as of before the Refresh began, even as the Refresh was running (because the Refresh of the multiple MVs was a single transaction).

In 10g, the behaviour for a *group* of Materialized Views remained the same.  However, for a single MV, the default was now to do a DELETE and INSERT as well.  This would allow the MV to be queryable as well while the Refresh was running.
This change came as a surprise to many customers (including me at a site where I was managing multiple single MVs) !
This change meant that the single MV took longer to run (because DELETEing all the rows takes a long time !) and required much more Undo and Redo space !!

Here's a demonstration in 19c (as in the previous posts in this series) :

First, I start with a new, larger, Source Table  and then build two MVs on it :

SQL> create table source_table_2
2 as select *
3 from dba_objects
4 where object_id is not null
5 /

Table created.

SQL> alter table source_table_2
2 add constraint source_table_2_pk
3 primary key (object_id)
4 /

Table altered.

SQL> select count(*)
2 from source_table_2
3 /

COUNT(*)
----------
72366

SQL>
SQL> create materialized view new_mv_2_1
2 as select object_id, owner, object_name, object_type
3 from source_table_2
4 /

Materialized view created.

SQL> create materialized view new_mv_2_2
2 as select object_id, owner, object_name, object_type
3 from source_table_2
4 /

Materialized view created.

SQL>
SQL> select mview_name, refresh_mode, refresh_method, last_refresh_type, fast_refreshable
2 from user_mviews
3 where mview_name like 'NEW_MV%'
4 order by 1
5 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF FAST_REFRESHABLE
---------------- --------- -------- -------- ------------------
NEW_MV_2_1 DEMAND FORCE COMPLETE DIRLOAD_DML
NEW_MV_2_2 DEMAND FORCE COMPLETE DIRLOAD_DML

SQL>


Note that it *IS* possible to have two Materialized Views with exactly the same QUERY co-existing.  They may have different REFRESH_METHODs (here both are the same) and/or may have different frequencies of Refresh calls when the REFRESH_MODE is 'DEMAND'.

Note also that I did not specify any "refresh on demand" clause so both defaulted to FORCE and DEMAND.

(Question 1 : Why might I have two MVs with the same QUERY and the same REFRESH_METHOD but different frequency or different times when the Refresh is called ?)

(Question 2 : What is DIRLOAD_DML ?)


Now, let me issue two different COMPLETE Refresh calls and trace them.

SQL> execute dbms_mview.refresh('NEW_MV_2_1','C');
SQL> execute dbms_mview.refresh('NEW_MV_2_2','C',atomic_refresh=>FALSE); -- from a different session

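(The post does not show how the trace files were produced ; a common way -- an assumption here, not from the post -- is to enable SQL trace in your own session around each refresh call :)

SQL> alter session set tracefile_identifier = 'mv_refresh_test';
SQL> execute dbms_monitor.session_trace_enable(waits=>TRUE, binds=>FALSE);
SQL> execute dbms_mview.refresh('NEW_MV_2_1','C');
SQL> execute dbms_monitor.session_trace_disable;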

Now, I look at the trace files.

For the NEW_MV_2_1  (where ATOMIC_REFRESH defaulted to TRUE), I see :

/* MV_REFRESH (DEL) */ delete from "HEMANT"."NEW_MV_2_1"

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"



And for the NEW_MV_2_2 (where ATOMIC_REFRESH was set to FALSE), I see :

LOCK TABLE "HEMANT"."NEW_MV_2_2" IN EXCLUSIVE MODE  NOWAIT

/* MV_REFRESH (DEL) */ truncate table "HEMANT"."NEW_MV_2_2" purge snapshot log

/* MV_REFRESH (INS) */INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO "HEMANT"."NEW_MV_2_2"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"


So, the default ATOMIC_REFRESH=TRUE caused a DELETE followed by a conventional INSERT, while ATOMIC_REFRESH=FALSE caused a TRUNCATE followed by an INSERT APPEND (a Direct Path Insert).  The second method is much faster.



More information from a tkprof for the NEW_MV_2_1 (ATOMIC_REFRESH=TRUE) is :

INSERT INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID","OWNER","OBJECT_NAME",
"OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER",
"SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM
"SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 66 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.01 0.01 0 66 0 0




delete from "HEMANT"."NEW_MV_2_1"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1.47 1.77 151 173 224377 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.47 1.77 151 173 224377 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 DELETE NEW_MV_2_1 (cr=178 pr=151 pw=0 time=1783942 us starts=1)
72366 72366 72366 INDEX FAST FULL SCAN SYS_C_SNAP$_82SOURCE_TABLE_2_PK (cr=157 pr=150 pw=0 time=54982 us starts=1 cost=42 size=361830 card=72366)(object id 73111)




INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO "HEMANT"."NEW_MV_2_1"("OBJECT_ID",
"OWNER","OBJECT_NAME","OBJECT_TYPE") SELECT "SOURCE_TABLE_2"."OBJECT_ID",
"SOURCE_TABLE_2"."OWNER","SOURCE_TABLE_2"."OBJECT_NAME",
"SOURCE_TABLE_2"."OBJECT_TYPE" FROM "SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 4 0
Execute 1 0.71 0.71 0 2166 152128 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.71 0.71 0 2166 152132 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL NEW_MV_2_1 (cr=2257 pr=0 pw=0 time=723103 us starts=1)
72366 72366 72366 TABLE ACCESS FULL SOURCE_TABLE_2 (cr=1410 pr=0 pw=0 time=30476 us starts=1 cost=392 size=3980130 card=72366)




Note that the first INSERT was only Parsed but *not* Executed.


While that for NEW_MV_2_2 (ATOMIC_REFRESH=FALSE) shows :

LOCK TABLE "HEMANT"."NEW_MV_2_2" IN EXCLUSIVE MODE  NOWAIT


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 0




truncate table "HEMANT"."NEW_MV_2_2" purge snapshot log



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 1 0
Execute 1 0.06 0.56 13 15 511 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.06 0.57 13 15 512 0



INSERT /*+ BYPASS_RECURSIVE_CHECK APPEND SKIP_UNQ_UNUSABLE_IDX */ INTO
"HEMANT"."NEW_MV_2_2"("OBJECT_ID","OWNER","OBJECT_NAME","OBJECT_TYPE")
SELECT "SOURCE_TABLE_2"."OBJECT_ID","SOURCE_TABLE_2"."OWNER",
"SOURCE_TABLE_2"."OBJECT_NAME","SOURCE_TABLE_2"."OBJECT_TYPE" FROM
"SOURCE_TABLE_2" "SOURCE_TABLE_2"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.09 0 43 0 0
Execute 1 0.22 0.56 3 1487 1121 72366
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.24 0.65 3 1530 1121 72366

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD AS SELECT NEW_MV_2_2 (cr=3688 pr=7 pw=586 time=953367 us starts=1)
72366 72366 72366 OPTIMIZER STATISTICS GATHERING (cr=3337 pr=0 pw=0 time=142500 us starts=1 cost=392 size=3980130 card=72366)
72366 72366 72366 TABLE ACCESS FULL SOURCE_TABLE_2 (cr=1410 pr=0 pw=0 time=40841 us starts=1 cost=392 size=3980130 card=72366)




ALTER INDEX "HEMANT"."SYS_C_SNAP$_83SOURCE_TABLE_2_PK" REBUILD NOPARALLEL


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.08 0 1 1 0
Execute 1 0.11 0.48 586 626 680 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.11 0.56 586 627 681 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106 (recursive depth: 2)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
1 1 1 INDEX BUILD UNIQUE SYS_C_SNAP$_83SOURCE_TABLE_2_PK (cr=632 pr=586 pw=150 time=392351 us starts=1)(object id 0)
72366 72366 72366 SORT CREATE INDEX (cr=590 pr=586 pw=0 time=148023 us starts=1)
72366 72366 72366 MAT_VIEW ACCESS FULL NEW_MV_2_2 (cr=590 pr=586 pw=0 time=86149 us starts=1 cost=166 size=361830 card=72366)



So, the ATOMIC_REFRESH=FALSE caused
a. TRUNCATE
b. INSERT APPEND (i.e. Direct Path Insert, minimizing Undo and reducing Redo)
c. REBUILD INDEX

I am not comparing Execution Time for the two Refresh's.  I would rather that you focus on the fact that the DELETE (in ATOMIC_REFRESH=TRUE) can be very expensive (think Undo and Redo) when it has to delete, say, millions of rows.  Also, that the INSERT is a regular conventional-path operation that also causes Undo and Redo to be generated.

ATOMIC_REFRESH=FALSE makes a significant difference to the Undo and Redo generation and will be faster for large Materialized Views.

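(Not shown above : the same choice applies when refreshing several MVs in one call. With ATOMIC_REFRESH=FALSE, the MVs in the list are refreshed one by one, each with TRUNCATE plus Direct Path INSERT, so they are not consistent to a single point in time -- a minimal sketch :)

SQL> execute dbms_mview.refresh(list=>'NEW_MV_2_1,NEW_MV_2_2', method=>'CC', atomic_refresh=>FALSE);
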
What is the downside of ATOMIC_REFRESH=FALSE ?  Firstly, the MV will not present any rows to a query that executes against it while the Refresh is in progress.  Secondly, if the Refresh fails, the MV is left in a Truncated state (without rows) until another Refresh is executed.
The ATOMIC_REFRESH=TRUE avoids these two pitfalls, at the expense of resources (Undo and Redo) and time to refresh.

For more information, see Oracle Support Document #553464.1


Categories: DBA Blogs

60Hz vs. 144Hz vs. 240Hz: What you should know in 2019

VitalSoftTech - Thu, 2019-09-26 09:53

60Hz vs. 144Hz vs. 240Hz; sounds like quite a mouthful, even for the average gamer. Which one’s the best? Which one’s the highest quality? If you’ve been interested in gaming for any length of time, you’re probably familiar with these terms. But what exactly do they mean? In this article, we’ll discuss the debate of […]

The post 60Hz vs. 144Hz vs. 240Hz: What you should know in 2019 appeared first on VitalSoftTech.

Categories: DBA Blogs

Pivot with list of rows

Tom Kyte - Thu, 2019-09-26 06:46
We have a table which contains db_name and usernames. In the output we need a list of users per DB, i.e. the number of columns will be equal to the number of distinct db_name values. Sample output format:

DB1    DB2
USER1  USER4
USER2  USER5
USER3

Database version:...
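
(The question is truncated by the aggregator ; for context, a minimal sketch of one common approach, assuming a table my_table(db_name, username) and a fixed, known list of DB names -- all names hypothetical. A dynamic list of DBs would need dynamic SQL.)

select "DB1", "DB2"
from (
      select db_name, username,
             row_number() over (partition by db_name order by username) as rn
      from my_table
     )
pivot (max(username) for db_name in ('DB1' as "DB1", 'DB2' as "DB2"))
order by rn
/
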
Categories: DBA Blogs

confuse at the order of execution plan table

Tom Kyte - Thu, 2019-09-26 06:46
As we were told that "The execution order in EXPLAIN PLAN output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is executed first." howe...
Categories: DBA Blogs

Converting data types in where clause

Tom Kyte - Thu, 2019-09-26 06:46
Hi Tom, My question is regarding when a query does not have the right datatype in the where clause. Example: -- Create table: CREATE TABLE mytable ( mynumber varchar2(20), primary key(mynumber)); -- Insert some rows ...
Categories: DBA Blogs

Resetting a live sequence

Tom Kyte - Thu, 2019-09-26 06:46
A sequence was about to finish, so I had make it bigger. I work on database 11.2 so I had to use a workaround described here (https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1119633817597). The procedure shown in LiveSQL was ru...
Categories: DBA Blogs

Strange behavior in analytic functions with partitions

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I have run into strange behavior that I can't understand. I have this table: CREATE TABLE test_table (id NUMBER(10,0) NOT NULL, register_date DATE DEFAULT sysdate NOT NULL, row...
Categories: DBA Blogs
