Feed aggregator

Optimizer Tricks 1

Jonathan Lewis - Fri, 2019-08-23 06:39

I’ve got a number of examples of clever little tricks the optimizer can do to transform your SQL before starting in on the arithmetic of optimisation. I was prompted to publish this one by a recent thread on ODC. It’s worth taking note of these tricks when you spot one, as background knowledge of what’s possible makes it much easier to interpret and trouble-shoot from execution plans. I’ve labelled this one “#1” since I may publish a few more examples in the future, and then I’ll have to catalogue them – but I’m not making any promises about that.

Here’s a table definition, and a query that’s hinted to use an index on that table.

rem     Script:         optimizer_tricks_01.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2019
rem     Purpose:        
rem     Last tested 

create table t1 (
        v1      varchar2(10),
        v2      varchar2(10),
        v3      varchar2(10),
        padding varchar2(100)
);

create index t1_i1 on t1(v1, v2, v3);

explain plan for
select
        /*+ index(t1 (v1, v2, v3)) */
        *
from    t1
where   v1 = 'ABC'
and     nvl(v3,'ORA$BASE') = 'SET2'
;
select * from table(dbms_xplan.display);

The query uses the first and third columns of the index, but wraps the third column in an nvl() function. Because of the hint the optimizer will generate a plan with an index range scan, but the question is – what will the Predicate Information tell us about Oracle’s use of my two predicates:

Plan hash value: 3320414027

| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |       |     1 |    66 |     0   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    66 |     0   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     0   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("V1"='ABC')
       filter(NVL("V3",'ORA$BASE')='SET2')

The nvl() test is used during the index range scan (from memory I think much older versions of Oracle would have postponed the predicate test until they had accessed the table itself). This means Oracle will do a range scan over the whole section of the index where v1 = ‘ABC’, testing every index entry it finds against the nvl() predicate.

But what happens if we modify column v3 to be NOT NULL? (“alter table t1 modify v3 not null;”) Here’s the new plan:

Plan hash value: 3320414027

| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                    |       |     1 |    66 |     0   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    66 |     0   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     0   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("V1"='ABC' AND "V3"='SET2')
       filter("V3"='SET2')

The optimizer will decide that with the NOT NULL status of the column the nvl() function can be eliminated and the predicate can be replaced with a simple column comparison. At this point the v3 predicate can be used to reduce the number of index entries that need to be examined by using a type of skip-scan/iterator approach, but Oracle still has to test the predicate against the index entries it walks through – so the predicate still appears as a filter predicate as well.

You might notice, by the way, that the Plan hash value does not change as the predicate use changes – even though the change in use of predicates could make a huge difference to the performance. (As indicated in the comments at the top of the script, I’ve run this model against – which is the version used in the ODC thread – and the behaviour is the same in both versions, and the Plan hash value doesn’t change from version to version.)


The reason why I decided to publish this note is that the original thread on the ODC forums reported the following contradictory details – an index definition and the optimizer’s use of that index as shown in the predicate section of the plan:

Index column name      Column position
---------------------- ----------------

|* 17 |      INDEX RANGE SCAN             | FND_FLEX_VALUE_NORM_HIER_U1   |
       filter((NVL("ZD_EDITION_NAME",'ORA$BASE')='SET2'  ..... lots more bits of filter predicate.

Since the expression nvl(zd_edition_name, ‘ORA$BASE’) = ‘SET2’ appears as an access predicate and a filter predicate it must surely be a column in the index. So either this isn’t the definition of the index being used or, somehow, there’s a trick that allows zd_edition_name to appear as a column name in the index when it really means nvl(zd_edition_name,’ORA$BASE’) at run-time. (And if there is I want to know what it is – edition-based redefinition and tricks with virtual columns spring to mind, but I avoid thinking about complicated explanations when a simpler one might be available.)


Speaking at Oracle OpenWorld 2019

Richard Foote - Thu, 2019-08-22 03:23
It has, remarkably, been 9 years since I’ve been to Oracle OpenWorld, but I will finally get the opportunity to present there again this year (with many thanks to the Oracle ACE Director program for making this possible). Details of my presentation are as follows: Conference: Oracle OpenWorld Session Type: Conference Session Session ID: CON1432 Session Title: […]
Categories: DBA Blogs


Jonathan Lewis - Wed, 2019-08-21 10:49

This note is a short follow-up to a note I wrote some time ago about validating foreign key constraints where I examined the type of SQL Oracle generates internally to do the validation between parent and child tables.  In that article I suggested (before testing) that you could create an SQL patch for the generated SQL to over-ride the plan taken by Oracle – a plan dictated to some extent by hints (including a “deprecated” ordered hint) embedded in the code. I did say that the strategy might not work for SQL optimised by SYS, but it turned out that it did.

Here’s a little script I ran to test a few variations on the theme:

declare
        v1      varchar2(128);
begin
        v1 :=   dbms_sqldiag.create_sql_patch(
                        sql_id    => 'g2z10tbxyz6b0',
                        name      => 'validate_fk',
                        hint_text => 'ignore_optim_embedded_hints'
--                      hint_text => 'parallel(a@sel$1 8)'      -- worked
--                      hint_text => 'parallel(8)'              -- worked
--                      hint_text => q'{opt_param('_fast_full_scan_enabled' 'false')}'  -- worked
                );
end;
/

I’ve tested this on and, but for earlier versions of Oracle, and depending on what patches you’ve applied, you will need to modify the code.

The SQL_ID represents the query for my specific tables, of course, so you will have to do a test run to find the query and SQL_ID for the validation you want to do. This is what the statement for my parent/child pair looked like (cosmetically adjusted):

select /*+ all_rows ordered dynamic_sampling(2) */ 
        A.rowid, :1, :2, :3
from
        "TEST_USER"."CHILD" A , 
        "TEST_USER"."PARENT" B 
where
        ("A"."OBJECT_ID" is not null) 
and     ("B"."OBJECT_ID"(+) = "A"."OBJECT_ID")
and     ("B"."OBJECT_ID" is null)

The patch that the script creates simply tells Oracle to ignore the embedded hints (in particular I don’t want that ordered hint), but I’ve left a few other options in the text, commenting them out.

Without the patch I got the following plan:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  NESTED LOOPS ANTI (cr=399 pr=279 pw=0 time=47801 us starts=1 cost=70 size=22000 card=1000)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=250 pr=247 pw=0 time=19943 us starts=1 cost=32 size=1700000 card=100000)(object id 73191)
     10000      10000      10000   INDEX UNIQUE SCAN PAR_PK (cr=149 pr=32 pw=0 time=3968 us starts=10000 cost=0 size=49995 card=9999)(object id 73189)

Rerunning the validation test with the patch in place I got the following plan – clearly the patch had had an effect.

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  HASH JOIN RIGHT ANTI (cr=246 pr=242 pw=0 time=96212 us starts=1 cost=39 size=22000 card=1000)
     10000      10000      10000   INDEX FAST FULL SCAN PAR_PK (cr=24 pr=23 pw=0 time=1599 us starts=1 cost=4 size=50000 card=10000)(object id 73235)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=222 pr=219 pw=0 time=27553 us starts=1 cost=32 size=1700000 card=100000)(object id 73237)

Don’t worry too much about the fact that in my tiny example, with a very new, nicely structured data set, the original plan was a little faster. In a production environment creating a hash table from the parent keys and probing it with the child keys may reduce the CPU usage and random I/O quite dramatically.

Bear in mind that the best possible plan may depend on many factors, such as the number of child rows per parent and the degree to which the parent and child keys arrive in sorted (or random) order. Remember, too, that Oracle gets a little clever with the original anti-join (note that there are only 10,000 probes for 100,000 child rows – there’s an effect similar to scalar subquery caching going on there), so trying to patch the plan the same way for every parent/child pair may not be the best strategy.

If you want to drop the patch after playing around with this example, a call to execute dbms_sqldiag.drop_sql_patch(name => 'validate_fk') will suffice.


Useful Linux commands for an Oracle DBA

Yann Neuhaus - Wed, 2019-08-21 09:00

Oracle & Linux is a great duet: very powerful, very scriptable. Here are several commands that make my life easier. These tools seem to be widespread on most Linux distributions.

watch with diff

It has been my favorite tool for a long time. watch can repeat a command indefinitely until you stop it with Ctrl+C, and it’s even more useful with the --diff parameter: all the differences since the last run are highlighted. For example, if you want to monitor a running backup, try this:

watch -n 60 --diff 'sqlplus -s /nolog @check_backup; echo ; du -hs /backup'

The check_backup.sql being:

conn / as sysdba
set feedback off
set lines 150
set pages 100
col status for a30
alter session set NLS_DATE_FORMAT="DD/MM-HH24:MI:SS";
select start_time "Start", round (input_bytes/1024/1024,1) "Source MB", round(output_bytes/1024/1024,1) "Backup MB", input_type "Type", status "Status", round(elapsed_seconds/60,1) "Min", round(compression_ratio,1) "Ratio" from v$rman_backup_job_details where start_time >= SYSDATE-1 order by 1 desc;

Every minute (60 seconds) you will check, in the RMAN backup views, the amount of data already backed up, along with the amount of data in your backup folder.

Very convenient to keep an eye on things without actually repeating the commands.

Truncate a logfile in one simple command

Oracle generates a lot of logfiles, and some of them can reach several GB and fill up your filesystem. How do you quickly empty a big logfile without removing it? Simply use the true command:

true > listener.log
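A quick way to convince yourself that the file really is emptied in place (the /tmp path below is just for the demonstration):

```shell
# Create a throwaway "logfile", truncate it with true, and check the result.
printf 'line1\nline2\n' > /tmp/demo_listener.log
ls -l /tmp/demo_listener.log      # non-zero size
true > /tmp/demo_listener.log     # truncated in place: same inode, same permissions
wc -c < /tmp/demo_listener.log    # now 0 bytes
```

Unlike removing and recreating the file, this keeps the inode, ownership and permissions; it works best when the writing process opens the file in append mode, which is the case for the listener log.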

Run a SQL script on all the running databases

Need to check something on every database running on your system? Or perhaps make the same change to all of these databases? A single line will do the job:

for a in `ps -ef | grep pmon | grep -v grep | awk '{print $8}' | cut -c 10- | sort`; do . oraenv <<< $a; sqlplus -s / as sysdba @my_script.sql >> output.log; done

Don’t forget to put an exit at the end of your SQL script my_script.sql. Running this through Ansible will widen the scope even further and save hours of work.
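The trickiest part of that one-liner is extracting the SID from the pmon process name. A quick offline check of the awk/cut step (using a made-up ps line, no Oracle needed) shows how it works:

```shell
# "ora_pmon_" is 9 characters long, so cut -c 10- keeps everything after it: the SID.
echo 'oracle    1234       1  0 10:00 ?        00:00:01 ora_pmon_ORCL' |
        awk '{print $8}' |      # 8th field of ps -ef output: the process name
        cut -c 10-              # prints: ORCL
```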

Copy a folder to another server

scp is fine for copying a single file or multiple files inside a folder, but copying a folder recursively to a remote server with scp is more complicated: you would normally need to build a tarfile for that purpose. A clever solution is to use tar without creating any archive on the source server, piping its output straight to the destination server. Very useful and efficient, with just one line:

tar cf - source_folder | ssh oracle@ "cd destination_folder_for_source_folder; tar xf -"

Of course, you will need rwx on destination_folder_for_source_folder for the oracle user on
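The same pipe pattern can be tested locally without ssh by replacing the remote shell with a subshell (the /tmp paths below are throwaway examples):

```shell
# tar writes the archive to stdout (cf -), the receiving tar reads it from stdin (xf -).
mkdir -p /tmp/src_demo /tmp/dst_demo
echo 'hello' > /tmp/src_demo/file.txt
( cd /tmp && tar cf - src_demo ) | ( cd /tmp/dst_demo && tar xf - )
ls /tmp/dst_demo/src_demo        # file.txt has been copied, metadata preserved
```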

Check the network speed – because you need to check

As an Oracle DBA you probably have to deal with performance; not a problem, it’s part of your job. But are you sure your database system is running at full network speed? You probably haven’t checked, but a low network speed could be the root cause of some performance issues. This concerns copper-based networks.

Today’s servers handle 10Gb/s Ethernet but can also work at 1Gb/s depending on the network behind them. You should be aware that you can still find 100Mb/s network speeds, for example if the network port of the switch attached to your server has been limited for some reason (perhaps it was needed for the server connected to that port before yours). While 1Gb/s is probably enough for most databases, 100Mb/s is clearly inadequate, and most recent servers will not even handle 100Mb/s correctly. Your Oracle environment may work, but don’t expect a high level of performance, as your databases will have to wait for the network to send packets. Don’t forget that 1Gb/s gives you about 100-120MBytes/s in real conditions, and 100Mb/s only allows 10-12MBytes/s, the “Fast Ethernet” of the 90’s…
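The Mb/s-to-MBytes/s figures above follow a simple rule of thumb: divide the link speed by 10 (8 bits per byte plus a rough allowance for protocol overhead) to get realistic throughput:

```shell
# Rule of thumb: usable MBytes/s is roughly the link speed in Mb/s divided by 10.
for mbps in 100 1000 10000; do
        echo "${mbps} Mb/s -> about $(( mbps / 10 )) MBytes/s in real conditions"
done
```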

Checking the network speed is easy, with ethtool.

[root@oda-x6-2 ~]# ethtool btbond1
Settings for btbond1:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 1000Mb/s <= Network speed is OK
Duplex: Full
Port: Other
Transceiver: internal
Auto-negotiation: off
Link detected: yes

In case of a network bonding interface, please also check the real interfaces associated with the bonding; all the network interfaces belonging to the bond need to have the same network speed:

[root@oda-x6-2 ~]# ethtool em1
Settings for em1:
Supported ports: [ TP ]
Supported link modes: 100baseT/Full   <= This network interface physically supports 100Mb/s
                      1000baseT/Full  <= also 1Gb/s
                      10000baseT/Full <= and 10Gb/s
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Advertised link modes: 100baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Speed: 1000Mb/s <= Network speed is 1Gb/s
Duplex: Full
Port: Twisted Pair
Transceiver: external
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes <= This interface is connected to a switch


Hope this helps!

The post Useful Linux commands for an Oracle DBA appeared first on Blog dbi services.

Want to Know if Your Shopping Ads Will Appear on YouTube?

VitalSoftTech - Wed, 2019-08-21 05:30

How can you create video ads to play on YouTube? Are you wondering if your shopping ads will appear on YouTube? Well, I have something to tell you. If you’ve paid for them, of course, they will. The question isn’t whether or not your ads will appear on YouTube. The question is how YouTube advertising […]

The post Want to Know if Your Shopping Ads Will Appear on YouTube? appeared first on VitalSoftTech.

Categories: DBA Blogs

AUSOUG Connect 2019 Conference Series

Richard Foote - Wed, 2019-08-21 03:26
  AUSOUG will again be running their excellent CONNECT 2019 conference series this year at the following great venues: Monday 14th October – Rendezvous Hotel In Melbourne Wednesday 16th October –  Mercure Hotel in Perth As usual, there’s a wonderful lineup of speakers from both Australia and overseas including: Connor McDonald Scott Wesley Guy Harrison […]
Categories: DBA Blogs

Is Reverse Index Really the Best Option to Reduce Index Contention in RAC Environment?

VitalSoftTech - Tue, 2019-08-20 11:12

What: I am seeing index contention in my RAC environment and want to convert the index into a reverse key index. Is this the best option to increase throughput and scalability? Answer: When there is contention due to inserting monotonically increasing data, as when a sequence is used, there are waits like block […]

The post Is Reverse Index Really the Best Option to Reduce Index Contention in RAC Environment? appeared first on VitalSoftTech.

Categories: DBA Blogs

Join View

Jonathan Lewis - Tue, 2019-08-20 06:39

It’s strange how one thing leads to another when you’re trying to check some silly little detail. This morning I wanted to find a note I’d written about the merge command and “stable sets”, and got to a draft about updatable join views that I’d started in 2016 in response to a question on OTN (as it was at the time), which finally led me to a model I’d written in 2008 showing that the manuals were wrong.

Since the manual – even the 19c manual – is still wrong regarding the “Delete Rule” for updatable (modifiable) join views I thought I’d quickly finish off the draft and post the 2008 script. Here’s what the manual says about deleting from join views (my emphasis on “exactly”):

Rows from a join view can be deleted as long as there is exactly one key-preserved table in the join. The key preserved table can be repeated in the FROM clause. If the view is defined with the WITH CHECK OPTION clause and the key preserved table is repeated, then the rows cannot be deleted from the view.

But here’s a simple piece of code to model a delete from a join view that breaks the rule:

rem     Script:         delete_join.sql 
rem     Dated:          Dec 2008
rem     Author:         J P Lewis

create table source
as
select  level n1
from    dual
connect by level <= 10
;

create table search
as
select  level n1
from    dual
connect by level <= 10
;

alter table source modify n1 not null;
alter table search modify n1 not null;

create unique index search_idx on search(n1);
-- create unique index source_idx on source(n1);

I’ve set up a “source” and a “search” table with 10 rows each and the option for creating unique indexes on each table for a column that’s declared non-null. Initially, though, I’ve only created the index on search to see what happens when I run a couple of “join view” deletes using “ANSI” syntax.

prompt  ===============================
prompt  Source referenced first in ANSI
prompt  ===============================

delete from (select * from source s join search s1 on s.n1 = s1.n1);
select count(1) source_count from source;
select count(1) search_count from search;
prompt  ===============================
prompt  Search referenced first in ANSI
prompt  ===============================

delete from (select * from search s join source s1 on s.n1 = s1.n1);
select count(1) source_count from source;
select count(1) search_count from search;

With just one of the two unique indexes in place the order of the tables in the inline view makes no difference to the outcome. Thanks to the unique index on search, any row in the inline view corresponds to exactly one row in the source table, while a single row in the search table could end up appearing in many rows in the view – so the delete implicitly has to operate as “delete from source”. So both deletes will result in the source_count being zero, and the search_count remaining at 10.

If we now repeat the experiment but create BOTH unique indexes, both source and search will be key-preserved in the join. According to the manual the delete should produce some sort of error. In fact the delete works in both cases – but the order that the tables appear makes a difference. When source is the first table in the in-line view the source_count drops to zero and the search_count stays at 10; when search is the first table in the in-line view the search_count drops to zero and the source_count stays at 10.

I wouldn’t call this totally unreasonable – but it’s something you need to know if you’re going to use the method, and something you need to document very carefully in case someone editing your code at a later date (or deciding that they could add a unique index) doesn’t realise the significance of the table order.

This does lead on to another important test – is it the order in which the tables appear in the from clause that matters, or the join order that Oracle uses to optimise the query? (We hope – and expect – that it’s the join order as written, not the join order as optimised, otherwise the effect of the delete could change from day to day as the optimizer chose different execution plans!) To confirm my expectation I switched to traditional Oracle syntax with hints (still with unique indexes on both tables), writing a query with search as the first table in the from clause, but hinting the inline view to vary the optimised join order.

prompt  ============================================
prompt  Source hinted as leading table in join order 
prompt  ============================================

delete from (
        select  /*+ leading(s1, s) */
                *
        from
                search s,
                source s1
        where
                s.n1 = s1.n1
);
select count(1) source_count from source; 
select count(1) search_count from search;

prompt  ============================================
prompt  Search hinted as leading table in join order 
prompt  ============================================

delete from (
        select  /*+ leading(s, s1) */
                *
        from
                search s,
                source s1
        where
                s.n1 = s1.n1
);

select count(1) source_count from source; 
select count(1) search_count from search;

In both cases the rows were deleted from search (the first table in from clause). And, to answer the question you should be asking, I did check the execution plans to make sure that the hints had been effective:

Source hinted as leading table in join order

| Id  | Operation           | Name       | Rows  | Bytes | Cost  |
|   0 | DELETE STATEMENT    |            |    10 |    60 |     1 |
|   1 |  DELETE             | SEARCH     |       |       |       |
|   2 |   NESTED LOOPS      |            |    10 |    60 |     1 |
|   3 |    INDEX FULL SCAN  | SOURCE_IDX |    10 |    30 |     1 |
|*  4 |    INDEX UNIQUE SCAN| SEARCH_IDX |     1 |     3 |       |

Search hinted as leading table in join order

| Id  | Operation           | Name       | Rows  | Bytes | Cost  |
|   0 | DELETE STATEMENT    |            |    10 |    60 |     1 |
|   1 |  DELETE             | SEARCH     |       |       |       |
|   2 |   NESTED LOOPS      |            |    10 |    60 |     1 |
|   3 |    INDEX FULL SCAN  | SEARCH_IDX |    10 |    30 |     1 |
|*  4 |    INDEX UNIQUE SCAN| SOURCE_IDX |     1 |     3 |       |


Using updatable join views to handle deletes can be very efficient but the manual’s statement of the “Delete Rule” is incorrect. It is possible to have several key-preserved tables in the view that you’re using, and if that’s the case you need to play safe and ensure that the table you want to delete from is the first table in the from clause. This means taking steps to eliminate the risk of someone editing some code at a later date without realising the importance of the table order.


Upgrading from OpenLeap to SLES15

Yann Neuhaus - Mon, 2019-08-19 13:58

Sometimes business plans change and you may need to move your OpenLeap 15 server to the supported version, SUSE Linux Enterprise Server 15. The upgrade has become really easy with version 15 and can be performed online, so your server does not need to be offline during the upgrade.

So let’s have a look at the upgrade.

First of all, you need a SUSE subscription. We will help you with this; just send us a message.
As soon as you have it, you can proceed with the upgrade.

Let’s start by checking the current version running on the server.

openleap:~ $ cat /etc/os-release
NAME="openSUSE Leap"
ID_LIKE="suse opensuse"
PRETTY_NAME="openSUSE Leap 15.0"

Now we can install SUSEConnect, so we can register the system in the next step.

openleap:~ $ zypper in SUSEConnect
Retrieving repository 'openSUSE-Leap-15.0-Update' metadata ..............................................................................................................................................................[done]
Building repository 'openSUSE-Leap-15.0-Update' cache ...................................................................................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 3 NEW packages are going to be installed:
  SUSEConnect rollback-helper zypper-migration-plugin

3 new packages to install.
Overall download size: 138.9 KiB. Already cached: 0 B. After the operation, additional 213.9 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package SUSEConnect-0.3.17-lp150.2.14.1.x86_64                                                                                                                                 (1/3), 100.9 KiB (176.3 KiB unpacked)
Retrieving: SUSEConnect-0.3.17-lp150.2.14.1.x86_64.rpm ..................................................................................................................................................................[done]
Retrieving package rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch                                                                                                             (2/3),  22.6 KiB ( 19.9 KiB unpacked)
Retrieving: rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch.rpm ..................................................................................................................................[done (7.9 KiB/s)]
Retrieving package zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch                                                                                                       (3/3),  15.5 KiB ( 17.6 KiB unpacked)
Retrieving: zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch.rpm ..............................................................................................................................[done (253 B/s)]
Checking for file conflicts: ............................................................................................................................................................................................[done]
(1/3) Installing: SUSEConnect-0.3.17-lp150.2.14.1.x86_64 ................................................................................................................................................................[done]
(2/3) Installing: rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch ............................................................................................................................................[done]
(3/3) Installing: zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch ......................................................................................................................................[done]
openleap:~ # 

Register your system with the SUSE Customer Center so that you get full access to the repositories. This step is mandatory, otherwise it’s not possible to upgrade.

openleap:~ $ SUSEConnect -r REGISTRATION_CODE -p SLES/15/x86_64
Registering system to SUSE Customer Center

Announcing system to https://scc.suse.com ...

Activating SLES 15 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

Now check for the available extensions and the command to activate each of them using SUSEConnect

openleap:~ $ SUSEConnect --list-extensions

    Basesystem Module 15 x86_64
    Activate with: SUSEConnect -p sle-module-basesystem/15/x86_64

        Containers Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-containers/15/x86_64

        Desktop Applications Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-desktop-applications/15/x86_64

            Development Tools Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-development-tools/15/x86_64

            SUSE Linux Enterprise Workstation Extension 15 x86_64
            Activate with: SUSEConnect -p sle-we/15/x86_64 -r ADDITIONAL REGCODE

        SUSE Cloud Application Platform Tools Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-cap-tools/15/x86_64

        SUSE Linux Enterprise Live Patching 15 x86_64
        Activate with: SUSEConnect -p sle-module-live-patching/15/x86_64 -r ADDITIONAL REGCODE

        SUSE Package Hub 15 x86_64
        Activate with: SUSEConnect -p PackageHub/15/x86_64

        Server Applications Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-server-applications/15/x86_64

            Legacy Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-legacy/15/x86_64

            Public Cloud Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-public-cloud/15/x86_64

            SUSE Linux Enterprise High Availability Extension 15 x86_64
            Activate with: SUSEConnect -p sle-ha/15/x86_64 -r ADDITIONAL REGCODE

            Web and Scripting Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-web-scripting/15/x86_64


(Not available) The module/extension is not enabled on your RMT/SMT
(Activated)     The module/extension is activated on your system


You can find more information about available modules here:

In case you need more modules, you can now add any module you need. Please keep in mind that the extensions require a separate subscription. For my needs the base module is enough.

openleap:~ $ SUSEConnect -p sle-module-basesystem/15/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-basesystem 15 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

The next step is the upgrade itself. As the output is quite huge, I put in some [***] as placeholders.

openleap:~ $ zypper dup --force-resolution
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Refreshing service 'Basesystem_Module_15_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_x86_64'.
Loading repository data...
Warning: Repository 'openSUSE-Leap-15.0-Update-Non-Oss' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Computing distribution upgrade...

The following 11 NEW packages are going to be installed:
  dejavu-fonts glibc-locale-base google-opensans-fonts issue-generator kernel-default-4.12.14-lp150.12.67.1 man-pages man-pages-posix release-notes-sles rpcgen yast2-vm zypper-search-packages-plugin

The following 286 packages are going to be upgraded:
  NetworkManager NetworkManager-lang PackageKit PackageKit-backend-zypp PackageKit-gstreamer-plugin PackageKit-gtk3-module PackageKit-lang aaa_base aaa_base-extras apparmor-abstractions 
  yast2-storage-ng yast2-users

The following 288 packages have no support information from their vendor:
  NetworkManager NetworkManager-lang PackageKit PackageKit-backend-zypp PackageKit-gstreamer-plugin PackageKit-gtk3-module PackageKit-lang aaa_base aaa_base-extras apparmor-abstractions apparmor-docs apparmor-parser

The following package is not supported by its vendor:

286 packages to upgrade, 11 new.
Overall download size: 322.4 MiB. Already cached: 0 B. After the operation, additional 343.9 MiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package issue-generator-1.6-1.1.noarch                                                                                                                                       (1/297),  28.0 KiB ( 25.6 KiB unpacked)
Retrieving: issue-generator-1.6-1.1.noarch.rpm ..........................................................................................................................................................................[done]
Retrieving package man-pages-4.16-3.3.1.noarch                                                                                                                                          (2/297),   

Executing %posttrans scripts ............................................................................................................................................................................................[done]
There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.

Disable the openLeap repository to prevent warnings when using zypper (this is optional).

openleap:~ $ zypper lr -d
Repository priorities are without effect. All enabled repositories share the same priority.

#  | Alias                                                                       | Name                                      | Enabled | GPG Check | Refresh | Priority | Type   | URI                                                                                                                                                                                                                                                        | Service       
 1 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Debuginfo-Pool          | SLE-Module-Basesystem15-Debuginfo-Pool    | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product_debug?8YsR5pv4h6qQr15qW8KWqXRBK0MDd9EONPOcnYjrQyXxeU4PVhIX5FRdwf5ziU1Oa8rdtuE2W4NyVotHhKeQrdvQMM9OQ3sEllMJno1VxgQEPq-1QyaCv24cSZsg2H21-d3hQqkxXD3iUKRgNTqHGtkRHHCN71yMa28   | Basesystem_Module_15_x86_64
 2 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Debuginfo-Updates       | SLE-Module-Basesystem15-Debuginfo-Updates | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Updates/SLE-Module-Basesystem/15/x86_64/update_debug?jjKAgTm0AAAAAAAAq_jTGwRAkx4zc8EQV0ANMjmrFjIoJBofX8ETJPW9qS9ojjVsnoDNK1TRGjk5t31J0Y9Bv_KRzpdYdJVmoH_gO-WaIo-dsZHiDXUm9fjYvLJcjsm0TidUzPnNkAqDAQsPZGZUUCXrek3JjRZl        | Basesystem_Module_15_x86_64
 3 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Pool                    | SLE-Module-Basesystem15-Pool              | Yes     | (r ) Yes  | No      |   99     | rpm-md | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product?MbepfbRQy5WToAHi4xjhC2KOqjwW00ax8Xj23W9iMukhhWz78BKVY5sSDHiT4nurfz1JyHJrqcqpZiJU-PdajPthp3lQx4hyu-5FzifML0ALTTvKY6XEYA7qlwbn0E6fmA_iSbMl2JOWvZDpeQUZtMlCjQI                 | Basesystem_Module_15_x86_64
 4 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Source-Pool             | SLE-Module-Basesystem15-Source-Pool       | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product_source?86sSfrO8KT3dMsapcn4ihtYRbSwy2kunffEZ6oUiH-vBC-0IkEZQPniCPn63-DeOwlX9brw3vR-BqMNjC9KiOAq0JR0aHZUcyHP5sGhjitLFGTx9zUYo3F4u0KNC3rqIq2WGq-kZEhLm1s2U-vVJHpr6x5RWmMjuBDAe | Basesystem_Module_15_x86_64
 5 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Updates                 | SLE-Module-Basesystem15-Updates           | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://updates.suse.com/SUSE/Updates/SLE-Module-Basesystem/15/x86_64/update?WzCCey-NrSLfBHonPxWuaTXt1QuGMemPZsFEhtMfDC_jKtn5XUsqbdI8JZ9D6YNveeYrthpKY2uLTOIB_vtbMQsQUblAr2dU4D59yIBIjZv1l91CLeZD2z61oLPc7ad0UkZjl9R_e6bSNAGP8oz94Fp5                      | Basesystem_Module_15_x86_64
 6 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Debuginfo-Pool    | SLE-Product-SLES15-Debuginfo-Pool         | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product_debug?xtsT1GSwugZaHDGElBaTczgwJS79hgJDOy_tkzInodgbplBttQlatgP5rI0SnLQqLCw5WsfSqBIyN_tnMVZn4ZLJ3S3ENBDiZsYhg0vGZf7ILMix03bcXoHEKlzAYRntcEIx877RvS7DDHAAR4cj1V5gzcu6               | SUSE_Linux_Enterprise_Server_15_x86_64
 7 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Debuginfo-Updates | SLE-Product-SLES15-Debuginfo-Updates      | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Updates/SLE-Product-SLES/15/x86_64/update_debug?tkJ9rVV33hinQtEBnPYH_5D8OCs1ZtB4WEQFAShIaq1yN6Lwg2-W2Zu2AFALp5Jk3Oh1g1XVBqEOSPnSgACvcCIWuXr_cRfirUHEwbNqIcaSwcjxGjJYdhsb97t01_X-LPT0FDiGGezP64HheC_CzdV6xA                   | SUSE_Linux_Enterprise_Server_15_x86_64
 8 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Pool              | SLE-Product-SLES15-Pool                   | Yes     | (r ) Yes  | No      |   99     | rpm-md | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product?887kGBgH3AfONFY1X3wVkuYn_5nm8sTKex06X1JSRI9gXQNqJioSBea5sAECwbVhqs510L3YRdVlVLgsavZ9D8PPplk8S_oEvhWEQdS-jfFH9dTKcukF09RkjliWQkcaNHkFzY4uQWbHzXJYekkn                             | SUSE_Linux_Enterprise_Server_15_x86_64
 9 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Source-Pool       | SLE-Product-SLES15-Source-Pool            | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product_source?XhlzrvfoPp1qTZqv1hErqkUwBGOoZMYY4RAS-c78IKoacswAmOXTemuxa8ZiAFfopgedlQfewbcC7_gxUERoKGdlcW7E4WaqpcuSDYh-xlJr2SG9-4OuxPDToPfZ1CgvDDZIAlqIyXDKGcwvl3EjALH9msDNHg            | SUSE_Linux_Enterprise_Server_15_x86_64
10 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Updates           | SLE-Product-SLES15-Updates                | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://updates.suse.com/SUSE/Updates/SLE-Product-SLES/15/x86_64/update?j0Qh2SfH11scgFCBmZI3R9B4GMJWAh5l6C0P7_jtUle_3dAATzJ2wwwo3SR_dOpn4bBYL4wSkD9bMuCRJlzcmWSkeh1W06Rz8Jsq1KysLODXqUtsBgeE5Tju1Pf-XTpNJF1RQMRRRmb_Tj8RPA                                 | SUSE_Linux_Enterprise_Server_15_x86_64
11 | openSUSE-Leap-15.0-1                                                        | openSUSE-Leap-15.0-1                      | No      | ----      | ----    |   99     | rpm-md | cd:///?devices=/dev/disk/by-id/ata-VBOX_CD-ROM_VB0-01f003f6                                                                                                                                                                                                |               
12 | repo-debug                                                                  | openSUSE-Leap-15.0-Debug                  | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.0/repo/oss/                                                                                                                                                                                        |               
13 | repo-debug-non-oss                                                          | openSUSE-Leap-15.0-Debug-Non-Oss          | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                    |               
14 | repo-debug-update                                                           | openSUSE-Leap-15.0-Update-Debug           | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.0/oss/                                                                                                                                                                                                   |               
15 | repo-debug-update-non-oss                                                   | openSUSE-Leap-15.0-Update-Debug-Non-Oss   | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.0/non-oss/                                                                                                                                                                                               |               
16 | repo-non-oss                                                                | openSUSE-Leap-15.0-Non-Oss                | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                          |               
17 | repo-oss                                                                    | openSUSE-Leap-15.0-Oss                    | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.0/repo/oss/                                                                                                                                                                                              |               
18 | repo-source                                                                 | openSUSE-Leap-15.0-Source                 | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.0/repo/oss/                                                                                                                                                                                       |               
19 | repo-source-non-oss                                                         | openSUSE-Leap-15.0-Source-Non-Oss         | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                   |               
20 | repo-update                                                                 | openSUSE-Leap-15.0-Update                 | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.0/oss/                                                                                                                                                                                                         |               
21 | repo-update-non-oss                                                         | openSUSE-Leap-15.0-Update-Non-Oss         | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.0/non-oss/                                                                                                                                                                                                     |               
openleap:~ # zypper mr -d 21
Repository 'repo-update-non-oss' has been successfully disabled.

Check for orphaned packages. (The removal attempt below fails with "Too few arguments" because, as the direct check afterwards shows, there are no orphaned packages left to remove.)

openleap:~ $ zypper rm $(zypper --no-refresh packages --orphaned | gawk '{print $5}'  | tail -n +5 )
Too few arguments.
At least one package name is required.
remove (rm) [OPTIONS]  ...

Remove packages with specified capabilities. A capability is NAME[.ARCH][OP], where OP is
one of <, =, >.

  Command options:

-r, --repo     Load only the specified repository.
-t, --type            Type of package (package, patch, pattern, product).
-n, --name                  Select packages by plain name, not by capability.
-C, --capability            Select packages by capability.
-u, --clean-deps            Automatically remove unneeded dependencies.
-U, --no-clean-deps         No automatic removal of unneeded dependencies.
-D, --dry-run               Test the removal, do not actually remove.
    --details               Show the detailed installation summary.
-y, --no-confirm            Don't require user interaction. Alias for the --non-interactive global

  Solver options:

    --debug-solver          Create a solver test case for debugging.
    --force-resolution      Force the solver to find a solution (even an aggressive one) rather than
    --no-force-resolution   Do not force the solver to find solution, let it ask.

openleap:~ $ zypper --no-refresh packages --orphaned
Loading repository data...
Reading installed packages...
No packages found.
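No orphans here, but it is worth seeing what the pipeline inside the `zypper rm $(...)` command above actually does. A minimal sketch on simulated `zypper packages --orphaned` output (the package names are made up): `awk '{print $5}'` picks the Name column of the table, and `tail -n +5` drops the four header lines (the two status messages, the column headers, and the separator). I use plain `awk` here; `gawk` behaves the same.

```shell
# Simulated output of 'zypper --no-refresh packages --orphaned'
# (hypothetical package names):
sample='Loading repository data...
Reading installed packages...
S | Repository | Name          | Version | Arch
--+------------+---------------+---------+-------
i | @System    | old-package-1 | 1.0     | x86_64
i | @System    | old-package-2 | 2.0     | x86_64'

# Column 5 is the package name; the first four lines are headers.
echo "$sample" | awk '{print $5}' | tail -n +5
# → old-package-1
# → old-package-2
```

The extracted names are exactly what gets passed to `zypper rm` as arguments.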

My whole migration lasted about 30 minutes. But that's really a small server.

And in the end you have to reboot anyway.

openleap:~ $ systemctl reboot

Let’s check whether we are really running a SLES15 server now.

openleap:~ # cat /etc/os-release
PRETTY_NAME="SUSE Linux Enterprise Server 15"

Looks good! The system is now running SLES15, so you can enjoy the full support and services of SUSE.

The post Upgrading from OpenLeap to SLES15 appeared first on Blog dbi services.

Oracle 19c Automatic Indexing: My First Auto Index (Absolute Beginners)

Richard Foote - Mon, 2019-08-19 04:21
I am SOOOO struggling with this nightmare block editor but here goes. Please excuse any formatting issues below: I thought it was time to show the new Oracle 19c Automatic Indexing feature in action and what better way than to go through how I created my first ever Automatic Index. To start, I create a […]
Categories: DBA Blogs

Basic Replication -- 2a : Elements for creating a Materialized View

Hemant K Chitale - Sun, 2019-08-18 04:02
The CREATE MATERIALIZED VIEW statement is documented here.  It can look quite complex, so I am presenting only the important elements, and in this post I begin with the most basic ones.

(EDIT: These SQL operations, queries and results were in a 19c Database)

First, I recreate the SOURCE_TABLE properly, with a Primary Key :

SQL> drop table source_table;

Table dropped.

SQL> create table source_table
2 (id number not null,
3 data_element_1 varchar2(15),
4 data_element_2 varchar2(15),
5 date_col date)
6 /

Table created.

SQL> create unique index source_table_pk
2 on source_table(id);

Index created.

SQL> alter table source_table
2 add constraint source_table_pk
3 primary key (id)
4 /

Table altered.


Then I create a Materialized View Log on SOURCE_TABLE.  This will capture all DML against this table and will be read by the target Materialized View to identify "changed" rows at every refresh.

SQL> create materialized view log on source_table;

Materialized view log created.


I then identify the objects that were created.

SQL> select object_id, object_name, object_type
2 from user_objects
3 where created > sysdate-1
4 order by object_id
5 /

---------- ------------------------------ -----------------------

SQL> desc mlog$_source_table;
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------

SQL> desc rupd$_source_table;
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------


Interesting that the "CREATE MATERIALIZED VIEW LOG" statement created 3 database objects.

What happens after I perform DML on the SOURCE_TABLE ?

SQL> insert into source_table
2 values (1,'First','One',sysdate);

1 row created.

SQL> insert into source_table
2 values (2,'Second','Two',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> delete source_table
2 where id=2
3 /

1 row deleted.

SQL> commit;

Commit complete.

SQL> select * from mlog$_source_table;

        ID SNAPTIME$ D O
---------- --------- - -
         1 01-JAN-00 I N
         2 01-JAN-00 I N
         2 01-JAN-00 D O

SQL> select * from rupd$_source_table;

no rows selected


So the MLOG$_SOURCE_TABLE is the log that captures 2 INSERT statements and 1 DELETE statement.  (OR is it 2 INSERT *rows* and 1 DELETE *row* ??)
We don't know what the RUPD$_SOURCE_TABLE captures yet.

Let me create a Materialized View and then query MLOG$_SOURCE_TABLE (which is the "MV Log")

SQL> create materialized view
2 mv_of_source
3 refresh fast on demand
4 as select * from source_table
5 /

Materialized view created.

SQL> select * from mv_of_source
2 /

        ID DATA_ELEMENT_1  DATA_ELEMENT_2  DATE_COL
---------- --------------- --------------- ---------
         1 First           One             18-AUG-19

SQL> select * from mlog$_source_table;

no rows selected


So, the CREATE MATERIALIZED VIEW statement has also done a cleanup of the MV Log entries with a SNAPTIME$ older than when it was created.

Let me insert two new rows and then refresh the Materialized View and check the MV Log again.

SQL> insert into source_table
2 values (3,'Third','Three',sysdate);

1 row created.

SQL> insert into source_table
2 values (4,'Fourth','Four',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mlog$_source_table;

        ID SNAPTIME$ D O
---------- --------- - -
         3 01-JAN-00 I N
         4 01-JAN-00 I N

SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from mv_of_source;

        ID DATA_ELEMENT_1  DATA_ELEMENT_2  DATE_COL
---------- --------------- --------------- ---------
         1 First           One             18-AUG-19
         3 Third           Three           18-AUG-19
         4 Fourth          Four            18-AUG-19


So, the 2 single-row INSERTs did create two entries in the MV Log and the REFRESH of the Materialized View did a cleanup of those two entries.

I haven't yet explored:
b. Multi-Row Operations
Categories: DBA Blogs

Linux: Configuring hosts per ssh in parallel: pssh

Dietrich Schroff - Fri, 2019-08-16 15:11
If you have to set up a number of hosts in such a way that the same commands have to be executed on each node, then you should consider pssh:


The installation is straight forward:
root@zerberus:~# apt-get install pssh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  btrfs-tools geoip-database-extra libcryptui0a libjs-openlayers seahorse-daemon
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
  pssh
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 29.0 kB of archives.
After this operation, 135 kB of additional disk space will be used.
Get:1 http://de.archive.ubuntu.com/ubuntu bionic/universe amd64 pssh all 2.3.1-1 [29.0 kB]
Fetched 29.0 kB in 0s (71.0 kB/s)
Selecting previously unselected package pssh.
(Reading database ... 488993 files and directories currently installed.)
Preparing to unpack .../archives/pssh_2.3.1-1_all.deb ...
Unpacking pssh (2.3.1-1) ...
Setting up pssh (2.3.1-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
But executing it on Ubuntu is a little bit tricky:
If you want to do a test, you have to know that Ubuntu has renamed the binary to parallel-ssh; pssh is not known on the command line:

schroff@zerberus:~$ parallel-ssh -i -H "localhost" ls -l .bashrc
[1] 00:04:48 [SUCCESS]
-rw-r--r-- 1 schroff schroff 3815 Jul 14  2017 .bashrc
[2] 00:04:48 [SUCCESS] localhost
-rw-r--r-- 1 schroff schroff 3815 Jul 14  2017 .bashrc
Please note the syntax highlighting, which I found very helpful.
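For more than a couple of hosts, the target list can also go in a file instead of repeating -H. A minimal sketch (the host names are made up, and the parallel-ssh call itself is commented out because it needs working ssh access to those hosts):

```shell
# Hypothetical host list, one target per line.
cat > /tmp/pssh-hosts <<'EOF'
web01
web02
db01
EOF

# With key-based ssh access in place, this would run 'uptime' on all
# three hosts in parallel and print each result inline (-i):
# parallel-ssh -i -h /tmp/pssh-hosts uptime

grep -c . /tmp/pssh-hosts    # count of configured hosts
# → 3
```

The -h hosts-file form scales much better than -H once you manage more than a handful of nodes.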

“No Primary Key item has been defined for form region”

Jeff Kemp - Fri, 2019-08-16 02:30

Otherwise reported as “WWV_FLOW_FORM_REGION.NO_PRIMARY_KEY_ITEM” or merely “Internal error”, this bug caused me to waste hours of investigation, twice, because a simple Google search was not returning any results. This post is merely to remedy that situation.

On an APEX 19.1 page with a new Form region combined with an Interactive Grid region, when the user makes a change to a record and Submits the page, they get this error.


Basic Replication -- 1 : Introduction

Hemant K Chitale - Thu, 2019-08-15 23:24
Basic Replication, starting with Read Only Snapshots has been available in Oracle since  V7.   This was doable with the "CREATE SNAPSHOT" command.

In 8i, the term was changed from "Snapshot" to "Materialized View"  and the "CREATE MATERIALIZED VIEW" command was introduced, while "CREATE SNAPSHOT" was still supported.

Just as CREATE SNAPSHOT is still available in 19c,  DBMS_SNAPSHOT.REFRESH is also available.

Not that I recommend that you use CREATE SNAPSHOT and DBMS_SNAPSHOT anymore.  DBAs and Developers should have been using CREATE MATERIALIZED VIEW and DBMS_REFRESH since 8i.

In the next few blog posts (this will be a very short series) I will explore Basic Replication.  Let me know if you want to see it in 11.2 and 12c as well.

Categories: DBA Blogs

2019 Public Appearances (What In The World)

Richard Foote - Wed, 2019-08-14 21:29
I’ll be presenting at a number of Oracle events over the remainder of the year. Details as follows: Oracle Open World – San Francisco (16-19 September 2019)
Categories: DBA Blogs

Find Docker Container IP Address?

DBASolved - Wed, 2019-08-14 11:38

This is just a simple post for later reference, in case I need it …

In setting up some docker containers for testing Oracle GoldenGate, I needed to find the IP address of the container where my database was running (I keep my database in a separate container in order not to rebuild it every time).

To find the address of my database container, I had to use the docker “inspect” command. This command returns low-level information on Docker objects.

The syntax is as follows:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>




Categories: DBA Blogs

Identifying the TNS_ADMIN for a deployment in GoldenGate Microservices

DBASolved - Wed, 2019-08-14 10:30

Setting up network routing/access with Oracle GoldenGate Microservices (12.3 and later) can be an interesting experience. As many in the Oracle space know, you simply have to set TNS_ADMIN to point to the location where your sqlnet.ora and tnsnames.ora files are located. This would normally look like this:

export TNS_ADMIN=${ORACLE_HOME}/network/admin


export TNS_ADMIN=${ORA_CLIENT_HOME}/network/admin


These examples still work for Oracle GoldenGate Classic; however, when we start looking at this through the lens of Microservices, it changes a bit. Within the Oracle GoldenGate Microservices architecture the TNS_ADMIN environment variable has to be set “per deployment”. Depending on the number of deployments within the architecture, it is possible to have 1+N TNS_ADMIN variables.

As an illustration, it would look something like this:

As you can see this is specific to the Microservices architecture and how to setup network routing for individual deployments.


How do you set the TNS_ADMIN environment variable for each deployment? It is quite simple when you are building a deployment using the Oracle GoldenGate Configuration Assistant (OGGCA). Prior to running OGGCA, you can set the TNS_ADMIN variable at the OS level and OGGCA will pick it up for that run and the configuration of that specific deployment.
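As a sketch (the paths and deployment name below are hypothetical, not a fixed convention), the per-deployment setup before running OGGCA could look like this:

```shell
# Hypothetical layout: one network-admin directory per deployment, so each
# deployment's sqlnet.ora/tnsnames.ora can differ.
DEPLOYMENT_BASE=/u01/app/oracle/deployments   # assumption, site-specific
DEPLOYMENT_NAME=alpha

# Point TNS_ADMIN at this deployment's directory before starting OGGCA,
# so the assistant records it for this deployment only.
export TNS_ADMIN=${DEPLOYMENT_BASE}/${DEPLOYMENT_NAME}/etc
echo "TNS_ADMIN=${TNS_ADMIN}"
# → TNS_ADMIN=/u01/app/oracle/deployments/alpha/etc
```

Repeating this with a different DEPLOYMENT_NAME before each OGGCA run is what gives you the 1+N distinct TNS_ADMIN settings mentioned above.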

Alternatively, you don’t have to set it at the OS level: during the OGGCA walkthrough, you will be able to set the variable manually. OGGCA will not move past the environment-variables step until it is provided.

Changing TNS_ADMIN

After building a deployment, you may want to change the location of your network-related files. This can be done either from the HTML5 web page for the deployment or from the REST API.

To change TNS_ADMIN from the HTML5 pages within Oracle GoldenGate Microservices, you need to start at the ServiceManager Overview page. At the bottom of this page, there is a section called “Deployments”.

Then select the deployment you want to work with. After clicking on the deployment name, you should be on the “Deployment Information” page. This page has two tabs at the top: the first tab shows details of the deployment, the second shows configuration for the deployment.

The second tab, Configurations, is where you can set or change the environment variables for the deployment. In this case, we want to modify the TNS_ADMIN environment variable.


To the right of the variable in the “Actions” column, click on the pencil icon. This will allow you to edit the environment variable. Change to the new location and save it. You may need to restart the deployment (hint, that step is on the ServiceManager Overview page).

At this point, you should be able to change the location of your TNS_ADMIN variable. This is also handy for Oracle GoldenGate Microservices on Marketplace … just saying.


This same process can be done quickly using the REST API. The sample code below is only a sample and has not been tested; use at your own risk!

curl -X PATCH \
  https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache' \
  -d '{


Categories: DBA Blogs


Jonathan Lewis - Wed, 2019-08-14 08:20

What happens when you execute dbms_stats.gather_system_stats() with the ‘Exadata’ option?

Here’s what my system stats look like (test results) after doing so. (The code to generate the two different versions is at the end of the note).

System Stats
Timed: 13-Aug-2019 15:00:00 - 13-Aug-2019 15:00:00
CPUSPEED        :
CPUSPEEDNW      :          918
IOSEEKTIM       :           10
IOTFRSPEED      :      204,800
MAXTHR          :
MBRC            :          128
MREADTIM        :
SLAVETHR        :
SREADTIM        :

PL/SQL procedure successfully completed.

MBRC       :          128
CPUSPEEDNW :          918
IOSEEKTIM  :           10
IOTFRSPEED :      204,800
MAXTHR     :

PL/SQL procedure successfully completed.

All the code does is set the MBRC, IOSEEKTIM, and IOTFRSPEED to fixed values; the only real gather is the CPUSPEEDNW. The parameters showing blanks are deliberately set null by the procedure; before I called gather_system_stats() every parameter had a value. You could also check the SQL trace file (with bind capture enabled) to see the statements that deliberately set those parameters to null, if you want more proof.

What are the consequences of this call (assuming you haven’t also done something with the calibrate_io() procedure)? Essentially Oracle now has information that says a single (8KB) block read request will take marginally over 10 milliseconds, and a multiblock read request of 1MB will take just over 15 milliseconds: in other words “tablescans are great, don’t use indexes unless they’re really precisely targeted”. To give you a quantitative feel for the numbers: given the choice between doing a tablescan of 1GB to pick up 1,500 randomly scattered rows and using a perfect index, the optimizer would choose the index.

To explain the time calculations: Oracle has set an I/O seek time of 10 ms, and a transfer rate of 204,800 bytes per ms (200 MB/s), with the guideline that a “typical” multiblock read is going to achieve 128 blocks. So the optimizer believes a single block read will take 10 + 8192/204800 ms = 10.04ms, while a multiblock read request for 1MB will take 10 + 1048576/204800 ms = 15.12 ms.
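As a quick sanity check of that arithmetic (the figures below just replay the note's own numbers, nothing new):

```shell
# IOSEEKTIM = 10 ms, IOTFRSPEED = 204800 bytes/ms,
# block size = 8KB, MBRC = 128 (so a multiblock read is 1MB).
awk 'BEGIN {
        seek = 10; tfr = 204800; blk = 8192; mbrc = 128
        printf "single block read (8KB) : %.2f ms\n", seek + blk / tfr
        printf "multiblock read   (1MB) : %.2f ms\n", seek + (blk * mbrc) / tfr
}'
# → single block read (8KB) : 10.04 ms
# → multiblock read   (1MB) : 15.12 ms
```

The dominance of the 10 ms seek time in both figures is exactly why the model makes multiblock reads look so cheap per block.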

It’s also important to note that Oracle will use the 128 MBRC value in its calculation of the cost of the tablescan – even if you’ve set the db_file_multiblock_read_count parameter for the session or system to something smaller; and if you have set db_file_multiblock_read_count, that’s the maximum size of multiblock read that the run-time engine will use.


Here are the two procedures I used to report the values above. You will only need the privilege to execute the dbms_stats package for the second one, but you’ll need the privilege to access the SYS table aux_stats$ to use the first. The benefit of the first one is that it can’t go out of date as versions change.

rem     Script:         get_system_stats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2002
rem     Last tested

set linesize 180
set trimspool on
set pagesize 60
set serveroutput on

spool get_system_stats

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        for r1 in (
                select  rownum rn, pname
                from    sys.aux_stats$
                where   sname = 'SYSSTATS_MAIN'
        ) loop
                dbms_stats.get_system_stats(m_status, m_start, m_stop, r1.pname, m_value);
                if r1.rn = 1 then
                        dbms_output.put_line('System Stats');
                        dbms_output.put_line('Status: ' || m_status);
                        dbms_output.put_line(
                                'Timed: ' ||
                                to_char(m_start,'dd-Mon-yyyy hh24:mi:ss') ||
                                ' - ' ||
                                to_char(m_stop ,'dd-Mon-yyyy hh24:mi:ss')
                        );
                end if;
                dbms_output.put_line(rpad(r1.pname,15) ||  ' : ' || to_char(m_value,'999,999,999'));
        end loop;
end;
/

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MBRC', m_value);
        dbms_output.put_line('MBRC       : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MREADTIM', m_value);
        dbms_output.put_line('MREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SREADTIM', m_value);
        dbms_output.put_line('SREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEED', m_value);
        dbms_output.put_line('CPUSPEED   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEEDNW', m_value);
        dbms_output.put_line('CPUSPEEDNW : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOSEEKTIM', m_value);
        dbms_output.put_line('IOSEEKTIM  : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOTFRSPEED', m_value);
        dbms_output.put_line('IOTFRSPEED : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MAXTHR', m_value);
        dbms_output.put_line('MAXTHR     : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SLAVETHR', m_value);
        dbms_output.put_line('SLAVETHR   : ' || to_char(m_value,'999,999,999'));
end;
/

spool off

Effects Of Technology on Society; A Boon or Bane for Us?

VitalSoftTech - Wed, 2019-08-14 00:11

We live in a day and age that it is virtually impossible to separate the effects of technology from human life. With every passing day, our dependency on technology is increasingly on the rise. Humankind has depended on technology for business, travel, education, healthcare, agriculture, communication but most of all, comfort and recreation. Technology has […]

The post Effects Of Technology on Society; A Boon or Bane for Us? appeared first on VitalSoftTech.

Categories: DBA Blogs

