Feed aggregator

Basic Replication -- 3 : Multiple Materialized Views

Hemant K Chitale - Mon, 2019-09-16 09:53
You can define multiple Materialized Views against the same Source Table with differences in:
a) the SELECT clause column list
b) Predicates in the WHERE clause
c) Joins to one or more other Source Table(s) in the FROM clause
d) Aggregates in the SELECT clause
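
For illustration only (none of these are created in this demo, and OTHER_TABLE with its columns is hypothetical), variations (b), (c) and (d) could look like:

create materialized view mv_recent
  refresh on demand
  as select id, data_element_1
     from source_table
     where date_col > sysdate - 30;              -- (b) WHERE-clause predicate

create materialized view mv_joined
  refresh on demand
  as select s.id, s.data_element_1, o.other_col
     from source_table s
     join other_table o on o.id = s.id;          -- (c) join to another source table

create materialized view mv_agg
  refresh on demand
  as select data_element_1, count(*) cnt
     from source_table
     group by data_element_1;                    -- (d) aggregate in the SELECT clause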

Thus, for my Source Table, I can add another Materialized View:

SQL> create materialized view mv_2
2 refresh on demand
3 as select id, data_element_2
4 from source_table;

Materialized view created.

SQL>
SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
0

SQL> insert into source_table
2 values (5, 'Fifth','Five',sysdate);

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
1

SQL>
SQL> execute dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> select * from mv_of_source;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
5 Fifth Five 16-SEP-19
101 First One 18-AUG-19
103 Third Three 18-AUG-19
104 Fourth Updated 09-SEP-19

SQL> select count(*) from mlog$_source_table;

COUNT(*)
----------
1

SQL>


Now that there are two MVs referencing the Source Table, the MV Log is not completely purged when only one of the two MVs is refreshed.  Oracle still maintains entries in the MV Log for the second MV to be able to execute a Refresh.

SQL> select * from mlog$_source_table;

ID SNAPTIME$ D O
---------- --------- - -
CHANGE_VECTOR$$
--------------------------------------------------------------------------------
XID$$
----------
5 16-SEP-19 I N
FE
5.6299E+14


SQL> execute dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> select * from mlog$_source_table;

no rows selected

SQL> select * from mv_2;

ID DATA_ELEMENT_2
---------- ---------------
101 One
103 Three
104 Updated
5 Five

SQL>


The MV Log is "purged" only when the second (actually the last) MV executes a Refresh.  Of course, if more rows were inserted / updated in the Source Table between the Refresh of MV_OF_SOURCE and MV_2, there would be corresponding entries in the MV Log.

So, Oracle does use some mechanism to track MVs that execute Refreshes and does continue to "preserve" rows in the MV Log for MVs that haven't been refreshed yet.
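
One quick way to see this (a sketch using the standard data dictionary views, not taken from the original post) is to compare the last refresh time of each MV registered against the master with what is still sitting in the log:

select owner, name, master, last_refresh
from   dba_mview_refresh_times
where  master = 'SOURCE_TABLE'
order by last_refresh;

select count(*), min(snaptime$$) from mlog$_source_table;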

As I've noted in two earlier posts, in 2007 and 2012, the MV Log (called "Snapshot Log" in the 2007 post) can keep growing for a long time if you have one or more Materialized Views that just aren't executing their Refresh calls.
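
A periodic check of the log segment's size (again just a quick sketch against DBA_SEGMENTS) is one way to spot such growth early:

select segment_name, bytes/1048576 size_mb
from   dba_segments
where  segment_name = 'MLOG$_SOURCE_TABLE';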


Categories: DBA Blogs

Get Azure Networking Hierarchy Components With Powershell

Jeff Moss - Mon, 2019-09-16 02:26

I needed to see the VNETs, their subnets, and the NICs/IPs attached to those subnets. This is all available in the Azure Portal, but I wanted a nice hierarchical listing, so here is a PowerShell script for that…

# Walk every VNET, list its subnets and the IP configurations (NIC/IPs) attached to each subnet
$vnets = Get-AzVirtualNetwork
foreach ($vnet in $vnets)
{
    "VNET: " + $vnet.Name
    "Subnet Count: " + $vnet.Subnets.count
    foreach ($subnet in $vnet.Subnets)
    {
        "..Subnet: " + $subnet.Name
        if ($subnet.IpConfigurations.count -eq 0)
        {
            "Subnet has no IpConfigurations"
        }
        else
        {
            foreach ($IpConfiguration in $subnet.IpConfigurations)
            {
                # Print the attached IP configuration id, trimmed to start after "resourceGroups/"
                "...." + $IpConfiguration.Id.substring($IpConfiguration.Id.indexof("resourceGroups")+15)
            }
        }
    }
}


The output looks like this (redacted):

VNET: myvnet1
Subnet Count: 1
..Subnet: mysubnet1
Subnet has no IpConfigurations
VNET: myvnet2
Subnet Count: 1
..Subnet: mysubnet2
....myresourcegroup2/providers/Microsoft.Network/networkInterfaces/myvm1-nic1/ipConfigurations/ipconfig1
....myresourcegroup2/providers/Microsoft.Network/networkInterfaces/myvm2-nic1/ipConfigurations/ipconfig1
VNET: myvnet3
Subnet Count: 0

Duplicate OMF DB Using Backupset To New Host And Directories

Michael Dinh - Sat, 2019-09-14 10:23

Database 12.1.0.2.0 is created using OMF.

At SOURCE:
db_create_file_dest=/u01/app/oracle/oradata
db_recovery_file_dest=/u01/app/oracle/fast_recovery_area

At DESTINATION:
db_create_file_dest=/u01/oradata
db_recovery_file_dest=/u01/fast_recovery_area

BACKUP LOCATION on shared storage:
/media/shared_storage/rman_backup/EMU

I had to explicitly set control_files since I was not able to determine how it could be done automatically.

set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'

If you don't set control_files, then the duplicate will restore the control files to their original locations.

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
output file name=/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

STEPS:

================================================================================
### SOURCE: Backup Database
================================================================================

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/app/oracle/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo logs locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_1_gqsq2mvy_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsq2myy_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_2_gqsq2n1o_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsq2n3g_.log
/u01/app/oracle/oradata/EMU/onlinelog/o1_mf_3_gqsq2n50_.log
/u01/app/oracle/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsq2nql_.log

6 rows selected.

SQL>

--------------------------------------------------
### Backup database.
--------------------------------------------------

[oracle@ol741 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol741 EMU]$ rman @ backup.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:06 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_EMU_level0.log
2> set echo on
3> connect target;
4> show all;
5> set command id to "BACKUP_EMU";
6> run {
7> allocate channel d1 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
8> allocate channel d2 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
9> allocate channel d3 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
10> allocate channel d4 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
11> allocate channel d5 device type disk format '/media/shared_storage/rman_backup/EMU/%d_%I_%T_%U.bks' maxopenfiles 1;
12> backup as compressed backupset incremental level 0 check logical database filesperset 1 tag="EMU"
13> plus archivelog filesperset 8 tag="EMU"
14> ;
15> }
16> alter database backup controlfile to trace as '/media/shared_storage/rman_backup/EMU/cf_@.sql' REUSE RESETLOGS;
17> create pfile='/media/shared_storage/rman_backup/EMU/init@.ora' from spfile;
18> list backup summary tag="EMU";
19> list backup of spfile tag="EMU";
20> list backup of controlfile tag="EMU";
21> report schema;
22> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol741 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:09:44 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838062773)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_system_gqsq2qw4_.dbf
2    550      SYSAUX               NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_sysaux_gqsq30xo_.dbf
3    265      UNDOTBS1             YES     /u01/app/oracle/oradata/EMU/datafile/o1_mf_undotbs1_gqsq3875_.dbf
4    5        USERS                NO      /u01/app/oracle/oradata/EMU/datafile/o1_mf_users_gqsq405f_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/app/oracle/oradata/EMU/datafile/o1_mf_temp_gqsq3c3d_.tmp

RMAN> exit

Recovery Manager complete.
[oracle@ol741 EMU]$


================================================================================
### TARGET: Restore Database
================================================================================

--------------------------------------------------
### Create pfile.
--------------------------------------------------

[oracle@ol742 dbs]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/dbs

[oracle@ol742 dbs]$ cat initemu.ora
*.db_name='emu'
[oracle@ol742 dbs]$

--------------------------------------------------
### Startup nomount.
--------------------------------------------------

[oracle@ol742 EMU]$ . oraenv << startup nomount;
ORACLE instance started.

Total System Global Area  234881024 bytes
Fixed Size                  2922904 bytes
Variable Size             176162408 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
SQL>

--------------------------------------------------
### Create new directories.
--------------------------------------------------

[oracle@ol742 EMU]$ mkdir -p /u01/oradata/EMU/controlfile/
[oracle@ol742 EMU]$ mkdir -p /u01/fast_recovery_area/EMU/controlfile

--------------------------------------------------
### Duplicate database.
--------------------------------------------------

[oracle@ol742 EMU]$ export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[oracle@ol742 EMU]$ rman @ dup_omf_bks.rman

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:37:51 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> spool log to /media/shared_storage/rman_backup/EMU/rman_duplicate_database.log
2> set echo on
3> connect auxiliary *
4> show all;
5> set command id to "DUPLICATE_EMU";
6> DUPLICATE DATABASE TO emu
7>   SPFILE
8>   set db_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
9>   set log_file_name_convert='/u01/app/oracle/oradata','/u01/oradata'
10>   set db_create_file_dest='/u01/oradata'
11>   set db_recovery_file_dest='/u01/fast_recovery_area'
12>   set control_files='/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl','/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl'
13>   BACKUP LOCATION '/media/shared_storage/rman_backup/EMU'
14>   NOFILENAMECHECK
15> ;
16> exit

--------------------------------------------------
### Retrieve datafile and tempfile locations.
--------------------------------------------------

[oracle@ol742 EMU]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sat Sep 14 16:40:33 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: EMU (DBID=3838070815)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name EMU

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    700      SYSTEM               YES     /u01/oradata/EMU/datafile/o1_mf_system_gqsyvo1y_.dbf
2    550      SYSAUX               NO      /u01/oradata/EMU/datafile/o1_mf_sysaux_gqsywg40_.dbf
3    265      UNDOTBS1             YES     /u01/oradata/EMU/datafile/o1_mf_undotbs1_gqsywx7n_.dbf
4    5        USERS                NO      /u01/oradata/EMU/datafile/o1_mf_users_gqsyxd90_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/oradata/EMU/datafile/o1_mf_temp_gqsyy469_.tmp

RMAN> exit


Recovery Manager complete.
[oracle@ol742 EMU]$

--------------------------------------------------
### Retrieve controlfile locations.
--------------------------------------------------

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/u01/oradata/EMU/controlfile/o1_mf_gqsq2mlg_.ctl
/u01/fast_recovery_area/EMU/controlfile/o1_mf_gqsq2mpz_.ctl

SQL>

--------------------------------------------------
### Retrieve redo logs locations.
--------------------------------------------------

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/oradata/EMU/onlinelog/o1_mf_3_gqsyy2kh_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_3_gqsyy2lx_.log
/u01/oradata/EMU/onlinelog/o1_mf_2_gqsyy165_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_2_gqsyy17p_.log
/u01/oradata/EMU/onlinelog/o1_mf_1_gqsyxztc_.log
/u01/fast_recovery_area/EMU/onlinelog/o1_mf_1_gqsyxzw3_.log

6 rows selected.

SQL>

Logs:

rman_EMU_level0.log

rman_duplicate_database.log

 

Oracle Sign up: more problems

Dietrich Schroff - Fri, 2019-09-13 14:03
I thought I was successful, but:

I received a mail with the following content:

"We have re-authorized a new, specific amount on the credit/debit card used during the sign up process."

and

"To verify the account you have created, please confirm the specific amount re-authorized."


My problem: there is no "re-authorized amount" on my bank account. I do not know what "re-authorized" means.
Does it mean this amount is charged to my credit card (then I should see it)?
Or is this process buggy and I was for some reason not charged?
Or is re-authorization something else?



RMAN in a Multitenant Environment

Yann Neuhaus - Fri, 2019-09-13 12:13

The Oracle Multitenant architecture came with Oracle 12c a few years ago. For people used to working with traditional non-CDB databases, it might be confusing the first time they do backup and recovery with pluggable databases (PDBs).
In this document we explain how to use RMAN backup and recovery in a multitenant environment for an Oracle 19c database with 2 pluggable databases.
Below is the configuration we are using.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO

We will not use a recovery catalog, but the procedure is the same as in a non-CDB environment.
Note that starting with Oracle 19c, we can now connect to a recovery catalog when the target database is a PDB.
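
For illustration (the connect strings here are made up), such a connection is simply the usual syntax with the PDB as the target:

$ rman target sys@pdb1 catalog rcatowner@rcatdb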

Whole CDB Backups

Backing up a whole CDB is like backing up a non-CDB database. We have to back up:
- the root container
- all pluggable databases
- the archived logs
The steps are:
1- Connect to the root container with a common user having SYSBACKUP or SYSDBA privileges

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)

RMAN>

2- Launch the backup

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Starting backup at 11-SEP-19
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=1 STAMP=1018632767
input archived log thread=1 sequence=7 RECID=2 STAMP=1018690452
input archived log thread=1 sequence=8 RECID=3 STAMP=1018691169
input archived log thread=1 sequence=9 RECID=4 STAMP=1018693343
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_annnn
...
...
Finished backup at 11-SEP-19

Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018693411_gqkcr46z_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

Oracle also recommends backing up the root container from time to time.
Once connected to the root container with a common user, run the backup command:

RMAN> BACKUP DATABASE ROOT;

Starting backup at 11-SEP-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/system01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/sysaux01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/undotbs01.dbf
input datafile file number=00007 name=/u01/app/oracle/oradata/ORCL/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T103019_gqkd4vxb_.bkp tag=TAG20190911T103019 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
Finished backup at 11-SEP-19

Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018693836_gqkd5d65_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

RMAN>
PDB Backups

Backing up PDBs is not difficult; there are just some mechanisms to know. When connected with RMAN to the root container, we can back up one or more PDBs, whereas when connected directly to a PDB, we can only back up that PDB.
1- Connect to the root container with a common user having SYSBACKUP or SYSDBA privileges

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)

RMAN>

And back up the individual PDBs:

RMAN> BACKUP PLUGGABLE DATABASE PDB1,PDB2;

Starting backup at 11-SEP-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
...
...
Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018695111_gqkff85l_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

RMAN>

2- Connecting to the PDBs with a local user having SYSBACKUP or SYSDBA privileges
PDB1

RMAN> connect target sys/root@pdb1

connected to target database: ORCL:PDB1 (DBID=4178439423)

RMAN> BACKUP DATABASE;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/oradata/ORCL/pdb1/sysaux01.dbf
input datafile file number=00009 name=/u01/app/oracle/oradata/ORCL/pdb1/system01.dbf
input datafile file number=00011 name=/u01/app/oracle/oradata/ORCL/pdb1/undotbs01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL/pdb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T110707_gqkg9w5n_.bkp tag=TAG20190911T110707 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 11-SEP-19

RMAN>

PDB2

RMAN> connect target sys/root@pdb2

connected to target database: ORCL:PDB2 (DBID=3996013191)

RMAN>  BACKUP DATABASE;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
input datafile file number=00013 name=/u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
input datafile file number=00015 name=/u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
input datafile file number=00016 name=/u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T110844_gqkgdwmm_.bkp tag=TAG20190911T110844 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 11-SEP-19

RMAN>
Tablespace Backup in a PDB

Tablespaces in different PDBs can have the same name, so to eliminate ambiguity, connect to the PDB whose tablespaces you want to back up.
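
As an alternative, when staying connected to the root container, RMAN also accepts a PDB-qualified tablespace name, which avoids the ambiguity from there as well; a minimal sketch, assuming our PDB1:

RMAN> BACKUP TABLESPACE PDB1:USERS;
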
1- Connect to the PDB with a local user having SYSBACKUP or SYSDBA privilege

[oracle@oraadserver ~]$ rman target sys/root@pdb1

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 11:35:53 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL:PDB1 (DBID=4178439423)

2- Issue the BACKUP TABLESPACE command

RMAN> BACKUP TABLESPACE USERS;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=290 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL/pdb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T113623_gqkj0qxl_.bkp tag=TAG20190911T113623 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 11-SEP-19

RMAN>
Data File Backup in a PDB

Data files are identified by a unique number across the CDB, so for the backup we can connect either to the root container or directly to the PDB.
Note that when connecting directly to the PDB, we can only back up files belonging to that PDB.
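
To see which file numbers belong to which container, you can query V$DATAFILE (which includes a CON_ID column) or simply run REPORT SCHEMA from the root; a quick sketch:

SQL> select con_id, file#, name from v$datafile order by con_id, file#;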

1- Connect to the root container with a common user having SYSBACKUP or SYSDBA privileges

[oracle@oraadserver admin]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 11:54:42 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1546409981)

2- Backup the Data File

RMAN> BACKUP DATAFILE 10;

Starting backup at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/oradata/ORCL/pdb1/sysaux01.dbf
channel ORA_DISK_1: starting piece 1 at 11-SEP-19
channel ORA_DISK_1: finished piece 1 at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T115504_gqkk3s44_.bkp tag=TAG20190911T115504 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 11-SEP-19

Starting Control File and SPFILE Autobackup at 11-SEP-19
piece handle=/u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018698908_gqkk3wwt_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-SEP-19

RMAN>
Whole CDB Complete Recovery

Suppose we lose all Data Files, Control Files and Redo Log Files of the whole container. We can restore with the following steps
1- Restore Control Files while connecting to the root container

[oracle@oraadserver ORCL]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 14:25:25 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area     872413680 bytes

Fixed Size                     9140720 bytes
Variable Size                297795584 bytes
Database Buffers             557842432 bytes
Redo Buffers                   7634944 bytes

RMAN>

RMAN> restore controlfile FROM AUTOBACKUP;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

recovery area destination: /u01/app/oracle/fast_recovery_area
database name (or database unique name) used for search: ORCL
channel ORA_DISK_1: AUTOBACKUP /u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018698908_gqkk3wwt_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /u01/app/oracle/fast_recovery_area/ORCL/autobackup/2019_09_11/o1_mf_s_1018698908_gqkk3wwt_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/u01/app/oracle/oradata/ORCL/control01.ctl
output file name=/u01/app/oracle/fast_recovery_area/ORCL/control02.ctl
Finished restore at 11-SEP-19
RMAN>

2- Mount the CDB

RMAN> alter database mount;

released channel: ORA_DISK_1
Statement processed

3- List Backup of archived logs

RMAN> list backup of archivelog all;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
2       397.52M    DISK        00:00:04     11-SEP-19
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20190911T102225
        Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_annnn_TAG20190911T102225_gqkcp1k9_.bkp

  List of Archived Logs in backup set 2
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    6       2120330    10-SEP-19 2155559    10-SEP-19
  1    7       2155559    10-SEP-19 2257139    11-SEP-19
  1    8       2257139    11-SEP-19 2327294    11-SEP-19
  1    9       2327294    11-SEP-19 2342937    11-SEP-19

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
7       5.00K      DISK        00:00:00     11-SEP-19
        BP Key: 7   Status: AVAILABLE  Compressed: NO  Tag: TAG20190911T102330
        Piece Name: /u01/app/oracle/fast_recovery_area/ORCL/backupset/2019_09_11/o1_mf_annnn_TAG20190911T102330_gqkcr2n1_.bkp

  List of Archived Logs in backup set 7
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    10      2342937    11-SEP-19 2342996    11-SEP-19

RMAN>

4- Restore the database according to the sequence

RMAN> restore database until sequence 11;

Starting restore at 11-SEP-19
Starting implicit crosscheck backup at 11-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
Crosschecked 18 objects
...
...
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 11-SEP-19

RMAN>

5- Recover the database

RMAN> recover database until sequence 11;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 10 is already on disk as file /u01/app/oracle/fast_recovery_area/ORCL/archivelog/2019_09_11/o1_mf_1_10_gqkcr27d_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL/archivelog/2019_09_11/o1_mf_1_10_gqkcr27d_.arc thread=1 sequence=10
media recovery complete, elapsed time: 00:00:00
Finished recover at 11-SEP-19

RMAN>

6- Open database in Resetlogs mode

RMAN> alter database open resetlogs;

Statement processed

RMAN>
PDBs Complete Recovery

To recover a PDB we can either:
– Connect to the root and then use the RESTORE PLUGGABLE DATABASE and RECOVER PLUGGABLE DATABASE commands.
1- Close the PDB to recover

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           READ WRITE NO
SQL>

2- Connect to rman on the root container and issue the restore command

RMAN> RESTORE PLUGGABLE DATABASE  PDB1;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=56 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/ORCL/pdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/ORCL/pdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/ORCL/pdb1/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/ORCL/pdb1/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9btm_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359B0BEC8B4545E0531502A8C0F64E/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9btm_.bkp tag=TAG20190911T144816
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 11-SEP-19

3- Recover the pluggable database

RMAN> RECOVER PLUGGABLE DATABASE  PDB1;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 11-SEP-19

4- Open the pluggable database

RMAN> alter pluggable database PDB1 open;

Statement processed

RMAN>

– Connect to the PDB and use the RESTORE DATABASE and RECOVER DATABASE commands.
1- Close the PDB to recover

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           MOUNTED
SQL>

2- Connect to the PDB and issue the RESTORE DATABASE command

[oracle@oraadserver pdb1]$ rman target sys/root@pdb2

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 15:19:03 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL:PDB2 (DBID=3996013191, not open)

RMAN> RESTORE DATABASE;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=45 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9tfq_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T144816_gqkv9tfq_.bkp tag=TAG20190911T144816
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 11-SEP-19

RMAN>

3- Recover the pluggable database

RMAN> recover database;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 11-SEP-19

4- Open the database

RMAN> alter database open ;

Statement processed

RMAN>
Complete Tablespace Recovery in a PDB

-Non-SYSTEM Tablespace

To recover a non-SYSTEM tablespace in a PDB we can follow these steps:
1- Put the tablespace offline while connected to the PDB

SQL> ALTER TABLESPACE MYTAB OFFLINE;

Tablespace altered.

SQL>

2- Connect to the PDB with RMAN and issue the RESTORE TABLESPACE command

[oracle@oraadserver pdb2]$ rman target sys/root@pdb2

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Sep 11 16:52:37 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL:PDB2 (DBID=3996013191)

RMAN> RESTORE TABLESPACE MYTAB;

Starting restore at 11-SEP-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00017 to /u01/app/oracle/oradata/ORCL/pdb2/mytab01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp tag=TAG20190911T163708
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 11-SEP-19

3- Issue the RECOVER TABLESPACE command

RMAN> RECOVER TABLESPACE MYTAB;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 11-SEP-19

RMAN>

4- Put the tablespace back ONLINE

RMAN> ALTER TABLESPACE MYTAB ONLINE;

Statement processed

RMAN>

-SYSTEM Tablespace

To recover a SYSTEM Tablespace in a PDB

1- Shutdown the entire CDB and Mount it

[oracle@oraadserver pdb2]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Sep 11 17:09:33 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shut immediate
ORA-01116: error in opening database file 13
ORA-01110: data file 13: '/u01/app/oracle/oradata/ORCL/pdb2/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> shut abort;
ORACLE instance shut down.
SQL>

SQL> startup mount
ORACLE instance started.

Total System Global Area  872413680 bytes
Fixed Size                  9140720 bytes
Variable Size             310378496 bytes
Database Buffers          545259520 bytes
Redo Buffers                7634944 bytes
Database mounted.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
SQL>

2- Connect to the root container and restore the corresponding files (files can be identified using the REPORT SCHEMA command, for example)

RMAN> RESTORE DATAFILE 13;

Starting restore at 11-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_11/o1_mf_nnndf_TAG20190911T163708_gql1o4gx_.bkp tag=TAG20190911T163708
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 11-SEP-19

RMAN>

3- Recover the Data File

RMAN> RECOVER DATAFILE 13;

Starting recover at 11-SEP-19
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 11-SEP-19

4- Open all containers

RMAN> alter database open;

Statement processed

RMAN> ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;

Statement processed

RMAN>

Conclusion
In this blog we have explained the basics of using RMAN in a multitenant environment. We did not talk about point-in-time recovery (PITR); we will cover it in a coming blog.
Note also that we did not use RMAN commands like LIST FAILURE, ADVISE FAILURE and REPAIR FAILURE, but these commands also work.
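
For reference, a minimal Data Recovery Advisor session using those commands would look like this (output omitted):

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;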

This article, RMAN in a Multitenant Environment, first appeared on Blog dbi services.

HVR | Real-Time CDC ::Oracle Autonomous DW::

Rittman Mead Consulting - Fri, 2019-09-13 09:49
Introduction

High quality Business Intelligence is key in decision making for any successful organisation and for an increasing number of businesses this means being able to access real-time data.  At Rittman Mead we are seeing a big upturn in interest in technologies that will help our customers to integrate real-time data into their BI systems.  In a world where streaming is revolutionising the way we consume data, log-based Change Data Capture (CDC) comes into the picture as a way of providing access to real-time transactional data for reporting and analysis. Most databases support CDC nowadays; in Oracle for example the redo logs are a potential source of CDC data. Integration tools that support CDC are coming to the fore and one of these is HVR.

HVR is a real-time data integration solution that replicates data to and from many different data management technologies, both on-premise and cloud-based, including Oracle, SAP and Amazon. The HVR GUI makes it easy to connect together multiple databases, replicate entire schemas, initialise the source and target systems and then keep them in sync.

HVR have recently added Oracle’s Autonomous Data Warehouse (ADW) to their list of supported technologies and so in this blog we are stepping through the process of configuring an HVR Channel to replicate data from an Oracle database to an instance of Oracle ADW.


Setup

Before setting up replication you have to install HVR itself. This is simple enough, a fairly manual CLI job with a couple of files to create and save in the correct directories. Firewalls also need to allow all HVR connections. HVR needs a database schema in which to store the repository configuration data and so we created a schema in the source Oracle database. It also needs some additional access rights on the Oracle source database.


Process

1.

The first step is to register the newly created Hub in the HVR GUI. The GUI can be run on any machine that is able to connect to the server on which the HVR hub is installed. We tested two GUI instances, one running on a Windows machine and one on a Mac. Both were easy to install and configure.


The database connection details entered here are for the HVR hub database, where metadata about the hub configuration is stored.

2.

Next we need to define our source and target. In both cases the connection between the HVR and the data uses standard Oracle database connectivity. The source connection is to a database on the same server as the HVR hub and the target connection uses a TNS connection pointing at the remote ADW instance.

Defining the source database involves right clicking on Location Configuration and selecting New Location:


Configuring the target involves the same steps:


You can see from the screenshot that we are using one of the Oracle-supplied tnsnames entries to connect to ADW and also that we are using a separate Oracle client install to connect to ADW. Some actions within HVR use the Oracle Call Interface and require a more recent version of the Oracle client than provided by our 12c database server install.

Next up is creating the “channel”. A channel in HVR groups together the source and target locations and allows the relationship between the two to be defined and maintained. Configuring a new channel involves naming it, defining source and target locations and then identifying the tables in the source that contain the data to be replicated.

3.

The channel name is defined by right clicking on Channel Definitions and selecting New Channel.


We then open the new channel and right click on Location Groups and select New Group to configure the group to contain source locations:


The source location is the location we defined in step 2 above. We then right click on the newly created group and select New Action, Capture  to define the role of the group in the channel:


The Capture action defines that data will be read from the locations in this group.

A second Location Group is needed for the target. This time we defined the target group to have the Integrate action so that data will be written to the locations in this group.

4.

The final step in defining the channel is to identify the tables we want to replicate. This can be done using the Table Explore menu option when you right-click on Tables:

5.

With the channel defined we can start synchronising the data between the two systems. We are starting with an empty database schema in our ADW target so we use the HVR Refresh action to first create the target tables in ADW and to populate them with the current contents of the source tables.  As the Refresh action proceeds we can monitor progress:

6.

Now with the two systems in sync we can start the process of real-time data integration using the HVR Initialise action. This creates two new jobs in  the HVR Scheduler which then need to be started:


One more thing to do of course: test that the channel is working and replication is happening in real time. We applied a series of inserts, updates and deletes to the source system and monitored the log files for the two scheduled jobs to see the activity captured from the redo logs on the source:


and then applied as new transactions on the target:


The HVR Compare action allows us to confirm that the source and target are still in sync.

Conclusion

Clearly the scenario we are testing here is a simple one. HVR can do much more - supporting one-to-many, many-to-many and also bi-directional replication configurations. Nonetheless we were impressed with how easy it was to install and configure HVR and also with the simplicity of executing actions and monitoring the channel through the GUI. We dipped into using the command line interface when executing some of the longer running jobs and this was straightforward too.

Categories: BI & Warehousing

Presenting at UKOUG Techfest19 Conference in Brighton, UK

Richard Foote - Thu, 2019-09-12 19:07
I’m very excited to be attending my 3rd UKOUG Conference, this year re-badged as Techfest19. The fact it’s being held in Brighton is a little disconcerting for a Crystal Palace fan, but really looking forward nonetheless to what has always been one of the very best Oracle conferences on the yearly calendar. I have a […]
Categories: DBA Blogs

Oracle Cloud: First login

Dietrich Schroff - Thu, 2019-09-12 13:53
After signing up for Oracle Cloud I tried my first login:

https://cloud.oracle.com/en_US/sign-in


but I only got:
I think the problem is that there is a manual review step on Oracle's side which I have not passed yet:
So let's wait for a day or two...

Estimating how much write I/O is not logged

Bobby Durrett's DBA Blog - Thu, 2019-09-12 11:30

I am trying to figure out how much non-logged write I/O an Oracle database is doing. I want to run an ALTER DATABASE FORCE LOGGING command on the database so that I can use Oracle GoldenGate (GGS), which reads updates from Oracle’s logs. GGS will miss writes that are not logged. But if I turn on force logging it may slow down applications that depend on non-logged writes for good performance. So, I want to find some Oracle performance metrics that give me an idea about how much non-logged write I/O we have so I have an estimate of how much force logging will degrade performance.
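
For reference, the switch itself and a check of the current setting are one-liners (a sketch, not from the original post):

SQL> select force_logging from v$database;
SQL> alter database force logging;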

I created SQL*Plus and PythonDBAGraphs reports based on DBA_HIST_IOSTAT_FUNCTION that give some insight into the write I/O that is not logged. Here is the Python-based graphical version of the report for one recent weekend:

Possible NOLOGGING Write I/O

The purple-blue line represents Direct Writes. These may or may not be logged. The red-orange line represents writes through the DBWR process. These are always logged. The light green line represents log I/O through the LGWR process. My theory is that if the purple line is above the green line the difference must be write I/O that is not logged. But if the green line is equal to or greater than the purple line you really do not know if there was any write I/O that was not logged. But if there is non-logged write I/O it cannot be more than the amount indicated by the purple line. So, this graph does not directly answer my question about how much write I/O was not logged but it does show some numbers that relate to the question.
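
A query along the following lines can produce those three write series. This is only a sketch of the idea, assuming the standard DBA_HIST_IOSTAT_FUNCTION and DBA_HIST_SNAPSHOT views; the author's actual report scripts are in the zip linked below.

-- write MB per AWR snapshot interval for the three I/O functions of interest;
-- the view's values are cumulative, so negative differences after an instance
-- restart should be discarded
select end_interval_time, function_name,
       write_mb - lag(write_mb) over
         (partition by function_name, instance_number order by snap_id) write_diff_mb
from  (select sn.snap_id, sn.end_interval_time, io.instance_number, io.function_name,
              io.small_write_megabytes + io.large_write_megabytes write_mb
       from   dba_hist_iostat_function io
       join   dba_hist_snapshot sn
              on  io.snap_id = sn.snap_id
              and io.dbid = sn.dbid
              and io.instance_number = sn.instance_number
       where  io.function_name in ('Direct Writes','DBWR','LGWR'))
order by end_interval_time, function_name;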

I did some experiments with the V$IOSTAT_FUNCTION view that populates DBA_HIST_IOSTAT_FUNCTION to see what values it gives for Direct Writes, DBWR, and LGWR using different scenarios. Here is the zip of these scripts and their output: nologgingscriptsandlogs09122018.zip. I tested four scenarios:

  1. Insert append nologging
  2. Insert append logging
  3. Insert noappend logging
  4. Insert noappend nologging

1 and 2 did Direct Writes. 3 and 4 did DBWR writes. 2, 3, and 4 all did LGWR writes.
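
To make the four scenarios concrete, they amount to something like the following (a sketch only; the actual test scripts are in the zip above, and the table names here are hypothetical):

-- 1. append hint + NOLOGGING table: direct-path write, typically not logged
insert /*+ append */ into t_nolog select * from t_src;
commit;

-- 2. append hint + LOGGING table: direct-path write, but redo is generated
insert /*+ append */ into t_log select * from t_src;
commit;

-- 3./4. conventional (no append) inserts go through the buffer cache (DBWR)
--       and generate redo (LGWR) whether the table is LOGGING or NOLOGGING
insert into t_log select * from t_src;
commit;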

Here are the relevant sections of the output that correspond to these statements.

Insert append nologging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
Direct Writes                      4660
LGWR                                 46
DBWR                                 27

Insert append logging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
LGWR                               4789
Direct Writes                      4661
DBWR                                 37

Insert noappend logging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
DBWR                               6192
LGWR                               4528
Direct Writes                         2

Insert noappend nologging:

FUNCTION_NAME      WRITE_DIFF_MEGABYTES
------------------ --------------------
DBWR                               6213
LGWR                               4524
Direct Writes                         2

This pattern is similar to that in an Ask Tom post that I wrote about a while back. That post showed the different situations in which writes were logged or not. I also got some ideas about direct writes and logging from this Oracle support document:

Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)

It sounds like inserts into tables that go through the normal processing eventually get written to disk by DBWR but inserts with the append hint write directly to the datafiles and may or may not be logged and written out by LGWR.

These tests and documents gave me the idea of building a report and graph based on DBA_HIST_IOSTAT_FUNCTION showing the values for the Direct Writes, DBWR, and LGWR FUNCTION_NAME values. The graph above shows an example of a real system. I was surprised to see how high the DBWR and LGWR values were and how low the Direct Writes were. That made me think that it would be safe to try turning on FORCE LOGGING because it likely will have minimal impact on the overall weekend processing. It gave me enough evidence to push for approval to do a controlled test of enabling FORCE LOGGING in production over an upcoming weekend. I will update this post with the results if we move forward with the test.

Bobby

Categories: DBA Blogs

Announcement: Australia/NZ “Let’s Talk Database” Events October 2019 !!

Richard Foote - Wed, 2019-09-11 22:49
I’m very excited to announce the next series of Oracle “Let’s Talk Database” events to be run throughout Australia and New Zealand in October 2019. I’ll be discussing two exciting topics this series, “Oracle Database 19c New Features” and “Oracle Exadata X8“. As always, these sessions run between 9am-1pm, include a networking lunch and are free, […]
Categories: DBA Blogs

Blockchain Sessions at Oracle OpenWorld

David Haimes - Wed, 2019-09-11 16:27


Next week is Oracle OpenWorld our annual conference that will take over San Francisco.  The past few years I have presented a session about Blockchain, but this year I will be presenting Oracle Financials Overview, Strategy and Roadmap
(Wednesday 11:15 AM | Moscone West Room 2006/2008 ) instead.

However, there is no need to fear; there are plenty of other sessions to choose from if you are interested in Blockchain. I have listed all of them below.

  • An Enterprise Digital Transformation Playbook Using IoT, Blockchain, and AI [PRO5788]

    SPEAKERS

    Jai Suri, Senior Director, Product Management, Oracle

    Bhagat Nainani, Group Vice President, Oracle

    Jeff Blattner, Director of IT, Titan International

    • Monday, September 16, 12:15 PM – 01:00 PM | Moscone West – Room 2012
  • Best Practices for Developing Advanced Blockchain Solutions [TIP3492]

    SPEAKERS

    Deepak Goel, Senior Director, Software Development, Oracle

    Todd Little, Architect, Oracle

    • Thursday, September 19, 01:15 PM – 02:00 PM | Moscone South – Room 210
  • Blockchain and End-to-End Audit Trail Validation with 360kompany [BUS1653]

    SPEAKERS

    Peter Bainbridge-Clayton, CTO, 360kompany AG

    Russell Perry, CEO, kompany (360kompany AG)

    • Monday, September 16, 02:45 PM – 03:30 PM | Moscone West – Room 3020B
  • Bridge Physical and Digital Retailing with 3D Blockchain [BUS6083]

    SPEAKERS

    Mario Vollbracht, Global Director, Consumer Markets, ISG, Oracle

    Dinesh Dhamija, President & CTO, Smartrac

    • Wednesday, September 18, 11:00 AM – 11:45 AM | Moscone West – Room 3012
  • Containing the Opioid Crisis with Oracle Cloud [CON3379]

    SPEAKERS

    Maharshi Desai, Cloud Solution Hub Senior Manager, Oracle

    Bob Nevins, Director of Health and Human Services Strategy, Oracle

    • Tuesday, September 17, 12:30 PM – 01:15 PM | Moscone South – Room 156B
  • Creating Trusted Supply Chain Networks with Oracle Blockchain Platform [HOL5995]

    SPEAKERS

    Deepak Goel, Senior Director, Software Development, Oracle

    Todd Little, Architect, Oracle

    Jared Li, Sr Director Software Development, Oracle

    Brijesh Deo, Principal Member of Technical Staff, Oracle

    Bala Vellanki, Product Management, Oracle

    • Monday, September 16, 02:30 PM – 03:30 PM | Moscone West – Room 3019
    • Thursday, September 19, 10:30 AM – 11:30 AM | Moscone West – Room 3019
  • Cutting-Edge New Tech in Oracle Database: Microservices, Blockchain, Streaming [CON4669]

    SPEAKERS

    Wei Hu, Senior Vice President, High Availability & Emerging Technologies, Oracle

    • Tuesday, September 17, 05:15 PM – 06:00 PM | Moscone South – Room 155D
  • Enterprise Blockchain Is Real: Production Apps on Oracle Blockchain Platform [BUS3490]

    SPEAKERS

    Umberto Arrighini, CIO, Fabbrica d’Armi P. Beretta SpA

    Zhu Kuang Lee, Standard Chartered Bank

    Mark Rakhmilevich, Senior Director, Blockchain Product Management, Oracle

    • Thursday, September 19, 12:15 PM – 01:00 PM | Moscone South – Room 152B
  • Exploit Technology Innovations Today with Smarter Enterprise Applications [PRO5699]

    SPEAKERS

    Simon Nicholson, Oracle

    Krishnan Rajan, GVP, Oracle

    Eric Rogge, Product Management, Oracle Digital Assistant, Oracle

    • Thursday, September 19, 09:00 AM – 10:30 AM | Moscone West – Room 2006/2008
  • Extend Oracle SaaS and Design New Experiences with Intelligent Automation [GEN6040]

    SPEAKERS

    Antonia Casamassima, Chief Information Officer, Ferrari S.p.A.

    Salvatore Ancoretti, Head of Customer Marketing, Ferrari spa

    Suhas Uliyar, Vice President, Product Management, Digital Assistant/AI & Integration, Oracle

    • Wednesday, September 18, 11:15 AM – 12:00 PM | Moscone South – Room 207/208
  • How Blockchain Will Revolutionize Multi-Enterprise Supply Chain Networks [CON5793]

    SPEAKERS

    Michael Richter, Director Product Marketing, Emerging Technologies, Oracle

    Prasen Palvankar, Senior Director Product Management, Oracle

    Rajan Kashyap, Head of Solutions and Blockchain Center of Excellence, Birlasoft

    • Tuesday, September 17, 04:15 PM – 05:00 PM | Moscone West – Room 2009C
  • How Retailers and Brands Can Take Advantage of Blockchain [CON6098]

    SPEAKERS

    Gerald Poncet, Senior Director, Consumer Markets, Oracle Industry Solutions Group, Oracle

    Nikhil Vadgama, Deputy Director, University College London

    • Wednesday, September 18, 04:45 PM – 05:30 PM | Moscone West – Room 3012
  • Industry 4.0–Based Smart Factory in Action [THT5391]

    SPEAKERS

    Sheetal Prasad, Chief Enterprise Architect, Senior Practice Director, Oracle

    Hemanth Gujjola, Technical Manager, Oracle

    • Monday, September 16, 02:45 PM – 03:05 PM | The Exchange (Moscone South) – Theater 2
    • Wednesday, September 18, 03:30 PM – 03:50 PM | The Exchange (Moscone South) – Theater 1
  • Making Blockchain a Reality for Retailers and Brands [PAN6099]

    SPEAKERS

    Gerald Poncet, Senior Director, Consumer Markets, Oracle Industry Solutions Group, Oracle

    Lukas Puender, Co-Founder, retraced GmbH

    Nikhil Vadgama, Deputy Director, University College London

    Vojko Kercan, Managing Director, Dhimahi d.o.o.

    • Wednesday, September 18, 09:00 AM – 09:45 AM | Moscone West – Room 3012
  • Modernizing Intercompany Billing Using a Permissioned Blockchain Platform [PAN6441]

    SPEAKERS

    Gary Crisci, Principal Architect, General Electric

    Mark Rakhmilevich, Senior Director, Blockchain Product Management, Oracle

    • Wednesday, September 18, 10:00 AM – 10:45 AM | Moscone South – Room 209
  • Optimize Your Supply Chain Performance with Oracle SCM Cloud [CON4933]

    SPEAKERS

    Rahul Asthana, Senior Director, SCM Product Marketing, Oracle

    • Thursday, September 19, 09:00 AM – 09:45 AM | Moscone West – Room 2011A
  • Oracle Blockchain: Helping Reduce Carbon Emissions and Find the Tastiest Steak [CAS1787]

    SPEAKERS

    Karl O’ Connell, CIO, I.C.B.F.

    • Wednesday, September 18, 10:00 AM – 10:45 AM | Moscone South – Room 152B
  • Oracle Blockchain Platform Integration with OAuth 2.0 [TRN5099]

    SPEAKERS

    Tapas Pramanik, Principal Technical Support Engineer, Oracle

    • Thursday, September 19, 11:15 AM – 12:00 PM | Moscone West – Room 3003
  • Oracle Cloud: The Next Big Things [GEN3248]

    SPEAKERS

    Douglas Johnson-Poensgen, CEO, Circulor

    Ashish Mohindroo, Vice President, Oracle Cloud, Oracle

    Diego Netto, Co-Founder & Chief Technology Officer, Booster

    • Tuesday, September 17, 12:30 PM – 01:15 PM | Moscone South – Room 207/208 (waitlist)
  • Payment Modernization with Open and Agile Frameworks [BUS6281]

    SPEAKERS

    Karthick Prasad, Vice President, FSGBU, Oracle

    • Wednesday, September 18, 11:00 AM – 11:45 AM | Moscone West – Room 3004
  • Revolutionizing Finance and Supply Chain Powered by Blockchain [CON6588]

    SPEAKERS

    SHAGUN GOYAL, Oracle Digital Finance and Blockchain Leader, Deloitte Consulting LLP

    Richard Pumphret, Finance & Enterprise Performance Blockchain Leader, Deloitte Consulting LLP

    • Monday, September 16, 11:15 AM – 12:00 PM | Moscone West – Room 2011B
  • The State of Analytics and Machine Learning [CON5884]

    SPEAKERS

    Shyam Varan Nath, Director IoT & Cloud, BIWA User Group

    Dan Vlamis, CEO – President, Vlamis Software Solutions, Inc.

    • Monday, September 16, 12:15 PM – 01:00 PM | Moscone West – Room 3016
  • Transformative Potential of Blockchain Technology in Healthcare Payer [CON6175]

    SPEAKERS

    Srini Venkatasanthanam, Vice President, Oracle Health Insurance, Oracle

    Mark Rakhmilevich, Senior Director, Blockchain Product Management, Oracle

    • Tuesday, September 17, 04:15 PM – 05:00 PM | Moscone West – Room 3004
  • Trust and Visibility in Business Networks Using Oracle Intelligent Track and Trace [PRO5794]

    SPEAKERS

    Michael Richter, Director Product Marketing, Emerging Technologies, Oracle

    Prasen Palvankar, Senior Director Product Management, Oracle

    • Thursday, September 19, 01:15 PM – 02:00 PM | Moscone West – Room 2000
  • Using Oracle Blockchain to Prove the Ethical Sourcing of Conflict Minerals [CAS2567]

    SPEAKERS

    Douglas Johnson-Poensgen, CEO, Circulor

    • Tuesday, September 17, 11:15 AM – 12:00 PM | Moscone West – Room 3007A
  • Using Process Automation and Integration to Manage Blockchain and B2B Transactions [PAN5844]

    SPEAKERS

    Nathan Angstadt, Director Product Management, Oracle

    DIBYA MOHAPATRA, Technical Manager, Avaya Inc.

    Michael Meiner, Director, Product Management, Oracle

    • Thursday, September 19, 11:15 AM – 12:00 PM | Moscone South – Room 210
  • What’s New in Oracle Blockchain Platform [PRO3489]

    SPEAKERS

    Deepak Goel, Senior Director, Software Development, Oracle

    Gary Crisci, Principal Architect, General Electric

    Frank Xiong, GVP, Oracle

    • Tuesday, September 17, 12:30 PM – 01:15 PM | Moscone South – Room 213

 

Categories: APPS Blogs

Taking kpack, a Kubernetes Native Container Build Service for a test drive

Pas Apicella - Tue, 2019-09-10 23:51
We wanted Build Service to combine the Cloud Native Buildpacks experience with the declarative model of Kubernetes, and extend the K8s workflow in an idiomatic fashion. With this goal in mind, we leveraged custom resource definitions to extended the K8s API. This way, we could use Kubernetes technology to create a composable, declarative architecture to power build service. The Custom Resource Definitions (CRDs) are coordinated by Custom Controllers to automate container image builds and keep them up to date based on user-provided configuration.

So with that in mind lets go and deploy kpack on GKE cluster and build our first image...



Steps

1. Install v0.0.3 of kpack into your Kube cluster

$ kubectl apply -f <(curl -L https://github.com/pivotal/kpack/releases/download/v0.0.3/release.yaml)

...

namespace/kpack created
customresourcedefinition.apiextensions.k8s.io/builds.build.pivotal.io created
customresourcedefinition.apiextensions.k8s.io/builders.build.pivotal.io created
clusterrole.rbac.authorization.k8s.io/kpack-admin created
clusterrolebinding.rbac.authorization.k8s.io/kpack-controller-admin created
deployment.apps/kpack-controller created
customresourcedefinition.apiextensions.k8s.io/images.build.pivotal.io created
serviceaccount/controller created
customresourcedefinition.apiextensions.k8s.io/sourceresolvers.build.pivotal.io created

2. Lets just verify what Custom resources definition (CRD's) have been installed

$ kubectl api-resources --api-group build.pivotal.io
NAME              SHORTNAMES                    APIGROUP           NAMESPACED   KIND
builders          cnbbuilder,cnbbuilders,bldr   build.pivotal.io   true         Builder
builds            cnbbuild,cnbbuilds,bld        build.pivotal.io   true         Build
images            cnbimage,cnbimages            build.pivotal.io   true         Image
sourceresolvers                                 build.pivotal.io   true         SourceResolver

3. Create a builder resource as follows

builder-resource.yaml

apiVersion: build.pivotal.io/v1alpha1
kind: Builder
metadata:
  name: sample-builder
spec:
  image: cloudfoundry/cnb:bionic
  updatePolicy: polling

$ kubectl create -f builder-resource.yaml
builder.build.pivotal.io/sample-builder created

$ kubectl get builds,images,builders,sourceresolvers
NAME                                      AGE
builder.build.pivotal.io/sample-builder   42s

4. Create a secret for push access to the desired docker registry

docker-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: basic-docker-user-pass
  annotations:
    build.pivotal.io/docker: index.docker.io
type: kubernetes.io/basic-auth
stringData:
  username: papicella
  password:

$ kubectl create -f docker-secret.yaml
secret/basic-docker-user-pass created

5. Create a secret for pull access from the desired git repository. The example below is for a github repository

git-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: basic-git-user-pass
  annotations:
    build.pivotal.io/git: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: papicella
  password:

$ kubectl create -f git-secret.yaml
secret/basic-git-user-pass created

6. Create a service account that uses the docker registry secret and the git repository secret.

service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
  - name: basic-docker-user-pass
  - name: basic-git-user-pass

$ kubectl create -f service-account.yaml
serviceaccount/service-account created

7. Install the logs utility. To view the build logs for each image as it is created, you currently have to use a utility that you build from the kpack GitHub repo source files themselves. Follow the steps below to build it

$ export GOPATH=`pwd`
$ git clone https://github.com/pivotal/kpack $GOPATH/src/github.com/pivotal/kpack
$ cd $GOPATH/src/github.com/pivotal/kpack
$ dep ensure -v
$ go build ./cmd/logs

You will have a "logs" executable created in the current directory, which we will use shortly

8. Create an image as follows. The GitHub repo I am using here is public, so it will work with no problem at all

pbs-demo-sample-image.yaml

apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: pbs-demo-image
spec:
  tag: pasapples/pbs-demo-image
  serviceAccount: service-account
  builderRef: sample-builder
  cacheSize: "1.5Gi" # Optional, if not set then the caching feature is disabled
  failedBuildHistoryLimit: 5 # Optional, if not present defaults to 10
  successBuildHistoryLimit: 5 # Optional, if not present defaults to 10
  source:
    git:
      url: https://github.com/papicella/pbs-demo
      revision: master
  build: # Optional
    env:
      - name: BP_JAVA_VERSION
        value: 11.*
    resources:
      limits:
        cpu: 100m
        memory: 1G
      requests:
        cpu: 50m
        memory: 512M

$ kubectl create -f pbs-demo-sample-image.yaml
image.build.pivotal.io/pbs-demo-image created

9. At this point we can view the created image and the current Cloud Native Buildpacks builds being run, using two commands as follows.

$ kubectl get images
NAME             LATESTIMAGE   READY
pbs-demo-image                 Unknown

$ kubectl get cnbbuilds
NAME                           IMAGE   SUCCEEDED
pbs-demo-image-build-1-pvh6k           Unknown

Note: Unknown is normal as it has not yet completed 

10. Now, using the "logs" utility we just built, let's view the current build logs

$ ./logs -image pbs-demo-image
{"level":"info","ts":1568175056.446671,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
source-init:main.go:277: Successfully cloned "https://github.com/papicella/pbs-demo" @ "cee67e26d55b6d2735afd7fa3e0b81e251e0d5ce" in path "/workspace"
2019/09/11 04:11:23 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
======== Results ========
skip: org.cloudfoundry.archiveexpanding@1.0.0-RC03
pass: org.cloudfoundry.openjdk@1.0.0-RC03
pass: org.cloudfoundry.buildsystem@1.0.0-RC03
pass: org.cloudfoundry.jvmapplication@1.0.0-RC03
pass: org.cloudfoundry.tomcat@1.0.0-RC03
pass: org.cloudfoundry.springboot@1.0.0-RC03
pass: org.cloudfoundry.distzip@1.0.0-RC03
skip: org.cloudfoundry.procfile@1.0.0-RC03
skip: org.cloudfoundry.azureapplicationinsights@1.0.0-RC03
skip: org.cloudfoundry.debug@1.0.0-RC03
skip: org.cloudfoundry.googlestackdriver@1.0.0-RC03
skip: org.cloudfoundry.jdbc@1.0.0-RC03
skip: org.cloudfoundry.jmx@1.0.0-RC03
pass: org.cloudfoundry.springautoreconfiguration@1.0.0-RC03
Resolving plan... (try #1)
Success! (7)
Cache '/cache': metadata not found, nothing to restore
Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:40fe8aa932037faad697c3934667241eef620aac1d09fc7bb5ec5a75d5921e3e'
Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'

......

11. This first build will take some time, given it has to download all the Maven dependencies. You may be wondering how to determine how many builds have been run, so that we can view the logs of any build for the image we just created. To do that, run a command as follows

$ kubectl get pods --show-labels | grep pbs-demo-image
pbs-demo-image-build-1-pvh6k-build-pod   0/1     Init:6/9   0          6m29s   image.build.pivotal.io/buildNumber=1,image.build.pivotal.io/image=pbs-demo-image

12. From the output above you can clearly see we have just a single build, so to view the logs of a particular build we use its ID as follows

$ ./logs -image pbs-demo-image -build {ID}

...

13. If we wait at least five minutes (the first build always takes time, just for the dependencies to be downloaded), it will eventually complete and show as complete using the following commands

$ kubectl get images
NAME             LATESTIMAGE                                                                                                        READY
pbs-demo-image   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

$ kubectl get cnbbuilds
NAME                           IMAGE                                                                                                              SUCCEEDED
pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

14. Now let's actually make a code change to our source code and issue a git commit. In this example below I am using IntelliJ IDEA for my code change/commit
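
If you are not using an IDE, the same thing can be done from the command line; a quick sketch (the file touched and the commit message are only illustrative, any committed change to the repo will do):

$ cd pbs-demo
$ echo "// trigger a new kpack build" >> src/main/java/com/example/pbsdemo/PbsDemoApplication.java
$ git add .
$ git commit -m "trivial change to trigger a new kpack build"
$ git push origin master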


15. Now let's see if a new build is kicked off (it should be). Run the following command

$ kubectl get cnbbuilds
NAME                           IMAGE                                                                                                              SUCCEEDED
pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True

pbs-demo-image-build-2-stl8w                                                                                                                      Unknown


16. Now let's confirm that this new build is in fact build ID 2, using a command as follows

$ kubectl get pods --show-labels | grep pbs-demo-image
pbs-demo-image-build-1-pvh6k-build-pod   0/1     Completed   0          21m     image.build.pivotal.io/buildNumber=1,image.build.pivotal.io/image=pbs-demo-image
pbs-demo-image-build-2-stl8w-build-pod   0/1     Init:6/9    0          2m15s   image.build.pivotal.io/buildNumber=2,image.build.pivotal.io/image=pbs-demo-image

17. Let's view the logs for BUILD 2 as follows

$ ./logs -image pbs-demo-image -build 2
{"level":"info","ts":1568176191.088838,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
source-init:main.go:277: Successfully cloned "https://github.com/papicella/pbs-demo" @ "e2830bbcfb32bfdd72bf5d4b17428c405f46f3c1" in path "/workspace"
2019/09/11 04:29:55 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
======== Results ========
skip: org.cloudfoundry.archiveexpanding@1.0.0-RC03
pass: org.cloudfoundry.openjdk@1.0.0-RC03
pass: org.cloudfoundry.buildsystem@1.0.0-RC03
pass: org.cloudfoundry.jvmapplication@1.0.0-RC03
pass: org.cloudfoundry.tomcat@1.0.0-RC03
pass: org.cloudfoundry.springboot@1.0.0-RC03
pass: org.cloudfoundry.distzip@1.0.0-RC03
skip: org.cloudfoundry.procfile@1.0.0-RC03
skip: org.cloudfoundry.azureapplicationinsights@1.0.0-RC03
skip: org.cloudfoundry.debug@1.0.0-RC03
skip: org.cloudfoundry.googlestackdriver@1.0.0-RC03
skip: org.cloudfoundry.jdbc@1.0.0-RC03
skip: org.cloudfoundry.jmx@1.0.0-RC03
pass: org.cloudfoundry.springautoreconfiguration@1.0.0-RC03
Resolving plan... (try #1)
Success! (7)
Restoring cached layer 'org.cloudfoundry.openjdk:openjdk-jdk'
Restoring cached layer 'org.cloudfoundry.openjdk:90c33cf3f2ed0bd773f648815de7347e69cfbb3416ef3bf41616ab1c4aa0f5a8'
Restoring cached layer 'org.cloudfoundry.buildsystem:build-system-cache'
Restoring cached layer 'org.cloudfoundry.jvmapplication:executable-jar'
Restoring cached layer 'org.cloudfoundry.springboot:spring-boot'
Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe'
Using cached layer 'org.cloudfoundry.openjdk:90c33cf3f2ed0bd773f648815de7347e69cfbb3416ef3bf41616ab1c4aa0f5a8'
Using cached layer 'org.cloudfoundry.openjdk:openjdk-jdk'
Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
Using cached layer 'org.cloudfoundry.buildsystem:build-system-cache'
Using cached launch layer 'org.cloudfoundry.jvmapplication:executable-jar'
Rewriting metadata for layer 'org.cloudfoundry.jvmapplication:executable-jar'
Using cached launch layer 'org.cloudfoundry.springboot:spring-boot'
Rewriting metadata for layer 'org.cloudfoundry.springboot:spring-boot'
Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'

Cloud Foundry OpenJDK Buildpack 1.0.0-RC03
  OpenJDK JDK 11.0.4: Reusing cached layer
  OpenJDK JRE 11.0.4: Reusing cached layer

Cloud Foundry Build System Buildpack 1.0.0-RC03
    Using wrapper
    Linking Cache to /home/cnb/.m2
  Compiled Application: Contributing to layer
    Executing /workspace/mvnw -Dmaven.test.skip=true package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------< com.example:pbs-demo >------------------------
[INFO] Building pbs-demo 0.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ pbs-demo ---

...

18. This build won't take as long as the first one: this time we don't have to pull down the Maven dependencies, and layers that have not changed since the first OCI-compliant image are not recreated, which is something Cloud Native Buildpacks does for us nicely. Once complete, you now have two builds as follows

$ kubectl get cnbbuilds
NAME                           IMAGE                                                                                                              SUCCEEDED
pbs-demo-image-build-1-pvh6k   index.docker.io/pasapples/pbs-demo-image@sha256:a2d4082004d686bb2c76222a631b8a9b3866bef54c1fae03261986a528b556fe   True
pbs-demo-image-build-2-stl8w   index.docker.io/pasapples/pbs-demo-image@sha256:a22c64754cb7addc3f7e9a9335b094adf466b5f8035227691e81403d0c9c177f   True

19. Now let's run this locally, given I have Docker Desktop running. First we pull down the created image, which in this case is the latest build (build 2)



$ docker pull pasapples/pbs-demo-image
Using default tag: latest
latest: Pulling from pasapples/pbs-demo-image
35c102085707: Already exists
251f5509d51d: Already exists
8e829fe70a46: Already exists
6001e1789921: Already exists
76a30c9e6d47: Pull complete
8538f1fe6188: Pull complete
2a899c7e684d: Pull complete
0ea0c38329cb: Pull complete
bb281735f842: Pull complete
664d87aab7ff: Pull complete
f4b03070a779: Pull complete
682af613b7ca: Pull complete
b893e5904080: Pull complete
Digest: sha256:a22c64754cb7addc3f7e9a9335b094adf466b5f8035227691e81403d0c9c177f
Status: Downloaded newer image for pasapples/pbs-demo-image:latest

20. Now let's run it

$ docker run -p 8080:8080 pasapples/pbs-demo-image

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.6.RELEASE)

2019-09-11 04:40:41.747  WARN 1 --- [           main] pertySourceApplicationContextInitializer : Skipping 'cloud' property source addition because not in a cloud
2019-09-11 04:40:41.751  WARN 1 --- [           main] nfigurationApplicationContextInitializer : Skipping reconfiguration because not in a cloud
2019-09-11 04:40:41.760  INFO 1 --- [           main] com.example.pbsdemo.PbsDemoApplication   : Starting PbsDemoApplication on 5975633400c4 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)

...

2019-09-11 04:40:50.255  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-09-11 04:40:50.259  INFO 1 --- [           main] com.example.pbsdemo.PbsDemoApplication   : Started PbsDemoApplication in 8.93 seconds (JVM running for 9.509)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.323  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=1, name=pas, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.326  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=2, name=lucia, status=active)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.329  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=3, name=lucas, status=inactive)
Hibernate: insert into customer (id, name, status) values (null, ?, ?)
2019-09-11 04:40:50.331  INFO 1 --- [           main] com.example.pbsdemo.LoadDatabase         : Preloading Customer(id=4, name=siena, status=inactive)

21. Invoke it through a browser as follows

http://localhost:8080/swagger-ui.html


22. Finally let's actually run this application on our k8s cluster itself. So start by creating a basic YAML file for deployment as follows

run-pbs-image-k8s-yaml.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: pbs-demo-image
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: pbs-demo-image
    spec:
      containers:
        - name: pbs-demo-image
          image: pasapples/pbs-demo-image
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: pbs-demo-image-service
  labels:
    name: pbs-demo-image-service
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: pbs-demo-image
  type: LoadBalancer

23. Apply your config

$ kubectl create -f run-pbs-image-k8s-yaml.yaml
deployment.extensions/pbs-demo-image created
service/pbs-demo-image-service created

24. Check we have running pods and LB service created

$ kubectl get all
NAME                                         READY   STATUS      RESTARTS   AGE
pod/pbs-demo-image-build-1-pvh6k-build-pod   0/1     Completed   0          39m
pod/pbs-demo-image-build-2-stl8w-build-pod   0/1     Completed   0          19m
pod/pbs-demo-image-f5c9d989-l2hg5            1/1     Running     0          48s
pod/pbs-demo-image-f5c9d989-pfxzs            1/1     Running     0          48s

NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes               ClusterIP      10.101.0.1              443/TCP        86m
service/pbs-demo-image-service   LoadBalancer   10.101.15.197        80:30769/TCP   49s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pbs-demo-image   2/2     2            2           49s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/pbs-demo-image-f5c9d989   2         2         2       50s


More Information

Introducing kpack, a Kubernetes Native Container Build Service
https://content.pivotal.io/blog/introducing-kpack-a-kubernetes-native-container-build-service

Cloud Native Buildpacks
https://buildpacks.io/
Categories: Fusion Middleware

Oracle Database 19c Automatic Indexing: Default Index Column Order Part II (Future Legend)

Richard Foote - Tue, 2019-09-10 20:59
In Part I, we explored some options that Oracle might adopt when ordering the columns within an Automatic Index by default, in the absence of other factors where there is only the one SQL statement to be concerned with. A point worth making is that if all columns of an index are specified within SQL […]
Categories: DBA Blogs

Jami (Gnu Ring) review

RDBMS Insight - Tue, 2019-09-10 15:35

An unavoidable fact of database support life is webconferences with clients or users. Most of the time, we’re more interested in what’s going on onscreen than in each others’ faces. But every now and then we need to have a face-to-face. Skype is popular, but I recently had the chance to try out a FOSS alternative with better security: Jami.

Jami (formerly Gnu Ring) is a FOSS alternative to Skype that advertises a great featureset and some terrific privacy features. I suggested to a small group that we try it out for an upcoming conference call.

Just going by its specs, Jami (https://jami.net/) looks amazing. It’s free, open-source software that’s available on all the major platforms, including all the major Linux distros. It boasts the following advantages over Skype and many other Skype alternatives:

  • Distributed: Uniquely, there aren’t any central servers. Jami uses distributed hash table technology to distribute directory functions, authentication, and encryption across all devices connected to it.
  • Secure: All communications are end-to-end encrypted.
  • FOSS: Jami’s licensed under a GPLv3+ license, is a GNU package and a project of the Free Software Foundation.
  • Ad-free: If you’re not paying for commercial software, then you are the product. Not so with Jami, which is non-commercial and ad-free. Jami is developed and maintained by Savoir Faire Linux, a Canadian open-source consulting company.

And its listed features include pretty much everything you’d use Skype for: text messaging, voice calls, video calls, file and photo sharing, even video conference calls.

I wanted to use it for a video conference call, and my group was willing to give it a try. I had high hopes for this FOSS Skype alternative.

Installation

Jami is available for: Windows, Linux, OS X, iOS, Android, and Android TV. (Not all clients support all features; there’s a chart in the wiki.) I tried the OS X and iOS variants.

First, I installed Jami on OS X and set it up. The setup was straightforward, although I had to restart Jami after setting up my account, in order for it to find that account.

Adding contacts

One particularly cool feature of Jami is that your contact profile is stored locally, not centrally. Your profile’s unique identifier is a cumbersomely long 40-digit hexadecimal string, such as “7a639b090e1ab9b9b54df02af076a23807da7299” (not an actual Jami account afaik). According to the documentation, you can also register a username for your account, such as “natalkaroshak”.

Contacts are listed as hex strings. Unfortunately, I wasn't able to actually find any of my group using their registered usernames, nor were they able to find me under my username. We had to send each other 40-digit hex strings, and search for the exact hex strings in Jami, in order to find each other.

The only way to add a contact, once you’ve located them, is to interact with them, eg. by sending a text or making a call. This was mildly annoying when trying to set up my contact list a day ahead of the conference call.

Once I’d added the contacts, some of them showed up in my contact list with their profile names… and some of them didn’t, leaving me guessing which hex string corresponded to which member of my group.

Sending messages, texts, emojis

Sending and receiving Skype-style messages and emojis worked very well in Jami. Group chat isn’t available.

Making and taking calls

The documented process for a conference call in Jami is pretty simple: call one person, then add the other participants to the ongoing call.

Only the Linux and Windows versions currently support making conference calls. Another member of our group tried to make the conference call. As soon as I answered his incoming call, my Jami client crashed. So I wasn’t able to actually receive a call using Jami for OS X.

The caller and one participant were able to hear each other briefly, before the caller’s Jami crashed as well.

Linking another device to the same account

I then tried installing Jami on my iPhone. Again, the installation went smoothly, and this let me try another very cool feature of Jami.

In Jami, your account information is all stored in a folder on your device. There’s no central storage. Password creation is optional, because you don’t log in to any server when you join Jami. If you do create a password, you can (1) register a username with the account and (2) use the same account on another device.

The process of linking my iPhone’s Jami to the same account I used with my OSX Jami was very smooth. In the OSX install, I generated an alphanumeric PIN, entered the PIN into my device, and entered the account password. I may have mis-entered the first alphanumeric PIN, because it worked on the second try.

Unfortunately, my contacts from the OSX install didn’t appear in the iOS install, even though they were linked to the same account. I had to re-enter the 40-digit hex strings and send a message to each conference call participant.

Making calls on iOS

The iOS client doesn’t support group calling, but I tried video calling one person. We successfully connected. However, that’s where the success ended. I could see the person I called, but was unable to hear her. And she couldn’t see OR hear me. After a few minutes, the video of the other party froze up too.

Conclusion

Jami looked very promising, but didn’t actually work.

All of the non-call stuff worked: installation, account creation, adding contacts (though having to use the 40-digit hex codes is a big drawback), linking my account to another device.

But no one in my group was able to successfully make a video call that lasted longer than a few seconds. The best result was that two people could hear each other for a couple of seconds.

Jami currently has 4.5/5 stars on alternativeto.net. I have to speculate that most of the reviews are from Linux users, and that the technology is more mature on Linux. For OSX and iOS, Jami’s not a usable alternative to Skype yet.

Big thanks to my writing group for gamely trying Jami with me!

Categories: DBA Blogs

Oracle Cloud: Sign up failed... [3] & solved

Dietrich Schroff - Tue, 2019-09-10 13:44
Finally (see my attempts here and here) I was able to sign up to Oracle Cloud.
What did the trick?

I got help from Oracle support:
So I used my Gmail address and this worked:

and then:

Let's see how this cloud will work compared to Azure and AWS

Red Hat Forum Zürich, 2019, some impressions

Yann Neuhaus - Tue, 2019-09-10 07:34

Currently the Red Hat Forum 2019 in Zürich is ongoing and people just finished lunch before the more technical sessions are starting.

As expected a lot is around OpenShift 4 and automation with Ansible. As dbi is a Red Hat advanced business partner we took the opportunity to be present with a booth for getting in touch with our existing customers and to meet new people:

All the partners got their logo on a huge wall at the entrance to the event:

As the event is getting more and more popular, Red Hat moved to a bigger venue, the Stage One in Zürich Oerlikon, so all of the 850 participants found their space.

There is even space for some fun stuff:

Important as well: the catering was excellent:

The merger with IBM was an important topic and Red Hat again stated several times: Red Hat will stay Red Hat. Let’s see what happens here, not all people trust this statement. All in all it is a great atmosphere here in Oerlikon, great people to discuss with, interesting topics, great organization and a lot of “hybrid cloud”. Technology is moving fast and Red Hat is trying to stay at the front. From a partner perspective the Forum is a great chance to meet the right people within Red Hat, no matter what topic you want to discuss: Technology, marketing, training, whatever. I am pretty sure we will attend the next forum as well.

Cet article Red Hat Forum Zürich, 2019, some impressions est apparu en premier sur Blog dbi services.

[Video] 7 Things Every Oracle Apps DBA or Architect Must know for Cloud

Online Apps DBA - Tue, 2019-09-10 06:39

7 Things Every Oracle Apps DBA or Architect Must know in order to Build Manage & Migrate EBS (R12) on Oracle’s Gen 2 Cloud that’s Oracle Cloud Infrastructure (OCI) These 7 things include: ✔ Deployment Options On Oracle Cloud ✔ The architecture of EBS (R12) on OCI ✔ Cloud Tools i.e. EBS Cloud Manager, Cloud […]

The post [Video] 7 Things Every Oracle Apps DBA or Architect Must know for Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introducing Accelerated Database Recovery with SQL Server 2019

Yann Neuhaus - Tue, 2019-09-10 06:01

SQL Server 2019 RC1 was released a few weeks ago and it is time to start blogging about my favorite core engine features that will ship with the next version of SQL Server. Things should not be completely different with the RTM, so let's introduce accelerated database recovery (aka ADR), which is mainly designed to solve an annoying issue that most SQL Server DBAs have probably faced at least once: long-running transactions that impact the overall recovery time. As a reminder, with current versions of SQL Server, database recovery time is tied to the largest transaction at the moment of the crash. This is even more true in highly critical environments, where it may have a huge impact on service or application availability, and ADR is another feature that may certainly help.

Image from Microsoft documentation

To allow a very fast rollback and recovery process, the SQL Server team completely redesigned the database engine recovery process, and the interesting point is that they introduced row-versioning to achieve it. Row-versioning, however, has existed since SQL Server 2005 through the RCSI and SI isolation levels, and in my opinion it is good news that such capabilities are finally being extended to address long recovery times.

Anyway, I performed some testing to get an idea of what could be the benefit of ADR and the impact of the workload as well. Firstly, I performed a recovery test without ADR and after initiating a long running transaction, I simply crashed my SQL Server instance. I used an AdventureWorks database with the dbo.bigTransactionHistory table which is big enough (I think) to get a relevant result.

ADR is enabled per database, meaning that row-versioning is also managed locally per database. This allows better workload isolation compared to using the global tempdb version store as in previous SQL Server versions.

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET
    ACCELERATED_DATABASE_RECOVERY = OFF; 

ALTER DATABASE AdventureWorks_dbi SET
	COMPATIBILITY_LEVEL = 150;
GO

 

The dbo.bigTransactionHistory table has only one clustered primary key …

EXEC sp_helpindex 'dbo.bigTransactionHistory';
GO

 

… with 158’272’243 rows and about 2GB of data

EXEC sp_spaceused 'dbo.bigTransactionHistory';
GO

 

I simulated a long running transaction with the following update query that touches every row of the dbo.bigTransactionHistory table to get a relevant impact on the recovery process duration time.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

The related transaction wrote a lot of records into the transaction log, as shown below:

SELECT 
	DB_NAME(database_id) AS [db_name],
	total_log_size_in_bytes / 1024 / 1024 AS size_MB,
	used_log_space_in_percent AS [used_%]
FROM sys.dm_db_log_space_usage;
GO

 

The sys.dm_tran_* and sys.dm_exec_* DMVs may be helpful to dig into the transaction detail including the transaction start time and log used in the transaction log:

SELECT 
   GETDATE() AS [Current Time],
   [des].[login_name] AS [Login Name],
   DB_NAME ([dtdt].database_id) AS [Database Name],
   [dtdt].[database_transaction_begin_time] AS [Transaction Begin Time],
   [dtdt].[database_transaction_log_bytes_used] / 1024 / 1024 AS [Log Used MB],
   [dtdt].[database_transaction_log_bytes_reserved] / 1024 / 1024 AS [Log Reserved MB],
   SUBSTRING([dest].text, [der].statement_start_offset/2 + 1,(CASE WHEN [der].statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max),[dest].text)) * 2 ELSE [der].statement_end_offset END - [der].statement_start_offset)/2) as [Query Text]
FROM 
   sys.dm_tran_database_transactions [dtdt]
   INNER JOIN sys.dm_tran_session_transactions [dtst] ON  [dtst].[transaction_id] = [dtdt].[transaction_id]
   INNER JOIN sys.dm_exec_sessions [des] ON  [des].[session_id] = [dtst].[session_id]
   INNER JOIN sys.dm_exec_connections [dec] ON   [dec].[session_id] = [dtst].[session_id]
   LEFT OUTER JOIN sys.dm_exec_requests [der] ON [der].[session_id] = [dtst].[session_id]
   OUTER APPLY sys.dm_exec_sql_text ([der].[sql_handle]) AS [dest]
GO

 

Restarting my SQL Server instance kicked in the AdventureWorks_dbi database recovery process. It took about 6 minutes in my case:

EXEC sp_readerrorlog 0, 1, N'AdventureWorks_dbi'

 

Digging further into the SQL Server error log, I noticed that phase 2 (redo) and phase 3 (undo) of the recovery process took most of the time (as expected).

However, if I performed the same test with ADR enabled for the AdventureWorks_dbi database …

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET
    ACCELERATED_DATABASE_RECOVERY = ON;

 

… and I dig again into the SQL Server error log:

Well, the output above is quite different, but clear and unequivocal: there is a tremendous improvement in recovery time here. The SQL Server error log indicates the redo phase took 0ms and the undo phase 119ms. I also tested different variations in terms of long transactions and the amount of log generated in the transaction log (4.5GB, 9.1GB and 21GB), without and with ADR. With the latter, database recovery remained fast irrespective of the transaction log size, as shown below:

But there is no free lunch when enabling ADR, because it is a row-versioning based process which may have an impact on the workload. I was curious to compare the performance of my update queries between scenarios including no row-versioning (default), row-versioning with RCSI only, ADR only and finally both RCSI and ADR enabled. I performed all my tests on a quad-core virtual machine (Intel® Core™ i7-6600U CPU @ 2.6GHz) with 8GB of RAM. SQL Server memory is capped at 6GB. The underlying storage for the SQL Server data files is a Samsung 850 EVO 1TB SSD.

Here is the first test I performed. It is the same update as before, touching every row of the dbo.bigTransactionHistory table:

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

And here is the result for the different scenarios:

Please don't focus too much on the values here, because they will depend on your context, but the result answers the following question: does enabling ADR have an impact on the workload and, if so, is it of the same order of magnitude as RCSI / SI? The results are self-explanatory.

Then I decided to continue my tests by increasing the impact of the long running transaction with additional updates on the same data in order to stress a little bit the version store.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

Here are the new results:

This time ADR seems to have a bigger impact than RCSI in my case. Regardless of the exact values of this test, the key point is that we have to be aware that enabling ADR will have an impact on the workload.

After performing this bunch of tests, it's time to get a big picture of the ADR design, with several components per database including a persistent version store (PVS), a logical revert, a sLog and a cleaner process. In this blog post I would like to focus on the PVS component, which acts as the persistent version store for the database concerned. In other words, with ADR, tempdb is no longer used to store row versions. The interesting point is that, according to my tests, RCSI / SI row-versioning will continue to be handled through the PVS if ADR is enabled.

A new column named is_accelerated_database_recovery_on has been added to the sys.databases system view. In my case both RCSI and ADR are enabled in the AdventureWorks_dbi database.

SELECT 
	name AS [database_name],
	is_read_committed_snapshot_on,
	is_accelerated_database_recovery_on
FROM sys.databases
WHERE database_id = DB_ID()

 

The sys.dm_tran_version_store_space_usage DMV displays the total space in tempdb used by the version store for each database whereas the new sys.dm_tran_persistent_version_store_stats DMV provides information related to the new PVS created with the ADR activation.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

SELECT 
	DB_NAME(database_id) AS [db_name],
	oldest_active_transaction_id,
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;
GO

SELECT 
	database_id,
	reserved_page_count / 128 AS reserved_MB
FROM sys.dm_tran_version_store_space_usage;
GO

 

After running my update query, I noticed that the PVS in the AdventureWorks_dbi database was used rather than the version store in tempdb.

So, getting rid of the version store in tempdb seems to be a good idea, and probably more scalable per database, but according to my tests (and without drawing any conclusions yet) it may lead to performance considerations … let's see what happens in the future …

In addition, from a storage perspective, because SQL Server no longer uses tempdb as the version store, my curiosity led me to see what happens behind the scenes and how the PVS interacts with the data pages where row-versioning comes into play. Let's do some experiments:

Let's create the dbo.bigTransactionHistory_row_version table from the dbo.bigTransactionHistory table with less data:

USE AdventureWorks_dbi;
GO

DROP TABLE IF EXISTS [dbo].[bigTransactionHistory_row_version];

SELECT TOP 1 *
INTO [dbo].[bigTransactionHistory_row_version]
FROM [dbo].[bigTransactionHistory]

 

Now, let's have a look at the data page that belongs to my dbo.bigTransactionHistory_row_version table, with page ID 499960 in my case:

DBCC TRACEON (3604, -1);
DBCC PAGE (AdventureWorks_dbi, 1, 499960, 3);

 

Versioning info exists in the header, but obviously the version pointer is set to Null because there is no additional row version to maintain in this case; I just inserted one row.

Let’s update the only row that exists in the table as follows:

BEGIN TRAN;
UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1

 

The version pointer has been updated (although I'm not sure the information is consistent here, or at least the values displayed look weird). Another interesting point is that there is more information than the initial 14 bytes we would expect for keeping track of the pointers: there are also an extra 21 bytes at the end of the row, as shown above. On the other hand, the sys.dm_db_index_physical_stats() DMF has been updated to reflect the PVS information with new inrow_*, total_inrow_* and offrow_* columns, and may help to understand some of the PVS internals.

SELECT 
	index_id,
	index_level,
	page_count,
	record_count,
	version_record_count,
	inrow_version_record_count,
	inrow_diff_version_record_count,
	total_inrow_version_payload_size_in_bytes,
	offrow_regular_version_record_count,
	offrow_long_term_version_record_count
FROM sys.dm_db_index_physical_stats(
	DB_ID(), OBJECT_ID('dbo.bigTransactionHistory_row_version'), 
	NULL, 
	NULL, 
	'DETAILED'
)

 

Indeed, referring to the above output and correlating it with the results I found inside the data page, I would assume the extra 21 bytes stored in the row reflect a diff-style version of the previous row (something I still need to confirm; focus on the inrow_diff_version_record_count and total_inrow_version_payload_size_in_bytes columns).

Furthermore, if I perform the update operation on the same data again, the storage strategy seems to switch to an off-row mode, if I refer again to the sys.dm_db_index_physical_stats() DMF output:

Let’s go back to the DBCC PAGE output to confirm this assumption:

Indeed, the extra payload has disappeared, and only the 14-byte pointer remains, which has been updated accordingly.

Finally, if I perform multiple updates of the same row, SQL Server should keep the off-row storage and should create inside it a chain of version pointers and their corresponding values.

BEGIN TRAN;

UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1
GO 100000

 

My assumption is verified by taking a look at the previous DMVs. The persistent version store size has increased from ~16MB to ~32MB and we still have 1 version record in off-row mode meaning there is still one version pointer that references the off-row mode structure for my record.

Finally, let’s introduce the cleaner component. Like the tempdb version store, cleanup of old row versions is achieved by an asynchronous process that cleans page versions that are not needed. It wakes up periodically, but we can force it by executing the sp_persistent_version_cleanup stored procedure.

Referring to one of my first tests, the PVS size is about 8GB.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
SELECT 
	DB_NAME(database_id) AS [db_name],
	oldest_active_transaction_id,
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;
GO
-- Run the PVS cleanup process
EXEC sp_persistent_version_cleanup

 

According to my tests, the cleanup task took around 6 minutes for the entire PVS, but it was not a blocking process at all, as you may see below. As a final test, I executed in parallel an update query that touched every row of the same table, and it was not blocked by the cleaner, as shown below:

This is a process I need to investigate further. Other posts are coming as well, covering the other ADR components.

See you!
Cet article Introducing Accelerated Database Recovery with SQL Server 2019 est apparu en premier sur Blog dbi services.

Using Web Worker for Long Tasks in Oracle JET

Andrejus Baranovski - Tue, 2019-09-10 02:42
A JavaScript app runs in a single thread. This means that if there is a long-running, resource-intensive operation, the thread will be blocked and the page will stay frozen until the operation completes. Obviously, this is not user-friendly and such behavior should be avoided. We can use Web Workers: through Web Workers we can run long-running operations in separate threads, without blocking the main thread. Code running in a Web Worker doesn't have access to the UI DOM, which means the logic coded in a Web Worker should not be directly related to the UI.

The sample app contains commented-out code in dashboard.js. This code blocks the main thread for 10 seconds; if you uncomment it, you will see that the app becomes frozen for 10 seconds:
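
The commented-out code itself is not reproduced here, but a sketch of that kind of blocking logic could be as simple as a synchronous busy loop (illustrative only, not the exact code from the sample app):

// a synchronous loop like this keeps the single UI thread busy
const start = Date.now();
while (Date.now() - start < 10000) {
  // spin for 10 seconds - no rendering or event handling can happen meanwhile
}
console.log('Main thread was blocked for 10 seconds');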


The Web Worker is registered in dashboard.js. The Web Worker itself is a separate JS file, which is used for the Worker object. The API allows sending and receiving messages; this way we can communicate to and from the Web Worker (start a new task and receive a message when the task is completed):
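
The screenshot is not reproduced here, but a minimal sketch of the main-thread side could look like this (the worker file name and message shape are assumptions, not the sample app's exact code):

// dashboard.js (sketch) - offload the long task to a separate thread
const worker = new Worker('js/long-task-worker.js');

// receive the result once the worker has finished
worker.onmessage = (event) => {
  console.log('Result from worker:', event.data);
};

// start the long-running task without freezing the UI
worker.postMessage({ action: 'start', durationMs: 10000 });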


Web Worker code: onmessage is invoked when a message arrives from the main thread, and postMessage sends a message back to the main thread:
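
Again as a sketch rather than the sample's exact source, the worker file could be as simple as:

// long-task-worker.js (sketch) - runs in its own thread, no DOM access here
onmessage = (event) => {
  const { durationMs } = event.data;
  const start = Date.now();
  while (Date.now() - start < durationMs) {
    // simulate a CPU-intensive task
  }
  // send the result back to the main thread
  postMessage('Long task finished after ' + durationMs + ' ms');
};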


The sample app is available on GitHub repo.

Azure Advisor And Fixing Errors

Jeff Moss - Mon, 2019-09-09 17:23

Azure can be configured to send you advisor reports detailing things that are not quite right in your environment. The advisor is not necessarily always right but it’s sensible to review the outputs periodically, even if they relate to non production environments.

A few issues popped up on an advisor report on my recent travels and although you can just use the entries on the report on the portal to target the offending resources, I thought it might be helpful to write some Powershell to identify the offending resources as an alternative.

Secure transfer to storage accounts should be enabled

This error shows up similar to this on the report:

Fairly obvious what this means really – the storage account has a setting which is currently set to allow insecure transfers (via http rather than https) – an example looks like this under the Configuration blade of the Storage Account:

The advisor highlights this and the solution is to just set the toggle to Enabled for “Secure transfer required” and press save.

To identify all the storage accounts which have this issue use the following:

Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False}

This gives output similar to the following (redacted):

PS Azure:> Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False}

StorageAccountName      ResourceGroupName      Location    SkuName      Kind    AccessTier CreationTime         ProvisioningState EnableHttps TrafficOnly
------------------ ----------------- -------- ------- ---- ---------- ------------ ----------------- -----------
XXXXXXXXXXXXXXXXXX AAAAAAAAAAAAAAA northeurope Standard_LRS Storage 9/6/19 9:51:53 PM Succeeded False
YYYYYYYYYYYYYYYYYY AAAAAAAAAAAAAAA northeurope Standard_LRS Storage 6/26/19 3:29:38 PM Succeeded False
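
If you would rather fix these from PowerShell than flick the portal toggle on each account, a sketch along these lines should work (try it against a test subscription first; it simply pipes the detection query into Set-AzStorageAccount):

Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False} | foreach {
    # enforce HTTPS-only transfer on each offending storage account
    Set-AzStorageAccount -ResourceGroupName $_.ResourceGroupName -Name $_.StorageAccountName -EnableHttpsTrafficOnly $true
}
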
An Azure Active Directory administrator should be provisioned for SQL servers

This one appears like the following in the advisor output:

As a long term Oracle guy I’m no SQL Server expert so I can’t quite see why this is an issue if you have a SQL Server authenticated administrative user active – no doubt a friendly SQL DBA will chime in and explain.

To fix this navigate to the SQL Server in question and the Active Directory admin blade and select “Set admin”, choose a user from the Active Directory and press Save.

To find all SQL Servers affected by this I wrote the following Powershell:

$sqlservers = Get-AzResource -ResourceType Microsoft.Sql/servers
foreach ($sqlserver in $sqlservers)
{
    $sqlserver.Name
    $ADAdmin = Get-AzureRmSqlServerActiveDirectoryAdministrator -ServerName $sqlserver.Name -ResourceGroupName $sqlserver.ResourceGroupName
    "AD Administrator:" + $ADAdmin.DisplayName + "/" + $ADAdmin.ObjectId
}

This returns output similar to the following (redacted):

mysqlserver1
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c
mysqlserver2
AD Administrator:/
mysqlserver3
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c
mysqlserver4
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c

From the above you can see that mysqlserver2 has no AD Administrator and will be showing up on the advisor report.
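
The admin can also be set from PowerShell rather than the portal; a sketch, using the server and user from the output above purely as examples:

# assign an Azure AD user (or group) as the SQL Server AD administrator
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "myResourceGroup" `
    -ServerName "mysqlserver2" -DisplayName "clark.kent@dailyplanet.com"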

Enable virtual machine backup to protect your data from corruption and accidental deletion

This one appears like the following in the advisor output:


To fix this, navigate to the Backup blade on the VM Resource in question and set the appropriate settings to enable the backup.

To identify VMs where this issue is evident use the following Powershell:

$VMs = Get-AzVM
foreach ($VM in $VMs)
{
    "VM: " + $VM.Name
    $RecoveryServicesVaults = Get-AzRecoveryServicesVault
    foreach ($RecoveryServicesVault in $RecoveryServicesVaults)
    {
        Get-AzRecoveryServicesBackupContainer -VaultID $RecoveryServicesVault.ID -ContainerType "AzureVM" -Status "Registered" -FriendlyName $VM.Name
    }
}

This gives results similar to the following, allowing you to see VMs where no backup is enabled:

VM: myVM1

FriendlyName                   ResourceGroupName    Status               ContainerType
------------                   -----------------    ------               -------------
myVM1                          myResourceGroup      Registered           AzureVM
myVM1                          myResourceGroup      Registered           AzureVM
myVM1                          myResourceGroup      Registered           AzureVM
VM: myVM2
VM: myVM3
myVM3                          myResourceGroup      Registered           AzureVM
myVM3                          myResourceGroup      Registered           AzureVM
myVM3                          myResourceGroup      Registered           AzureVM

What you can see from the above is that myVM1 and myVM3 both have registered backups, unlike myVM2, which has none; therefore myVM2 needs backup enabling.
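
Enabling the backup can also be scripted; a sketch, assuming an existing Recovery Services vault and one of its policies (the vault and policy names below are placeholders):

# pick the vault and a backup policy, then protect the unprotected VM
$vault  = Get-AzRecoveryServicesVault -Name "myRecoveryVault" -ResourceGroupName "myResourceGroup"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName "myResourceGroup" -Name "myVM2" `
    -Policy $policy -VaultId $vault.ID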
