Feed aggregator

Basic Replication -- 5 : REFRESH_METHOD : FAST or FORCE ?

Hemant K Chitale - Wed, 2019-09-25 10:14
In the previous blog post, I had a remark "We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post."

This is in the context of whether FORCE or FAST shows up as the REFRESH_METHOD.  A FORCE refresh attempts a FAST refresh and, if it cannot do so (e.g. if the Materialized View Log is not accessible), falls back to a COMPLETE Refresh from all the rows of the Source Table.

Besides an MV Log being a requirement, there are restrictions on which types of Materialized Views can do a FAST Refresh.

SQL> create materialized view mv_fast_not_possible
2 refresh fast on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /
as select id, data_element_2, sysdate
*
ERROR at line 3:
ORA-12015: cannot create a fast refresh materialized view from a complex query


SQL> !oerr ora 12015
12015, 00000, "cannot create a fast refresh materialized view from a complex query"
// *Cause: Neither ROWIDs and nor primary key constraints are supported for
// complex queries.
// *Action: Reissue the command with the REFRESH FORCE or REFRESH COMPLETE
// option or create a simple materialized view.

SQL>


Thus, a "complex" query -- here one that adds a SYSDATE column -- cannot use a FAST Refresh.
(For all the restrictions, see Paragraph "5.3.8.4 General Restrictions on Fast Refresh" in the 19c documentation. )
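As a sketch only (not something run above): if the SYSDATE expression is removed, the query becomes "simple" again and, assuming the Materialized View Log on SOURCE_TABLE from the earlier posts is still in place, a FAST-refreshable MV can be created. The MV name below is hypothetical.

-- sketch: requires an existing MV Log on SOURCE_TABLE
create materialized view mv_fast_possible
refresh fast on demand
as select id, data_element_2
from source_table
/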

SQL> create materialized view mv_fast_not_possible
2 refresh force on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /

Materialized view created.

SQL> select refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'MV_FAST_NOT_POSSIBLE'
4 /

REFRESH_M REFRESH_ LAST_REF
--------- -------- --------
DEMAND FORCE COMPLETE

SQL>
SQL> insert into source_table
2 values (2000,'TwoThousand','NewTwoTh',sysdate);

1 row created.

SQL> select * from source_table order by date_col ;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
101 First One 18-AUG-19
103 Third Three 18-AUG-19
104 Fourth Updated 09-SEP-19
5 Fifth Five 16-SEP-19
6 Sixth TwoHundred 19-SEP-19
7 Seventh ThreeHundred 19-SEP-19
2000 TwoThousand NewTwoTh 25-SEP-19

7 rows selected.

SQL>
SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_FAST_NOT_POSSIBLE');

PL/SQL procedure successfully completed.

SQL>
SQL> select mview_name, refresh_mode,refresh_method,last_refresh_type, last_refresh_date
2 from user_mviews
3 order by last_refresh_date
4 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF LAST_REFR
--------------------- --------- -------- -------- ---------
MV_OF_SOURCE DEMAND FAST FAST 25-SEP-19
MV_2 DEMAND FORCE FAST 25-SEP-19
MV_FAST_NOT_POSSIBLE DEMAND FORCE COMPLETE 25-SEP-19

SQL>


MV_FAST_NOT_POSSIBLE will always undergo a COMPLETE Refresh using REFRESH_METHOD='FORCE'.

MV_2 has REFRESH_METHOD='FORCE' because it was created with "refresh on demand", with the "fast" keyword missing.  Nevertheless, it is a "simple" Materialized View, so it does a FAST Refresh.

MV_OF_SOURCE was created with "refresh fast on demand", so it is already configured as REFRESH_METHOD='FAST'.
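As a side note, a sketch (not run above): if you wanted MV_2 to be configured as FAST rather than FORCE, the refresh method can be changed after creation, provided the MV remains fast-refreshable.

alter materialized view mv_2 refresh fast on demand;

select mview_name, refresh_method
from user_mviews
where mview_name = 'MV_2';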



Categories: DBA Blogs

Managing Licenses with AWS License Manager

Yann Neuhaus - Wed, 2019-09-25 06:51
Introduction

Computing environments have become more and more agile over the last few years. Companies need to provide solutions that help people quickly set up new resources, start and stop them, scale them according to need and, finally, remove them. In such environments, it can be tricky to track license compliance when resources change on an hourly basis.

Having a look at AWS services, I saw that AWS provides a license management tool named “AWS License Manager”. I took a few minutes to:

  • Understand which resources this service is able to monitor
  • Understand how it works
  • Test it with an on-premises Linux server running an Oracle database
License Manager Service

The first step in order to use License Manager is to select it in the list of AWS Services.

AWS Services List

After clicking on AWS License Manager, the AWS License Manager window appears.

"<yoastmark

Now, we simply have to create a license configuration with the required license terms according to the software vendor. You can set up different kinds of metrics, such as:

  • vCPUs
  • Cores
  • Sockets
  • Instances

License Manager also provides the possibility to enforce the license limit, meaning that it prevents further license usage once the available licenses are exhausted.

AWS Create License configuration

In the context of on-premises license monitoring, it is important to note that the Sockets and Cores license types are not accepted. Therefore, in this example I used vCPUs.

"<yoastmark

Error while trying to associate Socket License to an on-premise host

AWS System Manager

Once the license configuration is created, we now have to use another AWS service, AWS Systems Manager. This service allows you to view and control your infrastructure on AWS: not only your Amazon EC2 instances, but also on-premises servers and virtual machines (including VMs in other cloud environments). Some Systems Manager capabilities are not free; however, in the context of this example everything is free.

 

AWS Systems Manager Agent (SSM Agent)

In order to benefit from AWS Systems Manager we need to install the AWS Systems Manager Agent (SSM Agent) on our on-premises host. SSM Agent is Amazon software that can be installed and configured on an Amazon EC2 instance, an on-premises server or a virtual machine (VM), and provides a solution to update, manage and configure these resources. SSM Agent is installed by default on instances created from Windows Server 2016 and Windows Server 2019, Amazon Linux and Ubuntu Server AMIs. However, if you are running an on-premises server you need to install it yourself. The process is really straightforward, as presented below.

[root@vmrefdba01 ~]# mkdir /tmp/ssm
[root@vmrefdba01 ~]# curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm -o /tmp/ssm/amazon-ssm-agent.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18.9M  100 18.9M    0     0  3325k      0  0:00:05  0:00:05 --:--:-- 4368k
[root@vmrefdba01 ~]# sudo yum install -y /tmp/ssm/amazon-ssm-agent.rpm
Loaded plugins: refresh-packagekit, ulninfo
Setting up Install Process
Examining /tmp/ssm/amazon-ssm-agent.rpm: amazon-ssm-agent-2.3.707.0-1.x86_64
Marking /tmp/ssm/amazon-ssm-agent.rpm to be installed
public_ol6_UEK_latest                                    | 2.5 kB     00:00
public_ol6_UEK_latest/primary_db                         |  64 MB     00:07
public_ol6_latest                                        | 2.7 kB     00:00
public_ol6_latest/primary_db                             |  18 MB     00:07
Resolving Dependencies
--> Running transaction check
---> Package amazon-ssm-agent.x86_64 0:2.3.707.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================
 Package                            Arch                     Version                        Repository                           Size
======================================================================================================================================
Installing:
 amazon-ssm-agent                   x86_64                   2.3.707.0-1                    /amazon-ssm-agent                    61 M

Transaction Summary
======================================================================================================================================
Install       1 Package(s)

Total size: 61 M
Installed size: 61 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : amazon-ssm-agent-2.3.707.0-1.x86_64                                                                                1/1
amazon-ssm-agent start/running, process 3896
  Verifying  : amazon-ssm-agent-2.3.707.0-1.x86_64                                                                                1/1

Installed:
  amazon-ssm-agent.x86_64 0:2.3.707.0-1

Complete!

 

Creating an activation

Once the agent is installed, we have to create a new “Activation” in AWS Systems Manager by clicking on “Create activation“. At the end of the creation you will get an Activation Code and an Activation ID (in the green field below). Keep this information for the agent configuration.

AWS Systems Manager Activation

Agent Configuration

In order to register your on-premises instance with AWS, you simply have to execute the following command with the Activation Code and Activation ID provided by AWS Systems Manager:

sudo amazon-ssm-agent -register -code "<Activation Code>" -id "<Activation ID>" -region "us-east-2"

2019-09-19 13:53:05 INFO Successfully registered the instance with AWS SSM using Managed instance-id: mi-0756a9f0dc25be3cd

 

Once registered, the Managed Instance should appear in AWS Systems Manager as presented below.

AWS Systems Manager – Managed Instances

The platform type is detected, as well as the kernel version, IP address and computer name. AWS Systems Manager also provides a package inventory and many other kinds of inventory, such as network inventory, file inventory, and so on.

AWS Systems Manager – Application Inventory

 

Association between License Configuration and Resource ID

We now have to make the link between the Managed Instance (resource) and the license configuration. The goal, of course, is to define which license configuration will be applied to which resource. To proceed, we go into AWS License Manager and select “Search Inventory” in the menu. Then we simply select the resource and click on “Associate license Configuration”.

"<yoastmark

AWS License Manager – Search Inventory

The following window will appear, allowing you to define which license configuration matches which resource:

"<yoastmark

Having a look at the AWS License Manager dashboard, you can see that 1 out of 1 license is consumed, since I dedicated one vCPU to my virtual machine and associated a 1-vCPU license with this instance.

"<yoastmark

AWS License Manager – Dashboard

Core Messages
  • AWS License Manager offers more functionalities for EC2 instances than for on-premises servers.
  • AWS License Manager offers functionalities to monitor Sockets, vCPUs, Cores and Instances.
  • AWS License Manager definitely helps to manage licenses but doesn’t fit all requirements and license models.
  • AWS Systems Manager is a powerful tool providing several functionalities.
Strengths
  • AWS License Manager is free.
  • AWS License Manager offers possibilities to monitor on-premises resources.
  • AWS License Manager provides a solution to prevent an instance from running if it would break license compliance.
  • AWS License Manager and AWS Systems Manager are straightforward to install and configure.
  • AWS License Manager and AWS Systems Manager offer good documentation.
  • AWS Systems Manager offers many free functionalities (Patch Manager, Session Manager, Insights Dashboard, and so on).
  • AWS Systems Manager offers many functionalities and is the basis of several other AWS tools, such as AWS Config, which allows you to monitor instance compliance.
Weaknesses
  • AWS License Manager is not able, by default, to monitor option usage such as that of an Oracle database (Partitioning, Active Data Guard, and so on).
  • AWS License Manager is not able to calculate Oracle Processors, i.e. it does not take core factors into consideration.
  • AWS Systems Manager is not able to monitor sockets or cores on on-premises resources, only vCPUs.

This article Managing Licenses with AWS License Manager appeared first on Blog dbi services.

OOW19 Review: Oracle Analytics Deep Dive

Rittman Mead Consulting - Wed, 2019-09-25 05:41

In my previous blog I outlined the global news regarding Oracle, like the Always Free Tier, the new datacenter plan and the set of new tools for Data Science. Today's blog is dedicated to all the news announced regarding Oracle Analytics in any of its versions: Cloud, Server or Applications.

Oracle Analytics Server

OAS is the long-awaited replacement of OBIEE 12c on-premises and promises functional parity with OAC. The current official ETA is Fiscal Year 2020, and it will be available to customers as a free upgrade. With OAS, all customers still on-premises will experience the following benefits:

  • Almost 1-to-1 feature parity with OAC
  • Complete compatibility with OAC
  • Simplified cloud migration and better support for hybrid deployments

A related announcement for on-premises customers regards licensing: there is only a single license to purchase for OAS, which includes all features within it; no separate option for Mobile or Self-Service Data Visualization is needed!

Oracle Analytics for Applications

This represents the new incarnation of BIApps, completely redesigned specifically for Fusion Apps. Like its predecessor, OAX (this is the acronym) is a packaged, ready-to-use solution with pre-built ETL and analytics content like the RPD, dashboards, analyses and KPIs. Under the covers it uses Oracle Autonomous Data Warehouse and Oracle Data Integrator Cloud. OAX is also extensible, by bringing additional datasets into ADW and extending the semantic model and catalog.

Oracle Analytics Cloud

Several enhancements were announced, especially during Gabby Rubin's (VP of Oracle Analytics Product Management) Strategy & Roadmap Session. New features will be available in most of the areas of the tool, including the core of the centralized reporting: the RPD.

Data Preparation

New options will be available in the Data Preparation/Enrichment phase such as:

  • Custom Enrichments based on a pre-existing set of values, e.g. enriching PRODUCT_ID with fields coming from a standard Product dimension. This is an interesting idea to enable standardization of dimensions across reporting without forcing people to write SQL or to know where the standard information comes from.
  • Force Enrichments/Masking: as Administrators, we could enforce some transformations like the credit card obfuscation of fields that may contain sensitive data.
Natural Language Generation

The Natural Language view is already present in the current version of OAC; there is a plan to enhance this visualization by adding more options in the settings panel for grouping and trend analysis.


Spatial Analytics in OAC

A few weeks ago I wrote about Oracle Spatial Studio, a tool designed to provide advanced visual Spatial Analytics. This tool will remain and progress over time; OAC will not cover all the specific use cases of Spatial Studio. However, OAC will enhance its spatial capabilities, for example:

  • Providing accurate information about row geo-location: e.g. how many rows were correctly located, how many had errors, and menus to fix the value-to-location association.
  • Providing spatial functions in the front-end: an end user will easily be able to calculate the distance between points on a map by writing a simple Logical SQL statement. This option will probably appear in the RPD first (check the Twitter thread below).

Yeah! Now you can natively perform #spatialanalytics on #oac! #geospatial functions are available in the RPD and can be exposed to #OracleDataVisualization! pic.twitter.com/g5q3Lf9CiG

— Francesco Tisiot (@FTisiot) September 16, 2019

As you can see, calculating the distance will be just a matter of having the correct dataset and writing a GeometryDistance function.

Connectivity and Security

One of OAC's missions is to become the analytics platform on top of any type of data source. The plan for the future is to expand the list of connectors and the security/configuration options, like SSL or Kerberos. There is also a roadmap to extend the Data Gateway capabilities to query non-Oracle databases.

Modeling capabilities

In OAC we were used to either the classic RPD approach or the self-service Data Sets. The future holds news for both approaches:

  • A new cloud web-based modeler with the objective of functional parity with the Admin Tool, so capable of handling more complex designs than the current light data modeler. I believe this will also be an effort to adapt the RPD development process to current standards of concurrent development, versioning and storage formats.
  • A new Self-Service Data Model solution to build light self-service models, allowing end users to evolve datasets into proper models that are sharable and optimized for reporting.

I like the idea of allowing both a top-down (centralized) and a bottom-up (self-service) approach to data modeling. This gives clients flexibility in the analytical approach while still allowing centralized rules (e.g. a unique source of truth) to be enforced when needed.

Unified User Experience and Layout Customizations

As of now, the old "Answers and Dashboards" and the new "Data Visualization Projects" are almost completely separate products, each with its own home page and layout. In the next releases we'll see the two worlds start to combine, with a single home page and a similar look and feel.

In other news, highly requested by end users is the possibility of customizing almost any aspect of the layout: from font type and size to the colors of any object visible in a project.

Machine Learning Integration

As discussed in the previous OOW review post, in the future OAC will be able to use models built in other tools like Oracle Machine Learning in the Autonomous Data Warehouse or Oracle Data Science. This provides an end-to-end Data Science story, from Data Analyst to Data Scientist, all with a simple, secure, highly configurable and performant toolset.


As you can see, there is a lot of news coming in various aspects of the tool: on-premises functional parity, a new packaged solution for Fusion Apps, and a lot of features enhancing OAC functionality and customization options.

What do you think? Is this the right direction? Do you feel there is something missing?

Categories: BI & Warehousing

MobaXterm 12.2

Tim Hall - Wed, 2019-09-25 02:06

In another “the rest of the world ceases to exist in the lead up to OpenWorld” moment, I missed the release of MobaXterm 12.2.

The downloads and changelog are in the usual places.

For Windows users who, like me, spend most of the day connecting to machines via SSH, this is the best tool I’ve found.

Cheers

Tim…

MobaXterm 12.2 was first posted on September 25, 2019 at 8:06 am.

500 blog posts (in 12 years)

Dietrich Schroff - Tue, 2019-09-24 15:30
Last week I did my 500th posting - not really "run like hell", but I think better than nothing.
Here are some statistics:




I think I will continue for another 500. The exciting question is whether I will find enough topics that I am interested in...

The Product Concept: What does it mean in 2019?

VitalSoftTech - Tue, 2019-09-24 10:41

If you’ve ever taken a marketing course, you’ve likely heard about the “product concept.” However, there may be many who are unaware of what it is and what it entails. The advertising lexicon is getting broader by the day, and with each passing minute, more words for different concepts are being introduced. Which brings us […]

The post The Product Concept: What does it mean in 2019? appeared first on VitalSoftTech.

Categories: DBA Blogs

PFCLScan - Version 3.0

Pete Finnigan - Tue, 2019-09-24 09:26
We are very excited to announce that we are currently working on version 3.0 of PFCLScan, our flagship database security scanner for the Oracle database. We will be ready for sale in September and this development is going really....[Read More]

Posted by Pete On 11/07/19 At 03:33 PM

Categories: Security Blogs

Basic VMware Harbor Registry usage for Pivotal Container Service (PKS)

Pas Apicella - Tue, 2019-09-24 01:25
VMware Harbor Registry is an enterprise-class registry server that stores and distributes container images. Harbor allows you to store and manage images for use with Enterprise Pivotal Container Service (Enterprise PKS).

In this simple example we show what you need at a minimum to get an image on Harbor deployed onto your PKS cluster. First we need the following to be able to run this basic demo

Required Steps

1. PKS installed with Harbor Registry tile added as shown below


2. VMware Harbor Registry integrated with Enterprise PKS as per the link below. The most important step is the following: "Import the CA Certificate Used to Sign the Harbor Certificate and Key to BOSH". You must complete that prior to creating a PKS cluster

https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html

3. A PKS cluster created. You must have completed step #2 before you create the cluster

https://docs.pivotal.io/pks/1-4/create-cluster.html

$ pks cluster oranges

Name:                     oranges
Plan Name:                small
UUID:                     21998d0d-b9f8-437c-850c-6ee0ed33d781
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   oranges.run.yyyy.bbbb.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             4
Kubernetes Master IP(s):  1.1.1.1
Network Profile Name:

4. Docker Desktop Installed on your local machine



Steps

1. First let's log into Harbor and create a new project. Make sure you record the username and password you have assigned for the project. In this example I make the project public.




Details

  • Project Name: cto_apj
  • Username: pas
  • Password: ****

2. Next, in order to be able to connect to our registry from our local laptop, we will need to install its self-signed certificate.

The VMware Harbor registry isn't running on a public domain and is using a self-signed certificate, so we need to trust this self-signed certificate on our Mac OS X client, given I am using Docker for Mac. This link shows how to add self-signed registry certificates to Linux and Mac clients:

https://blog.container-solutions.com/adding-self-signed-registry-certs-docker-mac

You can download the self-signed cert from Pivotal Ops Manager as shown below.


With all that in place, the following command is all I need to run:

$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt

3. Now let's log in to the registry using a command as follows:

$ docker login harbor.haas-bbb.yyyy.pivotal.io -u pas
Password:
Login Succeeded

4. Now I have an image sitting on Docker Hub, so let's tag it and then push it to our VMware Harbor registry as shown below:

 $ docker tag pasapples/customer-api:latest harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest
 $ docker push harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest


5. Now let's create a new secret for accessing the container registry:

$ kubectl create secret docker-registry regcred --docker-server=harbor.haas-bbb.yyyy.io --docker-username=pas --docker-password=**** --docker-email=papicella@pivotal.io

6. Now let's deploy this image to our PKS cluster using a deployment YAML file as follows

customer-api.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: customer-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: customer-api
    spec:
      containers:
        - name: customer-api
          image: harbor.haas-206.pez.pivotal.io/cto_apj/customer-api:latest
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: customer-api-service
  labels:
    name: customer-api-service
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: customer-api
  type: LoadBalancer

7. Deploy as follows

$ kubectl create -f customer-api.yaml

8. You should see the POD and SERVICE running as follows

$ kubectl get pods | grep customer-api
customer-api-7b8fcd5778-czh46                    1/1     Running   0          58s

$ kubectl get svc | grep customer-api
customer-api-service            LoadBalancer   10.100.2.2    10.195.1.1.80.5   80:31156/TCP 


More Information

PKS Release Notes 1.4
https://docs.pivotal.io/pks/1-4/release-notes.html

VMware Harbor Registry
https://docs.vmware.com/en/VMware-Enterprise-PKS/1.4/vmware-harbor-registry/GUID-index.html

Categories: Fusion Middleware

Impact of Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-09-23 17:29

I am working on upgrading an Oracle database from 11.2.0.4 to 19c and migrating it from HP Unix to Linux. This 15-terabyte database is too large to copy from the old to the new system during our normal weekend downtime window. It also has a ton of weekend batch updates that overlap the normal weekend change window so it would be best for our business processing if the cut over from the old to the new system was as fast as possible.

I want to use GoldenGate to minimize the downtime for the cutover using an approach similar to what is described in this Oracle document:

Zero Downtime Database Upgrade Using Oracle GoldenGate

You start GoldenGate collecting changes on the current production system and then take your time copying the 15 TB of data from the old to new system. Once you are done with the initial load you apply the changes that happened in the meanwhile. Finally, you cut over to the new system. You could even switch the direction of the replication to push changes on the new production system back to the old system to allow for a mid-week back out several days after your upgrade. Pretty cool. A teammate of mine successfully used this approach on an important database some years back.

But the database that I am working on now, unlike the one that my colleague worked on, has a lot of tables set to nologging. Under the right conditions inserts into tables set to nologging are not written to the redo logs and will be missed by GoldenGate. This Oracle article recommends setting your database to FORCE LOGGING so GoldenGate will not miss any updates:

In order to ensure that the required redo information is contained in the Oracle redo logs for segments being replicated, it is important to override any NOLOGGING operations which would prevent the required redo information from being generated. If you are replicating the entire database, enable database force logging mode.

Oracle GoldenGate Performance Best Practices

We could also switch all our application tables and partitions in the source system to logging but we have so many I think we would set the whole database to force logging.
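As a quick sketch of how to gauge the scope (MYOWNER below stands in for the application schema owner), you can check the database-level setting and count how many tables and partitions are currently set to NOLOGGING:

select force_logging from v$database;

select logging, count(*) cnt
from dba_tables
where owner = 'MYOWNER'
group by logging;

select logging, count(*) cnt
from dba_tab_partitions
where table_owner = 'MYOWNER'
group by logging;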

But the big question which I touched on in my previous post is whether force logging will slow down our weekend batch processing so much that we miss our deadlines for weekend processing to complete and affect our business in a negative way. The more I investigate it the more convinced I am that force logging will have minimal impact on our weekend jobs. This is an unexpected and surprising result. I really thought that our batch processing relied heavily on nologging writes to get the performance they need. It makes me wonder why we are using nologging in the first place. It would be a lot better for backup and recovery to have all our inserts logged to the redo logs. Here is a nice Oracle Support document that lays out the pros and cons of using nologging:

The Gains and Pains of Nologging Operations (Doc ID 290161.1)

I have an entry in my notes for this upgrade project dated 8/26/19 in which I wrote “Surely force logging will bog the … DB down”. Now I think the opposite. So, what changed my mind? It started with the graph from the previous post:

Graph From Previous Post with Little Direct Writes I/O

I was really surprised that the purple line was so low compared to the other two. But I felt like I needed to dig deeper to make sure that I was not just misunderstanding these metrics. The last thing I want to do is make some production change that slows down our weekend processes that already struggle to meet their deadlines. I was not sure what other metrics to look at since I could not find something that directly measures non-logged writes. But then I got the idea of using ASH data.

In my “Fast way to copy data into a table” post I said that to copy data quickly between two Oracle tables “you want everything done nologging, in parallel, and using direct path”. I may have known then and forgotten but working on this now has me thinking about the relationship between these three ways of speeding up inserts into tables. I think there are the following two dependencies:

  • Nologging requires direct path
  • Parallel requires direct path

Oracle document “Check For Logging / Nologging On DB Object(s) (Doc ID 269274.1)” says the first one. In the second case if you have a target table set to parallel degree > 1 and you enable parallel DML you get direct path writes when you insert into the target table.
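To make the two dependencies concrete, here is a sketch (with hypothetical table names) of the two ways these inserts end up doing direct path writes:

-- 1) NOLOGGING only takes effect with a direct path (APPEND) insert
insert /*+ append */ into target_table
select * from staging_table;
commit;

-- 2) Parallel DML into a table with parallel degree > 1 also uses direct path
alter session enable parallel dml;
insert /*+ parallel(target_table 4) */ into target_table
select * from staging_table;
commit;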

From all this I got the idea to look for direct path write waits in the ASH views. I could use ASH to identify insert statements that are using direct path. Then I could check that the target tables or partitions are set to nologging. Then I would know they are doing non-logged writes even if I did not have a metric that said so directly.

directwritesql.sql looked at all the SQL statements that had direct write waits over the entire 6 weeks of our AWR history. The output looks like this:

     select
  2  sql_id,count(*) active
  3  from DBA_HIST_ACTIVE_SESS_HISTORY a
  4  where
  5  event = 'direct path write'
  6  group by sql_id
  7  order by active desc;

SQL_ID            ACTIVE
------------- ----------
2pfzwmtj41guu         99
g11qm73a4w37k         88
6q4kuj30agxak         58
fjxzfp4yagm0w         53
bvtzn333rp97k         39
6as226jb93ggd         38
0nx4fsb5gcyzb         36
6gtnb9t0dfj4w         31
3gatgc878pqxh         31
cq433j04qgb18         25

These numbers startled me because they were so low. Each entry in DBA_HIST_ACTIVE_SESS_HISTORY represents 10 seconds of activity. So over 6 weeks our top direct path write waiter waited 990 seconds. Given that we have batch processes running full out for a couple of days every weekend 990 seconds over 6 weekends is nothing.

I took the top SQL ids and dumped out the SQL text to see what tables they were inserting into. Then I queried the LOGGING column of dba_tables and dba_tab_partitions to see which insert was going into a table or partition set to nologging.
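Pulling the SQL text for a given SQL_ID straight out of the AWR repository is enough for that first step (a sketch using one of the ids above):

select sql_text
from dba_hist_sqltext
where sql_id = 'cq433j04qgb18';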

select logging,table_name
from dba_tables
where owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
order by table_name;


select logging,table_name,count(*) cnt
from dba_tab_partitions
where table_owner='MYOWNER' and
table_name in
(
... tables inserted into ...
)
group by logging,table_name
order by table_name,cnt desc;

This simple check for LOGGING or NOLOGGING status eliminated several of the top direct path write waiters. This process reduced the list of SQL ids down to three top suspects:

SQL_ID            ACTIVE
------------- ----------
cq433j04qgb18         25
71sr61v1rmmqc         17
0u0drxbt5qtqk         11

These are all inserts that are not logged. Notice that the most active one has 250 seconds of direct path write waits over the past 6 weeks. Surely enabling force logging could not cause more than about that much additional run time over the same length of time.

I got the idea of seeing what percentage of the total ASH time was direct path write waits for each of these SQL statements. In every case it was small:
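The query behind these percentages was essentially the following (a sketch of my approach rather than the exact script):

select count(*) total_sample_count,
       sum(case when event = 'direct path write' then 1 else 0 end) dw_sample_count,
       100 * sum(case when event = 'direct path write' then 1 else 0 end) / count(*) dw_sample_pct
from DBA_HIST_ACTIVE_SESS_HISTORY
where sql_id = 'cq433j04qgb18';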

cq433j04qgb18

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              2508              25    .996810207
 
71sr61v1rmmqc

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              1817              17    .935608145

0u0drxbt5qtqk

TOTAL_SAMPLE_COUNT DW_SAMPLE_COUNT DW_SAMPLE_PCT
------------------ --------------- -------------
              8691              11    .126567714

TOTAL_SAMPLE_COUNT was all the samples for that SQL_ID value for the past 6 weeks. DW_SAMPLE_COUNT is the same count of samples that are direct write waits that we already talked about. DW_SAMPLE_PCT is the percentage of the total samples that were direct write wait events. They were all around 1% or lower which means that write I/O time was only about 1% of the entire run time of these inserts. The rest was query processing best I can tell.

Also I used my sqlstat3 script to look at the average run time for these inserts:

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
71sr61v1rmmqc      3333358322 01-SEP-19 12.00.46.381 PM                1         2768879.28         892080    207085.624                  0                      0                  3.817             9028323            1045428               19337954
71sr61v1rmmqc      3333358322 08-SEP-19 10.00.43.551 AM                0         264428.594          98840     28257.339                  0                      0                  3.657              177736             143345                      0
71sr61v1rmmqc      3333358322 08-SEP-19 11.00.49.648 AM                1          2352509.9         767440    160933.191                  0                      0                      0             8729437             791837               19110340
71sr61v1rmmqc      3333358322 15-SEP-19 11.00.03.027 AM                1         3090070.21         904310    190593.062                  0                      0                  2.192             9095421             949579               19470026

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
0u0drxbt5qtqk       382840242 01-SEP-19 02.00.23.436 AM                1         29281391.6        3211050    16624311.7                  0                      0              99532.905            37076159           14440303               24479240
0u0drxbt5qtqk       382840242 08-SEP-19 02.00.11.424 AM                1         3871668.37         424670    2563007.61                  0                      0               1236.003             4622248            2457057                2468983
0u0drxbt5qtqk       382840242 15-SEP-19 03.00.12.349 AM                0         5161808.16         615520    3358994.55                  0                      0              20656.365             6251060            2801828                      0
0u0drxbt5qtqk       382840242 15-SEP-19 04.00.33.661 AM                1         2412910.02         240650    1741053.89                  0                      0                699.717             3050529            1542895                4638794

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
cq433j04qgb18      1267318024 02-SEP-19 10.00.57.871 PM                1          77132.892          51180     10719.692                  0                      0                  2.003              460346              47055                 772468
cq433j04qgb18      1267318024 03-SEP-19 10.00.55.601 PM                1         116064.154          68350      9808.483                  0                      0              15746.609              911571              20422                1256808
cq433j04qgb18      1267318024 04-SEP-19 10.00.31.071 PM                1         106594.074          64030      6328.462                  0                      0              15603.102              777779              14945                1561172
cq433j04qgb18      1267318024 05-SEP-19 10.00.13.265 PM                0          44435.247          31810      2760.438                  0                      0                365.132              139637               5111                 257770
cq433j04qgb18      1267318024 09-SEP-19 10.00.18.346 PM                1         791385.928         324050    171504.931                  0                      0               7484.358             6430665             600703               14262960
cq433j04qgb18      1267318024 10-SEP-19 10.00.29.224 PM                0         1685763.14         676210    304045.354                  0                      0                283.296            11884045             838290               16268667
cq433j04qgb18      1267318024 11-SEP-19 10.01.00.800 PM                0         369644.825         172120     42679.357                  0                      0                  3.929             2443772             151369                3901044
cq433j04qgb18      1267318024 12-SEP-19 10.00.28.499 PM                0          30381.614          25630      1191.884                  0                      0                 422.55               98580               3389                 184812
cq433j04qgb18      1267318024 13-SEP-19 10.00.07.502 PM                0         173286.567         109990     11461.865                  0                      0                 359.37             1475324              63073                2360818
cq433j04qgb18      1267318024 16-SEP-19 10.00.52.175 PM                1         190203.822          93680     47585.666                  0                      0                122.658             1221886             348327                2955258

These queries run at most a couple of hours. If direct path writes are 1% of their total run time, I estimated that force logging would add about 1% to the elapsed time or about 2 minutes per execution.

The final step was to try to run one of these top nologging I/O inserts in a test environment with and without force logging to see if the test matches the expected performance slowdown. I was not able to run 0u0drxbt5qtqk without setting up a more elaborate test with the development team. My test of cq433j04qgb18 ran considerably faster with force logging than without it so I think other factors were hiding whatever effect force logging had. But 71sr61v1rmmqc had some nice results that matched my estimates well. This is on a Delphix clone of production so the data was up to date with prod but the underlying I/O was slower.

71sr61v1rmmqc results running 5 times normal 5 times force logging

The individual run times are in seconds and the averages are listed in seconds and in minutes. I ran the insert 5 times with no force logging and 5 times with it alternating. I dropped the primary key and unique index of the target table to keep from getting constraint errors. I rolled back the insert each time. It averaged about 1.2 minutes more out of 40 minutes of run time which is about a 3% increase. My estimate from ASH was about 1% so this test matches that well.

The final test remains. In some upcoming production weekend, I will put in a change to flip the database to force logging and see how it goes. My tests were run on a test system with a different storage system and with no other activity. We might see different results on a heavily loaded system with a queue for the CPU. But, after all this analysis and testing I feel confident that we won’t be able to tell that force logging is enabled. Unfortunately, we sometimes have performance issues anyway due to plan changes or data volume so the force logging switch might get blamed. But I feel confident enough to push for the final test and I think we ultimately will pass that test and be able to use force logging to enable GoldenGate to support a short cut over time for our migration and upgrade project.
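For reference, the change we would put in during that production weekend is a one-liner, and just as easy to back out if needed (a sketch):

alter database force logging;

select force_logging from v$database;

-- back out if needed
alter database no force logging;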

Bobby

P.S. A good question came in as a comment about direct path write waits and asynchronous I/O. The system I am testing on does not support async I/O because it is HP Unix and a filesystem. This older blog post talks a bit about async and direct I/O on HP-UX:

https://www.bobbydurrettdba.com/2013/04/26/db_writer_processes-dbwr_io_slaves-with-no-asynch-io-on-hp-ux/

So, your mileage may vary (YMMV) if you do these same queries on a system with asynchronous writes. Linux filesystems support async writes and on HP-UX our RAC system on ASM supports it. It is one of the challenges of writing blog posts. Other people may be in different situations than I am.

Categories: DBA Blogs

Comprehensive TensorFlow.js Example

Andrejus Baranovski - Mon, 2019-09-23 14:45
I have implemented an app which includes TensorFlow.js API usage. First I will walk you through the app functionality and then will dive into implementation details. This app implements a business report execution time prediction use case (this time in JavaScript), which was explained in my previous post — Report Time Execution Prediction with Keras and TensorFlow.

Read more in my Towards Data Science post.


OOW 2019 Review: Free Tier, New Datacenters and a New End-to-End Path for Analytics and Data Science

Rittman Mead Consulting - Mon, 2019-09-23 09:41

In the Oracle world, last week was "the week", with Oracle OpenWorld 2019 happening in San Francisco. A week full of exciting news, some of it associated with words like "Free", never heard before in any Oracle-related topic. This blog post will go into detail on some of the news, with a special focus on the Analytics and Data Science topics.

Oracle Cloud Free Tier

Let's start with the big news: Oracle Cloud Free Tier! A set of services that can ALWAYS be used for free, which includes Oracle's best offerings in the database space, like ATP (Autonomous Transaction Processing) and ADW (Autonomous Data Warehouse), as well as Compute, Storage and additional services for networking, monitoring and notifications.


This is huge news in the Oracle ecosystem, since it enables everyone to start using the products without the need for a credit card! The Always Free tier can also be used in conjunction with the 30-day Free Trial (with an associated $300 in credits) to experience the full set of Oracle products without spending a single cent.


An additional interesting point (compared to Oracle's previous developer licensing models) is that there is nothing in the licensing terms blocking any customer from using the free tier for production usage! This means that, if the resources provided satisfy the business requirements, anyone could potentially run production applications directly on the free tier! And, in cases where an upscale is needed, Oracle will provide a very easy option to switch from a free instance to a paid one.

However, as one can expect, the free tier has limitations; for example, the databases will allow only 1 OCPU and 20GB of storage each. On top of the technical limitations, none of the products in the free tier come with support or SLAs. This means, for example, that in case of problems you won't be able to open a ticket with Oracle Support. Something definitely to ponder when implementing a production system.

OCI Gen2 and New Datacenters

During his Keynote, Larry Ellison also announced the plan to launch 20 new OCI Gen2 datacenters in the next year! An average of a new datacenter every 23 days!

Taken from Oracle documentation

This is very impressive and, as mentioned during the Keynote, will mean Oracle overtakes Amazon in the number of datacenters. A particular mention also needs to be given to OCI Gen2, the new generation of Oracle Cloud Infrastructure. The first generation of OCI's mission was pay-per-usage: offering a set of services available on demand and payable by the hour. OCI Gen2 adds the autonomous features to Gen1: services are now self-managed, self-patched, self-monitored and self-secured, with no downtime required. OCI Gen2 removes a huge set of activities from the hands of administrators, taking human error out of the equation.

Analytics & Data Science

I had a talk on how to Become a Data Scientist with Oracle Analytics Cloud. The topic of Analytics & Data Science was really what interested me most, and my expectation for exciting news was met.

A whole new set of products will be shipped soon, making the whole Data Science experience more continuous and pervasive across Oracle products. Let's have a look at the news; I'll try to add links to the relevant sessions I attended.

  • Oracle Machine Learning: ML will be more pervasive in the Oracle Database; the Oracle Machine Learning Notebooks will be capable of handling R and Python in addition to SQL. All of this will be available on the Cloud Databases, including ADW. A new Spark option is also coming, enabling Machine Learning on the BDA.
  • Oracle Data Science: This is a brand new product for Data Science collaboration, letting teams work together on projects, with sharing and versioning options available out of the box.
  • Oracle Data Catalog: Again a new product, aimed at creating inventories of a company's data assets and making them searchable and easily usable by business users or data scientists.
  • Oracle Analytics Cloud: A lot of new announcements for this product, which is mature and consolidated in the market, like Natural Language Generation or enhancements in Data Enrichment, which I'll address in a separate blog post.

An interesting feature is AutoML, available both in Oracle Machine Learning and Oracle Data Science, which removes some barriers to Data Science by automating most of the steps in Machine Learning model creation, such as model and feature selection and hyper-parameter tuning.

Taken from Oracle ML presentation

You might notice several enhancements in different products. However, the key indicator of Oracle's Data Science maturity is the fact that all of the products above can easily be combined! Oracle Data Science will use Oracle Machine Learning optimizations when running on supported platforms, on top of datasets easily discoverable by Oracle Data Catalog. Machine Learning models developed with OML or ODS can then be exposed and used by Oracle Analytics Cloud. This provides an end-to-end path from data scientist to data analyst and business user, all within the same toolset and on top of the best hardware (with support for GPUs coming)!


All in all, a great OOW full of exciting news: no more barriers to accessing Oracle Cloud with the free tier, 20 new datacenters coming in the next year, and a set of tools to perform Data Science, from the Analyst to the Data Scientist, in a collaborative and extremely performant way! If you want more news regarding Oracle Analytics Cloud, don't miss my next blog post (edit: available here)!

Categories: BI & Warehousing

Rolling Upgrade of a Galera Cluster with ClusterControl

Yann Neuhaus - Mon, 2019-09-23 03:29
Rolling Upgrade is easy

In this blog, I will show you how easy it is with ClusterControl to perform a Galera cluster "Rolling Upgrade" without any loss of service.
Let’s say we want to upgrade from Percona XtraDB Cluster version 5.6 to 5.7.
The same procedure can be used to upgrade MariaDB (from 10.1 to 10.2 or 10.3)

Prerequisites

First of all, make sure that all nodes of your Galera cluster are synchronized.
On the Dashboard, in the Overview tab, in the Galera Nodes window, the 3 figures in the “Last Committed” column must be identical.
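If you prefer to verify this from the command line rather than from the GUI, the same information is exposed by the wsrep status variables; as a sketch, to be run on each node:

SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';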

Then, from the GUI, disable the “cluster & node auto-recovery”, either by clicking on both icons until they turn red or by temporarily setting the 2 following parameters on the ClusterControl server in the
/etc/cmon.d/cmon_N.cnf file (N stands for the Cluster ID):
– enable_cluster_autorecovery=0
– enable_node_autorecovery=0
Don’t forget to restart the cmon service afterwards:
systemctl restart cmon
This is very important, even crucial; otherwise ClusterControl will try to restart the Galera node every time you stop it during the upgrade process.

Now we have to put the first Galera node in maintenance mode for one hour.


The Cluster status bar should now look as follows.

Cluster Upgrade

Log on to the first node using your favorite terminal emulator, “putty” or “MobaXterm”, open 2 sessions and stop the Percona service on the first node.
# service mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (19698)
# service mysql stop
Shutting down MySQL (Percona XtraDB Cluster).. SUCCESS!
# service mysql status
ERROR! MySQL (Percona XtraDB Cluster) is not running

Now remove the existing Percona XtraDB Cluster and Percona XtraBackup packages:
#[root@node1 yum.repos.d]# yum remove percona-xtrabackup* Percona-XtraDB-Cluster*
Loaded plugins: fastestmirror, ovl
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-galera-3.x86_64 0:3.34-1.el6 will be erased
---> Package Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.44-28.34.1.el6 will be erased
---> Package percona-xtrabackup.x86_64 0:2.3.10-1.el6 will be erased
--> Finished Dependency Resolution
Removed:
Percona-XtraDB-Cluster-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-client-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-galera-3.x86_64 0:3.34-1.el6
Percona-XtraDB-Cluster-server-56.x86_64 1:5.6.44-28.34.1.el6 Percona-XtraDB-Cluster-shared-56.x86_64 1:5.6.44-28.34.1.el6 percona-xtrabackup.x86_64 0:2.3.10-1.el6
Complete!

Install the new packages
#[root@node1 yum.repos.d]# yum install Percona-XtraDB-Cluster-57
Resolving Dependencies
--> Running transaction check
---> Package Percona-XtraDB-Cluster-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
--> Processing Dependency: Percona-XtraDB-Cluster-client-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: Percona-XtraDB-Cluster-server-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-57-5.7.26-31.37.1.el6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
---> Package Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
--> Processing Dependency: Percona-XtraDB-Cluster-shared-57 = 5.7.26-31.37.1.el6 for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: percona-xtrabackup-24 >= 2.4.12 for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Processing Dependency: qpress for package: Percona-XtraDB-Cluster-server-57-5.7.26-31.37.1.el6.x86_64
--> Running transaction check
---> Package Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.26-31.37.1.el6 will be installed
---> Package percona-xtrabackup-24.x86_64 0:2.4.15-1.el6 will be installed
---> Package qpress.x86_64 0:11-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
Installed:
Percona-XtraDB-Cluster-57.x86_64 0:5.7.26-31.37.1.el6
Dependency Installed:
Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.26-31.37.1.el6 Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.26-31.37.1.el6
Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.26-31.37.1.el6 percona-xtrabackup-24.x86_64 0:2.4.15-1.el6
qpress.x86_64 0:11-1.el6
Complete!

Start the node outside the cluster (in standalone mode) by setting the wsrep_provider variable to none.
$ ps -edf|grep -i mysql
$ mysqld --user=mysql --wsrep-provider='none'

Now run mysql_upgrade in the second session:
[root@node1 /]# mysql_upgrade -u root -p
Enter password:
Checking if update is needed.
Checking server version.
Running queries to upgrade MySQL server.
Checking system database.
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.user OK
Upgrading the sys schema.
Checking databases.
sys.sys_config OK
Upgrade process completed successfully.
Checking if update is needed

When the upgrade is over, stop the mysqld process.
You can either kill the mysqld process ID or use mysqladmin shutdown with the MySQL root user credentials.
$ mysqladmin shutdown -uroot -p
Now, in the first session, you can restart the upgraded node so that it rejoins the Galera cluster.
$ service mysql start
Starting Percona-Xtradb-server.
190612 13:04:33 mysqld_safe Logging to '/var/log/mysql/mysqld.log'.
190612 13:04:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
.. SUCCESS!

Post-exec tasks

From the GUI, disable the maintenance mode and check the new version by logging in to the instance.
[root@node1 /]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 64
Server version: 5.7.27-30-57 Percona XtraDB Cluster (GPL), Release rel30, Revision 64987d4, WSREP version 31.39, wsrep_31.39
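Before moving on to the next node, it is worth checking that the upgraded node has rejoined the cluster and is synced again; a sketch of the checks from the mysql client:

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_ready';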
Once the first node is upgraded, you can repeat exactly the same procedure for each of the other nodes in the cluster. At the end, ClusterControl should display the same version for all nodes.

Conclusion

A rolling upgrade of a Galera cluster with ClusterControl is really easy and fast, with little or no impact on the service.

This article Rolling Upgrade of a Galera Cluster with ClusterControl appeared first on Blog dbi services.

Oracle 19c : Point-In-Time Recovery in a PDB

Yann Neuhaus - Fri, 2019-09-20 11:51

Point-In-Time Recovery is also possible in a multitenant environment. As in a non-CDB, a recovery catalog can be used or not. In this blog we will see how to recover a dropped tablespace in a PDB. We will also see the importance of using a recovery catalog or not.
A PITR of a PDB does not affect the remaining PDBs. That means that while doing a PITR in one PDB, people can keep using the other PDBs. In this blog we are using an Oracle 19c database with local undo mode enabled.

SQL> 
  1  SELECT property_name, property_value
  2  FROM   database_properties
  3* WHERE  property_name = 'LOCAL_UNDO_ENABLED'

PROPERTY_NAME        PROPE
-------------------- -----
LOCAL_UNDO_ENABLED   TRUE
SQL>

SELECT con_id, tablespace_name FROM   cdb_tablespaces WHERE  tablespace_name LIKE 'UNDO%';

    CON_ID TABLESPACE_NAME
---------- ------------------------------
         3 UNDOTBS1
         4 UNDOTBS1
         1 UNDOTBS1

SQL>

We suppose that:
- We have a tablespace named MYTABPDB2
- We have a valid backup of the whole database
- A recovery catalog is not used

Now, connected to PDB2, let's drop the tablespace after creating a restore point.

SQL> show con_name;

CON_NAME
------------------------------
PDB2

SQL> create restore point myrestpoint;

Restore point created.

SQL>
SQL> drop tablespace mytabpdb2 including contents and datafiles;

Tablespace dropped.

SQL>
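Before going further, you can double-check that the restore point exists and note its SCN; a minimal query for that is:

SQL> select name, scn, time from v$restore_point;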

And now let’s perform a PITR to the restore point myrestpoint

1- Connect to the root container

[oracle@oraadserver ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Sep 20 13:07:07 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1546409981)

RMAN>

2- Close the PDB

RMAN> ALTER PLUGGABLE DATABASE PDB2 close;

using target database control file instead of recovery catalog
Statement processed

RMAN>

3- Do the PITR

RMAN> run
{
  SET TO RESTORE POINT myrestpoint;
   RESTORE PLUGGABLE DATABASE pdb2;
   RECOVER PLUGGABLE DATABASE pdb2;
}2> 3> 4> 5> 6>

executing command: SET until clause

Starting restore at 20-SEP-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=54 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T141945_gr9jzry9_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T141945_gr9jzry9_.bkp tag=TAG20190920T141945
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 20-SEP-19

Starting recover at 20-SEP-19
current log archived
using channel ORA_DISK_1


starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 20-SEP-19

RMAN>

4- Open the PDB in resetlogs mode

RMAN> alter pluggable DATABASE  pdb2 open resetlogs;

Statement processed

RMAN>

I did not get any error from RMAN, but when looking at the alert log file, I found the following errors:

PDB2(4):Pluggable database PDB2 dictionary check beginning
PDB2(4):Tablespace 'MYTABPDB2' #7 found in data dictionary,
PDB2(4):but not in the controlfile. Adding to controlfile.
PDB2(4):File #25 found in data dictionary but not in controlfile.
PDB2(4):Creating OFFLINE file 'MISSING00025' in the controlfile.
PDB2(4):Pluggable Database PDB2 Dictionary check complete
PDB2(4):Database Characterset for PDB2 is AL32UTF8

It seems there is some issue with the recovery of the MYTABPDB2 tablespace. Connected to PDB2, I can see:

SQL> select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MYTABPDB2';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
/u01/app/oracle/product/19.0.0/dbhome_3/dbs/MISSING00025
MYTABPDB2

The tablespace was not recovered as expected.
What happened? In fact this behavior is expected, according to Doc ID 2435452.1, where we can read:
"If the point in time recovery of the pluggable database is performed without the catalog, then it is expected to fail"

As we are not using a recovery catalog, backup information is stored in the control file, and it seems that the current control file is no longer aware of data file 25.
As specified in the note, we have to use a recovery catalog.
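To illustrate the discrepancy from the control file side, one can compare what the control file knows (V$DATAFILE) with what the PDB dictionary expects (DBA_DATA_FILES); a minimal check, assuming file 25 is the one reported in the alert log, could be:

SQL> select file#, name, status from v$datafile where file# = 25;
SQL> select file_id, file_name, online_status from dba_data_files where file_id = 25;

After the dictionary check, the control file only carries the placeholder entry MISSING00025 for this file, which is also what the dba_data_files query above returned.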

Now let's connect to a recovery catalog and perform the same PITR again.
After connecting to the catalog we take a full backup. Then we drop the tablespace and run the same recovery command again, this time while connected to the catalog, using a point in time just before the tablespace was dropped.
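If no restore point is available this time, you need a reliable timestamp taken just before the drop for the SET UNTIL TIME clause; a simple way to capture it (run right before the DROP TABLESPACE) is:

SQL> select to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') from dual;

The returned value can then be pasted into the to_date() expression of the RMAN run block below.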

[oracle@oraadserver trace]$ rman catalog rman/rman@rmancat

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Sep 20 15:28:29 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to recovery catalog database

RMAN> connect target /

connected to target database: ORCL (DBID=1546409981)

After closing PDB2, we run the following block:

RMAN> run
{
  SET UNTIL TIME "to_date('20-SEP-2019 15:27:00','DD-MON-YYYY HH24:MI:SS')";
   RESTORE PLUGGABLE DATABASE pdb2;
   RECOVER PLUGGABLE DATABASE pdb2;
}
2> 3> 4> 5> 6>
executing command: SET until clause

Starting restore at 20-SEP-19
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/ORCL/pdb2/system01.dbf
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/ORCL/pdb2/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00015 to /u01/app/oracle/oradata/ORCL/pdb2/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00016 to /u01/app/oracle/oradata/ORCL/pdb2/users01.dbf
channel ORA_DISK_1: restoring datafile 00026 to /u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T152554_gr9nws0x_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORCL/92359E387C754644E0531502A8C02C00/backupset/2019_09_20/o1_mf_nnndf_TAG20190920T152554_gr9nws0x_.bkp tag=TAG20190920T152554
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

datafile 26 switched to datafile copy
input datafile copy RECID=5 STAMP=1019489668 file name=/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
Finished restore at 20-SEP-19
starting full resync of recovery catalog
full resync complete

Starting recover at 20-SEP-19
using channel ORA_DISK_1


starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 20-SEP-19

RMAN>

We then open PDB2 in resetlogs mode and verify with SQL*Plus:

SQL> select FILE_NAME,TABLESPACE_NAME from dba_data_files where TABLESPACE_NAME='MYTABPDB2';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
/u01/app/oracle/oradata/ORCL/pdb2/mytabpdb201.dbf
MYTABPDB2


SQL>

And this time the PITR works fine. The tablespace was restored.

Conclusion

As seen in this blog, it is recommended to use a recovery catalog when performing PITR operations in a multitenant environment.

The article Oracle 19c : Point-In-Time Recovery in a PDB first appeared on Blog dbi services.

OpenWorld and Code One 2019 : The Journey Home

Tim Hall - Fri, 2019-09-20 09:06

I got up at a reasonable time and got caught up with blog posts, then it was time to check out and get the BART to the airport. Bag drop was empty, because the rest of the planet was waiting at security. After what felt like an eternity I was through security and sat down and waited for my plane…

We boarded the flight from San Francisco to Amsterdam on time and didn’t have a significant wait for the departure slot, so the captain said we would arrive early. No luck with a spare seat on this flight. The guy next to me was about my size, but wasn’t making an effort to stay in his space. There was some serious man-spreading going on. I ended up spending most of the flight leaning into the aisle and pulling my arm across my body, so my left elbow feels knackered now. Doing that for 11 hours is not fun. I managed to watch the following films.

  • The Shape of Water – I love this film. I’ve seen it a load of times.
  • Rocketman – I wasn’t feeling this at the start. I’m not big on musicals, and I didn’t like the stuff when he was a kid. Once Taron Egerton started playing him it was cool. I kind-of forgot he wasn’t Elton John. If you can get past the start, it’s worth a go!
  • The Accountant – I liked it. Ben Affleck doing deadpan and expressionless is the perfect role for him.
  • John Wick: Chapter 3 – Parabellum – I got up to the final sequence, so I’m not sure how it ends. Pretty much the same as the previous films, which I liked. Just crazy fight scenes with loads of guns.

There was one bit of the flight that was odd. The in-flight entertainment died, then we hit some turbulence. Cue me deciding it was linked and we were all going to die… Pretty soon the turbulence stopped, then after about 10 minutes the screens rebooted…

I had quite a long wait at Schiphol. About 3 hours. That was pretty dull, but what are you going to do?

The flight from Amsterdam to Birmingham was delayed by a few minutes, then there was the issue of people trying to board with 15 pieces of hand luggage and a donkey. I had my bag on my feet. Luckily it was only an hour flight.

I was originally planning to get the train home, but I was so tired I got a taxi. The driver was a nice guy and we had a chat about his kids and future plans, which is a lot nicer than listening to me drone on…

I’m now home and started doing the washing…

I’ll do a wrap-up post tomorrow, with some thoughts about the event…

Cheers

Tim…

OpenWorld and Code One 2019 : The Journey Home was first posted on September 20, 2019 at 3:06 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

The Top 10 Biggest Sports Brands (2019)

VitalSoftTech - Thu, 2019-09-19 10:03

The sportswear market is one of the most profitable and lucrative niches in the entire world. Some of the biggest sports brands have successfully capitalized on the demand for sports apparel and equipment, turning them into internationally acknowledged top sports brands. It is also incredibly competitive, and newer and more innovative designs are manufactured each […]

The post The Top 10 Biggest Sports Brands (2019) appeared first on VitalSoftTech.

Categories: DBA Blogs

OpenShift 4.1 Partner Introduction Day

Yann Neuhaus - Wed, 2019-09-18 12:57

Today was the OpenShift Partner Introduction Day by Red Hat. The event took place in Lausanne.
There were people with different backgrounds.

After presenting the Red Hat company, the speaker explained what OpenShift is and why people should adopt it.
What we will retain is that OpenShift is trusted enterprise Kubernetes.

With OpenShift we get, for example:
-Automated, full-stack installation from the container host to application services
-Seamless Kubernetes deployment to any cloud or on-premises environment
-Autoscaling of cloud resources
-One-click updates for platform, services, and applications

The new features in version 4.1 were presented. The speaker also showed the Red Hat OpenShift business value.

The notions of Ansible Playbooks, Operators, CRI-O, Helm, etc. were also explained.
The speaker also gave a short demonstration of creating a small project with OpenShift.

This was a very interesting event. It was general and allowed people to understand where OpenShift fits in a containerization architecture. But we have to keep in mind that there are a lot of components to understand when using OpenShift.

The article OpenShift 4.1 Partner Introduction Day first appeared on Blog dbi services.

VirtualBox 6.0.12

Tim Hall - Tue, 2019-09-17 11:35

I know I’ve been distracted with the lead up to OpenWorld and Code One 2019, but how did I miss this release? VirtualBox 6.0.12 arrived two weeks ago.

The downloads and changelog are in the usual places.

Being a reckless type, I downloaded it and installed it on my Windows 10 laptop this morning. I’ve got a live demo in 2 hours!

The install was fine and my Vagrant VMs start with no problems. More extensive testing and installations on Oracle Linux and macOS hosts will happen when I get home, but so far so good!

Cheers

Tim…

VirtualBox 6.0.12 was first posted on September 17, 2019 at 5:35 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Basic Replication -- 4 : Data Dictionary Queries

Hemant K Chitale - Tue, 2019-09-17 08:58
Now that we have two Materialized Views against a Source table, how can we identify the relationship via the data dictionary ?

This is the query to the data dictionary in the database where the Source Table exists :

SQL> l
1 select v.owner MV_Owner, v.name MV_Name, v.snapshot_site, v.refresh_method,
2 l.log_table MV_Log_Name, l.master MV_Source,
3 to_char(l.current_snapshots,'DD-MON-RR HH24:MI:SS') Last_Refresh_Date
4 from dba_registered_snapshots v, dba_snapshot_logs l
5 where v.snapshot_id = l.snapshot_id
6* and l.log_owner = 'HEMANT'
SQL> /

MV_OWNER MV_NAME SNAPSHOT_SITE REFRESH_MET MV_LOG_NAME MV_SOURCE LAST_REFRESH_DATE
-------- ---------------- ------------------ ----------- ------------------ --------------------- ------------------
HEMANT MV_OF_SOURCE ORCLPDB1 PRIMARY KEY MLOG$_SOURCE_TABLE SOURCE_TABLE 16-SEP-19 22:41:04
HEMANT MV_2 ORCLPDB1 PRIMARY KEY MLOG$_SOURCE_TABLE SOURCE_TABLE 16-SEP-19 22:44:37

SQL>


I have run the query on the DBA_REGISTERED_SNAPSHOTS and DBA_SNAPSHOT_LOGS because the join on SNAPSHOT_ID is not available between DBA_REGISTERED_MVIEWS and DBA_MVIEW_LOGS.  Similarly, the CURRENT_SNAPSHOTS column is also not available in DBA_MVIEW_LOGS.  These two columns are important when you have *multiple* MViews against the same Source Table.

Note that "Snapshot_Site" is required because the Materialized View can be in a different database.  In this example, the MViews are in the same database as the Source Table.

The target database containing the MViews will not have the Source Table "registered" into a data dictionary view.  The Source Table will be apparent from the QUERY column of DBA_MVIEWS (also, if the Source Table is in a different database, look at the MASTER_LINK column to identify the Database Link that connects to the source database).


UPDATE :  In case you are wondering what query you'd write against the database containing the Materialized View(s), you can simply query DBA_MVIEWS.

SQL> l
1 select mview_name, query, master_link, refresh_mode, refresh_method,
2 last_refresh_type, to_char(last_refresh_date,'DD-MON-RR HH24:MI:SS') Last_Refresh_Date
3 from dba_mviews
4 where owner = 'HEMANT'
5* order by 1 desc
SQL> /

MVIEW_NAME
------------
QUERY
--------------------------------------------------------------------------------
MASTER_LINK REFRESH_M REFRESH_ LAST_REF LAST_REFRESH_DATE
------------ --------- -------- -------- ---------------------------
MV_OF_SOURCE
SELECT "SOURCE_TABLE"."ID" "ID","SOURCE_TABLE"."DATA_ELEMENT_1" "DATA_ELEMENT_1"
,"SOURCE_TABLE"."DATA_ELEMENT_2" "DATA_ELEMENT_2","SOURCE_TABLE"."DATE_COL" "DAT
E_COL" FROM "SOURCE_TABLE" "SOURCE_TABLE"
DEMAND FAST FAST 16-SEP-19 22:41:04

MV_2
select id, data_element_2
from source_table
DEMAND FORCE FAST 16-SEP-19 22:44:37


SQL>


Here, the MASTER_LINK would specify the name of the Database Link used to connect to the Master (i.e. Source) table, if it were in a different database.

REFRESH_MODE is ON DEMAND so that the MVs can be refreshed by either scheduled jobs or manually initiated calls -- as I've done in previous blog posts.  (The alternative can be ON COMMIT, if the Source Table and MV are in the same database).
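As a reminder, a manually initiated refresh can also request an explicit method; a small sketch (the second parameter of DBMS_MVIEW.REFRESH accepts 'F' for FAST, 'C' for COMPLETE and '?' for FORCE):

SQL> exec dbms_mview.refresh('MV_2');
SQL> exec dbms_mview.refresh('MV_2','C');

The first call uses the MView's own default method, the second one explicitly requests a COMPLETE Refresh.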

LAST_REFRESH_TYPE is FAST, meaning that the refresh was able to use the MV Log on the Source Table to identify changes and merge them into the MV.  See the entries from the trace file that I've shown in the previous blog post.

Note the difference in the two REFRESH_METHOD values for the two MVs.
MV_OF_SOURCE was created as "refresh fast on demand" while "MV_2" was created as "refresh on demand".
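For reference, the two creation statements would have looked roughly like this -- a sketch reconstructed from the QUERY column above, not the exact original DDL:

SQL> create materialized view mv_of_source
  2  refresh fast on demand
  3  as select id, data_element_1, data_element_2, date_col from source_table;

SQL> create materialized view mv_2
  2  refresh on demand
  3  as select id, data_element_2 from source_table;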

We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post.

Question : Why does the QUERY look so different between MV_OF_SOURCE and MV_2 ?



Categories: DBA Blogs

One More Thing: New Oracle Cloud free tier better than AWS free tier

Iggy Fernandez - Mon, 2019-09-16 19:20
Larry Ellison just concluded his Oracle OpenWorld keynote with the announcement of an Oracle Cloud free tier that is better than the AWS free tier. The Oracle Cloud free tier never expires and includes the crown jewels. The slides say it all.
Categories: DBA Blogs

How to Make a Pandora Submission? An Easy Guide!

VitalSoftTech - Mon, 2019-09-16 10:12

How exactly can you make a Pandora submission? Getting your music on Pandora might seem intimidating. However, if you’re an aspiring musician or an independent artist, this music streaming platform can be more beneficial for you than you might expect. More and more musicians are entering the booming music industry. In fact, it’s hard to […]

The post How to Make a Pandora Submission? An Easy Guide! appeared first on VitalSoftTech.

Categories: DBA Blogs
