Feed aggregator

Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 11 Regions Latest Sydney @ Australia

Online Apps DBA - Sat, 2019-08-31 05:29

New Region Added: Sydney, Australia. In 2019, up to August, Oracle added 7 new Regions to its Gen 2 Cloud (OCI), with a lot more in the pipeline. This means you now have 11 regions in total: 4 with 3 availability domains and 7 with a single availability domain. If you want to get the full picture related […]

The post Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 11 Regions Latest Sydney @ Australia appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Documentum – Encryption/Decryption of WebTop 6.8 passwords ‘REJECTED’ with recent JDK

Yann Neuhaus - Sat, 2019-08-31 03:00

Recently, we had a project to modernize a pretty old Documentum installation. As part of this project, the Application Server hosting a WebTop 6.8 was refreshed. In this blog, I will be talking about an issue that we faced with the encryption and decryption of passwords in the refreshed environment. This new environment was using WebLogic 12.1.3 with the latest PSU, in conjunction with JDK 1.8u192. Since WebTop 6.8 P08, JDK 1.8u111 is supported, so a newer version of JDK 8 should mostly work without much trouble.

To properly deploy a WebTop application, you will need to encrypt some passwords like the Preferences or Preset passwords. Doing so in the new environment unfortunately failed:

[weblogic@wls_01 ~]$ work_dir=/tmp/work
[weblogic@wls_01 ~]$ cd ${work_dir}/
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ jar -xf webtop_6.8_P27.war WEB-INF/classes WEB-INF/lib
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ kc="${work_dir}/WEB-INF/classes/com/documentum/web/formext/session/KeystoreCredentials.properties"
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ sed -i "s,use_dfc_config_dir=[^$]*,use_dfc_config_dir=false," ${kc}
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ sed -i "s,keystore.file.location=[^$]*,keystore.file.location=${work_dir}," ${kc}
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ grep -E "^use_dfc_config_dir|^keystore.file.location" ${kc}
use_dfc_config_dir=false
keystore.file.location=/tmp/work
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ enc_classpath="${work_dir}/WEB-INF/classes:${work_dir}/WEB-INF/lib/*"
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ java -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 11:02:23 AM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.cryptoj.o.nc, array length: -1, nRefs: 1, depth: 1, bytes: 72, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.decryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:64)
[weblogic@wls_01 work]$

 

As you can see above, the encryption of the password fails with an error. The issue is that starting with JDK 1.8u171, Oracle introduced some new restrictions. From the Oracle release notes (JDK-8189997):

New Features
security-libs/javax.crypto
Enhanced KeyStore Mechanisms
A new security property named jceks.key.serialFilter has been introduced. If this filter is configured, the JCEKS KeyStore uses it during the deserialization of the encrypted Key object stored inside a SecretKeyEntry. If it is not configured or if the filter result is UNDECIDED (for example, none of the patterns match), then the filter configured by jdk.serialFilter is consulted.

If the system property jceks.key.serialFilter is also supplied, it supersedes the security property value defined here.

The filter pattern uses the same format as jdk.serialFilter. The default pattern allows java.lang.Enum, java.security.KeyRep, java.security.KeyRep$Type, and javax.crypto.spec.SecretKeySpec but rejects all the others.

Customers storing a SecretKey that does not serialize to the above types must modify the filter to make the key extractable.

 

Recent versions of Documentum Administrator, for example, have no issue because they comply with the default filter, but WebTop 6.8 doesn't, and therefore to be able to encrypt/decrypt the passwords, you will have to modify the filter. There are several solutions to our current problem:

  • Downgrade the JDK: this isn't a good solution since it might reintroduce security vulnerabilities, and it will also prevent you from upgrading the JDK in the future
  • Extend the ‘jceks.key.serialFilter‘ definition inside the ‘$JAVA_HOME/jre/lib/security/java.security‘ file: that's a possibility, but it means that any process using this Java installation will use the updated filter list. Whether or not that's fine is up to you
  • Override the ‘jceks.key.serialFilter‘ definition using a JVM startup parameter on a per-process basis: this gives better control over which processes are allowed to use an updated filter and which ones aren't

 

So the simplest, and most probably the best, way to solve this issue is to add a command line parameter specifying the additional classes you want to allow. By default, the ‘java.security‘ file provides a list of classes that are allowed, and the list ends with ‘!*‘, which means that everything else is rejected.

[weblogic@wls_01 work]$ grep -A2 "^jceks.key.serialFilter" $JAVA_HOME/jre/lib/security/java.security
jceks.key.serialFilter = java.lang.Enum;java.security.KeyRep;\
  java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*

[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ grep "^security.provider" $JAVA_HOME/jre/lib/security/java.security
security.provider.1=com.rsa.jsafe.provider.JsafeJCE
security.provider.2=com.rsa.jsse.JsseProvider
security.provider.3=sun.security.provider.Sun
security.provider.4=sun.security.rsa.SunRsaSign
security.provider.5=sun.security.ec.SunEC
security.provider.6=com.sun.net.ssl.internal.ssl.Provider
security.provider.7=com.sun.crypto.provider.SunJCE
security.provider.8=sun.security.jgss.SunProvider
security.provider.9=com.sun.security.sasl.Provider
security.provider.10=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.11=sun.security.smartcardio.SunPCSC
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Using an empty parameter allows everything (not the best idea)
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Encrypted: [4Fc6kvmUc9cCSQXUqGkp+A==], Decrypted: [MyP4ssw0rd]
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Using the default value from java.security causes the issue
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 12:05:08 PM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.cryptoj.o.nc, array length: -1, nRefs: 1, depth: 1, bytes: 72, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.encryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:63)
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Adding com.rsa.cryptoj.o.nc to the allowed list
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Aug 27, 2019 12:06:14 PM java.io.ObjectInputStream filterCheck
INFO: ObjectInputFilter REJECTED: class com.rsa.jcm.f.di, array length: -1, nRefs: 3, depth: 2, bytes: 141, ex: n/a
java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
        at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
        at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
        at java.security.KeyStoreSpi.engineGetEntry(KeyStoreSpi.java:473)
        at java.security.KeyStore.getEntry(KeyStore.java:1521)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.getSecretKey(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorUtils.encryptByDES(Unknown Source)
        at com.documentum.web.formext.session.TrustedAuthenticatorTool.main(TrustedAuthenticatorTool.java:63)
[weblogic@wls_01 work]$
[weblogic@wls_01 work]$ # Adding com.rsa.jcm.f.* + com.rsa.cryptoj.o.nc to the allowed list
[weblogic@wls_01 work]$ java -Djceks.key.serialFilter='com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*' -classpath "${enc_classpath}" com.documentum.web.formext.session.TrustedAuthenticatorTool "MyP4ssw0rd"
Encrypted: [4Fc6kvmUc9cCSQXUqGkp+A==], Decrypted: [MyP4ssw0rd]
[weblogic@wls_01 work]$

 

So as you can see above, to encrypt passwords for WebTop 6.8 using JDK 8u171+, you will need to add both ‘com.rsa.cryptoj.o.nc‘ and ‘com.rsa.jcm.f.*‘ to the allowed list. There is a wildcard for the JCM one because several classes from this package are required.

The above was for the encryption of the password. That's fine, but obviously when you deploy WebTop, it will need to decrypt these passwords at some point… So you will also need to set the same JVM parameter for your Application Server process (for the Managed Server's process in WebLogic):

-Djceks.key.serialFilter='com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep$Type;javax.crypto.spec.SecretKeySpec;!*'
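For WebLogic, one way of doing that (a minimal sketch, assuming a standard domain layout; the path below is an example to adapt to your installation) is to append the parameter to the JAVA_OPTIONS definition in the setDomainEnv.sh script of the domain:

# Added at the end of $DOMAIN_HOME/bin/setDomainEnv.sh (hypothetical location)
JAVA_OPTIONS="${JAVA_OPTIONS} -Djceks.key.serialFilter=com.rsa.jcm.f.*;com.rsa.cryptoj.o.nc;java.lang.Enum;java.security.KeyRep;java.security.KeyRep\$Type;javax.crypto.spec.SecretKeySpec;!*"
export JAVA_OPTIONS

Alternatively, the same -D parameter can be set in the Managed Server's start arguments through the WebLogic Administration Console.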

 

You can change the order of the classes in the list; it just needs to come before the ‘!*‘ element, because everything after that is ignored.

 

The post Documentum – Encryption/Decryption of WebTop 6.8 passwords ‘REJECTED’ with recent JDK appeared first on Blog dbi services.

Getting started with Hyper-V on Windows 10

The Oracle Instructor - Fri, 2019-08-30 03:27

Microsoft Windows 10 comes with its own virtualization software called Hyper-V. It is not available in the Windows 10 Home edition, though.

Check if you fulfill the requirements by opening a CMD shell and typing in systeminfo.

Check the Hyper-V Requirements part at the end of the systeminfo output.
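Typical output looks something like this (the exact wording can vary between Windows builds):

Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes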

If you see No there instead, you need to enable virtualization in your BIOS settings.

Next you go to Programs and Features and click on Turn Windows features on or off.

You need Administrator rights for that. Then tick the checkbox for Hyper-V.

That requires a restart at the end.
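If you prefer the command line, the same feature can be enabled from an elevated PowerShell prompt (followed by the same reboot):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All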

Afterwards you can use the Hyper-V Manager.

Hyper-V can do similar things to VMware or VirtualBox. It doesn't play well together with VirtualBox in my experience, though: VirtualBox VMs refused to start with errors like “VT-x is not available” after I installed Hyper-V. I also found it a bit trickier to handle than VirtualBox, but maybe that's just because I'm less familiar with it.

The reason I use it now is that one of our customers who wants to do an Exasol Administration training cannot use VirtualBox – but Hyper-V is okay for them. And now it looks like that's also an option: my testing so far shows that our educational cluster installation and management labs also work with Hyper-V.

Categories: DBA Blogs

Tagging

Jeff Moss - Thu, 2019-08-29 11:32

Show tags for all resources

az group list --query [].tags 

Create tag

az tag create --name "Review Date"

Create tag with values

az tag add-value --name Environment --value Development

Set a tag to a value for a resource group

az group update -n example-resource-group --set tags.Environment=prod tags.CostCenter=IT
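To verify the tags afterwards, you can query the resource group directly (same hypothetical resource group name as above):

az group show -n example-resource-group --query tags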

Tagging

Jeff Moss - Thu, 2019-08-29 11:23

# Create a new tag

New-AzureRmTag -Name "Review Date"

# Create a tag and set the Purpose

New-AzureRmTag -Name Purpose -Value "Azure DevOps Self Hosted Agent"

# Get details of all tags

Get-AzureRmTag -Detailed

# Get selected column details of all tags

Get-AzureRmTag -Detailed | select name,values

# Remove a tag

Remove-AzureRmTag -Name "Review Date"

Oracle 19c Automatic Indexing: How Many Executions Does It Take? (One Shot)

Richard Foote - Wed, 2019-08-28 23:16
One of the first questions I asked when playing with the new Oracle Database 19c Automatic Indexing feature was how many executions of an SQL statement it takes for a new index to be considered. To find out, I created the following table: I then ran the following query just once and checked to see […]
Categories: DBA Blogs

Cloning of RDS Instance to Another Account

Pakistan's First Oracle Blog - Wed, 2019-08-28 21:01
Frequently, we need to refresh our development RDS-based Oracle database from production, which is in another AWS account. So we take a snapshot from production, share it with the other account and then restore it in the target from the snapshot.

I will post the full process later, but for now I'm just sharing an issue we encountered today. While trying to share a snapshot with another account, I got the following error:


Sharing snapshots encrypted with the default service key for RDS is currently not supported.


Now, this snapshot was using the default RDS keys, and that is not supported. So in order to share it, we need customer managed keys, then we copy the snapshot with these new keys, and only then can we share it. You don't have to do anything at the target, as the customer managed keys become part of that snapshot. You can create customer managed keys in the KMS console and assign them to the IAM user you are using.
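In AWS CLI terms, the copy-and-share steps look roughly like this (a sketch only; the snapshot identifiers, KMS key ARN and account ID below are made up):

# Copy the snapshot, re-encrypting it with a customer managed key
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier prod-db-snapshot \
    --target-db-snapshot-identifier prod-db-snapshot-cmk \
    --kms-key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id

# Share the copied snapshot with the target account
aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier prod-db-snapshot-cmk \
    --attribute-name restore \
    --values-to-add 444455556666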


I hope it helps.
Categories: DBA Blogs

Speaking at Trivadis Performance Days 2019

Richard Foote - Wed, 2019-08-28 06:36
I’ll again be speaking at the wonderful Trivadis Performance Days 2019 conference in Zurich, Switzerland on 26-27 September. There’s again another fantastic lineup of speakers, including: CHRISTIAN ANTOGNINI IVICA ARSOV MARK ASHDOWN SHASANK CHAVAN EMILIANO FUSAGLIA STEPHAN KÖHLER JONATHAN LEWIS FRANCK PACHOT TANEL PODER DANI SCHNIDER   I’ll be presenting two papers: “Oracle 18c and […]
Categories: DBA Blogs

Kafka | IoT Ecosystem ::Cluster; Performance Metrics; Sensorboards & OBD-II::

Rittman Mead Consulting - Wed, 2019-08-28 04:30

Infrastructure is the place to start and the keyword here is scalability. Whether it needs to run on premise, on cloud or both, Kafka makes it possible to scale at low complexity cost when more brokers are either required or made redundant. It is also equally easy to deploy nodes and nest them in different networks and geographical locations. As for IoT devices, whether it’s a taxi company, a haulage fleet, a racing team or just a personal car, Kafka can make use of the existing vehicle OBDII port using the same process; whether it’s a recording studio or a server room packed with sensitive electronic equipment and where climate control is critical, sensorboards can be quickly deployed and stream almost immediately into the same Kafka ecosystem. Essentially, pretty much anything that can generate data and touch python will be able to join this ecosystem.

In large data centres it is fundamental to keep a close eye on misbehaving nodes: possibly overheating, constantly failing jobs or causing unexpected issues. Fires can occur too. This is quite a challenge with thousands and thousands of nodes. Kafka, though, allows all of the node stats to individually stream in real time and get picked up by any database or machine, using Kafka Connect or kafka-python for consumption.

To demonstrate this on a smaller scale with a RaspberryPi 3 B+ cluster and test a humble variety of different conditions, a cluster of 7 nodes, Pleiades, was set up. Then, to make it easier to identify them, each computer was named after the respective stars of the Pleiades constellation.

  • 4 nodes {Alcyone; Atlas; Pleione; Maia} in a stack with cooling fans and heatsinks
  • 1 node in metal case with heatsink {Merope}
  • 1 node in plastic case {Taygeta}
  • 1 node in touchscreen plastic case {Electra} (yes, it's a portable Retropie, a Kafka broker and perfect for Grafana dashboards too)

Every single node has been equipped with the same Python Kafka-producer script, from which the stream is updated every second in real time under one topic, Pleiades. Measures taken include CPU-Percentage-%, CPU-Temperature, Total-Free-Memory, Available-System-Memory and CPU-Current-Hz.
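The original script isn't shown here, but a minimal sketch of such a producer could look like the following (the broker address, sensor label and field names are assumptions; it relies on the kafka-python and psutil packages):

import json
import socket
import time

import psutil
from kafka import KafkaProducer  # kafka-python package

# Assumed address of the Kafka master for this cluster
producer = KafkaProducer(
    bootstrap_servers="alcyone:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

node = socket.gethostname()

while True:
    # Raspberry Pi typically exposes its CPU temperature under "cpu_thermal"
    temps = psutil.sensors_temperatures().get("cpu_thermal", [])
    freq = psutil.cpu_freq()
    payload = {
        "node": node,
        "cpu_percent": psutil.cpu_percent(),
        "cpu_temp_c": temps[0].current if temps else None,
        "available_memory": psutil.virtual_memory().available,
        "cpu_current_hz": freq.current * 1_000_000 if freq else None,  # MHz -> Hz
        "timestamp": time.time(),
    }
    # Every node publishes to the same topic, Pleiades
    producer.send("Pleiades", payload)
    time.sleep(1)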

Kafka then connects to InfluxDB on Pleione, which can be queried using the terminal through a desktop or Android SSH client. Nothing to worry about in terms of duplication, load balancing or gaps in the data. Worst case scenario, InfluxDB crashes, and the data will still be retrievable by using KSQL to rebuild the gap in the DB, depending on the retention policy set.

We can query InfluxDB directly from the command line. The Measure (InfluxDB table) for Pleiades is looking good and holding plenty of data for us to see in Grafana next.
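For instance, with the InfluxDB 1.x CLI (the database and measurement names here are assumptions based on the setup described above):

$ influx -database 'pleiades' -execute 'SELECT * FROM "Pleiades" ORDER BY time DESC LIMIT 5'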

A live feed is then delivered with Grafana dashboards. It's worth noting how mobile friendly these dashboards really are.

At a glance, we know the critical factors such as how much available memory there is and how much processing power is being used, for the whole cluster as well as each individual node, in real time and anywhere in the world (with an internet connection).

It has then been observed that the nodes in the stack remain fairly cool and stable between 37 °C and 43 °C, whereas the nodes in plastic cases around 63 °C. Merope is in the metal casing with a heatsink, so it makes sense to see it right in the middle there at 52 °C. Spikes in temperature and CPU usage are directly linked to running processes. These spikes are followed by software crashes. Moving some of the processes from the plastic enclosures over to the stack nodes stopped Grafana from choking; this was a recurring issue when connecting to the dashboards from an external network. Kafka made it possible to track the problem in real time and allow us to come up with a solution much quicker and effortlessly; and then immediately also track if that solution was the correct approach. In the end, the SD cards between Electra and Pleione were quickly swapped, effectively moving Pleione to the fan cooled stack where it was much happier living.

If too many spikes begin to occur, we should expect nodes to soon need maintenance, repair or replacement. KSQL makes it possible to tap into the Kafka streams and join them to DW-stored data to forecast these events with increased precision and notification time. It's machine-learning heaven as a platform. KSQL also makes it possible to join 2 streams together and thus create a brand new stream (a sketch follows below), so to add external environment metrics and see how they may affect our cluster metrics, a sensorboard on a RaspberryPi Zero-W was set up, producing data into our Kafka ecosystem too.
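As a rough illustration, a KSQL stream-stream join could look like this (the stream and column names are hypothetical, and both streams would first have to be declared over their Kafka topics):

CREATE STREAM cluster_env AS
  SELECT c.node, c.cpu_temp_c, e.room_temp_c
  FROM cluster_metrics c
  INNER JOIN env_metrics e WITHIN 1 MINUTE
  ON c.location = e.location;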

To keep track of the room conditions where the cluster sits, an EnviroPhat sensor board is being used. It measures temperature, pressure, colour and motion. There are many available sensorboards for SBCs like RaspberryPi that can just as easily be added to this Kafka ecosystem. Again, important to emphasize both data streams and dashboards can be accessed from anywhere with an internet connection.

OBD-II data from vehicles can be added to the ecosystem just as well. There are a few ways this can be achieved. The most practical, cable-free option is a Bluetooth ELM327 device. This is a low cost adaptor that can be purchased and installed on pretty much any vehicle built after 1995. The adaptor plugs into the OBD-II socket of the vehicle and connects via Bluetooth to a Pi-Zero-W, which then connects to a mobile phone's 4G set up as a wi-fi hotspot. Once the data is flowing, all that's left is a Kafka topic, and the create command is pretty straightforward, as sketched below.
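For example (the broker address is an assumption; on older Kafka releases, use --zookeeper instead of --bootstrap-server):

$ kafka-topics.sh --create --bootstrap-server alcyone:9092 \
    --topic OBD --partitions 1 --replication-factor 1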

With the obd-producer Python script running, another equally straightforward command opens up the console consumer for the topic OBD on Alcyone, and we can check whether the OBD data is flowing through Kafka. A quick check on my phone reveals we have flow.
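That command is along these lines (again, the broker address is an assumption):

$ kafka-console-consumer.sh --bootstrap-server alcyone:9092 --topic OBD --from-beginning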

To make things more interesting, the non-fan nodes in plastic and metal enclosures {Taygeta; Electra; Merope} were moved to a different geographical location and set up under a different network. This makes network outages and power cuts less likely to affect our dashboard services or our ability to access the IoT data. Adding cloud services to mirror this setup at this point would make it virtually bulletproof; zero point of failure is the aim of the game. When the car is on the move, Kafka updates InfluxDB + Grafana in real time, and the intel can be tracked live as it happens from a laptop, desktop or phone, anywhere in the world.

In a fleet scenario, harsh braking could trigger a warning and have the on-duty tracking team take immediate action; if the accelerometer spikes as well, then that could suggest an accident may have just occurred or payload checks may be necessary. Fuel management systems could pick up on driving patterns and below average MPG performance, even sense when the driver is perhaps not having the best day. This is where the value of Kafka in IoT and the possibilities of using ML algorithms really becomes apparent because it makes all of this possible in real time without a huge overhead of complexity.

After plugging the OBD-II Bluetooth adapter into the old e92-335i and driving it for 20 minutes, with the data automatically streaming over the internet to the Kafka master, Alcyone, and automatically creating and updating an OBD InfluxDB measure in Pleione, it can quickly be observed in Grafana that the car doesn't enjoy idling that much; the coolant and intake air temperatures dropped right down as it started moving at a reasonable speed. This kind of correlation is easier to spot in time series Grafana dashboards, whereas it would be far less intuitive with standard vehicle dashboards that provide only current values.

So now that a real bare-metal infrastructure exists – a self-monitoring, low power consumption cluster, spread across multiple geographical locations, keeping track of enviro-sensor producers in multiple places/rooms, logging all vehicle data and learning to detect problems as far ahead as possible – adding sensor data pickup points to this Kafka ecosystem is as simple as its inherent scalability suggests. As such, with the right Kafka-Fu, pretty much everything is plug-&-play from this point onwards, meaning we can now go on to connecting, centralising and automating as many things in life as possible that can become IoT, using Kafka as the core engine under the hood.

Categories: BI & Warehousing

OAC Row Limits and Scale Up or Down

Rittman Mead Consulting - Wed, 2019-08-28 04:26

I created an OAC instance the other day for some analysis in preparation for my OOW talk, and during the analytic journey I reached the row limit with the error Exceeded configured maximum number of allowed input records.

Since a few releases back, each OAC instance has fixed row limits depending on the number of OCPUs assigned; the current values can be checked in the related documentation.

If you plan on using BI Publisher (included in OAC a few versions ago), also check the related limits.

Since in my analytical journey I reached the row limit, I wanted to scale up my instance, but surprise surprise, the Scale Up or Down option wasn't available.

After some research I understood that Scaling Up & Down is available only if you originally chose a number of OCPUs greater than one. This is in line with Oracle's suggestion to use 1 OCPU only for non-production instances, as stated in the instance creation GUI.

When an OAC instance is originally created with 4 OCPUs, the Scale Up/Down option becomes available (you need to start the instance first).

When choosing the scale option, we can decide whether to increase/decrease the number of OCPUs.

Please note that we may have limited choice in the number of OCPUs we can increase or decrease by, depending on availability and current usage.

Concluding, if you want to be able to Scale Up/Down your OAC instances depending on your analytic/traffic requirements, always start your instance with a number of OCPUs greater than one!

Categories: BI & Warehousing

Old Locked Optimizer Stats? Collect in Pending area, Compare and Redeploy

VitalSoftTech - Tue, 2019-08-27 11:40

Want to move stats from a development environment? What if the stats are dramatically different? Don't deploy until you know what is different and the impact the difference will cause in your production environment. Easily compare the stats between the Production and the Development environment before you deploy! Tables with Locked Stats SELECT owner, table_name, partition_name, […]

The post Old Locked Optimizer Stats? Collect in Pending area, Compare and Redeploy appeared first on VitalSoftTech.

Categories: DBA Blogs

AW-argh

Jonathan Lewis - Tue, 2019-08-27 09:59

This is another of the blog notes that have been sitting around for several years – in this case since May 2014, based on a script I wrote a year earlier. It makes an important point about “inconsistency” of timing in the way that Oracle records statistics of work done. As a consequence of being first drafted in May 2014 the original examples showed AWR results from 10.2.0.5 and 11.2.0.4 – I’ve just run the same test on 19.3.0.0 to see if anything has changed.

 

[Originally drafted May 2014]: I had to post this as a reminder of how easy it is to forget things – especially when there are small but significant changes between versions. It’s based loosely on a conversation from Oracle-L, but I’m going to work the issue in the opposite order by running some code and showing you the ongoing performance statistics rather than the usual AWR approach of reading the performance stats and trying to guess what happened.

The demonstration needs two sessions to run; it's based on one session running some CPU-intensive SQL inside an anonymous PL/SQL block, with a second session launching AWR snapshots at carefully timed moments. Here's the code for the working session:

rem
rem     Script:         awr_timing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2013
rem

alter session set "_old_connect_by_enabled"=true;

create table kill_cpu(n, primary key(n))
organization index
as
select  rownum n
from    all_objects
where   rownum <= 26 -- > comment to avoid wordpress format issue
;

execute dbms_stats.gather_table_stats(user,'kill_cpu')

pause Take an AWR snapshot from another session and when it has completed  press return

declare
        m_ct    number;
begin

        select  count(*) X
        into    m_ct
        from    kill_cpu
        connect by
                n > prior n
        start with
                n = 1
        ;

        dbms_lock.sleep(30);

end;
/

You may recognise an old piece of SQL that I’ve often used as a way of stressing a CPU and seeing how fast Oracle can run. The “alter session” at the top of the code is necessary to use the pre-10g method of running a “connect by” query so that the SQL does a huge number of buffer gets (and “buffer is pinned count” visits). On my current laptop the query takes about 45 seconds (all CPU) to complete. I’ve wrapped this query inside a pl/sql block that then sleeps for 30 seconds.

From the second session you need to launch an AWR snapshot 4 times – once in the pause shown above, then (approximately) every 30 seconds thereafter. The second one should execute while the SQL statement is still running, the third one should execute while the sleep(30) is taking place, and the fourth one should execute after the pl/sql block has ended and the SQL*Plus prompt is visible.
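For reference, a snapshot can be launched manually from the second session with a call like this (assuming you have the privileges to use the package):

execute dbms_workload_repository.create_snapshot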

Once you've got 4 snapshots you can generate 3 AWR reports. The question to ask then is: "what do the reports say about CPU usage?" Here are a few (paraphrased) numbers – starting with 10.2.0.5 – comparing the "Top 5 timed events", "Time Model", and "Instance Activity". There are three sets of figures: the first reported while the SQL was still running, the second reported after the SQL statement had completed while the dbms_lock.sleep() call was executing, the last reported after the PL/SQL block had completed. There are some little oddities in the numbers due to background "noise" – but the key points are still clearly visible:

While the SQL was executing

Top 5
-----
CPU Time                       26 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     26.9        100.0
DB CPU                                       26.2         97.6

Instance Activity
-----------------
CPU used by this session         0.65 seconds
recursive cpu usage              0.67 seconds

SQL ordered by CPU
------------------
31 seconds reported for both the SQL and PLSQL

During the sleep()

Top 5
-----
CPU Time                        19 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     19.0        100.0
DB CPU                                       18.6         98.1

Instance Activity
-----------------
CPU used by this session         0.66 seconds
recursive cpu usage             44.82 seconds

SQL ordered by CPU
------------------
14 seconds reported for both the SQL and PLSQL

After the PL/SQL block ended

Top 5
-----
CPU Time                         1 second

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.4         99.9
DB CPU                                        1.4         99.7

Instance Activity
-----------------
CPU used by this session        44.68 seconds
recursive cpu usage              0.50 seconds

SQL ordered by CPU
------------------
1 second reported for the PLSQL, but the SQL was not reported

Points to notice:

While the SQL was executing (and it had been executing for about 26 seconds) the Time Model mechanism was recording the work done by the SQL, and the Top N information echoed the Time Model CPU figure. At the same time the "CPU used …" Instance Activity statistics had not recorded any CPU time for the session – and they won't until the SQL statement completes. Despite this, the "SQL ordered by …" reports double-count in real time, showing the SQL and the PL/SQL cursors as consuming (with rounding errors, presumably) 31 seconds each.

After the SQL execution was over and the session was sleeping, the Time Model (hence the Top 5) had recorded a further 19 seconds of work. The Instance Activity, however, had now accumulated 44 seconds of CPU, but only as "recursive cpu usage" (recursive because our SQL was called from within a PL/SQL block), with no "CPU used by this session". The "SQL ordered by …" figures have recorded the amount of CPU used by both the SQL and PL/SQL in the interval (i.e. 14 seconds – which is a little off), recorded against both.

After the PL/SQL block has completed, the Time Model and the Top 5 report both say that nothing much happened in the interval, but the Instance Activity suddenly reports 44.68 seconds of "CPU used by this session" – which (roughly speaking) is truish, as the PL/SQL block ended and assigned the accumulated recursive CPU usage to the session CPU figure. Finally, when we get down to the "SQL ordered by CPU" the SQL was not reported – it did no work in the interval – but the PL/SQL block was still reported, though only with a generous 1 second of CPU, since all it did in the interval was finish the sleep call and tidy up the stack before ending.

Now the same sets of figures for 11.2.0.4 – there’s a lot of similarity, but one significant difference:

While the SQL was executing

Top 5
-----
CPU Time                        26.6 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     27.0        100.0
DB CPU                                       26.6         98.5

Instance Activity
-----------------
CPU used by this session         1.09 seconds
recursive cpu usage              1.07 seconds

SQL ordered by CPU
------------------
25.6 seconds reported for both the SQL and PLSQL

During the sleep()

Top 5
-----
CPU Time                        15.1 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                     15.3         99.8
DB CPU                                       15.1         98.2

Instance Activity
-----------------
CPU used by this session        41.09 seconds
recursive cpu usage             41.03 seconds

SQL ordered by CPU
------------------
14.3 seconds reported for the SQL
13.9 seconds reported for the PLSQL

After the PL/SQL block ended

Top 5
-----
CPU Time                         1.4 seconds

Time Model                               Time (s) % of DB Time
------------------------------------------------- ------------
sql execute elapsed time                      1.5         99.6
DB CPU                                        1.4         95.4

Instance Activity
-----------------
CPU used by this session         1.02 seconds
recursive cpu usage              0.95 seconds

SQL ordered by CPU
------------------
0.5 seconds reported for the PLSQL, and no sign of the SQL

Spot the one difference in the pattern – during the sleep() the Instance Activity statistic "CPU used by this session" records the full CPU time for the complete query, whereas the time for the query appeared only in the "recursive cpu usage" figure in the 10.2.0.5 report.

I frequently point out that for proper understanding of the content of an AWR report you need to cross-check the different ways in which Oracle reports "the same" information. This is often to warn you about checking underlying figures before jumping to conclusions about "hit ratios"; sometimes it's to remind you that while the Top 5 might say some average looks okay, the event histogram may say that what you're looking at is mostly excellent with an occasional disaster thrown in. In this blog note I just want to remind you that if you only ever look at one set of figures about CPU usage there are a few special effects (particularly relating to long-running PL/SQL / Java / SQL) where you may have to work out a pattern of behaviour to explain unexpectedly large (or small) and contradictory figures. The key to the problem is recognising that different statistics may be updated at different stages in a complex process.

Footnote

I doubt if many people still run 11.2.0.4, so I also re-ran the test on 19.3.0.0 before publishing. The behaviour hasn’t changed since 11.2.0.4 although the query ran a little faster, perhaps due to changes in the mechanisms for this type of “connect by pump”.

11.2.0.4 stats

Name                                            Value
----                                            -----
session logical reads                      33,554,435
consistent gets                            33,554,435
consistent gets from cache                 33,554,435
consistent gets from cache (fastpath)      33,554,431
no work - consistent read gets             33,554,431
index scans kdiixs1                        33,554,433
buffer is not pinned count                 16,777,219


19.3.0.0 stats

Name                                            Value
----                                            -----
session logical reads                      16,843,299
consistent gets                            16,843,299
consistent gets from cache                 16,843,299
consistent gets pin                        16,843,298
consistent gets pin (fastpath)             16,843,298
no work - consistent read gets             16,790,166
index range scans                          33,554,433
buffer is not pinned count                 16,790,169

Some changes are trivial (like the change of name for "index scans kdiixs1"), some are interesting (like some gets now being labelled as "pin" and "pin (fastpath)"), and some are baffling (like how you can manage 33M index range scans while doing only 16M buffer gets!)

Oracle Cloud at Customers(C@C): Overview and Concepts for Beginners

Online Apps DBA - Tue, 2019-08-27 05:56

Are you a Beginner in Oracle Cloud at Customers(C@C) and looking for an Overview of Oracle C@C & its Offerings? If YES, then the blog post at https://k21academy.com/oci47 is a perfect fit! The blog post discusses: ➥ What is Oracle C@C? ➥ How is Oracle C@C beneficial to you? ➥ Oracle C@C’s Offerings: Cloud at […]

The post Oracle Cloud at Customers(C@C): Overview and Concepts for Beginners appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

The 5 C’s of Marketing for 5 Key Areas

VitalSoftTech - Tue, 2019-08-27 05:18

If you’re interested in advertising, you’ve probably heard about the 5 C’s of marketing at one point or another. Not only is it a catchy phrase, but it’s also beneficial in contributing to the marketing decisions you make for your company. They’re not a complete guide about how you should be advertising your business, but […]

The post The 5 C’s of Marketing for 5 Key Areas appeared first on VitalSoftTech.

Categories: DBA Blogs

Analog Detour

Bobby Durrett's DBA Blog - Mon, 2019-08-26 23:04

Definition of analog: of, relating to, or being a mechanism or device in which information is represented by continuously variable physical quantities

Merriam Webster Dictionary

Introduction

(Image: My analog tools before the detour)

I just finished going off on a major tangent away from my normal home computer pursuits such as studying algorithms, messing with Nethack source code, or practicing Python programming. Prior to this diversion I pursued these computer-related activities for learning and fun in my free time outside of work. But I spent the last three months pursuing activities related to pens, pencils, paper, and notebooks. Some people like to call using a pen and notebook an “analog” activity, so I used that word for the post title.

For several years my primary analog tools have been cheap wide ruled composition notebooks and Pilot G-2 gel pens. For example, I used a composition notebook to keep track of the homework and tests for my OpenCourseWare algorithms classes. I also used a composition notebook for non-technical purposes such as taking notes about things I read or heard, or just writing about things going on in my life. But otherwise most of what I wrote down was in computer form. I use Microsoft Outlook for my calendar at work and keep notes and tasks in Microsoft Word documents and text files, both in the office and on my home laptop. A lot of information was just kept in email. I have stuff on my iPhone.

But back in May I started looking at better ways to use pens, pencils and paper. I started looking on the internet for reviews and recommendations of what to buy and I ended up spending a lot of time and probably a few hundred dollars trying out different products and coming up with my own preferences. Finally, a couple or three weeks back, I stepped back from going deeper and deeper into this exploration of pens, pencils, paper, and notebooks. I had spent enough money and time researching the best options. Now it was time for me to just use the tools I had and not buy any more and not read any more about them.

Now that I have stopped exploring these analog tools, I thought I should write a blog post about what I learned. I have a bunch of bookmarks of interesting web sites that I found helpful. I also have the results of my own use of the tools. Clearly, I am not a pen, pencil, paper, or notebook expert. This is an Oracle database blog and my strongest skills are in the database arena. Also, would people who read this blog for Oracle tuning scripts and information care about pens and paper? I am not sure. But the information that I learned and gathered has been helpful to me and it has been fun. Maybe others can benefit from my experience and if they want more expert advice, they can follow the links in this post to people who specialize in these areas.

I have decided to break this post, which is almost surely going to be my longest to date, into sections that alternate between things you write with and things you write on. Here is the outline:

  1. Introduction
  2. Pilot G-2 Gel Pens
  3. Graph Paper
  4. Pen Party
  5. Bullet Journal
  6. Fountain Pens
  7. Rhodia Dot Pads
  8. Pencils
  9. Conclusion

Lastly, I want to end the introduction with a caveat that is like those I find in a lot of the pen and paper blogs. I will have links to various businesses that sell products like pens or notebooks. I have not received any money to advertise these products, nor have I received any products for free. I bought all the products with my own money. As I mentioned in my privacy page, this blog is completely non-commercial and does not generate income in any way. I do not sell advertising or people’s emails to spammers or anything like that. My only income is from my job doing Oracle database work. I like having a blog and I cough up a couple hundred dollars a year for hosting, a domain name, and a certificate so I can put these posts out on the internet. So, don’t worry that I’m trying to sell you something because I am not.

Pilot G-2 Gel Pens

(Image: Pilot G-2 gel pens of each size)

I have been using Pilot G-2 gel pens for several years but did not realize that they came in different widths before I started researching all of these analog tools. I was using my gel pen with my composition notebook and kept accidentally smearing the ink. So, I finally Google searched for something like Pilot G-2 review and that really kicked off all of this research into types of pens and related tools. I found out that Pilot G-2 pens came in four width tips and that I had accidentally bought the widest tip pens which are the worst at smearing. They just put the most ink on the paper. I looked back at my Amazon orders and found that I bought a dozen “Fine Point” G-2 pens in 2015, but when I ran out and reordered 12 more in 2017, I got the “Bold Point” which has a thicker line. If you look at the pictures on Amazon, the boxes look very similar. You must know what the widths are named. So, as simple as this sounds, it was helpful to learn about the different tip sizes of my favorite gel pens and to try out each size to find which one I wanted. Here is a table of the four sizes:

Clip #    Millimeters    Name          # Colors
0.38      0.38           Ultra Fine    4
05        0.5            Extra Fine    5
07        0.7            Fine          16
10        1.0            Bold          8

The clips of the pens have numbers on them but only the .38 millimeter tip has a decimal point so that can be confusing. The .38 mm pen writes a very thin line. Evidently several manufacturers compete for the very thin line gel pen market. I have tried all four G-2 sizes and right now my favorite is the .5 mm extra fine.

I got the number of colors from Pilot’s site. It looks like the .7 mm fine has the most colors. I pretty much just use black, but I like red for editing.

A key thing that I learned about testing a new pen is that you must write a lot with it before you really know if you like it. Gel pens seem to start out a little uneven when you first write with them. Maybe they have been on a shelf somewhere for a long time. But after you write say 5 pages the ink starts to really flow well.

As I said in the introduction, I liked the Pilot G-2 gel pen before all this investigation began. But I know so much more about my favorite pen such as the different sizes. I will talk about this later, but one result of all this research is that I have started to like pens that write thinner lines. I accidentally bought a dozen of the thickest line G-2 pens in 2017 and it almost threw me off the pen. Now I have settled down with a G-2 pen with a tip half the size of the one I struggled with and it has really helped me out.

Graph Paper

(Image: Graph paper with pseudo-code)

I got the idea of switching to graph paper from the handwritten lecture notes from an algorithms class I was working through. Evidently the professor wrote out his notes using several color pens on graph paper. It made me wonder if I should use graph paper too.

I had already started using lined loose leaf filler paper in addition to composition notebooks. I think part of the problem is that it has been 30 years since I was a student in school, and I am out of practice using paper and 3 ring binders and all. My daughters use all these tools in school without thinking about it but for me the idea of using some loose filler paper is a revelation. Maybe some of this is not so much learning but relearning how to use these analog tools that I used regularly during my school days.

The lined filler paper was great as scratch paper when I was working on a problem for my online class but then I could clean up the answer and write it in my sturdy composition notebook to keep long term. But it made sense to get graph paper instead of lined paper because it would help to line things up along the vertical lines. If I need to write some code or pseudo-code and line up blocks within loops or if statements, I can use the vertical lines. The horizontal lines just separate the lines of code.

I ended up buying a nice pad of graph paper from Amazon but didn’t realize that some of the lines were darker than others. I prefer the lines to all be the same darkness. Then I ended up returning something that I bought from Office Depot with a gift card and they would not give me cash back. So, I used the gift card that was the refund to buy several pads of loose three-hole punched graph paper as well as two or three composition books with graph paper instead of lines. The composition books are kind of neat but they seem a little cheap and flimsy.

Of the three types of graph paper that I tried I like the Office Depot loose paper the best. All the lines are the same darkness and it is already three-hole punched so I can throw it in a binder if I want to keep it. I can use it for scratch paper, or I can save what I write down. So, at this point I like my loose paper to be graph paper but I still like the wide rule lined composition notebooks over the ones with graph paper.

Pen Party

(Image: Pen party favorites)

After reading the Pilot G-2 review I started reading reviews about other pens. Several web sites have best pen lists. The Gentleman Stationer has a nice best pens for 2019 page. The Pen Addict has lists of top 5 pens in different categories. Lastly, the online store JetPens, which specializes in Japanese pens, has their list of 33 best pens for 2019. One challenge that I had when I was looking at the different widths of G-2 pens is that I ended up buying packs of 4 or more pens on Amazon when I really just wanted one black pen to test. JetPens.com sells individual pens. If you spend $25 or more, you get free shipping. So, to test out some of the pens I had been reading about in the top pen lists I ordered several different kinds of pens, pencils, and an eraser, one of each kind.

Here was what I bought in my first order:

Uni-ball Signo UM-151 Gel Pen - 0.38 mm - Black         $2.85
Pentel EnerGel Euro Needle-PoinT - 0.35 mm - Black      $2.50
Platinum Preppy Fountain Pen - Black - 05 Medium Nib    $4.00
Tombow Mono 100 Pencil - HB                             $2.35
Zebra Sarasa Dry Gel Pen - 0.7 mm - Black               $2.95
Sakura Foam Eraser W 80                                 $1.50
Sakura Pigma Micron Pen - Size 02 - 0.3 mm - Black      $2.50
Tombow Mono Drawing Pen - 03 - Black                    $2.89
Tombow Fudenosuke Brush Pen - Hard - Black              $2.75
Uni Jetstream Sport Ballpoint Pen - 0.7 mm - Black Ink  $3.15

Some of these were suggested by Bullet Journal enthusiasts but I will talk about that in a later section. I will also talk about fountain pens and pencils later. The point here is that JetPens.com is cool. You must chunk up your purchase to be $25 or more but it is nice picking out individual pens to try them. One of the reasons I wanted to write this post is that I wanted to share about fun vendors like JetPens for people like me who had never heard of them before.

I took these things from my JetPens order and other pens and pencils that I already had and convinced my wife and three daughters to join me for a "pen party" to try them out. We sat around our kitchen table passing around groups of similar pens and pencils and tried each one out. After everyone finished trying them, we all picked our favorites.

My favorite was the .38 mm UM-151 or Signo DX gel pen. I have been using the Signo DX pen for tiny writing in a smaller notebook where it helps to have the smallest possible lines. I think it is slightly better than its G-2 .38 mm competitor. But I prefer the .5 mm G-2 gel pen for writing on full size 8 1/2 x 11 inch paper. So, my favorite gel pens are the .5 mm G-2 for normal writing and the .38 mm UM-151 for extra small writing.

My wife and two of my daughters preferred the Zebra Sarasa Dry .7 mm gel pen because it had the thick line of the .7 mm G-2 pen but with fast drying ink so that it did not smear. I’m not as big of a fan of the Sarasa Dry because the clip kind of sticks out and gets in my way. I may also just be a Pilot G-2 loyalist. Also, I have moved toward the thinner lines so the fast dry ink is not as important to me. We have a few of these Sarasa Dry pens in our pen cup in our kitchen.

My youngest daughter liked the Uni Jetstream Sport .7 mm ballpoint pen. This pen draws a finer line than you would think a .7 mm pen would because it is a ballpoint and not a gel pen. It also does not smear and is waterproof. We got her a bunch of these to take off to college as an incoming freshman this year.

We did not have this pen for our pen party but I wanted to mention the space pen that my family got me for Father’s Day. I carry it around in my pocket with my iPhone 8. I do not like the way the ink looks nearly as well as that of my gel pens, but the space pen is supposed to write at any angle and even on wet paper and it is mostly waterproof. Plus, it comes in a just under 4-inch-long bullet shape that fits comfortably in my pocket and cannot leak. The space pen gives me a way to write on just about anything when I am away from my home or office.

I am not a pen expert, but I thought I would pass along my own experiences as well as the links to the much more knowledgeable bloggers and vendors with their top pens lists. Someone out there might enjoy the top pens lists and trying out individual pens as much as we did. It was fun and even practical.

Bullet Journal

(Image: Bullet journal supplies)

At some point in this analog detour I started a Bullet Journal or BuJo for short. The main web site describes a BuJo as “The Analog Method for the Digital Age”. It sounds a little pretentious to call it “the” analog method instead of “an” analog method as if the Bullet Journal cures all ills. But I took the term analog from there to name this post. This post focuses on the analog tools and not the underlying philosophy or psychology behind the Bullet Journal. If you want to go deeper into that I recommend starting on the web site and then reading the book. I do not endorse everything in the book and web site or know if this method of journal writing really has all the claimed benefits. But I do endorse the tools that I have used such as the notebook which is very cool.

The Little Coffee Fox blog has a nice list of Bullet Journal supplies. I bought the black Leuchtturm1917 A5 Dotted Hardcover Notebook for my journal in May. A lot of the BuJo enthusiasts recommend this notebook. Even if you never get into bullet journals you might want to try one of these notebooks. The pages have dots instead of lines which is kind of like graph paper but less intrusive. The paper is nice quality and the notebook is hardbound and sturdy. The pages are numbered. I am writing this on August 25th, so I have been using my notebook for over 3 months. I like it a lot. Even if all the BuJo philosophy/psychology/method does not appeal to you the notebook itself is worth checking out.

I have tried several of the other BuJo supplies but I mainly use my Signo DX UM-151 .38 mm gel pen with my Leuchtturm A5 notebook along with a ruler. I got a foot long metal ruler with cork backing. I probably could have used any old straight edge just as well. I use it to draw lines and make boxes. I have not gotten into drawing and lettering as some BuJo enthusiasts do but I have purchased a couple of pens and a stencil to try. But I cannot really endorse something I do not use.

The Bullet Journal is all about using paper and pens instead of a computer which is really what this blog post is all about. What tools have I checked out to use offline? Can they work together with my computer tools to make me more productive and to help me have more fun?

Fountain Pens

(Image: My first two fountain pens – Preppies)

I added a $4 fountain pen to my first JetPens order on a whim. I had to get the order up to $25 to get free shipping and $4 was reasonable for a fountain pen. If you look at the top pen lists above you will see that beginner fountain pens tend to run around $15 and yet my $4 Platinum Preppy was still considered a good pen. I got a second Preppy with my second JetPens order. The first was a medium nib and the second a fine nib. The medium has a .5 mm tip and the fine .3 mm. I had a lot of fun playing with these two pens. I ended up getting two matching converters so that I could use them with bottled ink and then I bought a nice bottle of black ink. Before I bought the full bottle of ink I got an ink sample and a pair of ink syringes so I could test out the ink.

While I experimented with my two Preppies I got a lot of helpful advice from the Reddit fountain pens group. Also vendors like JetPens and Goulet Pens have helpful videos and pages such as how to fill a fountain pen, how to clean a fountain pen, and how to use an ink sample. I think it makes good sense to start with a less expensive fountain pen and learn the ropes. The stereotypical experience of a new fountain pen user is that they do not learn how to take care of the pen, it stops working, and ends up in the back of a drawer unused. For example, I had trouble with the ink flow in my Preppies, so it helped to get advice on cleaning them and getting the ink flowing better.

After playing with my Preppies for a while I decided to get a nicer pen. If you read the top pen lists they break fountain pens into price ranges like “under $50”, “$50 to $100”, and “over $100”. I tried to be good and get a pen in the middle range, but I had my eye on several gold nib pens in the over $100 range. Japanese fountain pens mess up these neat price ranges because some pens that cost over $150 in the US can be purchased for less than $100 if you buy them directly from Japan. So, I told myself that I could get a $170 Platinum 3776 gold nib pen from Japan for under $100 and that is still in the middle range. This led to a lot of stress and frustration. I tried eBay first. A week after eBay charged over $80 to my credit card, I got an email from the seller saying that my pen was not in stock and asking whether I wanted a blue one instead of the black one I ordered. I cancelled the order, but it took several days to get my money back. Then I ordered the same pen from a seller on Amazon and that was a total scam. Criminals broke into some poor unsuspecting inactive Amazon seller’s account and redirected the account to their bank account. Then they put out a bunch of bogus products at bargain prices including the fountain pen that I ordered. It took me a little over two weeks to get my money back.

After three or four weeks of frustration trying to buy an expensive fountain pen at a discount directly from Japan, I decided that it made more sense to buy from a reputable dealer in the US. I bought my 3776 at Pen Chalet and the buyer experience could not have been more different from my eBay and Amazon experiences. I got the black pen with gold trim and a fine nib. Lots of people on the fountain pens group on Reddit swear by buying fountain pens from Japan and they have more experience than I do. I suggest that wherever you buy your expensive fountain pen, you contact the seller first and ask them if they have the pen in stock. If they do not respond to your message, then run away very fast. Also, you probably should not try to get the best deal. Pay $20 more to buy the pen from a well-known dealer in Japan that sells a lot of fountain pens instead of going for the lowest price. Or just forget shopping for bargains from Japan and go with a well-regarded US vendor like Pen Chalet. I did contact Pen Chalet about something else before buying my pen and they responded quickly. A quick response to a question is a good sign for fountain pen sellers.

My Platinum 3776 pen writes like a dream. I have the matching gold colored converter and I use Aurora Black ink. It writes a nice thin black line, kind of like the ones my .5 mm G-2 and .38 mm Signo DX gel pens write. The big question is why spend $179 (including sales tax) for a fountain pen when a $3 gel pen makes pretty much the same line? I am not sure. Some people argue that fountain pens are better for the environment because you can fill them with bottled ink but with other pens you use them once and throw them away filling up landfills with plastic. Someone on the internet said, “gel pens are for work and school and fountain pens are spiritual”. I am not sure what they meant by spiritual but using a high-quality fountain pen is a nice experience. I keep mine at home, so I do not lose it somewhere like all my other pens. It is kind of nice to sit down at my kitchen table and write with my fountain pen. Maybe someone will read this post and find enjoyment in fountain pens themselves.

Rhodia Dot Pads

(Image: Blog Post Outline on Rhodia Paper with Platinum 3776 Fountain Pen)

Once I got into fountain pens, I needed some nice paper to write on. Many paper brands advertise as fountain pen friendly but I focused on Rhodia Dot Pads. These have dots like my Leuchtturm bullet journal notebook, but the pages are perforated so they can be removed. I started with the 6 x 8 1/4 inch pad because it was the best deal. I ended up writing on both sides of all 80 sheets and trying out different kinds of pens and pencils on it. We used these sheets in our family pen party. When I finished off this pad I bought the more expensive 8 1/4 by 11 3/4 inch pad and I really like it. I three-hole punch the pages after I rip them off and save them as fountain pen writing samples. I get a lot of enjoyment writing with my gold nib Platinum 3776 fountain pen on my full size Rhodia dot pad.

Before I started this analog detour, I wrote in a composition book with a gel pen. Today I write on my Rhodia pad with a fountain pen. One thing about the Rhodia paper is that it is smoother and less absorbent than cheaper paper. As a result, pens draw thinner lines on Rhodia paper. This probably would be more important with really wide fountain pen nibs, but it is nice that my fine nib pen leaves a nice sharp thin black line. The Rhodia paper is more expensive. At this instant you can get a full size Rhodia pad for $14.99. It has 80 sheets so that is about 19 cents per sheet. A 5 pack of Mead composition books will run you $16.75 for 500 sheets which is less than 4 cents per sheet. Is the Rhodia pad worth more than five times as much? Why not stick with my Pilot G-2 .5 mm gel pen and write on Mead wide ruled composition books instead of using my Platinum 3776 fountain pen on a Rhodia Dot Pad? I think I could be happy with either. There is a small advantage to the more expensive pair: the fountain pen does not show through very much on the Rhodia paper, while the gel pen shows through on the Mead composition book in my testing. At the end of the day, I just enjoy the Rhodia pad like I enjoy the nice fountain pen. It goes beyond simple practicality even though there are some practical benefits of the more expensive Rhodia pads.

Pencils

(Image: My Favorite Pencil)

The last type of analog tool that I checked out was pencils. I have not been using pencils at all in my work or in my computer science study at home. But I remember back in college writing my Pascal code in my first CS class all on paper. The TA that graded my programs said that I had the third least amount of CPU usage of anyone in the class and that the other two with less CPU usage than me dropped the class. I had been programming in BASIC and FORTRAN before coming to college so learning Pascal was not that hard. But I liked to write code out with pencil and edit using an eraser, so I did not spend a lot of time on the computer screen. These days I mainly do things on the screen. I think I need to get back to using pencils and erasers for writing and editing code and pseudo-code both in my online class and for work. I guess that goes along with the analog way of thinking like the bullet journal philosophy of using a notebook and pen instead of a program or app for your planner and journal.

My favorite pencil that I tried was the Tombow Mono 100 HB pencil that I bought in my first JetPens order. It is a pretty thing. It is basically a drawing pencil. It writes a nice dark line and is very smooth. When I was trying out various pencils, I found a fantastic pencil store called CW Pencil Enterprise. You can buy individual pencils of all kinds including types from countries all over the world. I only bought two or three pencils, but they were great to order from. They included a nice handwritten note written with a pencil. Vendors like CW Pencil motivated me to write this blog. JetPens, Goulet Pens, Pen Chalet, and CW Pencil were all very nice stores to buy from. I am sure that they are not perfect, but I had a great experience with them all and I wanted to share that with other people.

In addition to pencils I also looked at erasers. The Sakura Foam Eraser that I got from JetPens is a step up from the traditional pink hand-held eraser. It erases all the pencil marks and leaves less residue behind. A couple of my pencils, like the Tombow Mono 100, did not have erasers on the end, so I got a pack of Pentel Hi-Polymer Eraser Caps. These convert a drawing pencil into a more conventional writing pencil with eraser. When I use pencils for programming I alternate between the erasers on the end of the pencil and the stand-alone foam eraser.

As much fun as I had looking at pencils, I really did not find much difference between them when I used them for hand coding. The much less expensive Dixon Ticonderoga pencils that my children and wife swear by worked really well for me. I can barely tell the difference between them and the Tombow Mono 100. The Tombow is a little darker and smoother but it really does not matter much for what I need. So, I splurged on the expensive fountain pen but at this point I’m pretty much sold on the affordable and quite nice Dixon Ticonderoga standard in pencils.

Conclusion

(Image: Rest of my analog stuff)

I went back and forth on whether I should write this post and what to put in it. There is a lot more that I could talk about that goes beyond just the tools themselves and I thought about writing multiple posts. But, really, this is a DBA blog and not a pen, pencil, paper, and notebook blog so one post is probably enough. I thought about making this a lot shorter and just having links to the various web sites and products without explanation – bullet points with URLs behind them. I settled on a pretty long single post that wove my personal experiences from the past 3 months or so in with links to the sites and products.

My exploration of these “analog” tools is like a breadth-first search of a very large search tree of products and information. For example, there are many kinds of fountain pens and each pen has multiple nib sizes. How many pens would you have to buy to really try them all? If you look at Platinum’s 3776 site there are many different colors and designs of this one pen, plus multiple nib sizes for each. Then there are the other manufacturers, each with multiple pens. It is a huge search tree. I got just a little way into this massive search and pulled the plug. This post documents the results of how far I got. I thought about waiting a year before writing a post about this to see if I am still using these tools and what benefits I got from them. But, by then I would have forgotten much of what I learned in my initial search. Maybe a year from now I can follow up with a post about whether this detour has had a lasting positive impact on my work and personal life.

Thanks to everyone who checks out this post. If you have questions or comments, it would be great if you left them below. I hope that something in here will be helpful to others. I had a lot of fun and learned a few useful things.

Bobby

Categories: DBA Blogs

Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 10 Regions Latest Sao Paulo @ Brazil

Online Apps DBA - Mon, 2019-08-26 08:58

New Region Added: Sao Paulo @ Brazil. In 2019, up to mid-August, Oracle added 6 new Regions in Gen 2 Cloud (that's OCI) and a lot more are in the pipeline. This means you now have in total 10 regions, 4 with 3 availability domains while 6 with a single availability domain. If you want to get full […]

The post Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 10 Regions Latest Sao Paulo @ Brazil appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Troubleshooting

Jonathan Lewis - Mon, 2019-08-26 06:19

A recent thread on the Oracle Developer Community starts with the statement that a query is taking a very long time (with the question “how do I make it go faster?” implied rather than asked). It’s 12.1.0.2 (not that that’s particularly relevant to this blog note), and we have been given a number that quantifies “very long time” (again not particularly relevant to this blog note – but worth mentioning because your “slow” might be my “wow! that was fast” and far too many people use qualitative adjectives when the important detail is quantitative). The query had already been running for 15 hours – and here it is:


SELECT 
        OWNER, TABLE_NAME 
FROM
        DBA_LOGSTDBY_NOT_UNIQUE 
WHERE
        (OWNER, TABLE_NAME) NOT IN (
                SELECT 
                        DISTINCT OWNER, TABLE_NAME 
                        FROM     DBA_LOGSTDBY_UNSUPPORTED
        ) 
AND     BAD_COLUMN = 'Y'

There are many obvious suggestions anyone could make for things to do to investigate the problem – start with the execution plan, check whether the object statistics are reasonably representative, run a trace with wait state tracing enabled to see where the time goes; but sometimes there are a couple of very simple observations you can make that point you to simple solutions.

Looking at this query we can recognise that it’s (almost certainly) about a couple of Oracle data dictionary views (which means it’s probably very messy under the covers with a horrendous execution plan) and, as I’ve commented from time to time in the past, Oracle Corp. developers create views for their own purposes so you should take great care when you re-purpose them. This query also has the very convenient feature that it looks like two simpler queries stitched together – so a very simple step in trouble-shooting, before going into any fine detail, is to unstitch the query and run the two parts separately to see how much data they return and how long they take to complete:


SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y'

SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED

It’s quite possible that the worst case scenario for the total run time of the original query could be reduced to the sum of the run times of these two queries. One strategy to achieve this would be a rewrite of the form:

select  * 
from    (
        SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y'
        minus
        SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED
)

Unfortunately the immediately obvious alternative may be illegal thanks to things like duplicates (which disappear in MINUS operations) or NULLs (which can make ALL the data “disappear” in some cases). In this case the original query might be capable of returning duplicates of (owner, table_name) from dba_logstdby_not_unique which would collapse to a single occurrence each in my rewrite – so my version of the query is not logically equivalent (unless the definition of the view enforces uniqueness); on the other hand, tracking back through the original thread to the MoS article where this query comes from, we can see that even if the query could return duplicates we don’t actually need to see them.

And this is the point of the blog note – it’s a general principle (that happens to be a very obvious strategy in this case): if a query takes too long, how does it compare with a simplified version of the query that might be a couple of steps short of the final target? If it’s easy to spot the options for simplification, and if the simplified version operates efficiently, then isolate it (using a no_merge hint if necessary), and work forwards from there. Just be careful that your rewrite remains logically equivalent to the original (if it really needs to).
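To make that isolation step concrete, here is a sketch (my illustration, not a tested rewrite from the original thread) that keeps the original NOT IN shape but protects the efficient simplified block from being merged back into the surrounding query:

select  owner, table_name
from    (
        select  /*+ no_merge */
                owner, table_name
        from    dba_logstdby_not_unique
        where   bad_column = 'Y'
        ) v
where   (owner, table_name) not in (
                select owner, table_name from dba_logstdby_unsupported
        )

Placing the no_merge hint inside the inline view tells the optimizer to optimize that query block on its own rather than merging it into the parent query – which is exactly the “isolate the efficient piece and work forwards” strategy described above.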

In the case of this query, the two parts took 5 seconds and 9 seconds to complete, returning 209 rows and 815 rows respectively. Combining the two queries with a minus really should get the required result in no more than 14 seconds.

Footnote

The “distinct” in the second query is technically redundant as the minus operation applies a sort unique operation to both of the intermediate result sets before comparing them. Similarly the “distinct” was also redundant when the second query was used for the “in subquery” construction – again there would be an implied uniqueness operation if the optimizer decided to do a simple unnest of the subquery.
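If you want to convince yourself of the de-duplication, a quick throwaway example (mine, not from the article):

select 1 from dual union all
select 1 from dual
minus
select 2 from dual

This returns a single row rather than two, because the minus operation reduces each branch to distinct rows before the comparison.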

DevOps for Oracle DBA

Pakistan's First Oracle Blog - Sun, 2019-08-25 00:21
DevOps is a natural evolution for Oracle database administrators and sysadmins of any kind. The key to remaining relevant in the industry is to embrace DevOps, now and in the near future.

The good news is that if you are an Oracle DBA, you already have a solid foundation. You have worked with an enterprise, world-class database system and are aware of high availability, disaster recovery, performance optimization, and troubleshooting. Having said that, there is still lots to learn and unlearn to become a DevOps Engineer.


You would need to look outside of Oracle, the Linux shell, and the core-competency mantra. You would need to learn a proper programming language such as Python. You would need to learn a software engineering framework like the Agile methodology, and you would need to learn tools such as Git. Above all, you would need to unlearn the idea that you only manage the database. As a DevOps Engineer in today's Cloud era, you would be responsible for end-to-end delivery.


Without Cloud skills, it's impossible to transition from an Oracle DBA to a DevOps role. Regardless of the cloud provider, you must know networking, compute, storage, and infrastructure as code. You already know the database side of things, but now learn a decent amount about other databases as you would be expected to migrate and manage them in the cloud.


So any public cloud like AWS, Azure, or GCP, plus a programming language like Python, Go, or NodeJS, plus agile concepts, IaC such as Terraform or CloudFormation, and a plethora of tools like code repositories and pipelining would be required to become an acceptable DevOps Engineer.


Becoming obsolete by merely staying an Oracle DBA is not an option. So buckle up and start your DevOps journey today.
Categories: DBA Blogs

Alfresco – Share Clustering fail with ‘Ignored XML validation warning’

Yann Neuhaus - Sat, 2019-08-24 10:00

In a recent project on Alfresco, I had to setup a Clustering environment. It all went smoothly but I did face one single issue with the setup of the Clustering on the Alfresco Share layer. That’s something I never faced before and you will understand why below.

Initially, to setup the Alfresco Share Clustering, I used the sample file packaged in the distribution zip (E.g.: alfresco-content-services-distribution-6.1.0.5.zip):

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                http://www.hazelcast.com/schema/spring
                https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">

   <!--
        Hazelcast distributed messaging configuration - Share web-tier cluster config
        - see http://www.hazelcast.com/docs.jsp
        - and specifically http://docs.hazelcast.org/docs/2.4/manual/html-single/#SpringIntegration
   -->
   <!-- Configure cluster to use either Multicast or direct TCP-IP messaging - multicast is default -->
   <!-- Optionally specify network interfaces - server machines likely to have more than one interface -->
   <!-- The messaging topic - the "name" is also used by the persister config below -->
   <!--
   <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="slingshot-topic"/>
   <hz:hazelcast id="webframework.cluster.slingshot">
      <hz:config>
         <hz:group name="slingshot" password="alfresco"/>
         <hz:network port="5801" port-auto-increment="true">
            <hz:join>
               <hz:multicast enabled="true"
                     multicast-group="224.2.2.5"
                     multicast-port="54327"/>
               <hz:tcp-ip enabled="false">
                  <hz:members></hz:members>
               </hz:tcp-ip>
            </hz:join>
            <hz:interfaces enabled="false">
               <hz:interface>192.168.1.*</hz:interface>
            </hz:interfaces>
         </hz:network>
      </hz:config>
   </hz:hazelcast>

   <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
      <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
      <property name="hazelcastTopicName"><value>slingshot-topic</value></property>
   </bean>
   -->

</beans>

I obviously uncommented the whole section and configured it properly for the Share Clustering. The above content is only the default/sample content, nothing more.
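For illustration only – the hostnames below are placeholders taken from the log output further down, and the rest mirrors the sample – the uncommented section configured for direct TCP-IP messaging (multicast disabled) might look like this:

   <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="slingshot-topic"/>
   <hz:hazelcast id="webframework.cluster.slingshot">
      <hz:config>
         <hz:group name="slingshot" password="alfresco"/>
         <hz:network port="5801" port-auto-increment="true">
            <hz:join>
               <!-- Multicast switched off, cluster members listed explicitly instead -->
               <hz:multicast enabled="false" multicast-group="224.2.2.5" multicast-port="54327"/>
               <hz:tcp-ip enabled="true">
                  <hz:members>share_n1.domain,share_n2.domain</hz:members>
               </hz:tcp-ip>
            </hz:join>
            <hz:interfaces enabled="false">
               <hz:interface>192.168.1.*</hz:interface>
            </hz:interfaces>
         </hz:network>
      </hz:config>
   </hz:hazelcast>

   <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
      <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
      <property name="hazelcastTopicName"><value>slingshot-topic</value></property>
   </bean>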

Once configured, I restarted Alfresco but it failed with the following messages:

24-Aug-2019 14:35:12.974 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Aug-2019 14:35:12.974 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
24-Aug-2019 14:35:12.988 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor [/opt/tomcat/conf/Catalina/localhost/share.xml]
Aug 24, 2019 2:35:15 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Aug 24, 2019 2:35:15 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
Aug 24, 2019 2:35:15 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
2019-08-23 14:35:16,052  WARN  [factory.xml.XmlBeanDefinitionReader] [localhost-startStop-1] Ignored XML validation warning
 org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; schema_reference.4: Failed to read schema document 'https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.warning(ErrorHandlerWrapper.java:100)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:392)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:306)
	at java.xml/com.sun.org.apache.xerces.internal.impl.xs.traversers.XSDHandler.reportSchemaErr(XSDHandler.java:4218)
  ... 69 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
	... 89 more
...
2019-08-23 14:35:16,067  ERROR [web.context.ContextLoader] [localhost-startStop-1] Context initialization failed
 org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [surf-config.xml]
Offending resource: class path resource [web-application-config.xml]; nested exception is org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from URL location [classpath*:alfresco/web-extension/*-context.xml]
Offending resource: class path resource [surf-config.xml]; nested exception is org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
  ... 33 more
Caused by: org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from URL location [classpath*:alfresco/web-extension/*-context.xml]
Offending resource: class path resource [surf-config.xml]; nested exception is org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
	... 42 more
Caused by: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:397)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:335)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303)
	... 44 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:135)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:396)
	... 64 more
...
24-Aug-2019 14:35:16.196 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
24-Aug-2019 14:35:16.198 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [/share] startup failed due to previous errors
Aug 24, 2019 2:35:16 PM org.apache.catalina.core.ApplicationContext log
...

As you can see above, the message is pretty clear: there is a problem within the file “/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml” which is causing Share to fail to start properly. The first warning message points you directly to the issue: “Failed to read schema document ‘https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd’”.

After checking the content of the sample file and comparing it with a working one, I found out what was wrong. To solve this specific issue, you can simply replace “https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd” with “http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd”. Please note the two differences in the URL:

  • Switch from “https” to “http”
  • Switch from “hazelcast.com” to “www.hazelcast.com”

The issue was actually caused by the fact that this installation was completely offline, with no access to the internet. Because of that, Spring wasn't able to fetch the XSD file to validate the definitions in the context file. Spring normally resolves well-known schema URLs to local copies bundled in the jars (presumably via the META-INF/spring.schemas mappings, which only list the http://www.hazelcast.com form of this URL). The solution is therefore to switch the URL to http with www.hazelcast.com so that the Spring internal resolution can use the local file for the validation instead of looking for it online.
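If you have this line in several context files, a quick way to apply the fix is a sed one-liner (a minimal sketch, shown here against the file path from the stack trace above; adapt the path to your own installation):

sed -i 's|https://hazelcast.com/schema/spring|http://www.hazelcast.com/schema/spring|g' /opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml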

As mentioned previously, I never faced this issue before for two main reasons:

  • I usually don’t use the sample files provided by Alfresco, I always prefer to build my own
  • I mainly install Alfresco on servers which have internet access (outgoing communications allowed)

Once the URL is corrected, Alfresco Share is able to start and the Clustering is configured properly:

24-Aug-2019 14:37:22.558 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Aug-2019 14:37:22.558 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
24-Aug-2019 14:37:22.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor [/opt/tomcat/conf/Catalina/localhost/share.xml]
Aug 24, 2019 2:37:24 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Aug 24, 2019 2:37:25 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
Aug 24, 2019 2:37:25 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n1.domain' to address(es): [10.10.10.10]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n2.domain' to address(es): [127.0.0.1, 10.10.10.11]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [share_n1.domain/10.10.10.10, share_n2.domain/10.10.10.11, share_n2.domain/127.0.0.1]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Picked Address[share_n2.domain]:5801, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5801], bind any local is true
Aug 24, 2019 2:37:28 PM com.hazelcast.system
INFO: [share_n2.domain]:5801 [slingshot] Hazelcast Community Edition 2.4 (20121017) starting at Address[share_n2.domain]:5801
Aug 24, 2019 2:37:28 PM com.hazelcast.system
INFO: [share_n2.domain]:5801 [slingshot] Copyright (C) 2008-2012 Hazelcast.com
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n2.domain]:5801 [slingshot] Address[share_n2.domain]:5801 is STARTING
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.TcpIpJoiner
INFO: [share_n2.domain]:5801 [slingshot] Connecting to possible member: Address[share_n1.domain]:5801
Aug 24, 2019 2:37:28 PM com.hazelcast.nio.ConnectionManager
INFO: [share_n2.domain]:5801 [slingshot] 54991 accepted socket connection from share_n1.domain/10.10.10.10:5801
Aug 24, 2019 2:37:29 PM com.hazelcast.impl.Node
INFO: [share_n2.domain]:5801 [slingshot] ** setting master address to Address[share_n1.domain]:5801
Aug 24, 2019 2:37:35 PM com.hazelcast.cluster.ClusterManager
INFO: [share_n2.domain]:5801 [slingshot]

Members [2] {
	Member [share_n1.domain]:5801
	Member [share_n2.domain]:5801 this
}

Aug 24, 2019 2:37:37 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n2.domain]:5801 [slingshot] Address[share_n2.domain]:5801 is STARTED
2019-08-23 14:37:37,664  INFO  [web.site.ClusterTopicService] [localhost-startStop-1] Init complete for Hazelcast cluster - listening on topic: share_hz_test
...

The post Alfresco – Share Clustering fail with ‘Ignored XML validation warning’ appeared first on Blog dbi services.

Rittman Mead at Oracle OpenWorld 2019

Rittman Mead Consulting - Fri, 2019-08-23 07:52

Oracle OpenWorld is coming soon! 16th-20th September in Moscone Center, San Francisco. It's Oracle's biggest conference and I'll represent Rittman Mead there with the talk "Become a Data Scientist", exploring how Oracle Analytics Cloud can speed any analyst's path to data science. If you are an analyst looking to take your first steps in data science or a manager trying to understand how to optimize your business analytics workforce, look no further: this presentation is your kickstarter!

For an introduction to the topic, have a look at episodes I, II and III of my blog post series.

If you'll be at OOW2019 and you see me around, don't hesitate to stop me! I’d be pleased to speak with you about OAC, Analytics, ML, and more important topics like food or wine as well!

Categories: BI & Warehousing
