Feed aggregator

Announcing Fluid 3!

Jim Marion - Mon, 2019-08-05 12:43

At jsmpros, we teach Fluid training courses several times a month. PeopleTools Fluid is by far our most popular course. Through our Fluid 1 and 2 courses, we teach an incredible amount of material including:

  • Fluid navigation,
  • Fluid page development strategies,
  • Oracle-delivered style classes and layout,
  • External CSS libraries,
  • Fluid grid layouts,
  • Fluid Group Boxes,
  • Dynamic Tiles,
  • Responsive and adaptive mobile design concepts, etc.

The feedback from our Fluid 1 and 2 series is overwhelmingly positive. What we are announcing today is our next level Fluid course: Fluid 3. Through this course you will learn how to:

  • Use Master/Detail to build business process based solutions,
  • Build effective secondary headers similar to PeopleSoft's self-service headers (including Related Actions),
  • Use Scroll Areas and Fluid alternatives,
  • Extend Fluid with JavaScript libraries such as d3 and Oracle JET,
  • Leverage the DataGrid to create compelling solutions,
  • Add Fluid Related Content,
  • Convert Classic components to Fluid,
  • Extend Fluid Pages with 8.57+ Drop Zones,
  • Construct robust, business-centric dynamic tiles and Fluid navigation, and
  • Learn additional PeopleSoft CSS classes not covered in the Fluid 1 and 2 courses.

To register for our upcoming August Fluid 3 session or any of our live virtual training courses, please visit our Live Virtual Training online catalog.

Has it been a while since your last Fluid training course? Are your Fluid skills a little rusty? Use our Fluid 3 course as a refresher to get you back into shape.

Note: If you have taken Fluid from other trainers, feel free to start with our Fluid 2 course. Even though we do not monitor prerequisites, we do encourage attendees with Fluid experience to attend our Fluid 2 training before continuing to Fluid 3.

How To Get The Usage Reports In Oracle Cloud Infrastructure

Online Apps DBA - Mon, 2019-08-05 11:02

How To Get The Usage Reports In Oracle Cloud Infrastructure If you are working as a Cloud Administrator or Architect, then tracking the usage & billing of cloud resources is part of your day-to-day job. To know how to get the usage reports in Oracle Cloud Infrastructure using the Console, check the blog […]

The post How To Get The Usage Reports In Oracle Cloud Infrastructure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Sparse OVM virtual disks on appliances

Yann Neuhaus - Mon, 2019-08-05 00:23

For some reason, you may need to sparse OVM virtual disks on an Oracle appliance. Even though that feature is present through the OVM Manager, most Oracle appliances don’t have any OVM Manager deployed on them. Therefore, if you un-sparse your virtual disk by mistake, you are on your own.

This is a note on how to sparse virtual disks that have been un-sparsed.

Stop I/Os on the virtual disk

First, ensure the VM using the disk is stopped:
xm shutdown {VM_NAME}

For instance:
xm shutdown exac01db01.domain.local
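
To be sure the domain is really gone before touching its disks, you can check the list of running domains (a quick sanity check, reusing the same example name):

xm list | grep exac01db01.domain.local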

Sparse disk


dd if={PATH_TO_DISK_TO_BE_SPARSED} of={PATH_TO_NEW_SPARSED_DISK} conv=sparse

For instance:

dd if=/EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img \
of=/staging/vdisk_root_01.img \
conv=sparse

Move disk to former location

After the sparsing operation has finished, copy the disks back to their former locations:

# Retrieve the disks path:
cat /EXAVMIMAGES/GuestImages/{VM_NAME}/vm.cfg | grep disk
# Copy each disk back to its location:
mv /staging/{DISK_NAME}.img /EXAVMIMAGES/GuestImages/{VM_NAME}/{DISK_NAME}.img

For instance:

mv /staging/vdisk_root_01.img /EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img
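
Before restarting the VM, you can verify that the copied disk is really sparse by comparing its apparent size with the space actually allocated on disk (same example path as above); the du value should be noticeably smaller than the size reported by ls:

ls -lh /EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img
du -sh /EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img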

Start back the VM

Then you can start the VM again, which will use the new disk:
xm create /EXAVMIMAGES/GuestImages/{VM_NAME}/vm.cfg

I hope this helps and please contact us or comment below should you need more details.

The article Sparse OVM virtual disks on appliances appeared first on Blog dbi services.

Oracle VM Server: Why the server uuid is important and why changes to this uuid are critical

Dietrich Schroff - Sun, 2019-08-04 15:02
After working a while with Oracle VM Server, it turns out that a very important parameter is the UUID of an Oracle VM Server.

This UUID is used by the ovs-agent (take a look at the Oracle documentation). Here are a few excerpts from that chapter:
The Oracle VM Agent is a daemon that runs within dom0 on each Oracle VM Server instance. Its primary role is to facilitate communication between Oracle VM Server and Oracle VM Manager.
[...]
Oracle VM Agent is responsible for carrying out all of the configuration changes required on an Oracle VM Server instance, in accordance with the messages that are sent to it by Oracle VM Manager.
[...]
If you wish to allow another Oracle VM Manager instance to take ownership of the server, the original Oracle VM Manager instance must release ownership first.
[...]
Oracle VM Agent also maintains its own log files on the Oracle VM Server that can be used for debugging issues on a particular server instance or for auditing purposes. 
The Oracle VM Server is identified by the Oracle VM Manager via its UUID. There is a very nice blog post from Bjorn Naessens:
https://bjornnaessens.wordpress.com/2012/08/10/best-practices-in-ovm-2-fakeuuid-it/
He made his way through the source code and came up with the following important points about this UUID:


From an architectural point of view, this is a really bad approach, because the UUID will change if you change the motherboard SMBIOS or change a network MAC.
If you lose your UUID, the OVS-agent will no longer communicate with your OVM-manager and therefore you cannot start/stop any VM on that host.

You can get the UUID of a server from the OVM Manager GUI:
(--> "Servers and VMs": select the server on the tree under "server pools" --> change the dropdown to "info")

How to fix a UUID change can be found here:
https://hadafq8.wordpress.com/2016/03/22/oracleovmovm-server-uuid-challenges/

Alfresco Clustering – Solr6

Yann Neuhaus - Sat, 2019-08-03 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, I talked about the Clustering setup for the Alfresco Repository, the Alfresco Share and for ActiveMQ. I also set up an Apache HTTPD as a Load Balancer. In this one, I will talk about the last layer that I wanted to present, which is Solr and more particularly Solr6 (Alfresco Search Services) Sharding. I planned on writing a blog related to Solr Sharding Concepts & Methods to explain what it brings concretely but unfortunately, it’s not ready yet. I will try to post it in the next few weeks, if I find the time.

 

I. Solr configuration modes

So, Solr supports/provides three configuration modes:

  • Master-Slave
  • SolrCloud
  • Standalone


Master-Slave: It’s the first specific configuration mode and it’s pretty old. In this one, the Master node is the only one to index the content and all the Slave nodes replicate the Master’s index. This is a first step towards a Clustering solution with Solr, and Alfresco supports it, but this solution has some important drawbacks. For example, and contrary to an ActiveMQ Master-Slave solution, Solr cannot change the Master. Therefore, if you lose your Master, there is no indexing happening anymore and you need to manually change the configuration file on each of the remaining nodes to specify a new Master and point all the remaining Slave nodes to the new Master. This isn’t what I will be talking about in this blog.

SolrCloud: It’s another specific configuration mode which is a little bit more recent, introduced in Solr4 I believe. SolrCloud is a true Clustering solution using a ZooKeeper Server. It adds an additional layer on top of a Standalone Solr which slows it down a little bit, especially on infrastructures with a huge demand on indexing. But at some point, when you start having dozens of Solr nodes, you need a central place to organize and configure them and that’s what SolrCloud is very good at. This solution provides Fault Tolerance as well as High Availability. I’m not sure whether SolrCloud could be used by Alfresco: SolrCloud also has Shards and its behaviour is pretty similar to a Standalone Solr, but it’s not working entirely in the same way. Maybe it’s possible, however I have never seen it so far. It might be the subject of some testing later… In any case, using SolrCloud for Alfresco might not be that useful because it’s really easier to set up a Master-Master Solr mixed with Solr Sharding for pretty much the same benefits. So, I won’t talk about SolrCloud here either.

You guessed it: in this blog, I will only talk about Standalone Solr nodes and only using Shards. Alfresco has supported Solr Shards only since version 5.1. Before that, it wasn’t possible to use this feature, even though Solr4 already provided it. When using the two default cores (the famous “alfresco” & “archive” cores), with all Alfresco versions supporting Solr (so since Alfresco 4), it is possible to have a Highly Available Solr installation by setting up two Solr Standalone nodes and putting a Load Balancer in front of them, but in this case, there is no communication between the Solr nodes, so it’s only an HA solution, nothing more.

 

In the architectures that I presented in the first blog of this series, if you remember the schema N°5 (you probably don’t but no worry, I didn’t either), I put a link between the two Solr nodes and I mentioned the following related to this architecture:
“N°5: […]. Between the two Solr nodes, I put a Clustering link, that’s in case you are using Solr Sharding. If you are using the default cores (alfresco and archive), then there is no communication between distinct Solr nodes. If you are using Solr Sharding and if you want a HA architecture, then you will have the same Shards on both Solr nodes and in this case, there will be communications between the Solr nodes, it’s not really a Clustering so to speak, that’s how Solr Sharding is working but I still used the same representation.”

 

II. Solr Shards creation

As mentioned earlier in this blog, there are real Cluster solutions with Solr but in the case of Alfresco, because of the features that Alfresco adds like the Shard Registration, there is no real need to set up complex things like that. Having just a simple Master-Master installation of Solr6 with Sharding is already a very good and strong solution to provide Fault Tolerance, High Availability, Automatic Failover, performance improvements, and so on. So how can that be set up?

First, you will need to install at least two Solr Standalone nodes. You can use exactly the same setup for all nodes and it’s also exactly the same setup to use the default cores or Solr Sharding so just do what you are always doing. For the Tracking, you will need to use the Load Balancer URL so it can target all Repository nodes, if there are several.

If you created the default cores, you can remove them easily:

[alfresco@solr_n1 ~]$ curl -v "http://localhost:8983/solr/admin/cores?action=removeCore&storeRef=workspace://SpacesStore&coreName=alfresco"
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=removeCore&storeRef=workspace://SpacesStore&coreName=alfresco HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 150
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">524</int></lst>
</response>
* Connection #0 to host localhost left intact
[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ curl -v "http://localhost:8983/solr/admin/cores?action=removeCore&storeRef=archive://SpacesStore&coreName=archive"
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=removeCore&storeRef=archive://SpacesStore&coreName=archive HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 150
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">485</int></lst>
</response>
* Connection #0 to host localhost left intact
[alfresco@solr_n1 ~]$

 

A status of “0” means that it’s successful.
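
If you want to double-check what is left on the node, the standard Solr CoreAdmin STATUS action lists the remaining cores (a quick check, assuming the default port 8983 used throughout this blog):

[alfresco@solr_n1 ~]$ curl "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json"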

Once that’s done, you can then simply create the Shards. In this example, I will:

  • use the DB_ID_RANGE method
  • use two Solr nodes
  • for workspace://SpacesStore: create 2 Shards out of a maximum of 10 with a range of 20M
  • for archive://SpacesStore: create 1 Shard out of a maximum of 4 with a range of 50M

Since I will use only two Solr nodes and since I want High Availability on each of the Shards, I will need to have them all on both nodes. With a simple loop, it’s pretty easy to create all the Shards:

[alfresco@solr_n1 ~]$ solr_host=localhost
[alfresco@solr_n1 ~]$ solr_node_id=1
[alfresco@solr_n1 ~]$ begin_range=0
[alfresco@solr_n1 ~]$ range=19999999
[alfresco@solr_n1 ~]$ total_shards=10
[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ for shard_id in `seq 0 1`; do
>   end_range=$((${begin_range} + ${range}))
>   curl -v "http://${solr_host}:8983/solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=${total_shards}&numNodes=${total_shards}&nodeInstance=${solr_node_id}&template=rerank&coreName=alfresco&shardIds=${shard_id}&property.shard.method=DB_ID_RANGE&property.shard.range=${begin_range}-${end_range}&property.shard.instance=${shard_id}"
>   echo ""
>   echo "  -->  Range N°${shard_id} created with: ${begin_range}-${end_range}"
>   echo ""
>   sleep 2
>   begin_range=$((${end_range} + 1))
> done

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=10&numNodes=10&nodeInstance=1&template=rerank&coreName=alfresco&shardIds=0&property.shard.method=DB_ID_RANGE&property.shard.range=0-19999999&property.shard.instance=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 182
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">254</int></lst><str name="core">alfresco-0</str>
</response>
* Connection #0 to host localhost left intact

  -->  Range N°0 created with: 0-19999999


*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=10&numNodes=10&nodeInstance=1&template=rerank&coreName=alfresco&shardIds=1&property.shard.method=DB_ID_RANGE&property.shard.range=20000000-39999999&property.shard.instance=1 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 182
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">228</int></lst><str name="core">alfresco-1</str>
</response>
* Connection #0 to host localhost left intact

  -->  Range N°1 created with: 20000000-39999999

[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ begin_range=0
[alfresco@solr_n1 ~]$ range=49999999
[alfresco@solr_n1 ~]$ total_shards=4
[alfresco@solr_n1 ~]$ for shard_id in `seq 0 0`; do
>   end_range=$((${begin_range} + ${range}))
>   curl -v "http://${solr_host}:8983/solr/admin/cores?action=newCore&storeRef=archive://SpacesStore&numShards=${total_shards}&numNodes=${total_shards}&nodeInstance=${solr_node_id}&template=rerank&coreName=archive&shardIds=${shard_id}&property.shard.method=DB_ID_RANGE&property.shard.range=${begin_range}-${end_range}&property.shard.instance=${shard_id}"
>   echo ""
>   echo "  -->  Range N°${shard_id} created with: ${begin_range}-${end_range}"
>   echo ""
>   sleep 2
>   begin_range=$((${end_range} + 1))
> done

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=archive://SpacesStore&numShards=4&numNodes=4&nodeInstance=1&template=rerank&coreName=archive&shardIds=0&property.shard.method=DB_ID_RANGE&property.shard.range=0-49999999&property.shard.instance=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 181
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">231</int></lst><str name="core">archive-0</str>
</response>
* Connection #0 to host localhost left intact

-->  Range N°0 created with: 0-49999999

[alfresco@solr_n1 ~]$

 

On the Solr node2, to create the same Shards (another Instance of each Shard) and therefore provide the expected setup, just re-execute the same commands but replace solr_node_id=1 with solr_node_id=2. That’s all there is to do on the Solr side; just creating the Shards is sufficient. On the Alfresco side, configure the Shard registration to use the Dynamic mode:

[alfresco@alf_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco-global.properties
...
# Solr Sharding
solr.useDynamicShardRegistration=true
search.solrShardRegistry.purgeOnInit=true
search.solrShardRegistry.shardInstanceTimeoutInSeconds=60
search.solrShardRegistry.maxAllowedReplicaTxCountDifference=500
...
[alfresco@alf_n1 ~]$

 

After a quick restart, all the Shard’s Instances will register themselves to Alfresco and you should see that each Shard has its two Shard’s Instances. Thanks to the constant Tracking, Alfresco knows which Shard’s Instances are healthy (up-to-date) and which ones aren’t (either lagging behind or completely silent). When performing searches, Alfresco will make a request to any of the healthy Shard’s Instances. Solr will be aware of the healthy Shard’s Instances as well and it will start the distribution of the search request to all the Shards for the parallel query. This is the communication between the Solr nodes that I mentioned earlier: it’s not really Clustering but rather query distribution between all the healthy Shard’s Instances.

 

 

Other posts of this series on Alfresco HA/Clustering:

The article Alfresco Clustering – Solr6 appeared first on Blog dbi services.

AdminClient and Set Commands

DBASolved - Fri, 2019-08-02 13:32

AdminClient is the “new” command line utility that is used with Oracle GoldenGate Microservices. Initially, AdminClient was released with Oracle GoldenGate 12c (12.3.0.0.1) and has been enhanced in each release thereafter. With this new command line tool, there are a few things you can do that make it a powerful tool for administering Oracle GoldenGate.

Reminder: This is only available in Oracle GoldenGate Microservices Editions.

Features that make this tool so nice:

  • Default command line tool for Microservices
  • Can be installed on a remote linux machine or Windows Workstations/Laptops
  • Can “Set” advanced settings that provide a few nice features

The third bullet is what will be the focus of this post.

The “Set” command within AdminClient provides you with options that allow you to extend the command line for Oracle GoldenGate. These settings are shown in the SHOW output below.

After starting the AdminClient, it is possible to see the current settings of these values by using the SHOW command:

Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> show


Current directory: /home/oracle/software/scripts
COLOR            : OFF
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 2>

 

If you want to change any of these settings, you can simply run the “set <option> <value>” at the command prompt. For example, I want to turn on the color option.

OGG (not connected) 2> set color on


OGG (not connected) 3> show


Current directory: /home/oracle/software/scripts
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 4>

 

Now that we can set these values and change how AdminClient responds, how can these settings be automated (to a degree)? In order to do this, you can write a wrapper around the execution of the AdminClient executable (similar to my post on resolving the OGG-01525 error). Within this wrapper, the setting you want to change has to be prefixed with ADMINCLIENT_. This would look like this:

export ADMINCLIENT_COLOR=<value>

Note: The <value> is case sensitive.

My shell script for AdminClient with the settings I like to have turned on is setup as follows:

#!/bin/bash


export OGG_VAR_HOME=/tmp
export ADMINCLIENT_COLOR=ON
export ADMINCLIENT_DEBUG=OFF


${OGG_HOME}/bin/adminclient

 

Now, when I start AdminClient, I have all the settings I want for my environment. Plus, the ones I do not set will take the default settings.

[oracle@ogg19c scripts]$ sh ./adminclient.sh
Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> show


Current directory: /home/oracle/software/scripts
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 2>

 

Enjoy!!!

Categories: DBA Blogs

VirtualBox – running a Windows 10 Guest on an Ubuntu Host

The Anti-Kyte - Fri, 2019-08-02 10:02

Yes, you read that right. There are lots of guides out there on how to set up and run Ubuntu in VirtualBox on a Windows host.
These days, you even have access to an Ubuntu sub-system in Windows itself.
If, like me, your OS of choice is Ubuntu but you need to test how something behaves in Windows – is it possible to knock up an appropriate environment?
The answer is, of course, yes – otherwise this would be quite a short post.

The following steps will work for VirtualBox on any host – Linux, Mac, even Windows.

What I’m going to cover is :

  • Finding a Windows ISO
  • Configuring the VM in VirtualBox
  • Persuading VirtualBox to use a sensible screen size for your new VM

But first…

A quick word about versions

The Host OS I’m running is Ubuntu 16.04 LTS.
I’m using version 5.0 of VirtualBox.
NOTE – steps to install VirtualBox on a Debian-based host such as Ubuntu can be found here.
The Guest OS I’m installing is, as you’d expect, Windows 10.

Finding a Windows ISO

Depending on which Windows edition you are after, there are a couple of places you can look.
Microsoft provides an ISO for a 180-day evaluation version of Windows Server here.

In this case, I simply want to try Windows 10 so I need to go to this page.

Once here, I need to select an edition…

…and the language…

…before we’re presented with a choice of 32 or 64-bit :

I’ve chosen 64-bit. After the download, I am now the proud owner of :

-rw-rw-r-- 1 mike mike 4.7G Jul 10 17:10 Win10_1903_V1_English_x64.iso

Creating the VirtualBox VM

Fire up VirtualBox and click on the New button to start the Create Virtual Machine wizard :

…Next assign it some memory


I’m going to create a Virtual Hard Disk :

…using the default type…

…and being dynamically allocated…

…of the size recommended by VirtualBox :

I now have a new VM, which I need to point at the Windows ISO I downloaded so that I can install Windows itself :
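
As an aside, the same VM could also be created from a terminal with VBoxManage instead of the wizard. This is only a rough sketch (the VM name, memory and disk size below are example values, not necessarily what the wizard picked):

VBoxManage createvm --name "Win10" --ostype Windows10_64 --register
VBoxManage modifyvm "Win10" --memory 4096 --vram 128
VBoxManage createhd --filename "$HOME/VirtualBox VMs/Win10/Win10.vdi" --size 50000
VBoxManage storagectl "Win10" --name "SATA" --add sata --controller IntelAhci
VBoxManage storageattach "Win10" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "$HOME/VirtualBox VMs/Win10/Win10.vdi"
VBoxManage storageattach "Win10" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "$HOME/Downloads/Win10_1903_V1_English_x64.iso"
VBoxManage startvm "Win10"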

All I have to do now is follow the Windows installation prompts, a process which I’ll not bore you with here.
However, you may be interested to learn that you don’t necessarily require a Product Key for this installation.
Chris Hoffman has produced an excellent guide on the subject.

Installing Guest Additions

Now I’ve configured Windows, I still need to install VirtualBox Guest Additions. Among other things, this will help to control the screen size of the VM so that I don’t need a magnifying glass!

First of all, we need to virtually eject the virtual cd containing the Windows ISO. To do this, we actually go to the VM’s VirtualBox menu and select Devices/Optical Drives/Remove disk from virtual drive :

Now, using the same menu (Devices), we select Insert Guest Additions CD Image :

When Windows prompts you, choose to install :

Accept the defaults when prompted and then reboot the VM.

If, by some chance you are still faced with a small viewport for your Windows VM, you can try the following…

Resizing the VM display

Go to the VirtualBox application itself and with the VM selected, go to the File/Preferences menu.

Click on Display, and set the Maximum Guest Screen Size to Automatic

When you next re-start the VM, the window should now be a more reasonable size.
In fact, with any luck, your desktop should now look something like this :

The best way to run Windows !

Taking Pivotal Build Service (PBS) for a test drive

Pas Apicella - Fri, 2019-08-02 05:13
Pivotal Build Service ALPHA was just released, so in this blog post let's take it for a test drive to see how it works. The Pivotal blog post about this release is below. In short, it assembles and updates containers in Kubernetes.

https://content.pivotal.io/blog/pivotal-build-service-now-alpha-assembles-and-updates-containers-in-kubernetes

Steps:

1. Once you have deployed Pivotal Build Service, the pb CLI can be used to target it with the following command.

Note: Use the --skip-ssl-validation flag if the Pivotal Build Service targets a UAA that has a self-signed CA cert

$ pb api set https://pbs.picorivera.cf-app.com --skip-ssl-validation
Successfully set 'https://pbs.picorivera.cf-app.com' as the Build Service

2. Login using "pb login" as shown below

$ pb login
Target Build Server at: https://pbs.picorivera.cf-app.com

Username: papicella@gmail.com
Password: ******
Login successful

Using the Pivotal Build Service (PBS) requires us to create a TEAM and IMAGE. Both are explained below.

TEAM: A team is an entity on Pivotal Build Service that is used to manage authentication for the images built by Pivotal Build Service and to manage registry and git credentials for the images managed by the team

3. Create a TEAM yaml as per below and then apply that config using the pb cli

example-team.yaml

name: example-team-name
registries:
- registry: index.docker.io
  username: pasapples
  password: *****
repositories:
- domain: github.com
  username: papicella
  password: *****

$ pb team apply -f example-team.yaml
Successfully applied team 'example-team-name'

IMAGE: An image defines the specification that Pivotal Build Service uses to create images for a user.

4. Create an IMAGE yaml as per below and then apply that config using the pb cli. The PBS will automatically kick off a build.

example-image.yaml

team: example-team-name
source:
  git:
    url: https://github.com/papicella/pbs-demo
    revision: master
image:
  tag: pasapples/pbs-demo-image

$ pb image apply -f example-image.yaml
Successfully applied image configuration 'pasapples/pbs-demo-image'

$ pb image builds pasapples/pbs-demo-image
Build    Status      Image    Started Time           Finished Time    Reason
-----    ------      -----    ------------           -------------    ------
    1    BUILDING    --       2019-08-02 09:43:06    --               CONFIG

5. You can view the logs of the build using its Build ID, as shown below

$ pb image logs pasapples/pbs-demo-image -b 1

papicella@papicella:~$ pb image logs pasapples/pbs-demo-image -b 1
[build-step-credential-initializer] {"level":"info","ts":1564739008.561973,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
[build-step-credential-initializer]
[build-step-git-source-0] git-init:main.go:81: Successfully cloned "https://github.com/papicella/pbs-demo" @ "c1aae50feaffcd61c521796cd675e6576e58bc64" in path "/workspace"
[build-step-git-source-0]
[build-step-prepare]
[build-step-detect] Trying group 1 out of 3 with 27 buildpacks...
[build-step-detect] ======== Results ========
[build-step-detect] skip: Cloud Foundry Archive Expanding Buildpack
[build-step-detect] pass: Pivotal OpenJDK Buildpack
[build-step-detect] pass: Pivotal Build System Buildpack
[build-step-detect] pass: Cloud Foundry JVM Application Buildpack
[build-step-detect] pass: Cloud Foundry Spring Boot Buildpack
[build-step-detect] pass: Cloud Foundry Apache Tomcat Buildpack
[build-step-detect] pass: Cloud Foundry DistZip Buildpack
[build-step-detect] skip: Cloud Foundry Procfile Buildpack
[build-step-detect] skip: Pivotal AppDynamics Buildpack
[build-step-detect] skip: Pivotal AspectJ Buildpack
[build-step-detect] skip: Pivotal CA Introscope Buildpack
[build-step-detect] pass: Pivotal Client Certificate Mapper Buildpack

....


6. So after a few minutes or so, we will see that we have built our initial image from the GitHub repo and that an OCI-compliant image, built using Cloud Native Buildpacks, has been created on our DockerHub account

$ pb image builds pasapples/pbs-demo-image
Build    Status     Image       Started Time           Finished Time          Reason
-----    ------     -----       ------------           -------------          ------
    1    SUCCESS    98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG



One of the PBS jobs is to keep this image updated as new successful commits occur on the master branch. Let's show how this works below.

7. Let's make a change to the code in our GitHub repo. Here I do this in IntelliJ IDEA



8. Commit the changes as shown below



9. Let's see if the PBS actually started a new build for us, and indeed we can see that it is doing just that.

$ pb image builds pasapples/pbs-demo-image
Build    Status      Image       Started Time           Finished Time          Reason
-----    ------      -----       ------------           -------------          ------
    1    SUCCESS     98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG
    2    BUILDING    --          2019-08-02 09:57:11    --                     COMMIT

10. We can view the build logs as shown below, and even tail them live using "-f"

papicella@papicella:~$ pb image logs pasapples/pbs-demo-image -b 2 -f
[build-step-credential-initializer] {"level":"info","ts":1564739850.5331886,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
[build-step-credential-initializer]
[build-step-git-source-0] git-init:main.go:81: Successfully cloned "https://github.com/papicella/pbs-demo" @ "0bb81c7523be7ada3ed956569d0241cda6b410d2" in path "/workspace"
[build-step-git-source-0]
[build-step-prepare]
[build-step-detect] Trying group 1 out of 3 with 27 buildpacks...
[build-step-detect] ======== Results ========
[build-step-detect] skip: Cloud Foundry Archive Expanding Buildpack
[build-step-detect] pass: Pivotal OpenJDK Buildpack
[build-step-detect] pass: Pivotal Build System Buildpack
[build-step-detect] pass: Cloud Foundry JVM Application Buildpack
[build-step-detect] pass: Cloud Foundry Spring Boot Buildpack
[build-step-detect] pass: Cloud Foundry Apache Tomcat Buildpack
[build-step-detect] pass: Cloud Foundry DistZip Buildpack
[build-step-detect] skip: Cloud Foundry Procfile Buildpack
[build-step-detect] skip: Pivotal AppDynamics Buildpack
[build-step-detect] skip: Pivotal AspectJ Buildpack
[build-step-detect] skip: Pivotal CA Introscope Buildpack
[build-step-detect] pass: Pivotal Client Certificate Mapper Buildpack
[build-step-detect] skip: Pivotal Elastic APM Buildpack
[build-step-detect] skip: Pivotal JaCoCo Buildpack
[build-step-detect] skip: Pivotal JProfiler Buildpack
[build-step-detect] skip: Pivotal JRebel Buildpack
[build-step-detect] skip: Pivotal New Relic Buildpack
[build-step-detect] skip: Pivotal OverOps Buildpack
[build-step-detect] skip: Pivotal Riverbed AppInternals Buildpack
[build-step-detect] skip: Pivotal SkyWalking Buildpack
[build-step-detect] skip: Pivotal YourKit Buildpack
[build-step-detect] skip: Cloud Foundry Azure Application Insights Buildpack
[build-step-detect] skip: Cloud Foundry Debug Buildpack
[build-step-detect] skip: Cloud Foundry Google Stackdriver Buildpack
[build-step-detect] skip: Cloud Foundry JDBC Buildpack
[build-step-detect] skip: Cloud Foundry JMX Buildpack
[build-step-detect] pass: Cloud Foundry Spring Auto-reconfiguration Buildpack
[build-step-detect]
[build-step-restore] Restoring cached layer 'io.pivotal.openjdk:openjdk-jdk'
[build-step-restore] Restoring cached layer 'io.pivotal.buildsystem:build-system-application'
[build-step-restore] Restoring cached layer 'io.pivotal.buildsystem:build-system-cache'
[build-step-restore] Restoring cached layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-restore] Restoring cached layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-restore]
[build-step-analyze] Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:982391123b47cdbac534aaeed78c5e121d89d2064b53897c23f2248a7658fa50'
[build-step-analyze] Using cached layer 'io.pivotal.openjdk:openjdk-jdk'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:java-security-properties'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:jvmkill'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:link-local-dns'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:memory-calculator'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:openjdk-jre'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:security-provider-configurer'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:class-counter'
[build-step-analyze] Using cached layer 'io.pivotal.buildsystem:build-system-application'
[build-step-analyze] Using cached layer 'io.pivotal.buildsystem:build-system-cache'
[build-step-analyze] Using cached launch layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-analyze] Rewriting metadata for layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-analyze] Using cached launch layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-analyze] Rewriting metadata for layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.clientcertificatemapper:client-certificate-mapper'
[build-step-analyze] Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'
[build-step-analyze]
[build-step-build]
[build-step-build] Pivotal OpenJDK Buildpack 1.0.0-M9
[build-step-build]   OpenJDK JDK 11.0.3: Reusing cached layer
[build-step-build]   OpenJDK JRE 11.0.3: Reusing cached layer
[build-step-build]   Java Security Properties 1.0.0-M9: Reusing cached layer
[build-step-build]   Security Provider Configurer 1.0.0-M9: Reusing cached layer
[build-step-build]   Link-Local DNS 1.0.0-M9: Reusing cached layer
[build-step-build]   JVMKill Agent 1.16.0: Reusing cached layer
[build-step-build]   Class Counter 1.0.0-M9: Reusing cached layer
[build-step-build]   Memory Calculator 4.0.0: Reusing cached layer
[build-step-build]
[build-step-build] Pivotal Build System Buildpack 1.0.0-M9
[build-step-build]     Using wrapper
[build-step-build]     Linking Cache to /home/vcap/.m2
[build-step-build]   Compiled Application (141 files): Contributing to layer
[build-step-build] [INFO] Scanning for projects...
[build-step-build] [INFO]
[build-step-build] [INFO] ------------------------< com.example:pbs-demo >------------------------
[build-step-build] [INFO] Building pbs-demo 0.0.1-SNAPSHOT
[build-step-build] [INFO] --------------------------------[ jar ]---------------------------------
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ pbs-demo ---
[build-step-build] [INFO] Using 'UTF-8' encoding to copy filtered resources.
[build-step-build] [INFO] Copying 1 resource
[build-step-build] [INFO] Copying 0 resource
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ pbs-demo ---
[build-step-build] [INFO] Changes detected - recompiling the module!
[build-step-build] [INFO] Compiling 9 source files to /workspace/target/classes
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ pbs-demo ---
[build-step-build] [INFO] Not copying test resources
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ pbs-demo ---
[build-step-build] [INFO] Not compiling test sources
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ pbs-demo ---
[build-step-build] [INFO] Tests are skipped.
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ pbs-demo ---
[build-step-build] [INFO] Building jar: /workspace/target/pbs-demo-0.0.1-SNAPSHOT.jar
[build-step-build] [INFO]
[build-step-build] [INFO] --- spring-boot-maven-plugin:2.1.6.RELEASE:repackage (repackage) @ pbs-demo ---
[build-step-build] [INFO] Replacing main artifact with repackaged archive
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build] [INFO] BUILD SUCCESS
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build] [INFO] Total time:  7.214 s
[build-step-build] [INFO] Finished at: 2019-08-02T09:57:52Z
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build]   Removing source code
[build-step-build]
[build-step-build] Cloud Foundry JVM Application Buildpack 1.0.0-M9
[build-step-build]   Executable JAR: Reusing cached layer
[build-step-build]   Process types:
[build-step-build]     executable-jar: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]     task:           java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]     web:            java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]
[build-step-build] Cloud Foundry Spring Boot Buildpack 1.0.0-M9
[build-step-build]   Spring Boot 2.1.6.RELEASE: Reusing cached layer
[build-step-build]   Process types:
[build-step-build]     spring-boot: java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]     task:        java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]     web:         java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]
[build-step-build] Pivotal Client Certificate Mapper Buildpack 1.0.0-M9
[build-step-build]   Cloud Foundry Client Certificate Mapper 1.8.0: Reusing cached layer
[build-step-build]
[build-step-build] Cloud Foundry Spring Auto-reconfiguration Buildpack 1.0.0-M9
[build-step-build]   Spring Auto-reconfiguration 2.7.0: Reusing cached layer
[build-step-build] 

...


11. This time the build is faster: because we are using Cloud Native Buildpacks (a CNCF project), only the required layers are rebuilt rather than the whole image itself. You can see this from the time taken by build "2"

$ pb image builds pasapples/pbs-demo-image
Build    Status     Image       Started Time           Finished Time          Reason
-----    ------     -----       ------------           -------------          ------
    1    SUCCESS    98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG
    2    SUCCESS    1e4b63b1    2019-08-02 09:57:11    2019-08-02 09:58:15    COMMIT

Hopefully this demo shows what the PBS is all about and how it simplifies creating and keeping your OCI-compliant images up to date.

More Information:

1. Get started with Pivotal Build Service.
https://github.com/pivotal-cf/docs-build-service/blob/master/using.md

2. Request alpha access to Build Service via this form, or by reaching out to your account team. Once you’ve gained access, you’ll see the bits up on PivNet

3. Cloud Native buildpacks
https://buildpacks.io/


Categories: Fusion Middleware

Alfresco Clustering – Apache HTTPD as Load Balancer

Yann Neuhaus - Fri, 2019-08-02 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, and I talked about the Clustering setup for the Alfresco Repository, the Alfresco Share and for ActiveMQ. In this one, I will talk about the Front-end layer, but in a very particular setup because it will also act as a Load Balancer. For an Alfresco solution, you can choose the front-end that you prefer and it can just act as a front-end to protect your Alfresco back-end components, to add SSL or whatever. There are no real preferences, but you will obviously need to know how to configure it. I posted a blog some years ago for Apache HTTPD as a simple front-end (here), or you can check the Alfresco documentation, which now includes a section for that as well, but there is no official documentation for a Load Balancer setup.

In an Alfresco architecture that includes HA/Clustering you will, at some point, need a Load Balancer. From time to time, you will come across companies that do not already have a Load Balancer available and you might therefore have to provide something to fill this gap. Since you will most probably (should?) already have a front-end to protect Alfresco, why not use it as a Load Balancer as well? In this blog, I chose Apache HTTPD because that’s the front-end I’m usually using and I know it works fine as a LB as well.

In the architectures that I described in the first blog of this series, there was always a front-end installed on each node with Alfresco Share and there was a LB above that. Here, these two boxes are actually together. There are multiple ways to set that up but I didn’t want to talk about that in my first blog because it’s not really related to Alfresco, it’s above that, so it would just have multiplied the possible architectures that I wanted to present and my blog would just have been way too long. There are also no communications between the different front-end nodes because, technically speaking, we aren’t going to set up Apache HTTPD as a Cluster, we only need to provide a High Availability solution.

Alright so let’s say that you don’t have a Load Balancer available and you want to use Apache HTTPD as a front-end+LB for a two-node Cluster. There are several solutions so here are two possible ways to do that from an inbound communication point of view that will still provide redundancy:

  • Setup a Round Robin DNS that points to both Apache HTTPD node1 and node2. The DNS will redirect connections to either of the two Apache HTTPD (Active/Active)
  • Setup a Failover DNS with a pretty low TimeToLive (TTL) which will point to a single Apache HTTPD node and redirect all traffic there. If this one isn’t available, it will failover to the second one (Active/Passive)

 

In both cases above, the Apache HTTPD configuration can be exactly the same, it will work. From an outbound communication point of view, Apache HTTPD will talk directly with all the Share nodes behind it. To avoid disconnections and loss of sessions in case an Apache HTTPD goes down, the solution will need to support session stickiness across all Apache HTTPD nodes. With that, all communications coming from a single browser will always be redirected to the same backend server, which ensures that the sessions stay intact, even if you lose an Apache HTTPD. I mentioned previously that there won’t be any communications between the different front-ends, so this session stickiness must be based on something present inside the session (header or cookie) or inside the URL.

With Apache HTTPD, you can use the Proxy modules to provide both a front-end configuration as well as a Load Balancer but, in this blog, I will use the JK module. The JK module is provided by Apache for communications between Apache HTTPD and Apache Tomcat. It has been designed and optimized for this purpose and it also provides/supports a Load Balancer configuration.

 

I. Apache HTTPD setup for a single back-end node

For this example, I will use the package provided by Ubuntu for a simple installation. You can obviously build it from source to customize it, add your best practices, aso… This has nothing to do with the Clustering setup, it’s a simple front-end configuration for any installation. So let’s install a basic Apache HTTPD:

[alfresco@httpd_n1 ~]$ sudo apt-get install apache2 libapache2-mod-jk
[alfresco@httpd_n1 ~]$ sudo systemctl enable apache2.service
[alfresco@httpd_n1 ~]$ sudo systemctl daemon-reload
[alfresco@httpd_n1 ~]$ sudo a2enmod rewrite
[alfresco@httpd_n1 ~]$ sudo a2enmod ssl

 

Then to configure it for a single back-end Alfresco node (I’m just showing a minimal configuration again, there is much more to do add security & restrictions around Alfresco and mod_jk):

[alfresco@httpd_n1 ~]$ cat /etc/apache2/sites-available/alfresco-ssl.conf
...
<VirtualHost *:80>
    RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName            dns.domain
    ServerAlias           dns.domain dns
    ServerAdmin           email@domain
    SSLEngine             on
    SSLProtocol           -all +TLSv1.2
    SSLCipherSuite        EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:AES2
    SSLHonorCipherOrder   on
    SSLVerifyClient       none
    SSLCertificateFile    /etc/pki/tls/certs/dns.domain.crt
    SSLCertificateKeyFile /etc/pki/tls/private/dns.domain.key

    RewriteRule ^/$ https://%{HTTP_HOST}/share [R,L]

    JkMount /* alfworker
</VirtualHost>
...
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ cat /etc/libapache2-mod-jk/workers.properties
worker.list=alfworker
worker.alfworker.type=ajp13
worker.alfworker.port=8009
worker.alfworker.host=share_n1.domain
worker.alfworker.lbfactor=1
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo a2ensite alfresco-ssl
[alfresco@httpd_n1 ~]$ sudo a2dissite 000-default
[alfresco@httpd_n1 ~]$ sudo rm /etc/apache2/sites-enabled/000-default.conf
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo service apache2 restart
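
As a side note, the syntax of the configuration can be checked at any time with the tooling shipped with the Ubuntu package (a quick check before any restart/reload):

[alfresco@httpd_n1 ~]$ sudo apache2ctl configtest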

 

That should do it for a single back-end Alfresco node. Again, this was just an example; I wouldn’t recommend using the configuration (inside the alfresco-ssl.conf file) as is, there is much more to do for security reasons.

 

II. Adaptation for a Load Balancer configuration

If you want to configure your Apache HTTPD as a Load Balancer, then on top of the standard setup shown above, you just have to modify two things:

  • Modify the JK module configuration to use a Load Balancer
  • Modify the Apache Tomcat configuration to add an identifier for Apache HTTPD to be able to redirect the communication to the correct back-end node (session stickiness). This ID put in the Apache Tomcat configuration will extend the Session’s ID like that: <session_id>.<tomcat_id>

 

So on all the nodes hosting the Apache HTTPD, you should put the exact same configuration:

[alfresco@httpd_n1 ~]$ cat /etc/libapache2-mod-jk/workers.properties
worker.list=alfworker

worker.alfworker.type=lb
worker.alfworker.balance_workers=node1,node2
worker.alfworker.sticky_session=true
worker.alfworker.method=B

worker.node1.type=ajp13
worker.node1.port=8009
worker.node1.host=share_n1.domain
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.port=8009
worker.node2.host=share_n2.domain
worker.node2.lbfactor=1
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo service apache2 reload

 

With the above configuration, we keep the same JK Worker (alfworker) but instead of using the ajp13 type, we use the lb type, which is an encapsulation. The alfworker will use two sub-workers named node1 and node2 (the balance_workers property); these are just generic names. The alfworker will also enable stickiness and use the method B (Busyness), which means that for new sessions, Apache HTTPD will choose the worker with the fewest requests being served, divided by the lbfactor value.

Each sub-worker (node1 and node2) defines its type, which is ajp13 this time, the port and host it should target (where the Share nodes are located) and its lbfactor. As mentioned above, increasing the lbfactor means that more requests are going to be sent to this worker:

  • For the node2 to serve 100% more requests than the node1 (x2), then set worker.node1.lbfactor=1 and worker.node2.lbfactor=2
  • For the node2 to serve 50% more requests than the node1 (x1.5), then set worker.node1.lbfactor=2 and worker.node2.lbfactor=3

 

The second thing to do is to modify the Apache Tomcat configuration to add a specific ID. On the Share node1:

[alfresco@share_n1 ~]$ grep "<Engine" $CATALINA_HOME/conf/server.xml
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="share_n1">
[alfresco@share_n1 ~]$

 

On the Share node2:

[alfresco@share_n2 ~]$ grep "<Engine" $CATALINA_HOME/conf/server.xml
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="share_n2">
[alfresco@share_n2 ~]$

 

The value to be put in the jvmRoute parameter is just a string so it can be anything but it must be unique across all Share nodes so that the Apache HTTPD JK module can find the correct back-end node that it should transfer the requests to.

It’s that simple to configure Apache HTTPD as a Load Balancer in front of Alfresco… To check which back-end server you are currently using, you can use the browser’s developer tools, in particular the network recording, which will display, in the headers/cookies section, the Session ID and therefore the value that you put in the jvmRoute.
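
Alternatively, from a shell, you can simply look at the session cookie returned through the Load Balancer (a hedged example; replace dns.domain with your own alias). The JSESSIONID value should end with the jvmRoute of the Share node that served the request:

[alfresco@httpd_n1 ~]$ curl -k -I https://dns.domain/share/page | grep -i "set-cookie"
# e.g. Set-Cookie: JSESSIONID=A1B2C3D4E5F6.share_n1; Path=/share; Secure; HttpOnly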

 

 

Other posts of this series on Alfresco HA/Clustering:

The article Alfresco Clustering – Apache HTTPD as Load Balancer appeared first on Blog dbi services.

Too Old To Remember

Michael Dinh - Thu, 2019-08-01 12:01

Is it required to run datapatch after creating a database?

Why bother trying to remember versus running datapatch -prereq to find out?

Test case for 12.2.

Database July 2019 Release Update 12.2 applied:

[oracle@racnode-dc2-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.2.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
29770090;ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770090)
29770040;OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770040)
29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

OPatch succeeded.
+ exit
[oracle@racnode-dc2-1 ~]$

Create 12.2 RAC database:

[oracle@racnode-dc2-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname hawkcdb -sid hawkcdb -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc2-1,racnode-dc2-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
21% complete
Creating and starting Oracle instance
35% complete
Creating cluster database views
50% complete
Completing Database Creation
57% complete
Creating Pluggable Databases
78% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/hawkcdb/hawkcdb.log" for further details.
[oracle@racnode-dc2-1 ~]$

Run datapatch -prereq for 12.2

[oracle@racnode-dc2-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.2.0.1.0 Production on Thu Aug  1 17:45:13 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    Nothing to apply
**********************************************************************

SQL Patching tool complete on Thu Aug  1 17:46:39 2019
[oracle@racnode-dc2-1 ~]$

Test case for 12.1.

Database July 2019 Bundle Patch 12.1 applied:

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.2/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
29509318;OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)
29496791;Database Bundle Patch : 12.1.0.2.190716 (29496791)
29423125;ACFS PATCH SET UPDATE 12.1.0.2.190716 (29423125)
26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Create 12.1 RAC database:

[oracle@racnode-dc1-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname cdbhawk -sid cdbhawk -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc1-1,racnode-dc1-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
23% complete
Creating and starting Oracle instance
38% complete
Creating cluster database views
54% complete
Completing Database Creation
77% complete
Creating Pluggable Databases
81% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdbhawk/cdbhawk.log" for further details.
[oracle@racnode-dc1-1 ~]$

Run datapatch -prereq for 12.1

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.1.0.2.0 Production on Thu Aug  1 18:24:53 2019
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    The following patches will be applied:
      29496791 (DATABASE BUNDLE PATCH 12.1.0.2.190716)
**********************************************************************

SQL Patching tool complete on Thu Aug  1 18:26:26 2019
[oracle@racnode-dc1-1 ~]$

So for 12.1, running datapatch after creating the database is required, but not for 12.2.
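
To actually bring the 12.1 database up to date, the follow-up would simply be to run datapatch for real and then check the SQL patch registry (a minimal sketch; run from the database home with the instance open):

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose
# Afterwards, the applied SQL patches can be verified from SQL*Plus:
#   select patch_id, status, action_time from dba_registry_sqlpatch;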

Oracle Cloud for DBAs: 8 Things Every Beginner Should Know

Online Apps DBA - Thu, 2019-08-01 05:16

8 Things Every Beginner Should Know To Start On Oracle Cloud If you are a DBA/Apps DBA planning to learn Database on Oracle Cloud or someone already working on Cloud but want to have a high-level overview of Database options available on Oracle Cloud, then check out the latest post from Oracle ACE & Cloud […]

The post Oracle Cloud for DBAs: 8 Things Every Beginner Should Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Recursive Subquery

Bar Solutions - Thu, 2019-08-01 03:01

At my current assignment we are processing files coming from an online system to be inserted into our database (kind of a data warehouse). This is done using external tables and a scheduled job. The job just checks if there is a file available and will process it. The trouble is that the files might not make it to our database, for various reasons.

I want to be able to check if all the files have been processed and no file has gone missing. After processing a file, its name gets logged into a table so I can check this at a later time. Of course, I don’t want to eyeball the list to see if all the files have been processed.

I have been playing around with some queries to find out the gaps. A couple of things to know in advance.
The files are named using a timestamp (‘YYYYMMDDHH24MI’)
The files are always 5 minutes apart from each other.
Let’s first create a test table:

rem create the testdata table
create table testdata
( filename            varchar2(4000)
)
/

Then I create some testdata. Using the DBMS_RANDOM.VALUE function I determine which records should be inserted.

rem fill the testdata table
begin
  for d in 10 .. 15 loop -- create 6 days of test data
    for hr in 0 .. 23 loop -- for every hour of the day
      for mn in 0 .. 59 loop -- for every minute of the hour
        if mod(mn, 5) = 0 then -- only if it's a 5 minute value
          if trunc(dbms_random.value * 10) <> 4 then -- and our randomizer doesn't end up being 4 (create random gaps)
            insert into testdata
              (filename)
            values
              ('201907' || trim(to_char(d, '09')) || trim(to_char(hr, '09')) || trim(to_char(mn, '09')) || '.tst');
          end if;
        end if;
      end loop;
    end loop;
  end loop;
  commit;
end;
/

My first attempt at this query was to find all the gaps between two filenames which weren’t 5 minutes apart. So I started creating the query. Using subquery factoring I can show the steps I took.
Since the table consists of filenames and not dates (or timestamps), I needed to get the date portion of the filename first:

select to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MISS') filedate
    from testdata
   order by 1

Using this set of data I can create the next step in the query, that is determining the preceding and the following date for each date and while we’re at it, determine the gap before and after:

with filenames as
 (select to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MISS') filedate
    from testdata
   order by 1)
select lag(filedate) over(order by filedate) previousfiledate
      ,filedate
      ,lead(filedate) over(order by filedate) nextfiledate
      ,(filedate - lag(filedate) over(order by filedate)) gapbefore
      ,(lead(filedate) over(order by filedate) - filedate) gapafter
  from filenames

Wrapping this set of data into yet another factored subquery, I can remove all the rows that have a gap of exactly 5 minutes.

with filenames as
 (select to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MISS') filedate
    from testdata
   order by 1),
gaps as
 (select lag(filedate) over(order by filedate) previousfiledate
        ,filedate
        ,lead(filedate) over(order by filedate) nextfiledate
        ,(filedate - lag(filedate) over(order by filedate)) gapbefore
        ,(lead(filedate) over(order by filedate) - filedate) gapafter
    from filenames)
select *
  from gaps
 where 1 = 1
   and (gapafter <> to_dsinterval('0 00:05:00')) 
    or (gapbefore <> to_dsinterval('0 00:05:00'))

After I got the results for this query, it made me wonder: this query shows me where the gaps in the data are, but it doesn’t tell me exactly which file or files are missing. I still have to figure that out myself. It also shows every gap twice, once after one file and once before the next file. There has got to be a better way to find and fill up the gaps. What if I could just generate all the filenames that should be there and then subtract the filenames that have been recorded?
First I need to create a list of all the possible filenames that exist between the first and last recorded filename. I know about a feature called recursive subquery factoring, but I had never used it before. Luckily Tim Hall has created a nice post on this subject.

with
/* 
 * First determine all the possible dates between the first and the last recorded file
 * Using recursive subquery
 * Thanks to Tim Hall for https://oracle-base.com/articles/11g/recursive-subquery-factoring-11gr2
 */
possibledates(thedate) as
 ( -- Anchor member
  select min(to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MI')) thedate
    from testdata
  union all
  -- Recursive member
  select thedate + to_dsinterval('0 00:05:00') thedate
    from possibledates
   where thedate <
         (select max(to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MI'))
            from testdata)),
/*
 * Then determine the filenames from these dates
 */
possiblefilenames as
 (select to_char(thedate, 'YYYYMMDDHH24MI') || '.tst' filename
    from possibledates)
/*
 * Using a simple minus operation, determine which filenames are missing
 */
select filename
  from possiblefilenames
minus
select filename
  from testdata

Not only is this query very fast, it also takes away the problem for me to eye-ball the data to find the missing files. It just shows which files are actually missing.

Report Time Execution Prediction with Keras and TensorFlow

Andrejus Baranovski - Thu, 2019-08-01 01:03
The aim of this post is to explain Machine Learning to software developers in hands-on terms. The model is based on a common use case in enterprise systems: predicting the wait time until a business report is generated.

Report generation in business applications typically takes time, anywhere from a few seconds to minutes, because it usually has to fetch and process many records. Users often get frustrated: they don’t know how long to wait until the report is finished and may simply go away by closing the browser, etc. If we could tell the user, before submitting the report request, how long it will take to execute, that would be a great usability improvement.

I have implemented a Machine Learning model using Keras regression to calculate the expected report execution time, based on training data (logged information from past report executions). Keras is a library which wraps TensorFlow complexity into a simple and user-friendly API.

Python source code and training data are available on my GitHub repo. This code is based on a Keras tutorial.

Alfresco Clustering – ActiveMQ

Yann Neuhaus - Thu, 2019-08-01 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, and I covered the Clustering setup for the Alfresco Repository and Alfresco Share. In this one, I will work on the ActiveMQ layer. I recently posted something related to the setup of ActiveMQ and some initial configuration, so I will extend that topic in this blog with what needs to be done to have a simple Cluster for ActiveMQ. I’m not an ActiveMQ expert, I just started using it a few months ago in relation to Alfresco, but still, I learned some things in this timeframe so this might be of some use.

ActiveMQ is a Messaging Server, so there are three sides to this component. First, there are Producers, which produce messages. These messages are put in the broker’s queue, which is the second side, and finally there are Consumers, which consume the messages from the queue. Producers and Consumers are satellites that use the JMS broker’s queue: they are both clients. Therefore, in a standalone architecture (one broker), there is no issue because clients will always produce and consume all messages. However, if you start adding more brokers and you aren’t doing it right, you might end up with producers talking to a specific broker and consumers talking to another one. To solve that, there are a few things possible:

  • a first solution is to create a Network of Brokers which will allow the different brokers to forward the necessary messages between them. You can see that as an Active/Active Cluster
    • Pros: this allows ActiveMQ to support a huge architecture with potentially hundreds or thousands of brokers
    • Cons: messages are, at any point in time, only owned by one single broker so if this broker goes down, the message is lost (if there is no persistence) or will have to wait for the broker to be restarted (if there is persistence)
  • the second solution that ActiveMQ supports is the Master/Slave one. In this architecture, all messages will be replicated from a Master to all Slave brokers. You can see that as something like an Active/Passive Cluster
    • Pros: messages are always processed and cannot be lost. If the Master broker goes down for any reason, one of the Slaves instantly takes its place as the new Master with all the previous messages
    • Cons: since all messages are replicated, it’s much harder to support a huge architecture

In case of a Network of Brokers, it’s possible to use either the static or dynamic discovery of brokers:

  • Static discovery: Uses the static protocol to provide a list of all URIs to be tested to discover other connections. E.g.: static:(tcp://mq_n1.domain:61616,tcp://mq_n2.domain:61616)?maxReconnectDelay=3000
  • Dynamic discovery: Uses a multicast discovery agent to check for other connections. This is done using the discoveryUri parameter in the XML configuration file

 

I. Client’s configuration

On the client’s side, using several brokers is very simple since it’s all about using the correct broker URL. To be able to connect to several brokers, you should use the Failover Transport protocol which replaced the Reliable protocol used in ActiveMQ 3. For Alfresco, this broker URL needs to be updated in the alfresco-global.properties file. This is an example for a pretty simple URL with two brokers:

[alfresco@alf_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco-global.properties
...
### ActiveMQ
messaging.broker.url=failover:(tcp://mq_n1.domain:61616,tcp://mq_n2.domain:61616)?timeout=3000&randomize=false&nested.daemon=false&nested.dynamicManagement=false
#messaging.username=
#messaging.password=
...
[alfresco@alf_n1 ~]$

 

There are a few things to note. The Failover used above is a transport layer that can be used in combination with any of the other transport methods/protocols. Here it’s used with two TCP transport URIs. The correct nomenclature is either:

  • failover:uri1,…,uriN
    • E.g.: failover:tcp://mq_n1.domain:61616,tcp://mq_n2.domain:61616 => the simplest broker URL for two brokers with no custom options
  • failover:uri1?URIOptions1,…,uriN?URIOptionsN
    • E.g.: failover:tcp://mq_n1.domain:61616?daemon=false&dynamicManagement=false&trace=false,tcp://mq_n2.domain:61616?daemon=false&dynamicManagement=true&trace=true => a more advanced broker URL with some custom options for each of the TCP protocol URIs
  • failover:(uri1?URIOptions1,…,uriN?URIOptionsN)?FailoverTransportOptions
    • E.g.: failover:(tcp://mq_n1.domain:61616?daemon=false&dynamicManagement=false&trace=false,tcp://mq_n2.domain:61616?daemon=false&dynamicManagement=true&trace=true)?timeout=3000&randomize=false => the same broker URL as above but, in addition, with some Failover Transport options
  • failover:(uri1,…,uriN)?FailoverTransportOptions&NestedURIOptions
    • E.g.: failover:(tcp://mq_n1.domain:61616,tcp://mq_n2.domain:61616)?timeout=3000&randomize=false&nested.daemon=false&nested.dynamicManagement=false&nested.trace=false => since ActiveMQ 5.9, it’s now possible to set the nested URIs options (here the TCP protocol options) at the end of the broker URL, they just need to be preceded by “nested.”. Nested options will apply to all URIs.

There are a lot of interesting parameters; here are some of them:

  • Failover Transport options:
    • backup=true: initialize and keep a second connection to another broker for faster failover
    • randomize=true: will pick a new URI for the reconnect randomly from the list of URIs
    • timeout=3000: time in ms before timeout on the send operations
    • priorityBackup=true: clients will fail over to the other brokers whenever the “primary” broker isn’t available (that happens in any case), but they will consistently try to reconnect to the “primary” one. It is possible to specify several “primary” brokers with the priorityURIs option (comma separated list)
  • TCP Transport options:
    • daemon=false: specify that ActiveMQ isn’t running in a Spring or Web container
    • dynamicManagement=false: disabling the JMX management
    • trace=false: disabling the tracing

The full list of Failover Transport options is described here and the full list of TCP Transport options here.
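
As an illustration only (same hypothetical hostnames as above, not a recommendation for any specific environment), a broker URL combining a few of these options in the alfresco-global.properties could look like this:

messaging.broker.url=failover:(tcp://mq_n1.domain:61616,tcp://mq_n2.domain:61616)?timeout=3000&priorityBackup=true&nested.daemon=false&nested.trace=false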

II. Messaging Server’s configuration

I believe the simplest setup for Clustering in ActiveMQ is the Master/Slave one, so that’s what I will talk about here. If you are looking for more information about the Network of Brokers, you can find that here. As mentioned previously, the idea behind the Master/Slave is to somehow replicate the messages to the Slave brokers. To do that, there are three possible configurations:

  • Shared File System: use a shared file system
  • JDBC: use a Database Server
  • Replicated LevelDB Store: use a ZooKeeper Server. This has been deprecated in recent versions of ActiveMQ 5 in favour of KahaDB, which is a file-based persistence Database. Therefore, this actually is linked to the first configuration above (Shared File System)

In the scope of Alfresco, you should already have a shared file system as well as a shared Database Server for the Repository Clustering… So, it’s pretty easy to fulfil the prerequisites for ActiveMQ since you already have them. Of course, you can use a dedicated Shared File System or a dedicated Database, that’s up to your requirements.

a. JDBC

For the JDBC configuration, you will need to change the persistenceAdapter to use the dedicated jdbcPersistenceAdapter and create the associated DataSource for your Database. ActiveMQ supports some DBs like Apache Derby, DB2, HSQL, MySQL, Oracle, PostgreSQL, SQLServer or Sybase. You will also need to add the JDBC library at the right location.

[alfresco@mq_n1 ~]$ cat $ACTIVEMQ_HOME/conf/activemq.xml
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
  ...
  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="mq_n1" dataDirectory="${activemq.data}">
    ...
    <persistenceAdapter>
      <jdbcPersistenceAdapter dataDirectory="activemq-data" dataSource="postgresql-ds"/>
    </persistenceAdapter>
    ...
  </broker>
  ...
  <bean id="postgresql-ds" class="org.postgresql.ds.PGPoolingDataSource">
    <property name="serverName" value="db_vip"/>
    <property name="databaseName" value="alfresco"/>
    <property name="portNumber" value="5432"/>
    <property name="user" value="alfresco"/>
    <property name="password" value="My+P4ssw0rd"/>
    <property name="dataSourceName" value="postgres"/>
    <property name="initialConnections" value="1"/>
    <property name="maxConnections" value="10"/>
  </bean>
  ...
</beans>
[alfresco@mq_n1 ~]$

 

b. Shared File System

The Shared File System configuration is, from my point of view, the simplest one to configure but for it to work properly, there are some things to note because you should use a shared file system that supports proper file lock. This means that:

  • you cannot use the Oracle Cluster File System (OCFS/OCFS2) because there is no cluster-aware flock or POSIX locks
  • if you are using NFS v3 or lower, you won’t have automatic failover from Master to Slave because there is no timeout and therefore the lock will never be released. You should therefore use NFS v4 instead

Additionally, you need to share the persistenceAdapter between all brokers, but you cannot share the data folder completely: otherwise the logs would be overwritten by all brokers (that’s bad but not really an issue) and, more importantly, the PID file would also be overwritten, which would then cause issues to start/stop the Slave brokers…

Therefore, properly configuring the Shared File System is all about keeping the “$ACTIVEMQ_DATA” environment variable set to the place where you want the logs and PID files to be stored (i.e. locally), while overriding the persistenceAdapter path so that it points to the Shared File System:

[alfresco@mq_n1 ~]$ # Root folder of the ActiveMQ binaries
[alfresco@mq_n1 ~]$ echo $ACTIVEMQ_HOME
/opt/activemq
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Location of the logs and PID file
[alfresco@mq_n1 ~]$ echo $ACTIVEMQ_DATA
/opt/activemq/data
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Location of the Shared File System
[alfresco@mq_n1 ~]$ echo $ACTIVEMQ_SHARED_DATA
/shared/file/system
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ sudo systemctl stop activemq.service
[alfresco@mq_n1 ~]$ grep -A2 "<persistenceAdapter>" $ACTIVEMQ_HOME/conf/activemq.xml
    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Put the KahaDB into the Shared File System
[alfresco@mq_n1 ~]$ sed -i "s, directory=\"[^\"]*\", directory=\"${ACTIVEMQ_SHARED_DATA}/activemq/kahadb\"," $ACTIVEMQ_HOME/conf/activemq.xml
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ grep -A2 "<persistenceAdapter>" $ACTIVEMQ_HOME/conf/activemq.xml
    <persistenceAdapter>
      <kahaDB directory="/shared/file/system/activemq/kahadb"/>
    </persistenceAdapter>
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ sudo systemctl start activemq.service

 

Starting the Master ActiveMQ will display some information in the node1 log showing that it has started properly and that it is listening for connections on the different transportConnectors:

[alfresco@mq_n1 ~]$ cat $ACTIVEMQ_DATA/activemq.log
2019-07-28 11:34:37,598 | INFO  | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@9f116cc: startup date [Sun Jul 28 11:34:37 CEST 2019]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-07-28 11:34:38,289 | INFO  | Using Persistence Adapter: KahaDBPersistenceAdapter[/shared/file/system/activemq/kahadb] | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:34:38,330 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2019-07-28 11:34:38,351 | INFO  | PListStore:[/opt/activemq/data/mq_n1/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2019-07-28 11:34:38,479 | INFO  | Apache ActiveMQ 5.15.6 (mq_n1, ID:mq_n1-36925-1564306478360-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:34:38,533 | INFO  | Listening for connections at: tcp://mq_n1:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:34:38,542 | INFO  | Connector openwire started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:34:38,545 | INFO  | Listening for connections at: amqp://mq_n1:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:34:38,546 | INFO  | Connector amqp started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:34:38,552 | INFO  | Listening for connections at: stomp://mq_n1:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:34:38,553 | INFO  | Connector stomp started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:34:38,556 | INFO  | Listening for connections at: mqtt://mq_n1:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:34:38,561 | INFO  | Connector mqtt started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:34:38,650 | WARN  | ServletContext@o.e.j.s.ServletContextHandler@11841b15{/,null,STARTING} has uncovered http methods for path: / | org.eclipse.jetty.security.SecurityHandler | main
2019-07-28 11:34:38,710 | INFO  | Listening for connections at ws://mq_n1:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.ws.WSTransportServer | main
2019-07-28 11:34:38,712 | INFO  | Connector ws started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:34:38,712 | INFO  | Apache ActiveMQ 5.15.6 (mq_n1, ID:mq_n1-36925-1564306478360-0:1) started | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:34:38,714 | INFO  | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:34:39,118 | INFO  | No Spring WebApplicationInitializer types detected on classpath | /admin | main
2019-07-28 11:34:39,373 | INFO  | ActiveMQ WebConsole available at http://0.0.0.0:8161/ | org.apache.activemq.web.WebConsoleStarter | main
2019-07-28 11:34:39,373 | INFO  | ActiveMQ Jolokia REST API available at http://0.0.0.0:8161/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2019-07-28 11:34:39,402 | INFO  | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
2019-07-28 11:34:39,532 | INFO  | No Spring WebApplicationInitializer types detected on classpath | /api | main
2019-07-28 11:34:39,563 | INFO  | jolokia-agent: Using policy access restrictor classpath:/jolokia-access.xml | /api | main
[alfresco@mq_n1 ~]$

 

Then starting a Slave will only display, in the node2 log, the information that there is already a Master running; the Slave is therefore just waiting and not listening for now:

[alfresco@mq_n2 ~]$ cat $ACTIVEMQ_DATA/activemq.log
2019-07-28 11:35:53,258 | INFO  | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@9f116cc: startup date [Sun Jul 28 11:35:53 CEST 2019]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-07-28 11:35:53,986 | INFO  | Using Persistence Adapter: KahaDBPersistenceAdapter[/shared/file/system/activemq/kahadb] | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:35:53,999 | INFO  | Database /shared/file/system/activemq/kahadb/lock is locked by another server. This broker is now in slave mode waiting a lock to be acquired | org.apache.activemq.store.SharedFileLocker | main
[alfresco@mq_n2 ~]$

 

Finally stopping the Master will automatically transform the Slave into a new Master, without any human interaction. From the node2 logs:

[alfresco@mq_n2 ~]$ cat $ACTIVEMQ_DATA/activemq.log
2019-07-28 11:35:53,258 | INFO  | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@9f116cc: startup date [Sun Jul 28 11:35:53 CEST 2019]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
2019-07-28 11:35:53,986 | INFO  | Using Persistence Adapter: KahaDBPersistenceAdapter[/shared/file/system/activemq/kahadb] | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:35:53,999 | INFO  | Database /shared/file/system/activemq/kahadb/lock is locked by another server. This broker is now in slave mode waiting a lock to be acquired | org.apache.activemq.store.SharedFileLocker | main
  # The ActiveMQ Master on node1 has been stopped here (11:37:10)
2019-07-28 11:37:11,166 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2019-07-28 11:37:11,187 | INFO  | PListStore:[/opt/activemq/data/mq_n2/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2019-07-28 11:37:11,316 | INFO  | Apache ActiveMQ 5.15.6 (mq_n2, ID:mq_n2-41827-1564306631196-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:37:11,370 | INFO  | Listening for connections at: tcp://mq_n2:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:37:11,372 | INFO  | Connector openwire started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:37:11,379 | INFO  | Listening for connections at: amqp://mq_n2:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:37:11,381 | INFO  | Connector amqp started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:37:11,386 | INFO  | Listening for connections at: stomp://mq_n2:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:37:11,387 | INFO  | Connector stomp started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:37:11,390 | INFO  | Listening for connections at: mqtt://mq_n2:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2019-07-28 11:37:11,391 | INFO  | Connector mqtt started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:37:11,485 | WARN  | ServletContext@o.e.j.s.ServletContextHandler@2cfbeac4{/,null,STARTING} has uncovered http methods for path: / | org.eclipse.jetty.security.SecurityHandler | main
2019-07-28 11:37:11,547 | INFO  | Listening for connections at ws://mq_n2:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.ws.WSTransportServer | main
2019-07-28 11:37:11,548 | INFO  | Connector ws started | org.apache.activemq.broker.TransportConnector | main
2019-07-28 11:37:11,556 | INFO  | Apache ActiveMQ 5.15.6 (mq_n2, ID:mq_n2-41827-1564306631196-0:1) started | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:37:11,558 | INFO  | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2019-07-28 11:37:11,045 | INFO  | No Spring WebApplicationInitializer types detected on classpath | /admin | main
2019-07-28 11:37:11,448 | INFO  | ActiveMQ WebConsole available at http://0.0.0.0:8161/ | org.apache.activemq.web.WebConsoleStarter | main
2019-07-28 11:37:11,448 | INFO  | ActiveMQ Jolokia REST API available at http://0.0.0.0:8161/api/jolokia/ | org.apache.activemq.web.WebConsoleStarter | main
2019-07-28 11:37:11,478 | INFO  | Initializing Spring FrameworkServlet 'dispatcher' | /admin | main
2019-07-28 11:37:11,627 | INFO  | No Spring WebApplicationInitializer types detected on classpath | /api | main
2019-07-28 11:37:11,664 | INFO  | jolokia-agent: Using policy access restrictor classpath:/jolokia-access.xml | /api | main
[alfresco@mq_n2 ~]$

 

You can of course customize ActiveMQ as per your requirements: remove some connectors, set up SSL, and so on… But that’s not really the purpose of this blog.

 

 

Other posts of this series on Alfresco HA/Clustering:

This article Alfresco Clustering – ActiveMQ appeared first on Blog dbi services.

Windows Docker containers, when platform matters

Yann Neuhaus - Wed, 2019-07-31 07:58

A couple of days ago, I got a question from a customer about an issue he ran into when trying to spin up a container on Windows.

The context was as follows:

> docker container run hello-world:nanoserver
Unable to find image 'hello-world:nanoserver' locally
nanoserver: Pulling from library/hello-world
C:\Program Files\Docker\docker.exe: no matching manifest for windows/amd64 10.0.14393 in the manifest list entries.
See 'C:\Program Files\Docker\docker.exe run --help'.

 

I thought that was very interesting because it pointed out some considerations about Docker image architecture design. First, we must bear in mind that containers and the underlying host share a single kernel by design and the container’s base image must match that of the host.

Let’s first begin with containers in a Linux world because it highlights the concept of Kernel sharing between different distros. In this demo, let’s say I’m running a Linux Ubuntu server 16.04 …

$ cat /etc/os-release | grep -i version
VERSION="16.04.6 LTS (Xenial Xerus)"
VERSION_ID="16.04"
VERSION_CODENAME=xenial

 

… and let’s say I want to run a container based on Centos 6.6 …

$ docker run --rm -ti centos:6.6 cat /etc/centos-release
Unable to find image 'centos:6.6' locally
6.6: Pulling from library/centos
5dd797628260: Pull complete
Digest: sha256:32b80b90ba17ed16e9fa3430a49f53ff6de0d4c76ad8631717a1373d5921fa26
Status: Downloaded newer image for centos:6.6
CentOS release 6.6 (Final)

 

You may wonder how it is possible to run different distros in the container and on the host, and what the magic is behind the scenes. In fact, both the container and the host share the same Linux kernel: even though CentOS 6.6 originally ships with a 2.6 kernel while Ubuntu 16.04 ships with 4.4, the CentOS user space simply runs on the host’s newer kernel, which stays backward compatible. The commands below demonstrate that the centos container is using the same kernel as the host.

$ uname -r
4.4.0-142-generic
$ docker run --rm -ti centos:6.6 uname -r
4.4.0-142-generic

 

Let’s say now my docker host is running on the x64 architecture. If we look at the Centos image supported architectures on Docker hub, we notice different ones:

From the output above, we may deduce it should exist a combination of different images and tags for each available architecture and the interesting point is how does Docker pull the correct one regarding my underlying architecture? This is where manifest lists come into play and allow multi-architecture images. A manifest list contains platform segregated references to a single-platform manifest entry. We may inspect a manifest list through the docker manifest command (still in experimental mode at the moment of writing this blog post).
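
Since the docker manifest command is experimental, the CLI may refuse to run it until experimental features are switched on. A minimal sketch (client-side only; the exact mechanism depends on your Docker CLI version):

$ export DOCKER_CLI_EXPERIMENTAL=enabled
$ # or persist it in the client configuration file:
$ cat ~/.docker/config.json
{
  "experimental": "enabled"
}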

For example, if I want to get a list of manifests and their corresponding architectures for the Centos 7, I can run docker manifest command as follows:

$ docker manifest inspect centos:7 --verbose
[
        {
                "Ref": "docker.io/library/centos:7@sha256:ca58fe458b8d94bc6e3072f1cfbd334855858e05e1fd633aa07cf7f82b048e66",
                "Descriptor": {
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "digest": "sha256:ca58fe458b8d94bc6e3072f1cfbd334855858e05e1fd633aa07cf7f82b048e66",
                        "size": 529,
                        "platform": {
                                "architecture": "amd64",
                                "os": "linux"
                        }
                },
                "SchemaV2Manifest": {
                        "schemaVersion": 2,
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "config": {
                                "mediaType": "application/vnd.docker.container.image.v1+json",
                                "size": 2182,
                                "digest": "sha256:9f38484d220fa527b1fb19747638497179500a1bed8bf0498eb788229229e6e1"
                        },
                        "layers": [
                                {
                                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                                        "size": 75403831,
                                        "digest": "sha256:8ba884070f611d31cb2c42eddb691319dc9facf5e0ec67672fcfa135181ab3df"
                                }
                        ]
                }
        },
        {
                "Ref": "docker.io/library/centos:7@sha256:9fd67116449f225c6ef60d769b5219cf3daa831c5a0a6389bbdd7c952b7b352d",
                "Descriptor": {
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "digest": "sha256:9fd67116449f225c6ef60d769b5219cf3daa831c5a0a6389bbdd7c952b7b352d",
                        "size": 529,
                        "platform": {
                                "architecture": "arm",
                                "os": "linux",
                                "variant": "v7"
                        }
                },
                "SchemaV2Manifest": {
                        "schemaVersion": 2,
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "config": {
                                "mediaType": "application/vnd.docker.container.image.v1+json",
                                "size": 2181,
                                "digest": "sha256:8c52f2d0416faa8009082cf3ebdea85b3bc1314d97925342be83bc9169178efe"
                        },
                        "layers": [
                                {
                                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                                        "size": 70029389,
                                        "digest": "sha256:193bcbf05ff9ae85ac1a58cacd9c07f8f4297dc648808c347cceb3797ae603af"
                                }
                        ]
                }
        },
        {
                "Ref": "docker.io/library/centos:7@sha256:f25f24daae92b5b5fe75bc0d5d9a3d2145906290f25aa434c43bfcefecd10dec",
                "Descriptor": {
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "digest": "sha256:f25f24daae92b5b5fe75bc0d5d9a3d2145906290f25aa434c43bfcefecd10dec",
                        "size": 529,
                        "platform": {
                                "architecture": "arm64",
                                "os": "linux",
                                "variant": "v8"
                        }
                },
                "SchemaV2Manifest": {
                        "schemaVersion": 2,
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "config": {
                                "mediaType": "application/vnd.docker.container.image.v1+json",
                                "size": 2183,
                                "digest": "sha256:7a51de8a65d533b6706fbd63beea13610e5486e49141610e553a3e784c133a37"
                        },
                        "layers": [
                                {
                                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                                        "size": 74163767,
                                        "digest": "sha256:90c48ff53512085fb5adaf9bff8f1999a39ce5e5b897f5dfe333555eb27547a7"
                                }
                        ]
                }
        },
        {
                "Ref": "docker.io/library/centos:7@sha256:1f832b4e3b9ddf67fd77831cdfb591ce5e968548a01581672e5f6b32ce1212fe",
                "Descriptor": {
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "digest": "sha256:1f832b4e3b9ddf67fd77831cdfb591ce5e968548a01581672e5f6b32ce1212fe",
                        "size": 529,
                        "platform": {
                                "architecture": "386",
                                "os": "linux"
                        }
                },
                "SchemaV2Manifest": {
                        "schemaVersion": 2,
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "config": {
                                "mediaType": "application/vnd.docker.container.image.v1+json",
                                "size": 2337,
                                "digest": "sha256:fe70670fcbec5e3b3081c6800cb531002474c36563689b450d678a34a89b62c3"
                        },
                        "layers": [
                                {
                                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                                        "size": 75654099,
                                        "digest": "sha256:39016a8400a36ce04799adba71f8678ae257d9d8dba638d81b8c5755f01fe213"
                                }
                        ]
                }
        },
        {
                "Ref": "docker.io/library/centos:7@sha256:2d9b27e9c89d511a58873254d86ecf96df0f599daae3d555d896fee9f49fedf4",
                "Descriptor": {
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "digest": "sha256:2d9b27e9c89d511a58873254d86ecf96df0f599daae3d555d896fee9f49fedf4",
                        "size": 529,
                        "platform": {
                                "architecture": "ppc64le",
                                "os": "linux"
                        }
                },
                "SchemaV2Manifest": {
                        "schemaVersion": 2,
                        "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
                        "config": {
                                "mediaType": "application/vnd.docker.container.image.v1+json",
                                "size": 2185,
                                "digest": "sha256:c9744f4afb966c58d227eb6ba03ab9885925f9e3314edd01d0e75481bf1c937d"
                        },
                        "layers": [
                                {
                                        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
                                        "size": 76787221,
                                        "digest": "sha256:deab1c539926c1ca990d5d025c6b37c649bbba025883d4b209e3b52b8fdf514a"
                                }
                        ]
                }
        }
]

 

Each manifest entry contains different information including the image signature digest, the operating system and the supported architecture. Let’s pull the Centos:7 image:

$ docker pull centos:7
7: Pulling from library/centos
8ba884070f61: Pull complete
Digest: sha256:a799dd8a2ded4a83484bbae769d97655392b3f86533ceb7dd96bbac929809f3c
Status: Downloaded newer image for centos:7
docker.io/library/centos:7

 

Let’s have a look at the unique identifier of the centos:7 image:

$ docker inspect --format='{{.Id}}' centos:7
sha256:9f38484d220fa527b1fb19747638497179500a1bed8bf0498eb788229229e6e1

 

It corresponds to the SchemaV2Manifest digest value of the manifest entry related to the x64 architecture (please refer to the docker manifest inspect output above). Another official way to query manifest list and architecture is to go through the mplatform/mquery container as follows:

$ docker run mplatform/mquery centos:7
Image: centos:7
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm/v7
   - linux/arm64
   - linux/386
   - linux/ppc64le

 

However, for a Linux Centos 6.6 image (used in my first demo) the architecture support seems to be limited to  the x64 architecture:

$ docker run mplatform/mquery centos:6.6
Image: centos:6.6
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64

 

Now that we are aware of manifest lists and multi-architecture images, let’s go back to the initial problem. The customer ran into a platform compatibility issue when trying to spin up the hello-world:nanoserver container on a Windows Server 2016 Docker host. As a reminder, the error message was:

no matching manifest for windows/amd64 10.0.14393 in the manifest list entries.

In a way, that may be surprising because Windows hosts and containers also share a single kernel. That’s true, and it was actually the root cause of my customer’s issue. The image he wanted to pull supports only the following Windows platforms (queried from the manifest list):

> docker run mplatform/mquery hello-world:nanoserver
Image: hello-world:nanoserver
 * Manifest List: Yes
 * Supported platforms:
   - windows/amd64:10.0.17134.885
   - windows/amd64:10.0.17763.615

 

You may notice several supported Windows platforms, but with different operating system versions. Let’s have a look at the Docker host version in the context of my customer:

> [System.Environment]::OSVersion.Version
Major  Minor  Build  Revision
-----  -----  -----  --------
10     0      14393  0

 

The tricky part is that Windows Server 2016 comes with different branches – 1607/1709 and 1803 – which aren’t technically all the same Windows Server version. Each branch comes with a different build number. Referring to the Microsoft documentation, when the build number (3rd column) changes, a new operating system version is published. What it means in this case is that the OS versions of the Windows Docker host and of the Docker image we tried to pull are different, hence the compatibility issue we experienced. However, let’s note that images and containers may run on a host with a newer version, but the opposite is obviously not true. You can refer to the same Microsoft link to get a picture of Windows container and host compatibility.

How to fix this issue? Well, we may go two ways here. The first one consists in re-installing the Docker host on a platform compatible with the corresponding image. The second one consists in using an image compatible with the current platform and, referring to the hello-world image tags, we have one. We may check the architecture compatibility by querying the manifest list as follows:

> docker run mplatform/mquery hello-world:nanoserver-sac2016
Image: hello-world:nanoserver-sac2016
 * Manifest List: Yes
 * Supported platforms:
   - windows/amd64:10.0.14393.2551

 

Let’s try to pull the image with the nanoserver-sac2016 tag:

> docker pull hello-world:nanoserver-sac2016
nanoserver-sac2016: Pulling from library/hello-world
bce2fbc256ea: Already exists
6f2071dcd729: Pull complete
909cdbafc9e1: Pull complete
a43e426cc5c9: Pull complete
Digest: sha256:878fd913010d26613319ec7cc83b400cb92113c314da324681d9fecfb5082edc
Status: Downloaded newer image for hello-world:nanoserver-sac2016
docker.io/library/hello-world:nanoserver-sac2016
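
As a quick cross-check (a sketch only; .OsVersion is the metadata field docker inspect exposes for Windows images), we can compare the OS version recorded in the pulled image with the host build shown earlier:

> docker inspect --format '{{.OsVersion}}' hello-world:nanoserver-sac2016

According to the manifest queried above, this should report a 10.0.14393.x version, which matches the build 14393 of the Windows Server 2016 host.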

 

Here we go!

See you!

This article Windows Docker containers, when platform matters appeared first on Blog dbi services.

Driving vs. Being Driven : The reason you fail to get good at anything!

Tim Hall - Wed, 2019-07-31 01:38

It doesn’t matter how many times I’ve gone somewhere. I only know the route when I’ve driven there myself. Everything makes sense when you see someone else do it. You don’t realise how distracted you are, and how much you’ve missed until you have to do it for yourself.

When we have consultants on site to help us with something new, I assume I’m going to drive and they are going to give directions. I make notes as necessary, but the main thing is *I’ve done it*, not them. If I’m told I have to “observe and make notes”, I say I’m not willing to support it, as experience tells me there will be important stuff that gets missed as the consultant rushes through it. Once again, it’s the difference between driving and being driven.

I’ve written a lot about Learning New Things, and I think it always starts with learning to learn for yourself. If you are always relying on other people to lead the way, they are driving and you are being driven. They are getting better and you are just drifting.

I suppose the obvious retort to this is,

“Only a fool learns from his own mistakes. The wise man learns from the mistakes of others.”

Otto von Bismarck

There is some truth in that, but the important thing in the second sentence is the wise person *learns* from the mistakes of others. There is still something active going on here. You are learning, not just being passive and waiting to be told what to do.

Standing on the shoulders of giants requires you to climb up on to the shoulders in the first place!

Cheers

Tim…

Driving vs. Being Driven : The reason you fail to get good at anything! was first posted on July 31, 2019 at 7:38 am.

Alfresco Clustering – Share

Yann Neuhaus - Wed, 2019-07-31 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, and I covered the Clustering setup for the Alfresco Repository. In this one, I will work on the Alfresco Share layer. Therefore, if you are using another client like a CMIS/REST client or an ADF Application, it won’t work that way, but you might or might not need Clustering at that layer; it depends on how the Application is working.

The Alfresco Share Clustering is used only for the caches, so you could technically have multiple Share nodes working with a single Repository or a Repository Cluster without the Share Clustering. For that, you would have to disable the caches on the Share layer, because if you kept them enabled, you would eventually face issues. Alfresco introduced the Share Clustering to keep the caches in sync, so you don’t have to disable them anymore. When needed, cache invalidation messages are sent from one Share node to all others; that includes runtime application properties changes as well as new/existing site/user dashboard changes.

Just like for the Repository part, it’s really easy to set up the Share Clustering, so there is really no reason not to. It’s also using Hazelcast, but it’s not based on properties that you need to configure in the alfresco-global.properties (because it’s a Share configuration): this one must be done in an XML file and there is, obviously, no possibility to do that in the Alfresco Admin Console.

All Share configurations/customizations are put in the “$CATALINA_HOME/shared/classes/alfresco/web-extension” folder and this one is no exception. There are two possibilities for the Share Clustering communications:

  • Multicast
  • Unicast (TCP-IP in Hazelcast)

 

I. Multicast

If you do not know how many nodes will participate in your Share Cluster or if you want to be able to add more nodes in the future without having to change the previous nodes’ configuration, then you probably want to check and opt for the Multicast option. Just create a new file “$CATALINA_HOME/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml” and put this content inside it:

[alfresco@share_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                           http://www.hazelcast.com/schema/spring
                           http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">

  <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="share_hz_test"/>
  <hz:hazelcast id="webframework.cluster.slingshot">
    <hz:config>
      <hz:group name="slingshot" password="Sh4r3_hz_Test_pwd"/>
      <hz:network port="5801" port-auto-increment="false">
        <hz:join>
          <hz:multicast enabled="true" multicast-group="224.2.2.5" multicast-port="54327"/>
          <hz:tcp-ip enabled="false">
            <hz:members></hz:members>
          </hz:tcp-ip>
        </hz:join>
        <hz:interfaces enabled="false">
          <hz:interface></hz:interface>
        </hz:interfaces>
      </hz:network>
    </hz:config>
  </hz:hazelcast>

  <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
    <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
    <property name="hazelcastTopicName">
      <value>share_hz_test</value>
    </property>
  </bean>

</beans>
[alfresco@share_n1 ~]$

 

In the above configuration, be sure to set a topic name (matching the hazelcastTopicName’s value) as well as a group password that is specific to this environment, so you don’t end up with a single Cluster whose members come from different environments. For the Share layer, it’s less of an issue than for the Repository layer, but still. Be sure also to use a network port that isn’t in use; it will be the port that Hazelcast binds itself to on the local host. For Alfresco Clustering, we used 5701 so here it’s 5801 for example.

Not much more to say about this configuration, we just enabled the multicast with an IP and a port to be used and we disabled the tcp-ip one.

The interfaces section is disabled by default but you can enable it if you want to. If it’s disabled, Hazelcast will list all local interfaces (127.0.0.1, local_IP1, local_IP2, …) and it will choose one from this list. If you want to force Hazelcast to use a specific local network interface, then enable this section and add that interface here. It can use the following nomenclature (IP only!):

  • 10.10.10.10: Hazelcast will try to bind on 10.10.10.10 only. If it’s not available, then it won’t start
  • 10.10.10.10-11: Hazelcast will try to bind on any IP within the range 10-11 so in this case 2 IPs: 10.10.10.10 or 10.10.10.11. If you have, let’s say, 5 IPs assigned to the local host and you don’t want Hazelcast to use 3 of these, then specify the ones that it can use and it will pick one from the list. This can also be used to have the same content for the custom-slingshot-application-context.xml on different hosts… One server with IP 10.10.10.10 and a second one with IP 10.10.10.11
  • 10.10.10.* or 10.10.*.*: Hazelcast will try to bind on any IP in this range, this is an extended version of the XX-YY range above

 

For most cases, keeping the interfaces disabled is sufficient since it will just pick one available. You might think that Hazelcast may bind itself to 127.0.0.1, technically it’s possible since it’s a local network interface but I have never seen it do so, so I assume that there is some kind of preferred order if another IP is available.
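
If you do want to pin Hazelcast to a specific interface or range, a minimal sketch of the relevant part of the custom-slingshot-application-context.xml (hypothetical IPs, to be adapted to your hosts) would be:

        <hz:interfaces enabled="true">
          <hz:interface>10.10.10.10-11</hz:interface>
        </hz:interfaces>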

Membership in Hazelcast is based on “age”, meaning that the oldest member will be the one to lead. There are no predefined Master or Slave members, they are all equal, but the oldest/first member is the one that will check if new members are allowed to join (correct config) and, if so, it will send the information to all other members that already joined so they are all aligned. If multicast is enabled, a multicast listener is started to listen for new membership requests.

 

II. Unicast

If you already know how many nodes will participate in your Share Cluster or if you prefer to avoid Multicast messages (there is no real need to overload your network with such things…), then it’s preferable to use Unicast messaging. For that purpose, just create the same file as above (“$CATALINA_HOME/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml“) but instead, use the tcp-ip section:

[alfresco@share_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                           http://www.hazelcast.com/schema/spring
                           http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">

  <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="share_hz_test"/>
  <hz:hazelcast id="webframework.cluster.slingshot">
    <hz:config>
      <hz:group name="slingshot" password="Sh4r3_hz_Test_pwd"/>
      <hz:network port="5801" port-auto-increment="false">
        <hz:join>
          <hz:multicast enabled="false" multicast-group="224.2.2.5" multicast-port="54327"/>
          <hz:tcp-ip enabled="true">
            <hz:members>share_n1.domain,share_n2.domain</hz:members>
          </hz:tcp-ip>
        </hz:join>
        <hz:interfaces enabled="false">
          <hz:interface></hz:interface>
        </hz:interfaces>
      </hz:network>
    </hz:config>
  </hz:hazelcast>

  <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
    <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
    <property name="hazelcastTopicName">
      <value>share_hz_test</value>
    </property>
  </bean>

</beans>
[alfresco@share_n1 ~]$

 

The description is basically the same as for the Multicast part. The main difference is that the multicast was disabled, the tcp-ip was enabled and there is therefore a list of members that needs to be set. This is a comma separated list of hostnames or IPs that Hazelcast will try to contact when it starts. Membership in case of Unicast is managed in the same way, except that the oldest/first member will listen for new membership requests on TCP-IP.

Starting the first Share node in the Cluster will display the following information on the logs:

Jul 28, 2019 11:45:35 AM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n1.domain' to address(es): [127.0.0.1, 10.10.10.10]
Jul 28, 2019 11:45:35 AM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n2.domain' to address(es): [10.10.10.11]
Jul 28, 2019 11:45:35 AM com.hazelcast.impl.AddressPicker
INFO: Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [share_n1.domain/10.10.10.10, share_n2.domain/10.10.10.11, share_n1.domain/127.0.0.1]
Jul 28, 2019 11:45:35 AM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Jul 28, 2019 11:45:35 AM com.hazelcast.impl.AddressPicker
INFO: Picked Address[share_n1.domain]:5801, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5801], bind any local is true
Jul 28, 2019 11:45:36 AM com.hazelcast.system
INFO: [share_n1.domain]:5801 [slingshot] Hazelcast Community Edition 2.4 (20121017) starting at Address[share_n1.domain]:5801
Jul 28, 2019 11:45:36 AM com.hazelcast.system
INFO: [share_n1.domain]:5801 [slingshot] Copyright (C) 2008-2012 Hazelcast.com
Jul 28, 2019 11:45:36 AM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n1.domain]:5801 [slingshot] Address[share_n1.domain]:5801 is STARTING
Jul 28, 2019 11:45:36 AM com.hazelcast.impl.TcpIpJoiner
INFO: [share_n1.domain]:5801 [slingshot] Connecting to possible member: Address[share_n2.domain]:5801
Jul 28, 2019 11:45:36 AM com.hazelcast.nio.SocketConnector
INFO: [share_n1.domain]:5801 [slingshot] Could not connect to: share_n2.domain/10.10.10.11:5801. Reason: ConnectException[Connection refused]
Jul 28, 2019 11:45:37 AM com.hazelcast.nio.SocketConnector
INFO: [share_n1.domain]:5801 [slingshot] Could not connect to: share_n2.domain/10.10.10.11:5801. Reason: ConnectException[Connection refused]
Jul 28, 2019 11:45:37 AM com.hazelcast.impl.TcpIpJoiner
INFO: [share_n1.domain]:5801 [slingshot]

Members [1] {
        Member [share_n1.domain]:5801 this
}

Jul 28, 2019 11:45:37 AM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n1.domain]:5801 [slingshot] Address[share_n1.domain]:5801 is STARTED
2019-07-28 11:45:37,164  INFO  [web.site.ClusterTopicService] [localhost-startStop-1] Init complete for Hazelcast cluster - listening on topic: share_hz_test

 

Then starting a second node of the Share Cluster will display the following (still on the node1 logs):

Jul 28, 2019 11:48:31 AM com.hazelcast.nio.SocketAcceptor
INFO: [share_n1.domain]:5801 [slingshot] 5801 is accepting socket connection from /10.10.10.11:34191
Jul 28, 2019 11:48:31 AM com.hazelcast.nio.ConnectionManager
INFO: [share_n1.domain]:5801 [slingshot] 5801 accepted socket connection from /10.10.10.11:34191
Jul 28, 2019 11:48:38 AM com.hazelcast.cluster.ClusterManager
INFO: [share_n1.domain]:5801 [slingshot]

Members [2] {
        Member [share_n1.domain]:5801 this
        Member [share_n2.domain]:5801
}

 

 

Other posts of this series on Alfresco HA/Clustering:

This article Alfresco Clustering – Share appeared first on Blog dbi services.

ServiceManager … Manually start/stop

DBASolved - Tue, 2019-07-30 09:58

Oracle GoldenGate Microservices, starting in 12c (12.3.0.0.1) through 19c (19.1.0.0.1), provide a set of services that you can interact with via a webpage, command line, REST API, and PL/SQL. All of which is great; however, for any of these items to work the ServiceManager has to be up and running.

There are three ways to configure the ServiceManager when an environment is initially set up. These three ways are:

  • Manually
  • As a daemon
  • Integration with XAG agent (9.1 or later)

For this post, I’ll just show you how to start or stop ServiceManager manually. Manually starting or stopping the ServiceManager is the default setting if you do not select either of the other two options while running Oracle GoldenGate Configuration Assistant (OGGCA.sh).

In order to start or stop the ServiceManager manually, you have to make sure you have two files. These files are:

  • startSM.sh
  • stopSM.sh

Both of these files will be in the $DEPLOYMENT_HOME/bin directory for the ServiceManager. On my system this location is:

/opt/app/oracle/gg_deployments/ServiceManager/bin

Note: If you are running ServiceManager as a daemon, you will not have these files. In the bin directory you will find a file that is used to register ServiceManager as a daemon.

Before you can start or stop the ServiceManager manually, there are two (2) environment variables that need to be set. These environment variables are:

  • OGG_ETC_HOME
  • OGG_VAR_HOME

These environment variables are set to the etc and var directory locations for the ServiceManager deployment. On my system these are set to:

export OGG_ETC_HOME=/opt/app/oracle/gg_deployments/ServiceManager/etc
export OGG_VAR_HOME=/opt/app/oracle/gg_deployments/ServiceManager/var
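
To avoid forgetting these exports, a small wrapper script can set them and call the start script in one go. This is just a sketch based on the paths used in this post, not an Oracle-provided script:

#!/bin/bash
# start_sm.sh - hypothetical helper: set the ServiceManager deployment locations, then start it manually
export OGG_ETC_HOME=/opt/app/oracle/gg_deployments/ServiceManager/etc
export OGG_VAR_HOME=/opt/app/oracle/gg_deployments/ServiceManager/var
cd /opt/app/oracle/gg_deployments/ServiceManager/bin
sh ./startSM.sh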

With all these requirements met, I can now go back to the $DEPLOYMENT_HOME/bin directory and start or stop the ServiceManager.

[oracle@ogg19c bin]$ cd /opt/app/oracle/gg_deployments/ServiceManager/bin
[oracle@ogg19c bin]$ sh ./startSM.sh
Starting Service Manager process…
Service Manager process started (PID: 376)

In order to stop the ServiceManager manually:

[oracle@ogg19c bin]$ cd /opt/app/oracle/gg_deployments/ServiceManager/bin
[oracle@ogg19c bin]$ sh ./stopSM.sh
Stopping Service Manager process (PID: 376)…
Service Manager stopped
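
A quick way to double-check that the process is really gone (a generic sketch; adjust the pattern to however the ServiceManager process shows up on your system):

[oracle@ogg19c bin]$ ps -ef | grep -i '[S]erviceManager'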

Enjoy!!!

Categories: DBA Blogs

How to add cells in a Column on Excel

VitalSoftTech - Tue, 2019-07-30 09:53

Are you just starting to learn Excel? Are all these boxes confusing you? Adding is the most primary and straightforward function of Excel. Here is a simple and easy guide to help you with different methods of finding the sum of cells in columns. So, here we go! How to add in a Column on […]

The post How to add cells in a Column on Excel appeared first on VitalSoftTech.

Categories: DBA Blogs
