Cloud, Microservices and Container Workshop in South Africa!

Lots of people are talking about these topics nowadays. Heaps of slides and samples are available for download, and lots of presentations can simply be streamed from YouTube.

In Johannesburg we worked with these solutions hands-on: I delivered a three-day Cloud, Microservices and Containers workshop on behalf of Oracle.

Find attached some impressions from the smart and fun group of devs and architects I was working with.

[Photos: impressions from the workshop]

Zero Downtime, REST, Domain Partitions / Multi Tenancy, Elasticity, and WLDF: WebLogic 12.2.1 (12c)

I just finished a two-week-long hands-on consulting session for some pretty experienced application managers and architects.

In 5 days we explored WebLogic 12.2.1 extensively:

  • Zero Downtime
  • REST
  • Domain Partitions / Multi Tenancy
  • Resource Group Management
  • Java Mission Control
  • WLST
  • Elasticity
  • JMS Clustering
  • WLDF


Here is some feedback from the group. You can tell we had fun, although we worked very hard.

[Screenshot: feedback from the group]

This is what a happy group looks like.

[Photo: the group]

People seemed to be happy; here is what they liked.

[Screenshot: what the participants liked]

For more details, download the flyer from the Oracle WebLogic Server 12.2.1 (12c) course site.

Scaling Failure with Elastic Cluster in Oracle WebLogic Server 12.2.1 (12c)?

The Issue

When you manually scale an elastic cluster, let’s say from 2 to 3 managed servers, there is no issue. Then try scaling the cluster from 3 to 4, and the WebLogic admin console will report “FAILED”.

How It Really Works

Actually, it is not broken; it just doesn’t do what you expect because of the cool-off period for cluster scaling, which has a default value of 900 seconds. This setting is useful to prevent oscillating cluster sizes (possibly caused by conflicting scaling rules).

[Screenshot: admin console reporting FAILED for the scaling request]

You can set this value yourself in the admin console under Cluster / Configuration:

[Screenshot: cool-off period setting under Cluster / Configuration]
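
The same can be done with WLST. Here is a rough sketch, not a polished script: the admin URL, the credentials, the cluster name myElasticCluster, and the attribute name DynamicClusterCooloffPeriodSeconds are assumptions you should verify against your environment and the MBean reference.

# WLST sketch (assumed URL, credentials and cluster name): shorten the
# cool-off period and then scale the dynamic cluster manually.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

edit()
startEdit()
# The cool-off period defaults to 900 seconds; a second scaling request
# inside this window is reported as FAILED in the admin console.
cd('/Clusters/myElasticCluster/DynamicServers/myElasticCluster')
cmo.setDynamicClusterCooloffPeriodSeconds(120)
save()
activate()

# Manually scale the cluster up by one managed server.
scaleUp('myElasticCluster', 1)

disconnect()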


What Should Oracle Do?

Oracle should change the reported state from FAILED to something like COOLDOWN_PERIOD.

Deploy with Deployment Plan (WebLogic 12.2.1)

When using the admin console, you cannot deploy an application to WebLogic 12.2.1 and specify an arbitrary location for the deployment plan; you can only update an already deployed application and specify the location of a deployment plan.

However, you can deploy an open (exploded) directory with an app subdirectory (containing, well, your app) and a plan subdirectory (containing your deployment plan).

[Screenshot: exploded deployment directory with app and plan subdirectories]
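
The same also works from WLST, whose deploy command accepts a planPath argument. A minimal sketch; the application name, directory layout and target used here are just examples:

# WLST sketch (assumed names and paths): deploy an application together
# with a deployment plan kept outside the archive.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

# planPath points to the deployment plan; unlike the admin console,
# WLST lets you specify an arbitrary plan location at deploy time.
deploy('mdbapp',
       '/deployments/mdbapp/app',                  # exploded application directory
       targets='myCluster',
       planPath='/deployments/mdbapp/plan/plan.xml')

disconnect()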

Oracle Service Bus 12.2.1 JVM Settings: PermSize, Heap, Non-Heap, and ResourceManagement

Oracle Service Bus comes with JVM settings that raise questions for some customers. This post provides answers to the most common questions I have discussed in workshops or received so far.

Warning about PermSize Option

Question 1: “I see the following warning:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=512m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1024m; support was removed in 8.0

Does that mean that Oracle generates the startup scripts with wrong JVM flags?”

Answer: With Java 8 the permanent generation was removed from the HotSpot JVM. Not having a perm space was a JRockit “feature” that has been ported over to the HotSpot JVM. The warnings are of course harmless. Startup scripts for WebLogic-only domains are generated correctly for WebLogic 12.2.1, so Oracle needs to change this for OSB domains, and they know about it.

Heap Size

Question 2: “How big is Oracle Service Bus now? I used to be able to create and run a cluster on my laptop with earlier versions but now I run into resource problems.”

Answer: The default startup parameters are -Xms1024m -Xmx2048m, i.e. the minimum heap size is 1 GB and the maximum heap size is 2 GB. Hence you should expect your process size to be larger than 1 GB right from the start.

[Screenshots: heap and non-heap usage after starting a single OSB instance]

Roughly speaking, after starting up a single OSB instance (with everything hosted on the admin server), you should expect more than 400 MB of heap in use. Have a look at the screenshots above. The last drop in the first screenshot was caused by an external garbage collection request (I triggered it manually). In addition, there is more than 500 MB of non-heap memory in use (100 MB code cache and 400 MB metaspace; GC of course does not affect this area). This shows that 1 GB as a minimum heap setting makes sense.
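
You can check the heap numbers for your own installation with WLST via the JVMRuntime MBean. A quick sketch; the admin URL, credentials and the server name AdminServer are assumptions (non-heap usage is easier to inspect with Java Mission Control):

# WLST sketch (assumed URL, credentials and server name): print the heap
# usage of a running OSB instance.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

serverRuntime()
cd('/JVMRuntime/AdminServer')   # everything is hosted on the admin server here

mb = 1024 * 1024
heapCurrent = cmo.getHeapSizeCurrent() / mb
heapFree    = cmo.getHeapFreeCurrent() / mb

print 'Current heap size: %d MB' % heapCurrent
print 'Heap in use:       %d MB' % (heapCurrent - heapFree)

disconnect()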

JVM ResourceManagement Flag

Question 3: “I see the following warning

<Feb 17, 2016 9:45:56 AM CET> <Info> <RCM> <BEA-2165021> <"ResourceManagement" is not enabled in this JVM. Enable "ResourceManagement" to use the WebLogic Server "Resource Consumption Management" feature. To enable "ResourceManagement", you must specify the following JVM options in the WebLogic Server instance in which the JVM runs: -XX:+UnlockCommercialFeatures -XX:+ResourceManagement.> 
Should I enable -XX:+ResourceManagement? Will it help to improve OSB 12c performance?"

Answer: You have probably read announcements emphasizing that OSB 12.2.1 runs on top of WebLogic 12.2.1 and that WebLogic 12.2.1 supports a number of exciting new features. Nothing wrong with that, although it is a kind of marketing logic.

It is important to understand that Oracle Service Bus 12.2.1 (and other upper-stack 12c products such as Oracle SOA Suite, Oracle BPM, etc.) does not yet use some of the really cool WebLogic 12.2.1 features such as domain partitions or elastic clusters.

In short: Oracle JDK 8 resource management is a commercial JVM feature, used together with the G1 garbage collector, to track resource usage at the JVM level per partition. Based on the collected data about memory, file, and thread usage, WebLogic can then react and ensure that one WebLogic partition within a domain does not steal too many resources from another partition. It is important to understand that the magic (the reaction) happens in WebLogic, based on the data provided by the JVM.
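
If you do run partitions on a plain WebLogic 12.2.1 domain and want to experiment with this, the JVM side is enabled with exactly the options from the log message above. As a rough sketch of one way to pass them, and only under the assumption that the server is started via Node Manager: you could set the options on the ServerStart configuration with WLST (the server name osb_server1 is just an example, and note that setArguments replaces any existing arguments).

# WLST sketch (assumed URL, credentials and server name): add the JVM options
# named in the BEA-2165021 message for a server started via Node Manager.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

edit()
startEdit()
cd('/Servers/osb_server1/ServerStart/osb_server1')
# RCM is a commercial JVM feature and is used together with the G1 collector.
cmo.setArguments('-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC')
save()
activate()
disconnect()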

So will the -XX:+ResourceManagement setting improve OSB 12.2.1 performance? I’d say no. It only provides benefits when used with partitions, which are so far not supported by OSB 12.2.1.