DevOps tools: Easily tunnel your localhost server (WebLogic or any other) to the world with ngrok

Here is another addition to my DevOps tools series that is worth knowing if you work with any kind of server, such as WebLogic, nginx, or Apache httpd.

ngrok is a fun and very easy tool that I use from time to time in demos or when running a training.
It opens a public tunnel (which can of course be protected) to your local server. Really handy if you are on a different network than your audience, or hidden behind a DSL router.

In the webcast below I show how to access WebLogic 12.2.1 running on localhost:7001.
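A minimal sketch of that setup, assuming ngrok is installed and authenticated, and that WebLogic's admin server listens on its default port 7001:

```shell
# Expose the local WebLogic admin server to the public internet.
# Assumes ngrok is installed and tied to an ngrok.com account.
ngrok http 7001
# ngrok prints a public forwarding URL that tunnels to
# http://localhost:7001 for as long as the command keeps running.
```

Keep in mind that anyone with the forwarding URL can reach your server while the tunnel is up, so protect it for anything beyond a quick demo.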

You read it here first 🙂

Also check out the other webcasts of the DevOps tools series on how to detect high-CPU threads and on the usage of lsof.
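As a quick reference, the high-CPU-thread technique can be sketched like this for a Java server such as WebLogic (the process name and the thread id below are illustrative assumptions, not values from the webcast):

```shell
# Find the server process (the process name 'weblogic' is an assumption).
PID=$(pgrep -f weblogic | head -1)

# Show per-thread CPU usage and note the TID of the busiest thread.
top -b -H -n 1 -p "$PID" | head -20

# jstack reports thread ids as hexadecimal 'nid' values, so convert
# the decimal TID from top (12345 is a hypothetical example).
NID=$(printf '%x' 12345)
jstack "$PID" | grep -A 20 "nid=0x$NID"

# lsof lists the files and sockets the server process has open.
lsof -p "$PID"
```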


Another View: Your Data Center’s Degree of Cloud

Here is another view I just developed to describe a customer’s degree of cloud adoption. I use this view because quite often a public cloud is regarded as just an “outsourced data center”; however, there are many steps between those two extremes. Sometimes it’s easier to approach the cloud computing topic from the classical perspective of virtualization.

Anything you would like to add?

[Table: your data center’s degree of cloud]

I recommend comparing what the Amazon, Google, or Oracle public clouds offer, based on the table above, with your on-premises data center.

12 Public Cloud Benefits and Features You Should Know

Over the last few years I spent quite a bit of time talking about public clouds. When I published my cloud computing book, public clouds were mostly still considered hype. Availability, security, persistence of data, and much more were questioned.

Today only a few IT professionals are stuck in this old-school thinking. The major public clouds are a superset of what classical data centers provide.

What features would you check for when looking at a potential cloud provider? Does your data center offer every feature and service listed below?

  1. All IT in the cloud is software. There is an API for everything; the whole data center is a set of APIs. This includes load balancers, servers, storage, databases, application servers, API gateways, firewalls, etc.
  2. Short term capacity is very cheap.
  3. Since capacity is cheap, typically you don’t update or redeploy in the cloud, instead you spin up new immutable servers.
  4. Changing your hardware costs nothing. If you find out (or assume) that your application will run better on high-CPU instances instead of high-memory instances, you can simply swap.
  5. Availability comes with no extra cost. You can place two instances in two fully redundant data centers for the same cost as placing two instances in the same data center.
  6. Parallelism also comes at no extra cost. Using 1000 instances for 1 hour costs as much as using 1 instance for 1000 hours. You’ve got a massively parallel supercomputer at your fingertips.
  7. You save the time for capacity planning since capacity is available on demand.
  8. Capacity planning still makes sense for predicting future costs.
  9. Procurement happens within seconds or minutes.
  10. You don’t pay for unused resources. Scaling down reduces your costs.
  11. You can put IT resources close to the customer locations where they are needed, since public clouds are globally available.
  12. Cloud resource prices have historically dropped by around 30% every year. Long-term projects with constant resource usage will cost less every year.
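Points 6 and 12 are easy to verify with back-of-the-envelope math; the hourly rate and year-0 cost below are made-up illustrative numbers:

```shell
# Point 6: 1000 instances for 1 hour vs. 1 instance for 1000 hours.
RATE=10                                 # assumed price in cents per instance-hour
PARALLEL=$((1000 * 1 * RATE))           # 1000 instances, 1 hour
SERIAL=$((1 * 1000 * RATE))             # 1 instance, 1000 hours
echo "parallel: $PARALLEL ct, serial: $SERIAL ct"   # identical totals

# Point 12: a ~30% yearly price drop compounds quickly (integer math).
COST=10000                              # assumed year-0 cost
for YEAR in 1 2 3; do
  COST=$((COST * 70 / 100))
  echo "after year $YEAR: $COST"
done
# After three years the same constant workload costs about a third.
```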

Care to disagree?

Oracle Cloud: ICS

Quite a while ago (a year before Larry announced the Oracle Public Cloud) I wrote about SaaS applications and a service bus PaaS to interconnect services: “… services are integrated and virtualized by a service bus in the cloud and orchestrated by a workflow system in the cloud [Oracle Middleware and Cloud Computing] “.

Back then it almost seemed like building castles in Spain. Indeed, it took several years to build the PaaS service – yet today Thomas Kurian and Larry Ellison announced Oracle’s ICS. Now it’s out there, with all the agility that comes with a cloud-based solution.

It’s the cloud! So get a test account, play with it, scale it and try to break it!

Let me know what you think via @frankmunz, and add @soacommunity.

photo: F.M.

DOAG Schulungstag 2014: Cloud Computing




Cloud Computing and Public Clouds: What You Need to Know – Shown with Live, Hands-on Examples

Participants get a vendor-independent introduction to the most interesting topic of recent years – cloud computing. The training covers the most important public cloud providers such as Amazon, Google, and Oracle, as well as Netflix OSS, OpenShift, and Docker. The focus is on the technology: areas such as IaaS (servers with SSDs and ¼ TB of RAM for €2), highly available disk storage, databases, load balancers, and more. We also look at the impact cloud computing now has on the software development process and on operations.


Beginners and participants with intermediate knowledge of cloud computing who want to familiarize themselves with the topic, are interested in the technology, and dare to look beyond the Oracle horizon!

Cloud Computing and Public Clouds:
What You Need to Know – Shown with Hands-on Examples.

✔ Secure your seat!
✔ Book now
✔ DOAG 2014 Schulungstag.