Friday, December 20, 2013

On Kanban: Is there a need for Kanban Certification? My answer is Yes, there is.

David J. Anderson has posted this question in a couple of places. My answer in short is: Yes.

I would certify the theoretical understanding of the individual to act as a coach.

"Kanban Coaching Professional" can be a good title. As the title suggests the certification will support the fact that the professional can coach a team guiding them through continuous improvement, from following Deming's System of Profound Knowledge and the 14 Points for Management all the way up to your 5 steps in the recipe for success.

I personally like the idea of Project Coach better than Project Manager. And of course, as with any other certification, passing it does not mean that you are actually a seasoned coach, but that is of course a subject for a different discussion.

Maven and Jenkins for Continuous Delivery (release and deployment)

That is the agile/lean goal: to deliver value at a constant pace with minimum manual intervention, right?

I have written before about continuously releasing snapshots, but in reality what you want to ensure is that once something is tested it can be deployed, and that can only be achieved if what you have tested and verified is a release.

Here is how to use Maven and Jenkins to help the team with continuous releases of maven projects.

The proposal here is to use only 3 digits (M.m.b, meaning Major, minor and bug fixing) for any release version number. We can release continuously, increasing the minor version number while keeping the bug-fixing number at zero. Of course we want to leave the major version number unchanged because it most likely says something about the maturity of our platform. This is usually a version that even has commercial meaning, so we better leave it as a configuration parameter which will be touched only rarely.
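A minimal bash sketch of the M.m.b idea (the version values below are illustrative):

```shell
# Split an M.m.b version and derive the two kinds of successor versions
version="2.3000.0"
M=${version%%.*}                  # major: commercial meaning, rarely touched
rest=${version#*.}
m=${rest%%.*}                     # minor: bumped on every continuous release
b=${version##*.}                  # bug fixing: reserved for production patches
next_release="$M.$((m+1)).0"
patch_release="$M.$m.$((b+1))"
echo "$next_release $patch_release"   # prints: 2.3001.0 2.3000.1
```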


Developers work on a SNAPSHOT project. If the project has modules then you use Aggregation (or Multi-Module) builds. How can we make sure such a project and its modules are all released without human intervention? In other words, how can we use Maven to force a release version from a single command? The command below uses -DdryRun, which you would remove in real life, but it is handy to see what the command would do, as we will soon learn:
mvn clean --batch-mode release:prepare -DdryRun=true -DautoVersionSubmodules=true -DreleaseVersion=2.3000.0 -DdevelopmentVersion=2.3001.0-SNAPSHOT
The above releases all releasable modules (meaning those ending in -SNAPSHOT), tagging them, as well as the root project, with the same ${releaseVersion}. You can confirm that by looking at the pom.xml.tag files that are generated locally for each project. It also (as expected) changes the version number of the projects so developers can continue working on new features against the provided ${developmentVersion}. You might wonder why I decided to increase the minor number by 1 for the next development SNAPSHOT version. The reason is that I want to make sure the team understands the last number is reserved for patching a version already in production. The next released version will have the released minor number plus one only if the CI server did no other release in between, as you will soon learn, so in reality the next released minor number will most likely be higher than 3001.

You can try the above again and again with different combinations, provided that you remove the tag, next, releaseBackup and release properties temporary files:
find ../ -name "pom.xml.*"|grep -v svn|xargs rm -f; \
find ../ -name ""|xargs rm -f


  1. Check "This build is parametrized". Define a String parameter called "MAJOR_VERSION_NUMBER" with default value equal your current release major version number. In our case this is just "2". Later on when building manually from Jenkins the parameter is pre-completed but definitely changeable.
  2. Configure the build to invoke maven with goals and options that apply to your specific case. Following our example:
    mvn clean --batch-mode release:prepare -DdryRun=true -DautoVersionSubmodules=true -DreleaseVersion=$MAJOR_VERSION_NUMBER.$BUILD_NUMBER.0 -DdevelopmentVersion=$MAJOR_VERSION_NUMBER.$(($BUILD_NUMBER+1)).0-SNAPSHOT
    The shebang here is important to make sure the arithmetic parameter expansion works. You can see how we add 1, as discussed before.
  3. I started by configuring the release through the Jenkins UI instead. In our case go to "Perform Maven Release" and fill the boxes as in:
    Development version: $MAJOR_VERSION_NUMBER.$(($BUILD_NUMBER+1)).0-SNAPSHOT
    Dry run only?: check it to run just this as a POC
    However at the time of this writing the parameter expansion won't work in the spot above. You will need to use the literal maven command instead.
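The version arithmetic used in those fields can be checked in isolation (the sample numbers below are illustrative):

```shell
# Simulate Jenkins' expansion: MAJOR_VERSION_NUMBER is the build parameter,
# BUILD_NUMBER is provided by Jenkins (values below are made up)
MAJOR_VERSION_NUMBER=2
BUILD_NUMBER=3000
releaseVersion=$MAJOR_VERSION_NUMBER.$BUILD_NUMBER.0
developmentVersion=$MAJOR_VERSION_NUMBER.$(($BUILD_NUMBER+1)).0-SNAPSHOT
echo "release: $releaseVersion, next dev: $developmentVersion"
# prints: release: 2.3000.0, next dev: 2.3001.0-SNAPSHOT
```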

bash: extract tokens from a string using parameter expansion, for example the domain or host from a url

There is literally no developer or sysadmin who does not understand what $var is, but parameter expansion in bash is more than simple variable substitution. When it comes to parsing strings, most developers are familiar with high level functions that tokenize them. Bash parameter expansion can help with this task using substring removal: double hash-mark and double percentage-mark are your friends.

Below is a solution for the typical problem of parsing urls (the sample url is of course illustrative):
url="http://www.example.com/some/path?foo=1&bar=2"
url_no_params=${url%%\?*}            # '%%' strips the longest matching suffix: everything from the first '?'
params=${url#*\?}                    # '#' strips the shortest matching prefix: everything up to the first '?'
host_and_path=${url_no_params#*//}   # strip the scheme
host=${host_and_path%%/*}            # keep everything before the first '/'
echo $url_no_params
echo $params
echo $host_and_path
echo $host
The above will result in:
http://www.example.com/some/path
foo=1&bar=2
www.example.com/some/path
www.example.com

Java, is the JVM responsible for: OS Unable to fork: Cannot allocate memory ?

CREDITS: I would like to thank my old friend and now again coworker Josu Feijoo for his help on getting to the conclusions below.

This error means that no more processes or threads can be created due to memory starvation. Most likely a hard reset will be needed, so it is certainly not a nice problem to have.

Clearly its resolution depends on what you are running, but most likely there are too many threads being run in your system, so the first step for troubleshooting would be to list all threads and identify the culprit:
$ top -H -b -n 1 
After you identify the culprit (tomcat in our case), take a look at how the thread count keeps going up, indicating a leak:
$ top -H -b -n 1 | grep tomcat -c
In Java the JVM memory requirements are the sum of maximum heap memory, the maximum "perm space" memory and the number of threads multiplied by the thread stack size:
Xmx + MaxPermSize + (Xss * number of threads)
That is the reason virtual memory consumption is so important. I see a lot of misleading comments and posts on the web claiming that you should only worry about resident memory (top RES) and not much about virtual memory (top VIRT), and under those assumptions some engineers might think the below is simply OK. Note how this JVM is using 2.3GB of physical memory but 18GB of virtual memory:
$ ps -e -o rss,vsize,cmd | grep tomcat
2298588 18019720 /opt/jdk/bin/java -Xmx2048m -XX:MaxPermSize=512m
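A back-of-the-envelope application of that formula shows how thread stacks alone can explain such a footprint; the -Xss value and the thread count below are made-up figures:

```shell
# Xmx + MaxPermSize + (Xss * number of threads), everything converted to MB
xmx_mb=2048; permsize_mb=512; xss_kb=512; threads=30000
total_mb=$(( xmx_mb + permsize_mb + (xss_kb * threads) / 1024 ))
echo "${total_mb} MB"   # prints: 17560 MB, in the ballpark of the 18GB VIRT above
```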
This should not be OK for most web applications. Unless you need to maintain a lot of threads on purpose, you should suspect a thread leak. At that point you need to use jstack, jvisualvm or any other tool that can connect to the debug port of the JVM to further troubleshoot which Java code is the responsible leaker. Or perhaps you can quickly inspect your code for threads you open and simply do not stop. You might be using some library or middleware responsible for it, like the Camel ProducerTemplate. Housekeeping is necessary, resources are not infinite, and those of us ignoring that rule will pay with a resource starvation surprise.

Needless to say, if you really need to spawn that many threads then do not forget to run the JVM with enough memory to also account for the thread stack consumption.

Add network routes in Mac OS X

Here is a snippet that shows how to add routes to a network only available after a VPN connection has been established. Note the two commands used, one refers to the interface (the vpn interface ppp0) while the other refers to the gateway (
$ sudo route -n add -net  -interface ppp0
add net gateway ppp0
$ telnet 3389
Connected to
Escape character is '^]'
$ sudo route -n delete
delete host
$ telnet 3389
^C #timeout
$ sudo route -n add -net  -gateway
add net
$ telnet 3389
Connected to
Escape character is '^]'

Monday, December 16, 2013

Find which process owns the socket listener AKA open port

In one word, lsof. For example the below will list the process which holds port 8080 open:
lsof -i tcp:8080
This is a useful one which any Linux sysadmin should be aware of.

Saturday, December 14, 2013

top and cron - Log all commands being run every minute

To troubleshoot what is going on in Unix and Linux systems your first step is usually top. You can run it in two modes: interactive and batch. In batch mode the output can be piped to any other command, making it ideal to leave croned on a server so you can later analyze what is going on there:
*/1 * * * *  COLUMNS=512 top -b -n 1 >> /tmp/top.log
Option b stands for batch mode, and "n 1" tells top to end after a single iteration/refresh. The COLUMNS=512 prefix widens the output so long command lines are not truncated.

Friday, December 13, 2013

Java client unable to find valid certification path to requested target

The keytool command is used to manipulate the Java keystore. Using this POB recipe you should be able to add any certificate to your keystore, including self-signed and even expired certificates. Do not add a self-signed or expired certificate to the keystore of production servers though! So today we had the below problem, which I have seen multiple times before:
com.sun.jersey.api.client.ClientHandlerException: PKIX path building failed: unable to find valid certification path to requested target
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(
However adding the certificate to the keystore had no effect. The only explanation for a behavior like this is that the client java program is still not using the certificate, meaning it is most likely not pointing to the keystore we think it is. Path issues are the first thing to look for, and in my case someone, for some reason, had pointed the java binary to the default java installation on the servers:
$ ls -al /usr/bin/java
lrwxrwxrwx 1 root root 22 Dec 13 13:39 /usr/bin/java -> /etc/alternatives/java
Which was corrected manually as:
sudo rm /usr/bin/java
sudo ln -s /opt/jdk/bin/java  /usr/bin/java

Thursday, December 12, 2013

Kanban: Prioritization, Estimation, Classes of Service, Severity, Priority Number and Technology debt

The blue book is a gem for Software Project Managers. For many years after reading the Bible of Software Engineering I did not find a concise explanation that would give me hope about the possible existence, after all, of a Silver Bullet for Project Management, but David J. Anderson's book is a great must-read-first for those still looking for it.
Prioritization and estimation are important and at the same time wasteful activities. At the core of a good decision is a proper triage system.
Every software issue should have a class of service (COS) to make sure its high level priority is understood and that the proper SLA is applied.
IMO they are not to be confused with severity, which is an important classification for Standard and Intangible COS, or with priority number, which allows to further qualify tickets within a severity.
COS and teams are orthogonal. Let us analyze an example to clarify how technology support tickets can have any COS.
Technology tickets are *not* always intangible. Technology debt can be a huge problem if the tickets are not properly prioritized. As technology leaders we need to make sure we explain these important concepts to the stakeholders.

Expedite Class Of Service

Consider an SSL weakness such as the lack of Perfect Forward Secrecy (PFS), which might result in a violation of Massachusetts and California privacy laws. Resolving such a vulnerability in your web server should probably be considered an Expedite class issue.

Fixed Delivery Date Class Of Service

Let's suppose you have received an alert about the need to put another server in the cluster, as capacity has gone over the imposed threshold and in less than a week the cluster will most likely become unresponsive. This case would be considered a Fixed Delivery Date class issue.

Standard Class Of Service

Your calendar has reminded you about the need to upgrade to the newest long term supported Operating System before the current one goes out of support a year from now. This is to be considered a Standard class issue, which could escalate to Fixed Delivery Date or even Expedite, but at the moment we have plenty of time to get it done.

Intangible Class Of Service

There is a request to make a devops script responsible for cloning a server go faster. This will be Intangible class unless it is demonstrated that it will save hours of development work, in which case it might become Standard.

Tuesday, December 10, 2013

Add username in apache logs with form authentication

If you use basic authentication, the Apache log option "%u" will log the currently logged-in user. However, if you use form authentication you will need to play with response headers. Below is a typical custom log for your Apache + Java/JEE setup:
LogFormat "%h %l %{USER}o %t \"%r\" %>s %b %{JSESSIONID}C" custom
CustomLog /var/log/apache2/ custom
It will generate something like: - myUser - [28/Sep/2013:18:09:40 -0400] "GET /my/path HTTP/1.1" 200 24292 6FCC544E05F7F5D31691C5907F99CFAA.node1
The user will only be logged if "USER" is set as a response header by the Java / JEE server.

Friday, December 06, 2013

When ping does not work use tcptraceroute

WARNING: Make sure you own the machine you are troubleshooting. Port probing might be considered illegal where you live.
tcptraceroute 80

Monday, November 25, 2013

JIRA Agile Extended Kanban Board

JIRA Agile does not keep the column width as you add more columns, resulting in unusable boards when the number of columns reaches a certain point and your display device is not big enough.

Here are a couple of hacks I have tried so far until JIRA Agile provides a fix for this issue. All credits here go to Jose Garcia, who helped me tweak the standard JIRA Kanban board with javascript plus CSS hacks.

TamperMonkey script

This is our preferred option so far. I have built a Chrome Tampermonkey JIRA Extended Rapidboard script. A similar script could be tested and released for Firefox's Greasemonkey.

Javascript Injector add-on

I started using this at the beginning but it was buggy in Chrome for Windows (on a Mac it worked reasonably well):
  1. Install Javascript Injector chrome plugin
  2. Set url as http://your.jira.url/secure/RapidBoard.jspa
  3. Paste the snippet in “script”:
    var clone = $("#ghx-column-header-group").clone();
    clone.attr("id", "newHeader").css("background", "#FFF").css("position", "absolute").css("width", "1465px").css("margin-top", "-90px");
    $("#ghx-pool").css("width", "1500px");
    $("body").removeClass("ghx-scroll-columns").css("overflow-y", "hidden !important");
    $("#ghx-work").attr("id", "ghx-work1").css("overflow-x", "scroll").css("overflow-y", "hidden").width("2000px");
  4. Hit “Inject Now”. You have some options there, like unchecking "autorun" or using a regex for "url", so most likely you will be able to introduce further customization for your own needs, like when you need something different depending on the specific board.

Other approaches

You can certainly build your own extension. For example it would make sense to make the width of the whole board configurable, as well as turning the script ON/OFF. These features are available via Tampermonkey or Greasemonkey, but I have to agree the interfaces might be a little bit scary for non-javascript programmers.

Friday, November 22, 2013

When the SSL certificate expires one liner

echo | openssl s_client -connect ${host}:${port} 2>/dev/null | openssl x509 -text | grep "Not After"
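Building on the one-liner, the "Not After" date can be turned into a days-left figure; the sample date below is made up, and the date syntax is GNU:

```shell
# Turn a certificate's notAfter date into the number of days remaining
not_after="Jan 21 13:29:12 2024 GMT"        # sample value; in real life parse it from the openssl output
expiry_epoch=$(date -d "$not_after" +%s)    # GNU date syntax; OS X needs date -j -f instead
now_epoch=$(date +%s)
echo "$(( (expiry_epoch - now_epoch) / 86400 )) days left"
```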

Wednesday, November 20, 2013

Got OWASP? Tomcat.tomdept vulnerability or bad hardening?

We have known this rule for ages: do not run services you do not need. Hardening servers is mainstream already, and yet malware gets in through services that should not be running.

Why would someone run the Tomcat "manager" application? It is just one of the first things you should remove when you install your brand new tomcat.

Not doing so will only increase your chances of getting compromised with malware like Tomcat.tomdept.
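A minimal hardening sketch, assuming a standard install layout (the CATALINA_HOME default below is an assumption; adjust to your install):

```shell
# Remove the default webapps a hardened tomcat should not be serving,
# the manager application among them
CATALINA_HOME=${CATALINA_HOME:-/opt/tomcat}
for app in manager host-manager docs examples; do
  rm -rf "$CATALINA_HOME/webapps/$app"
done
```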

Monday, November 18, 2013

Disk full, beyond resource leaking it could lead to increased business risk

We do our best to identify big files and directories, delete them and so on. But is that enough? We live in a world of abundance and think that pouring in more hardware resources is the way to go when we get that "Disk full" error or similar. As a consequence you get developers using better hardware than the servers might have.

This, combined with the lack of performance and stress testing, ends up hiding important code problems which lead to resource leaks (memory, file system, CPU) and pop up in the servers at a later phase.

If you constrain resources in developer machines on purpose then you might be able to find some of those problems quicker.

In a developer machine you will see the disk full:

$ ssh df -h
Filesystem                           Size  Used Avail Use% Mounted on
/dev/sda1                             34G   32G  4.0K 100% /

Time to use lsof to find out the open files:
$ ssh lsof >~/lsof.txt
After a reboot I got back 25% of the file resources:
$ ssh df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        34G   23G  8.9G  73% /

Now it is time to analyze lsof:
$ sort -n -k 7 ~/lsof.txt | tail -1
java       1645        dev  202w      REG                8,1 9086951424     530660 /home/dev/.local/share/Trash/expunged/555119177 (deleted)

The 7th column gives us the size, so we sort by its numeric value and take the last record, which contains the biggest consumer. It tells us there is a 9GB file which was deleted but is still in use by tomcat (process 1645). Most likely there is a resource leak.
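The same sort can be tried on a toy sample laid out in lsof's columns; the first line is the real record from above, the second is made up for contrast:

```shell
# Column 7 is SIZE/OFF; a numeric sort on it surfaces the biggest open file
printf '%s\n' \
  'java 1645 dev 202w REG 8,1 9086951424 530660 /home/dev/.local/share/Trash/expunged/555119177 (deleted)' \
  'bash 1200 dev 3r REG 8,1 1024 123456 /etc/profile' \
  | sort -n -k 7 | tail -1
```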

How can we find it? Stop any automated processes in charge of deleting files and run lsof when you run out of HDD space again. It should tell you exactly which file it is, and you should be able to look into your source code for the resource not being closed. In Java 7, try-with-resources should be used; previously we used libraries, or simply those of us coming from C were way more careful when operating with resources. In any case, look into your IDE or compiler options that could help identify unclosed streams. Java developers should turn on the warnings in their IDEs and clean up the classes they touch. If this leak is not picked up by compiler or IDE warnings then reach out to the community to find out why. FindBugs could probably help, and if not, reach out to its maintainers; they will be more than happy to help as far as I can tell.

I have found in my years as a developer that we get "overwhelmed" by alerts, compiler warnings and many other "inconveniences" and as a result we ignore them all. All this happens until the team faces the challenge from IT arguing that the software they have built is not efficient.

Quality of code is important and, as in any other business, defines its very future. Code quality is about risk management, and as such, as a developer you should not ignore warnings.

Happy (and responsible) coding!

Friday, November 15, 2013

Apache and Tomcat mod_proxy [warn] Proxy client certificate callback: ( downstream server wanted client certificate but none are configured

This warning was coming up in apache logs:
[Fri Nov 15 16:03:13 2013] [warn] Proxy client certificate callback: ( downstream server wanted client certificate but none are configured

Expired or not currently valid Certificate

The certificate might be expired or it could have been issued for a date in the future. You can check the validity using:
openssl s_client -connect | openssl x509 -noout -dates
depth=0 /C=Argentina/ST=FL/L=Buenos Aires/O=My Company, LLC/OU=Operations/
verify error:num=18:self signed certificate
verify return:1
depth=0 /C=Argentina/ST=FL/L=Buenos Aires/O=My Company, LLC/OU=Operations/
verify return:1
notBefore=Jan 24 13:29:12 2012 GMT
notAfter=Jan 21 13:29:12 2024 GMT
Recreating the certificate resolved the issue.

Tomcat misconfiguration

The SSL Connector had the below configuration set to "optional", but when using apache as a reverse proxy for load balancing this configuration is not needed. We should use the default, which is "none":

Wednesday, November 13, 2013

Updating Ubuntu Packages through PPA Ubuntu 12.04 with SVN 1.7 svn status -u svn: The path '.' appears to be part of a Subversion 1.7 or greater working copy. Please upgrade your Subversion client to use this working copy.

Personal Package Archives are not trusted; however, if you know the publisher you can at least manage the risk. Subversion 1.7 is not available for Ubuntu 12.04, so you need to trust the svn PPA if you want to install it on your Ubuntu desktop and avoid:
$ svn status -u
svn: The path '.' appears to be part of a Subversion 1.7 or greater
working copy.  Please upgrade your Subversion client to use this
working copy.
I see a lot of posts encouraging you to modify /etc/apt/sources.list. If you do so, make sure you revert the changes after the installation; you should never have to manually edit sources.list. To install a particular package out of a PPA:
sudo add-apt-repository -y ppa:svn/ppa
sudo apt-get update
sudo apt-get -y install subversion
Note that this will add some files:
$ ls -al /etc/apt/sources.list.d/
-rw-r--r-- 1 root root  238 Dec  6 14:29 svn-ppa-saucy.list
To remove the ppa repos:
sudo add-apt-repository --remove -y ppa:svn/ppa
You will notice the list file now has size 0:
$ ls -al /etc/apt/sources.list.d/
-rw-r--r-- 1 root root  0 Dec  6 14:33 svn-ppa-saucy.list

A Java Runtime Environment must be available in order to run ... No Java Virtual Machine was found in the path

On some Ubuntu desktops (though not all) I have found the below error when trying to run Eclipse or Talend from gnome shortcuts:
A Java Runtime Environment must be available in order to run ... No Java Virtual Machine was found in the path ...
I solved it by adding the -vm option to the command, for example:
${path-to-talend-binary} -vm ${path-to-jdk}/bin
Why does gnome not find the path when it is included in the user's profile? Running "${path-to-talend-binary}" directly from the command line does work.

Saturday, November 09, 2013

Find Talend Components in use from command line

Here is how to find out a list of components for a particular Talend job:
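One way to do it, under the assumption that the job's .item project XML carries a componentName attribute per component (the file name below is hypothetical):

```shell
# List the unique component types referenced in the job's .item project XML
grep -o 'componentName="[^"]*"' MyJob_0.1.item 2>/dev/null \
  | cut -d'"' -f2 | sort -u
```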

JVM OutOfMemory troubleshooting - Eclipse MAT to rescue Talend jobs

I will illustrate with an example how to use Eclipse MAT to debug OutOfMemory issues in standalone Talend jobs.

Talend Open Studio for Data Integration generates a shell file that wraps a java command. When that shell script returns OutOfMemory errors we need to proceed exactly the same way we would troubleshoot OutOfMemory errors in any process running on the JVM. We need to generate a heap memory dump file (*.hprof) and analyze it with a tool to find out if we are holding more objects in memory than actually needed.

The first thing we need to do is to narrow down the OutOfMemory to a command line:
job/job/ --context_param ...
Then we need to add the necessary flags to the shell script to get the heap dump file:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/tmp/dumps"
Now we run the script again and we notice the message:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/dumps/java_pid5394.hprof ...
Note that when running JDK 8, HeapDumpPath is actually treated not as a directory but as the generated file, so you will see a different message indicating that /tmp/dumps is actually your hprof file:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/dumps ...
You will need to append the extension in order to load it in Eclipse MAT, or simply use the file name directly in the JVM flag:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/tmp/dump.hprof"
Our file has been generated so it is time to load the Heap Memory Dump file (*.hprof) into Eclipse Memory Analyzer (MAT)

Just go to "Menu|File|Open File". Once loaded select "Leak Suspects Report". The pie chart should identify major problem suspect(s) and scrolling down that page you can drill down:

Click in "Details" for each problem suspect, for example:

Look how, in the case of Talend, practically the whole consumption happens in the main() method. Talend just produces a huge Java class with a main method. Drill into "Accumulated Objects by Class", available towards the bottom of the page:

As you can see, dom4j is used for parsing what is most likely big XML content (instead of SAX, for example). Clicking on the link for the objects you will be able to navigate through their hierarchy. With Object Query Language (OQL) you can literally inspect anything in the hierarchy. Locate the OQL icon, click on it and get ready to type queries. If you just "select * from" the object, the result will be a hierarchy similar to the one you get when inspecting individual objects, but with OQL you can drill down, getting hints about the actual loaded data. All you need is to look into the fields of the object, for which you can use the hierarchy inspection or even the Javadocs directly:

The solution is most likely to use a different configuration for the faulty component or, in the case of a deficiency in it, look for an alternative. BTW, if your heap is too big you can always limit memory consumption to minimize the size of the hprof file. Most likely, with a smaller memory footprint the memory leak will still be revealed by an excessive usage of certain classes of objects.

Profile the application before getting surprising OutOfMemory

You can generate heap dumps from running programs at any time. You just need the pid of the running process, for example:
$ jps
2017 Bootstrap
13667 Jps
13650 talend_sample
$ jmap -dump:format=b,file=/tmp/talend_sample.hprof 13650
Dumping heap to /tmp/talend_sample.hprof  ...
Heap dump file created

Kanban: Show stale issues to reach Kaizen Moments. Show issues whose status has not changed for some time with JIRA Agile

Agile software development needs to be as simple as possible. Lean thinking is about that: it is about making sure the SDLC gets to a stage where it cannot be simpler. This is what the philosophy of continuous improvement is about.

High quality production starts with great culture supported by lean processes in place.

Bottlenecks have to be identified, and if the culture is right then the team will speak out about issues in the daily meeting. However, using a tool to remind those who forget about the issues they are having would be great, and that is exactly what JIRA allows. You can, for example, mark issues that have been stale for 2 days using the below JQL for "Card Color":
NOT (Status changed AFTER -2d)
Of course identifying the bottleneck is a great part of the equation, but resolving it is the most important part. Project managers, coaches, coordinators etc. should not be spending time chasing what people are doing or where they are stuck. They should be spending time removing roadblocks and eliminating bottlenecks. A tool can be helpful at identifying, but intelligence (only found in human brains at the time of this writing) is needed to resolve the challenges we face.

Be happy when you find issues; be worried when you find none. If you hear from the team that they have no issues but you see there are stale items on the board, then clearly you have reached a Kaizen moment. Without bottlenecks there will be no process improvement.

Wednesday, November 06, 2013

Share your Google Drawing via SVG with plain HTML

SVG has been around for a while, and if you don't care about unsupported old (and most likely vulnerable) browsers then you should be fine just downloading your Google Drawings hosted in Google Drive as SVG and then pasting their content (plain XML) inside any web page, for example your wiki. You can use the SVG as an external image or inline (as I show below).

It will not be difficult to get your Google Drawings and build presentations out of them. And yes, presentations which lose no quality on bigger screens and which resize nicely depending on the available canvas.


In the case of Mediawiki you need the below in LocalSettings.php to be able to use html directly in pages:
$wgRawHtml = true;
Now you can include any html code, like the <svg> tag, inside a literal <html> tag as in:
<html><?xml version="1.0" standalone="yes"?>

<svg version="1.1" viewBox="0.0 0.0 960.0 720.0" fill="none" stroke="none" stroke-linecap="square" stroke-miterlimit="10" xmlns="" xmlns:xlink=""><clipPath id="p.0"><path d="m0 0l960.0 0l0 720.0l-960.0 0l0 -720.0z" clip-rule="nonzero"></path></clipPath><g clip-path="url(#p.0)"><path fill="#000000" fill-opacity="0.0" d="m0 0l960.0 0l0 720.0l-960.0 0z" fill-rule="nonzero"></path><path fill="#cfe2f3" d="m165.11215 69.882095l0 0c-1.0062714 -7.7441444 2.2975006 -15.410381 8.509399 -19.74564c6.211899 -4.3352585 14.242477 -4.5792274 20.684082 -0.62838364l0 0c2.2818146 -4.5027695 6.458191 -7.611622 11.265884 -8.3861885c4.807678 -0.77456665 9.681976 0.87612915 13.148499 4.4527817l0 0c1.9438171 -4.0825005 5.7605133 -6.8254166 10.095764 -7.2554245c4.3352356 -0.43000793 8.575455 1.5137558 11.216019 5.1415367l0 0c3.5117798 -4.3274384 9.099106 -6.1494713 14.344315 -4.6776924c5.245224 1.4717789 9.206223 5.9730225 10.169052 11.556019l0 0c4.3025208 1.2290115 7.886444 4.353321 9.825775 8.565704c1.939331 4.212387 2.0438538 9.099625 0.28652954 13.399002l0 0c4.236725 5.774544 5.2278137 13.469048 2.6033936 20.212059c-2.6244507 6.743019 -8.470093 11.521484 -15.355469 12.552162c-0.04852295 6.3285675 -3.3627625 12.135574 -8.665298 15.182762c-5.3025208 3.0471878 -11.76532 2.8587189 -16.897324 -0.4927597c-2.1859589 7.579544 -8.338715 13.156502 -15.800034 14.321419c-7.461334 1.164917 -14.893646 -2.291031 -19.085907 -8.8747635c-5.1388397 3.24514 -11.305008 4.17997 -17.10756 2.5935898c-5.8025513 -1.5863724 -10.752701 -5.560318 -13.733795 -11.025391l0 0c-5.251251 0.6435318 -10.328476 -2.2056427 -12.711884 -7.133484c-2.3833923 -4.927841 -1.5655975 -10.885323 2.0475159 -14.915779l0 0c-4.68425 -2.8872223 -7.0744324 -8.616432 -5.924164 -14.200073c1.1502686 -5.583641 5.5803223 -9.756432 10.980072 -10.34243z" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="2.0" stroke-linejoin="round" stroke-linecap="butt" d="m165.11215 69.882095l0 0c-1.0062714 -7.7441444 2.2975006 -15.410381 8.509399 
-19.74564c6.211899 -4.3352585 14.242477 -4.5792274 20.684082 -0.62838364l0 0c2.2818146 -4.5027695 6.458191 -7.611622 11.265884 -8.3861885c4.807678 -0.77456665 9.681976 0.87612915 13.148499 4.4527817l0 0c1.9438171 -4.0825005 5.7605133 -6.8254166 10.095764 -7.2554245c4.3352356 -0.43000793 8.575455 1.5137558 11.216019 5.1415367l0 0c3.5117798 -4.3274384 9.099106 -6.1494713 14.344315 -4.6776924c5.245224 1.4717789 9.206223 5.9730225 10.169052 11.556019l0 0c4.3025208 1.2290115 7.886444 4.353321 9.825775 8.565704c1.939331 4.212387 2.0438538 9.099625 0.28652954 13.399002l0 0c4.236725 5.774544 5.2278137 13.469048 2.6033936 20.212059c-2.6244507 6.743019 -8.470093 11.521484 -15.355469 12.552162c-0.04852295 6.3285675 -3.3627625 12.135574 -8.665298 15.182762c-5.3025208 3.0471878 -11.76532 2.8587189 -16.897324 -0.4927597c-2.1859589 7.579544 -8.338715 13.156502 -15.800034 14.321419c-7.461334 1.164917 -14.893646 -2.291031 -19.085907 -8.8747635c-5.1388397 3.24514 -11.305008 4.17997 -17.10756 2.5935898c-5.8025513 -1.5863724 -10.752701 -5.560318 -13.733795 -11.025391l0 0c-5.251251 0.6435318 -10.328476 -2.2056427 -12.711884 -7.133484c-2.3833923 -4.927841 -1.5655975 -10.885323 2.0475159 -14.915779l0 0c-4.68425 -2.8872223 -7.0744324 -8.616432 -5.924164 -14.200073c1.1502686 -5.583641 5.5803223 -9.756432 10.980072 -10.34243z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m159.95244 94.723076l0 0c2.2105103 1.3624878 4.7641754 1.980545 7.3181 1.7711868m3.345108 20.278572c1.0982971 -0.13459778 2.17482 -0.41960907 3.201828 -0.847702m27.638 9.279144c-0.77246094 -1.2131119 -1.4192047 -2.50943 -1.9292145 -3.866867m36.816498 -1.5800171l0 0c0.39852905 -1.3818665 0.6567383 -2.8041 0.77033997 -4.242981m24.79132 -10.446487c0.05166626 -6.737732 -3.6026306 -12.90686 -9.39328 -15.857582m22.145233 -16.90593c-0.93777466 2.2943268 -2.3694153 4.32959 -4.182617 5.946213m-5.9288025 -27.911522l0 0c0.15975952 0.9265022 0.23373413 1.8669281 0.2208252 2.8082428m-24.733795 -9.686291l0 
0c-0.87602234 1.079483 -1.5977478 2.2857895 -2.1427002 3.5813599m-19.169281 -1.4679413l0 0c-0.46684265 0.980484 -0.81544495 2.0180054 -1.037796 3.0886612m-23.376953 0.84482574l0 0c1.3630524 0.8359947 2.6240387 1.842205 3.7552948 2.9965286m-32.948395 17.377605l0 0c0.13868713 1.0673447 0.3578186 2.1215286 0.655365 3.1526947" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="2.0" stroke-linejoin="round" stroke-linecap="butt" d="m159.95244 94.723076l0 0c2.2105103 1.3624878 4.7641754 1.980545 7.3181 1.7711868m3.345108 20.278572c1.0982971 -0.13459778 2.17482 -0.41960907 3.201828 -0.847702m27.638 9.279144c-0.77246094 -1.2131119 -1.4192047 -2.50943 -1.9292145 -3.866867m36.816498 -1.5800171l0 0c0.39852905 -1.3818665 0.6567383 -2.8041 0.77033997 -4.242981m24.79132 -10.446487c0.05166626 -6.737732 -3.6026306 -12.90686 -9.39328 -15.857582m22.145233 -16.90593c-0.93777466 2.2943268 -2.3694153 4.32959 -4.182617 5.946213m-5.9288025 -27.911522l0 0c0.15975952 0.9265022 0.23373413 1.8669281 0.2208252 2.8082428m-24.733795 -9.686291l0 0c-0.87602234 1.079483 -1.5977478 2.2857895 -2.1427002 3.5813599m-19.169281 -1.4679413l0 0c-0.46684265 0.980484 -0.81544495 2.0180054 -1.037796 3.0886612m-23.376953 0.84482574l0 0c1.3630524 0.8359947 2.6240387 1.842205 3.7552948 2.9965286m-32.948395 17.377605l0 0c0.13868713 1.0673447 0.3578186 2.1215286 0.655365 3.1526947" fill-rule="nonzero"></path></g></svg></html>


Here is a Cloud SVG from Google Drawing, included using the inline svg tag above.

Friday, November 01, 2013

OSX Mavericks still without an RSS Reader integrated in Mail

This is one of those simple things Mail still does not support (it was dropped after Lion). I am unsure why it has not been included, but if you use Mail with Exchange then you will be able to read RSS with the steps below *only* if you have an Outlook client constantly pulling the RSS on the client side, which makes this method impractical really, unless Microsoft were to support pulling the RSS from the server side:
  1. Add the feed via Outlook from a Windows machine. In Outlook's Account Settings look for the RSS Feeds tab and "change" your feed to make sure it has the correct "update limit". In my case, for Mediawiki, I found that the default stopped the feed from being refreshed, so I unchecked "Use the publisher update recommendation". I also had to go through "Send/Receive tab | Send & Receive group | Send/Receive | Define Send/Receive Groups | Group Name | All Accounts | Schedule an automatic send/receive every n minutes check box | When Outlook is Offline | Schedule an automatic send/receive every n minutes"
  2. See the feed inside the "RSS Feeds" folder in Mail

Test connectivity through netcat

You want to test that networking is properly configured before you actually run any services. For example, you are migrating from mod_jk to mod_proxy, so your DMZ should be able to connect to Tomcat on port 8443 (HTTPS) instead of 8009 (AJP). You cannot make any changes yet, but it is Friday and you want to make sure the routes through the DMZ firewalls are working correctly. Just put a listener on the Tomcat server and use a client to connect to it from Apache. So on the Tomcat side:
$ nc -l 8443
In Apache use a client:
$ nc tomcatServerDomainOrIp 8443
And you should get whatever you type (e.g. "Hello") on the server side if the connectivity is correct. This is of course only testing TCP traffic on port 8443.
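If you need to script the same check (say, from a cron job or a deployment sanity test), bash's built-in /dev/tcp pseudo-device can replace the interactive client. A sketch, reusing the tomcatServerDomainOrIp placeholder from above:

```shell
# Scripted TCP reachability check using bash's /dev/tcp pseudo-device.
# The exit status of the redirection tells us whether the connect succeeded.
if timeout 5 bash -c 'exec 3<>/dev/tcp/tomcatServerDomainOrIp/8443' 2>/dev/null; then
  echo "port open"
else
  echo "port closed"
fi
```

Unlike nc, this needs nothing installed on the client box beyond bash itself.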

Thursday, October 31, 2013

Running HTTPS services in Tomcat - /home/user/.keystore (No such file or directory)

I found today a tomcat server which was complaining about:
Oct 31, 2013 8:26:22 PM org.apache.catalina.startup.SetAllPropertiesRule begin
WARNING: [SetAllPropertiesRule]{Server/Service/Connector} Setting property 'SSLCertificateFile' to '/opt/tomcat/certs/' did not find a matching property.
Oct 31, 2013 8:26:22 PM org.apache.catalina.startup.SetAllPropertiesRule begin
WARNING: [SetAllPropertiesRule]{Server/Service/Connector} Setting property 'SSLCertificateKeyFile' to '/opt/tomcat/certs/' did not find a matching property.
Oct 31, 2013 8:26:23 PM getStore
SEVERE: Failed to load keystore type JKS with path /home/user/.keystore due to /home/user/.keystore 
(No such file or directory) /home/user/.keystore (No such file or directory)
I was convinced my APR was installed correctly, but Tomcat was giving me the clue in the lines below (which I missed at first because they were logged at INFO level):
Oct 31, 2013 8:40:47 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
This was the result of a missing line in setenv:
-Djava.library.path=/usr/local/apr/lib \
So make sure the APR path is set in setenv and that you have no INFO message stating that APR was not found.
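For reference, a minimal setenv.sh carrying that flag might look like this (a sketch; the file location and paths are assumptions to adjust for your installation):

```shell
# /opt/tomcat/bin/setenv.sh (sketch) -- sourced by catalina.sh at startup
CATALINA_OPTS="$CATALINA_OPTS \
-Djava.library.path=/usr/local/apr/lib"
export CATALINA_OPTS
```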

Wednesday, October 30, 2013

On Security: From mod-jk to mod-proxy

Mod_jk is a fast binary protocol which has allowed us to scale Tomcat-hosted Java applications for years. However, encrypting the traffic makes the mod_jk configuration messy.

Mod_proxy, on the other hand, is a little bit slower because it works directly on the HTTP connectors, but being a proxy for regular HTTP(S) it supports encrypted traffic in a way that is easier to configure.

And since that overhead is negligible nowadays on a high-speed LAN, there is no reason that should stop you from migrating from an insecure mod_jk configuration to an encrypted mod_proxy configuration.

Here is what it takes to get the traffic to your Tomcat servers (a very simple but fully functional example). Of course, as usual, there are tons of mod_proxy configuration options you might need; all I am presenting here is how to get sticky sessions working. Note that certificates need to be hosted in Tomcat as well (besides Apache). In addition, note the jvmRoute attribute is still needed, the same way it is for mod_jk.

I am assuming you are using the Apache Portable Runtime (APR) library. Keep in mind that you must not have any warnings about APR. This, for example, would be a no-no (solve that first by installing the library and making sure Tomcat's library path can reach it):
Oct 31, 2013 8:40:47 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Read more about this issue here.

In tomcat server.xml
       <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="200" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" />
       <Engine name="Catalina" defaultHost="localhost" jvmRoute="sample-app1">
          <Host name=""  appBase="sample-app"
          unpackWARs="true" autoDeploy="true"
Enable the necessary modules in apache:
$ sudo a2enmod proxy_http
$ sudo a2enmod proxy_balancer
In the Apache virtual host a few things change, like everything related to "proxy" and the elimination of any jk directive. Note for example how for static resources we make exceptions through the use of an exclamation mark, so those are served directly from Apache instead of Tomcat (replacing the Alias/SetEnvIf Request_URI + no-jk statements needed for mod_jk).
 #Make sure Proxy errors are not served back to the user (I use a common page for all 500 errors here)
 ErrorDocument 500 /html/error/503.html
 SSLProxyEngine on
 ProxyPreserveHost on
 ProxyRequests off

 # Use the below only if you are working with *expired* self-signed certificates. It should be fine if the cert is not expired though.
 # SSLProxyCheckPeerExpire off

 <Proxy balancer://sample-app>
    ProxySet lbmethod=bybusyness
    BalancerMember route=sample-app1
    BalancerMember route=sample-app2
 </Proxy>
ProxyPass /images !
ProxyPass /js !
ProxyPass /css !
ProxyPass /html !
ProxyPass / balancer://sample-app/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On

Do not forget to restart apache. To test this configuration you can sniff the traffic now in Apache:
$ sudo tcpdump -i eth0 -X port 8443 > /tmp/del
Then hit the service from curl targeting a particular cluster node (using -k in case you use a self-signed certificate):
$ curl -i -b "JSESSIONID=.sample-app1" -k ""
Now inspect the results in apache and you should see no hits for password:
$ grep password /tmp/del

Thursday, October 24, 2013

Connection is read-only. Queries leading to data modification are not allowed SQL Error: 0, SQLState: S1009

The JPA specification does not support changing the transaction isolation level, and that is the reason the HibernateExtendedJpaDialect and IsolationSupportSessionTransactionData classes have been developed in the wild (Disclaimer: I am merely copying and pasting the ones I found below, but all credit goes to their original authors).

We went ahead and tried HibernateExtendedJpaDialect, just to notice that entity persistence would randomly fail with:
WARN [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 0, SQLState: S1009
ERROR [org.hibernate.util.JDBCExceptionReporter] - Connection is read-only. Queries leading to data modification are not allowed
When you compare the code of the two classes you can identify extra cleanup which is essential to avoid leaving existing connections in a dirty state. This is very important, especially for pooled connections of course:
package com.sample.utils.jpa;

import java.sql.Connection;
import java.sql.SQLException;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceException;

import org.hibernate.Session;
import org.hibernate.jdbc.Work;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.orm.jpa.vendor.HibernateJpaDialect;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionException;

public class HibernateExtendedJpaDialect extends HibernateJpaDialect {

    private Logger logger = LoggerFactory.getLogger(HibernateExtendedJpaDialect.class);

    /**
     * This method is overridden to set custom isolation levels on the connection
     * @param entityManager
     * @param definition
     * @return
     * @throws PersistenceException
     * @throws SQLException
     * @throws TransactionException
     */
    @Override
    public Object beginTransaction(final EntityManager entityManager,
            final TransactionDefinition definition) throws PersistenceException,
            SQLException, TransactionException {
        Session session = (Session) entityManager.getDelegate();
        if (definition.getTimeout() != TransactionDefinition.TIMEOUT_DEFAULT) {
            session.getTransaction().setTimeout(definition.getTimeout());
        }

        logger.debug("Transaction started");

        session.doWork(new Work() {

            public void execute(Connection connection) throws SQLException {
                logger.debug("The connection instance is {}", connection);
                logger.debug("The isolation level of the connection is {} and the isolation level set on the transaction is {}",
                        connection.getTransactionIsolation(), definition.getIsolationLevel());
                DataSourceUtils.prepareConnectionForTransaction(connection, definition);
            }
        });

        return prepareTransaction(entityManager, definition.isReadOnly(), definition.getName());
    }
}

package com.sample.utils.jpa;

import java.sql.Connection;
import java.sql.SQLException;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceException;

import org.hibernate.Session;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.datasource.DataSourceUtils;
import org.springframework.orm.jpa.vendor.HibernateJpaDialect;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionException;

public class IsolationSupportHibernateJpaDialect extends HibernateJpaDialect {

    private static final long serialVersionUID = 1L;
    private Logger logger = LoggerFactory.getLogger(IsolationSupportHibernateJpaDialect.class);

    /**
     * This method is overridden to set custom isolation levels on the connection
     * @param entityManager
     * @param definition
     * @return
     * @throws PersistenceException
     * @throws SQLException
     * @throws TransactionException
     */
    @Override
    public Object beginTransaction(final EntityManager entityManager, final TransactionDefinition definition)
        throws PersistenceException, SQLException, TransactionException {
        boolean infoEnabled = false;
        boolean debugEnabled = false;
        Session session = (Session) entityManager.getDelegate();
        if (definition.getTimeout() != TransactionDefinition.TIMEOUT_DEFAULT) {
            session.getTransaction().setTimeout(definition.getTimeout());
        }

        Connection connection = session.connection();
        infoEnabled = logger.isInfoEnabled();
        debugEnabled = logger.isDebugEnabled();
        if (infoEnabled) {
            logger.info("Connection Info: isolationlevel={} , instance={} ", connection.getTransactionIsolation(), connection);
            logger.info("Transaction Info: IsolationLevel={} , PropagationBehavior={} , Timeout={} , Name={}",
                new Object[] { definition.getIsolationLevel(), definition.getPropagationBehavior(), definition.getTimeout(),
                        definition.getName() });
        }
        if (debugEnabled) {
            logger.debug("The isolation level of the connection is {} and the isolation level set on the transaction is {}",
                connection.getTransactionIsolation(), definition.getIsolationLevel());
        }
        Integer previousIsolationLevel = DataSourceUtils.prepareConnectionForTransaction(connection, definition);
        if (infoEnabled) {
            logger.info("The previousIsolationLevel {}", previousIsolationLevel);
        }

        if (infoEnabled) {
            logger.debug("Transaction started");
        }

        Object transactionDataFromHibernateJpaTemplate = prepareTransaction(entityManager, definition.isReadOnly(),
            definition.getName());

        return new IsolationSupportSessionTransactionData(transactionDataFromHibernateJpaTemplate, previousIsolationLevel,
            connection);
    }

    /*
     * (non-Javadoc)
     * @see org.springframework.orm.jpa.vendor.HibernateJpaDialect#cleanupTransaction(java.lang.Object)
     */
    @Override
    public void cleanupTransaction(Object transactionData) {
        super.cleanupTransaction(((IsolationSupportSessionTransactionData) transactionData)
            .getSessionTransactionDataFromHibernateTemplate());
        ((IsolationSupportSessionTransactionData) transactionData).resetIsolationLevel();
    }

    private static class IsolationSupportSessionTransactionData {

        private final Object sessionTransactionDataFromHibernateJpaTemplate;
        private final Integer previousIsolationLevel;
        private final Connection connection;

        public IsolationSupportSessionTransactionData(Object sessionTransactionDataFromHibernateJpaTemplate,
            Integer previousIsolationLevel, Connection connection) {
            this.sessionTransactionDataFromHibernateJpaTemplate = sessionTransactionDataFromHibernateJpaTemplate;
            this.previousIsolationLevel = previousIsolationLevel;
            this.connection = connection;
        }

        public void resetIsolationLevel() {
            if (this.previousIsolationLevel != null) {
                DataSourceUtils.resetConnectionAfterTransaction(connection, previousIsolationLevel);
            }
        }

        public Object getSessionTransactionDataFromHibernateTemplate() {
            return this.sessionTransactionDataFromHibernateJpaTemplate;
        }
    }
}


Tuesday, October 22, 2013

RDP into Windows from native Linux or from Linux XRDP session

If you are doing VDI using Ubuntu XRDP you get a nice security gain: no clipboard allowed out of the box. However, that might also be an issue for developer productivity.

You can always connect to your Windows-based VDI via RDP using rdesktop, though, and when doing so you will have access to the clipboard. After all, in a corporate environment you are most likely using Exchange; since Microsoft Outlook is the perfect client for it, perhaps it makes sense to just RDP into Windows to send those emails sharing snippets of code with your colleagues.
$ rdesktop -g 90% -d myDomain -u myUser -p -
This starts an RDP session in a window taking 90% of the current Linux screen.

Security starts with finding bugs

Finding bugs proactively in your application is not a matter of following good practices for their own sake. It actually has a big impact on risk management. That is the reason I believe no build should succeed if the application is not bug free.

As usual there is a trade-off though. Ranking each bug is important: severity, impact, class of service, you name it.

FindBugs is an open source project which we have used for ages in the Java world. Together with Maven it allows us to break the build if the code is not bug free. Here is all you need in pom.xml:
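A plugin configuration along these lines should do it (a sketch; the plugin version and the rank/threshold values are assumptions you should adapt to your own risk controls):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.5.2</version>
      <configuration>
        <!-- Fail the build on any bug ranked 15 or scarier -->
        <maxRank>15</maxRank>
        <threshold>Medium</threshold>
      </configuration>
      <executions>
        <execution>
          <goals>
            <!-- The check goal breaks the build when bugs are found -->
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```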
Note that I use maxRank=15, which is the default used by the FindBugs Eclipse plugin and which I confirmed myself reveals real issues we should not ignore in our code base (the rank you select will depend on your goals and controls for risk management). As per the documentation: "This element matches warnings with a particular bug rank. The value attribute should be an integer value between 1 and 20, where 1 to 4 are scariest, 5 to 9 scary, 10 to 14 troubling, and 15 to 20 of concern bugs". The threshold is another important parameter to set up for this, BTW.

Now your typical maven build will fail with information about potential bugs:
[INFO] [findbugs:findbugs {execution: findbugs}]
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis....
[INFO] [findbugs:check {execution: default}]
[INFO] BugInstance size is 1
[INFO] Error size is 0
[INFO] Total bugs: 1
[INFO] Dead store to message in com.sample.sayHi(String, Errors, Errors) ["com.sample.HelloWorld"] At[lines 44-267]
The above is just saying that the variable "message" is a "dead store" (a value assigned but never read). Of course you can skip FindBugs to speed up development, like in:
mvn clean install -Dfindbugs.skip=true
You might want to exclude some warnings, like in the case of generated stubs that provide code you know works but that follows bad coding practices. In those cases you have XML exclusion filters or annotations available. To use annotations you need to include findbugs as a compile scope dependency.
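A sketch of that dependency (the groupId and version are assumptions for the FindBugs 2.x line; verify against your repository):

```xml
<dependency>
  <groupId>com.google.code.findbugs</groupId>
  <artifactId>annotations</artifactId>
  <version>2.0.2</version>
</dependency>
```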
Now you can use exclusions like in:
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "NM_SAME_SIMPLE_NAME_AS_SUPERCLASS", justification = "Stub autogenerated classes")
@WebServiceClient(name = "Service", targetNamespace = "", wsdlLocation = "file:/geneva-jax-ws.wsdl")
public class Service extends javax.xml.ws.Service {

    private final static URL SERVICE_WSDL_LOCATION;

Monday, October 21, 2013

Change your Windows Domain password from Linux or OSX

GUI method

Login via RDP and press CTRL+ALT+DEL (FN+CTRL+ALT+DELETE on a MAC OSX) and get the menu to change the password.

CLI method

Samba is all you need:
$ smbpasswd -U myUser -r
In case you do not have smbpasswd available in OSX, should I remind you that brew can help?
$ brew install samba
Here is one typical error you might get if the password does not meet system requirements:
machine rejected the password change: Error was : Password restriction.

From distributed JOTM XAPool to local Tomcat Pool

The old JOTM driver is no longer supported, and while other alternatives exist, our project no longer needed the complexities of distributed transactions. I thought moving back to pure Spring + JPA + Hibernate was going to be easy; however, it took a while to get that downgrade right. Here are the steps we followed. First, eliminate all JOTM dependencies from tomcat:
cd /opt/tomcat/lib
mv -f carol-iiop-delegate.jar carol-interceptors.jar howl.jar jotm-core.jar ow2-connector-1.5-spec.jar ow2-jta-1.1-spec.jar xapool-1.6.beta.jar ~/
Remove dependencies from project:

<!-- Remove the jotm completely when confirmed the new config works -->
Remove the JotmFactoryBean, which for some reason we had to include in our own code (most likely we were using a Spring version where it was not available):
/*
 * Copyright 2002-2008 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.transaction.jta;

import javax.naming.NamingException;
import javax.transaction.SystemException;

import org.objectweb.jotm.Current;
import org.objectweb.jotm.Jotm;

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.FactoryBean;

/**
 * {@link FactoryBean} that retrieves the JTA UserTransaction/TransactionManager
 * for ObjectWeb's JOTM. Will retrieve an already active JOTM instance
 * if found (e.g. if running in JOnAS), else create a new local JOTM instance.
 *
 * With JOTM, the same object implements both the
 * {@link javax.transaction.UserTransaction} and the
 * {@link javax.transaction.TransactionManager} interface,
 * as returned by this FactoryBean.
 *
 * A local JOTM instance is well-suited for working in conjunction with
 * ObjectWeb's XAPool, e.g. with bean definitions like the following:
 *
 * <bean id="jotm" class="org.springframework.transaction.jta.JotmFactoryBean"/>
 * <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
 *   <property name="userTransaction" ref="jotm"/>
 * </bean>
 * <bean id="innerDataSource" class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
 *   <property name="transactionManager" ref="jotm"/>
 *   <property name="driverName" value="..."/>
 *   <property name="url" value="..."/>
 *   <property name="user" value="..."/>
 *   <property name="password" value="..."/>
 * </bean>
 * <bean id="dataSource" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
 *   <property name="dataSource" ref="innerDataSource"/>
 *   <property name="user" value="..."/>
 *   <property name="password" value="..."/>
 *   <property name="maxSize" value="..."/>
 * </bean>
 *
 * Note that Spring's {@link JtaTransactionManager} will automatically detect
 * that the passed-in UserTransaction reference also implements the
 * TransactionManager interface. Hence, it is not necessary to specify a
 * separate reference for JtaTransactionManager's "transactionManager" property.
 *
 * Implementation note: This FactoryBean uses JOTM's static access method
 * to obtain the JOTM {@link org.objectweb.jotm.Current} object, which
 * implements both the UserTransaction and the TransactionManager interface,
 * as mentioned above.
 *
 * @author Juergen Hoeller
 * @since 21.01.2004
 * @see JtaTransactionManager#setUserTransaction
 * @see JtaTransactionManager#setTransactionManager
 * @see org.objectweb.jotm.Current
 */
public class JotmFactoryBean implements FactoryBean, DisposableBean {

    private Current jotmCurrent;

    private Jotm jotm;

    public JotmFactoryBean() throws NamingException {
        // Check for already active JOTM instance.
        this.jotmCurrent = Current.getCurrent();
        // If none found, create new local JOTM instance.
        if (this.jotmCurrent == null) {
            // Only for use within the current Spring context: local, not bound to registry.
            this.jotm = new Jotm(true, false);
            this.jotmCurrent = Current.getCurrent();
        }
    }

    /**
     * Set the default transaction timeout for the JOTM instance.
     * Should only be called for a local JOTM instance,
     * not when accessing an existing (shared) JOTM instance.
     */
    public void setDefaultTimeout(int defaultTimeout) {
        this.jotmCurrent.setDefaultTimeout(defaultTimeout);
        // The following is a JOTM oddity: should be used for demarcation transaction only,
        // but is required here in order to actually get rid of JOTM's default (60 seconds).
        try {
            this.jotmCurrent.setTransactionTimeout(defaultTimeout);
        } catch (SystemException ex) {
            // should never happen
        }
    }

    /**
     * Return the JOTM instance created by this factory bean, if any.
     * Will be null if an already active JOTM instance is used.
     * Application code should never need to access this.
     */
    public Jotm getJotm() { return this.jotm; }

    public Object getObject() { return this.jotmCurrent; }

    public Class getObjectType() { return this.jotmCurrent.getClass(); }

    public boolean isSingleton() { return true; }

    /** Stop the local JOTM instance, if created by this FactoryBean. */
    public void destroy() {
        if (this.jotm != null) {
            this.jotm.stop();
        }
    }
}

Include in your project the code for the IsolationSupportSessionTransactionData custom class (just Google it), which takes care of custom transaction isolation levels since they are not supported by the JPA specification. Include the JTA dependency in the project:
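A sketch of that dependency (the version is an assumption; even with RESOURCE_LOCAL transactions, Hibernate still needs the JTA API on the classpath):

```xml
<dependency>
  <groupId>javax.transaction</groupId>
  <artifactId>jta</artifactId>
  <version>1.1</version>
</dependency>
```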
File persistence.xml changes:
    <!-- Use RESOURCE_LOCAL instead of JTA -->
    <persistence-unit name="persistenceUnit" transaction-type="RESOURCE_LOCAL">
            <!-- Remove completely the below once JOTM is removed and project is working
            <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JOTMTransactionManagerLookup" />
            <property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.JTATransactionFactory"/> 
            <property name="hibernate.transaction.flush_before_completion" value="false" />
            <property name="hibernate.transaction.auto_close_session" value="false" />
            <property name="hibernate.current_session_context_class" value="jta" />
            <property name="hibernate.connection.release_mode" value="auto" /> -->
File applicationContext.xml changes (Spring Application Context file):
    <!-- Remove all XA related configuration as soon as we confirm there are no issues with the new pool -->
    <!-- first XA data source -->
    <bean id="innerDataSource" class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
        <property name="transactionManager" ref="jotm" />
        <property name="driverName" value="${jdbc.driverClassName}" />
        <property name="url" value="${jdbc.url}" />
        <property name="user" value="${jdbc.username}" />
        <property name="password" value="${jdbc.password}" />
    <!-- first XA data source pool -->
    <bean id="dataSource" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
        <property name="transactionManager" ref="jotm" />
        <property name="dataSource" ref="innerDataSource" />
        <property name="user" value="${jdbc.username}" />
        <property name="password" value="${jdbc.password}" />
        <property name="minSize" value="${jdbc.minSize}" />
        <property name="maxSize" value="${jdbc.maxSize}" />
        <property name="checkLevelObject" value="2"/>
        <property name="sleepTime" value="${jdbc.sleepTime}" />
        <property name="lifeTime" value="${jdbc.lifeTime}" />
        <property name="jdbcTestStmt" value="SELECT NOW()"/>
        <!--  Data source not pooled -->
 <bean id="dataSourceTemplate" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
     <property name="driverClassName" value="${jdbc.driverClassName}" />
     <property name="url" value="${jdbc.url}" />
     <property name="username" value="${jdbc.username}" />
     <property name="password" value="${jdbc.password}" />

 <!--  Connection pool data source. -->
     <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
      <!-- Refer to a separately created bean as a data source template to work around a quirk of Tomcat's class loader. -->
     <property name="dataSource" ref="dataSourceTemplate" />
     <property name="initialSize" value="${jdbc.minSize}" />
            <property name="maxActive" value="${jdbc.maxSize}" />
            <property name="timeBetweenEvictionRunsMillis" value="${jdbc.sleepTime}" />
            <property name="maxAge" value="${jdbc.lifeTime}" />
            <property name="validationQuery" value="SELECT 1"/>
    <!-- Use a JPA Dialect able to handle different transaction isolation levels -->
    <bean id="Emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="jpaDialect">
            <!-- <bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" /> -->
            <bean class="com.sample.jpa.HibernateExtendedJpaDialect" />
        <!--  Stop using the JOTM transaction manager for @Transactional -->
    <!-- <tx:annotation-driven transaction-manager="jtaTransactionManager" proxy-target-class="false" /> -->
    <tx:annotation-driven transaction-manager="transactionManager" />

Friday, October 18, 2013

Linux or Solaris bash: rm: Arg list too long

Let us remove all files following a wildcard:
$ rm /tmp/log*
bash: /usr/bin/rm: Arg list too long
Commands find and xargs to the rescue:
$ find /tmp -name "logMonitor*" | xargs rm
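Note that plain `xargs rm` breaks on file names containing spaces or quotes; with GNU find you can pass NUL-delimited names instead (a sketch):

```shell
# NUL-delimited variant: safe for names containing spaces, quotes or newlines
find /tmp -name "logMonitor*" -type f -print0 | xargs -0 rm -f
```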

Wednesday, October 09, 2013

Ubuntu Apache security patches

The most important command to run when checking the current version of Apache is actually not the one below:
$ apache2 -v
Server version: Apache/2.2.22 (Ubuntu)
Server built:   Jul 12 2013 13:37:15
The above does not tell you much. You need to inspect further:
$ sudo apt-cache policy apache2
  Installed: 2.2.22-1ubuntu1
  Candidate: 2.2.22-1ubuntu1
  Version table:
 *** 2.2.22-1ubuntu1 0
        500 precise/main amd64 Packages
        100 /var/lib/dpkg/status
This is actually telling us that the server is vulnerable. As a minimum we need 2.2.22-1ubuntu2. What should we do? Simple:
$ sudo apt-get update
$ sudo apt-get upgrade
Which, BTW, will most likely address other security issues as well, because we will be moving from an old Ubuntu release, for example:
$ cat /etc/lsb-release 
To a patched version of it:
$ cat /etc/lsb-release 

Monday, October 07, 2013

Client Java applications performance

When it comes to a Java client application, you had better use the "-client" flag and the correct garbage collector. Failure to do so will result in slower responses and more memory consumption:
-client -XX:+UseSerialGC
Those running processes like Talend Java-based ETLs wrapped in a shell script should use the above flags, for example.

Wednesday, October 02, 2013

lastcomm and turning accounting on in Solaris

The lastcomm command will not work in Solaris:
$ lastcomm
/var/adm/pacct: No such file or directory
Unless you turn accounting on:
$ /usr/lib/acct/turnacct on

Solaris find newest file

Not as simple as it would be in Linux ;-)
$ find /my/directory/path/ -name "someOptionalPatternToFilter" -type f | sed 's/.*/"&"/' | xargs ls -E | awk '{ print $6," ",$7," ",$9 }' | sort | tail -1 | awk '{print $3}'
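For comparison, the GNU find shipped with Linux can do this in a single pass (a sketch; %T@ prints the file's modification time as an epoch timestamp, so no fragile ls parsing is needed):

```shell
# Print the most recently modified file under a directory (GNU find)
find /my/directory/path/ -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-
```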

Saturday, September 28, 2013

Add the session cookie to apache logs

Business intelligence starts with your application logs. How your users use the system is at the core of the questions you need to answer to make sure your product correctly addresses productivity.

Apache can log specific cookies like the session id. Below is an example of such configuration for the typical JEE application:
 LogFormat "%h %l %u %t \"%r\" %>s %b %{JSESSIONID}C" custom
 CustomLog /var/log/apache2/ custom

My Twitter Account was hacked - Solution is double factor authentication

Double Factor Authentication is an absolute must-have. It is among the 10 most important security measures for any application that can be reached one way or the other by a public audience.

My twitter account was recently hacked and as a result my account spammed around 10 followers with around three spams each. My apologies for this incident.

I had a strong password and I usually change my passwords every three months (a pain, I know). I even have different passwords for my different online accounts (another necessary pain). I would have saved time, and more importantly some reputation, had I looked into the Twitter privacy section, because the service offers double factor authentication.

In twitter case they support the double factor authentication with SMS or the twitter app.

Double factor authentication is an inconvenient but it is better to go through that pain than getting hacked.

Friday, September 27, 2013

Apache log files statistics - Hits per resource - Finding most consumed resources

In a typical modern web application you have users hitting resources directly and indirectly. Many times, especially with REST approaches, there is a number in the path representing the specific collection member being accessed. Unix power tools can quickly answer questions like "list hits by page", or better said in Web 2.0 terms, "list hits per resource".

Given Apache logs look like: - - [22/Sep/2013:06:25:09 -0400] "POST /my/resource HTTP/1.1" 200 3664
When the below command is run:
grep -o "[^\?]*" access.log | sed 's/[0-9]*//g' | awk '{url[$7]++} END{for (i in url) {print url[i], i}}' | sort -nr
Then an output like the below will be returned:
10000 /my/top/hit/resource
50 /my/number//including/hit/resource
1 /my/bottom/hit/resource
The command first gets rid of the query string, then removes all digits (so that resources differing only by ids are not counted as different), builds an associative array (a map) with the resource as key and the number of hits as value, prints it as "counter resource", and finally sorts it in descending order (the -n switch makes sort treat the leading counters as numbers rather than text, so 10 sorts above 9).

Force ssh password instead of public key authentication

ssh -o PubkeyAuthentication=no user@host

Thursday, September 19, 2013

Limit CPU consumption for processes

Compression algorithms eat CPU. While they are needed for backups, you do not want to bring your resources down just because of one process. Use cpulimit for it then:
$ gzip "$bigfile" & sleep $delay && cpulimit -e gzip -l 30 -z
Looking at the man pages you will realize we are limiting overall usage to 30% of the whole available CPU (if 3 processors, then around 10% each), and the -z option makes cpulimit quit once no gzip process is found; hence how long cpulimit runs depends on what the gzip command actually does.

Friday, September 13, 2013

Stress to test - Simulate cpu, memory, io load

How do you stress test Linux? Just use the stress command, which is available from apt-get and probably other package managers as well. Here is how you can consume 500MB of RAM for a period of 30 seconds.
$ stress --vm 1 --vm-bytes 500M -t 30s --vm-hang 30
stress: info: [7651] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
Test up front: quality assurance is the very first step toward guaranteeing constant process improvement.

Sunday, September 08, 2013

Reconsider those NewLine in FileName

This is a huge issue. IMO file names with newlines should be considered invalid. I would do the same with file names containing spaces but, well, that is mission impossible. Ask any regular user; simply put, we write using spaces to separate words, so why would you be forced to use underscores, dashes or CamelCase?

So any code generating file names containing newline characters (or anything other than alphanumerics and spaces) should be fixed. If that is not possible, as when you do not own the code generating them, and we still need to process such files with our own tools, then it is better if we just rename them.

I have scripted this post as:
#!/bin/bash -e

USAGE="Usage: `basename $0` <filePath>"

if [ $# -ne 1 ]; then
  echo "$USAGE"
  exit 1
fi

# Strip newline characters from the given file name
file="$1"
pattern=$'\n'

if [[ "$file" == *$pattern* ]]; then
  mv "$file" "${file//$pattern/}"
fi
Now you can use it like:
find . -name "*" -exec /usr/sbin/ {} \;
And the result is the file name with the newlines stripped out.

Friday, August 02, 2013

iCalendar for Sysadmin Day - Design is how it works

Sysadmin Day is the last Friday of July, so this year (2013) it was last week (today is August 1st). I thought I had set it right in my Google Calendar last year but I was wrong. Actually, neither Google Calendar nor Yahoo Calendar supports a rule like "every year on the last Friday of July". In iCalendar it is an easy thing:
BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
DTSTART;VALUE=DATE:20130726
RRULE:FREQ=YEARLY;BYMONTH=7;BYDAY=-1FR
LOCATION:Company wide
SUMMARY:Sysadmin Day!
END:VEVENT
END:VCALENDAR (the MSN, then Hotmail and now Live service; will Microsoft stop changing names at some point?) does support importing the above and it correctly allocates the slots; however, do not dare to try editing it, because the editing interface does not really support such a rule. Exchange OWA and Outlook will recognize the iCalendar and correctly allocate the slots but you won't be able to edit the occurrence after that.

Guess who got it right? Yup, Mac OS X did ;-) and, as usual, with an impressive can't-get-any-simpler user interface. User experience is at the top of the list for Apple and that is why they can sell the most expensive equipment: they simply get it right. Steve Jobs was so right when he stated "Design is not just what it looks like and feels like. Design is how it works."

And not only does it work from the GUI; you can import the iCalendar as well.

Wednesday, July 31, 2013

Maven unable to find resource in repository

Lots of similar errors for packages I know we have in our internal repository:
[INFO] Unable to find resource 'com.octo.captcha:jcaptcha-api:jar:2.0-alpha-1' in repository central (
However, in ~/.m2/settings.xml I have specified the internal repo as central, mirroring absolutely everything. The "mvn -X" command wouldn't say which settings.xml it was parsing. What to do?

The solution was to specify where the settings were (a one time shot):
$ mvn install --settings ~/.m2/settings.xml

Tuesday, July 23, 2013

Agile team? Did you already script your infrastructure?

It's been two days since the Ubuntu Forums and Apple Developer Resources websites have been down. I believe that such long downtime can only be explained by infrastructure that is not scripted. Am I wrong?

Recipes are the way to go not only for DR situations but for security reasons, as you can see.

Furthermore, it is thanks to recipes that we can migrate to new packages or whole OS versions without fear.

Finally, it is thanks to recipes that documentation and implementation meet, saving not only a lot of time but a lot of human error as well.

Any change affecting OS or services on top of it should be:
  1. scripted
  2. versioned
  3. applied to servers remotely
That is a culture that should exist in any agile team, not only for Linux and Unix but for Windows as well. The times when you could rely on documented steps and a sysadmin going through them have passed. It is time to script your infrastructure.
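
The three steps above can be sketched with a tiny, re-runnable recipe. This is a hypothetical example (the function name, the config line and the sshd_config stand-in are all assumptions): the key property is idempotency, so applying the same scripted change twice leaves the server in the same state.

```shell
# Idempotent change sketch: ensure a config line exists exactly once,
# so the recipe can be versioned and re-applied safely to any server.
conf=$(mktemp)   # stand-in for a real config file such as sshd_config

ensure_line() {
  local line="$1" file="$2"
  # grep -qxF: quiet, whole-line, fixed-string match; append only when absent
  grep -qxF "$line" "$file" || echo "$line" >> "$file"
}

ensure_line "PermitRootLogin no" "$conf"
ensure_line "PermitRootLogin no" "$conf"   # second run adds nothing
```

Such a script can then be kept in version control and pushed to servers over ssh, covering all three points.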

For the record, from
We’ll be back soon. Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then. In order to prevent a security threat like this from happening again, we’re completely overhauling our developer systems, updating our server software, and rebuilding our entire database. We apologize for the significant inconvenience that our downtime has caused you and we expect to have the developer website up again soon. If your program membership was set to expire during this period, it has been extended and your app will remain on the App Store. If you have any other concerns about your account, please contact us. Thank you for your patience.
Ubuntu Forums is down for maintenance

There has been a security breach on the Ubuntu Forums. The Canonical IS team is working hard as we speak to restore normal operations. This page will be updated with progress reports.

What we know

Unfortunately the attackers have gotten every user's local username, password, and email address from the Ubuntu Forums database. The passwords are not stored in plain text, they are stored as salted hashes. However, if you were using the same password as your Ubuntu Forums one on another service (such as email), you are strongly encouraged to change the password on the other service ASAP. Ubuntu One, Launchpad and other Ubuntu/Canonical services are NOT affected by the breach.

Progress report

2013-07-20 2011UTC: Reports of defacement
2013-07-20 2015UTC: Site taken down, this splash page put in place while investigation continues.
2013-07-21: we believe the root cause of the breach has been identified. We are currently reinstalling the forums software from scratch. No data (posts, private messages etc.) will be lost as part of this process.
2013-07-22: work on reinstalling the forums continues.

If you're using Ubuntu and need technical support please see the following page for support: Finding Help. If you're looking for a place to discuss Ubuntu, in the meantime we encourage you to check out these sites: The Ubuntu subreddit, The Ubuntu Community on Google+, Ubuntu Discourse.

Monday, July 22, 2013

Mapping the value stream in Bugzilla - column width in listing pages

We found out that our Bugzilla status column width was too small (4 characters) for our mapped value stream, which is composed of over a dozen stages. From the documentation this was an easy fix:
#edit values
$ vi /var/www/bugzilla/template/en/default/list/table.html.tmpl
$ cd /var/www/bugzilla
$ ./ 
The question still remains though: when will Bugzilla provide a Kanban board implementation?

Friday, July 19, 2013

UX: Multiple select versus scrollable checklist

I have always seen implementations that try to work around the screen real estate an HTML multiple select input occupies. As usual, simpler is better: just "check it, don't select it".

Tuesday, July 16, 2013

Do not cache dynamic resources if you deal with sensitive information

Login in your website using chrome. Right click on the page body and select "inspect element", click on network tab and navigate to a dynamic page showing important/sensitive information. Now click on any other link in the website. Click on the "Clear" button in the bottom of chrome inspector.

Finally hit the back button. At the top of the list, do you see that your page was pulled from a cache? If the page does not state how long it took to render (time latency = 0) and/or you see "from cache" for "size content", most likely your server is failing to send some important information in an HTTP header.

Click on the top resource, which should be the main page pulled as a result of the back button click. On the right pane you should be able to see the server response headers. Most likely one or more of the important Cache-Control directives below is missing, resulting in a vulnerable application: some forensic work on any computer that accessed such a website could reveal sensitive information that could be used directly or indirectly in other exploits, and the data from the website might be accessible to a future intruder.
Cache-Control: no-cache, no-store, private, max-age=0, must-revalidate
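
If Apache fronts the application, the header can be forced from the web server itself. A hypothetical mod_headers snippet (the /app/ path is an assumption; adjust it, and make sure mod_headers is enabled):

```
<LocationMatch "^/app/">
  # Forbid browser and proxy caching of dynamic, sensitive pages
  Header set Cache-Control "no-cache, no-store, private, max-age=0, must-revalidate"
  Header set Pragma "no-cache"
</LocationMatch>
```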

Monday, July 15, 2013

Sniffing mysql queries

There are times when sniffing the queries MySQL is running is the fastest way to troubleshoot a potential bug. So, *temporarily*, you can look into what is going on with:
mysql> SET GLOBAL general_log = 'ON';
mysql> SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
$ tail -f   /var/log/mysql/mysql.log
Of course do not forget to put it back to OFF after you get enough log to troubleshoot:
mysql> SET GLOBAL general_log = 'OFF';
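
Once you have a chunk of general log, a quick summary by statement type helps spot what the application is hammering. A hypothetical sketch (the log lines below are made-up sample data in the classic general_log layout: date, time, connection id, "Query", statement):

```shell
# Summarize a MySQL general query log by statement verb (SELECT/UPDATE/...)
log=$(mktemp)
cat > "$log" <<'EOF'
131020 10:00:01    42 Query  SELECT * FROM users
131020 10:00:02    42 Query  UPDATE users SET name='x'
131020 10:00:03    43 Query  SELECT 1
EOF
# $4 is the event type, $5 the first word of the statement
summary=$(awk '$4 == "Query" {print toupper($5)}' "$log" | sort | uniq -c | sort -rn)
echo "$summary"
```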

Friday, July 12, 2013

Asynchronous bash to run command in multiple remote hosts

I wanted to inspect the date on multiple servers to make sure ntpdate was running correctly. Here is how, with a simple Plain Old Bash (POB) script whose out-of-order responses show the asynchronous nature of the approach.
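
The original embedded script is no longer available, so here is a minimal sketch of the idea (the function name and the host names in the usage comment are placeholders): background each ssh so all hosts are queried in parallel, then wait for every one to return.

```shell
#!/bin/bash
# Asynchronous POB sketch: run `date` on every host given as an argument.
check_dates() {
  for host in "$@"; do
    # & backgrounds each ssh, so the queries run concurrently
    ssh -o BatchMode=yes "$host" 'echo "$(hostname): $(date)"' &
  done
  wait   # returns once every background ssh has finished
}
# Usage (placeholder hosts): check_dates web1 web2 db1
```

Because the echoes arrive in whatever order the hosts respond, the output itself demonstrates the asynchrony.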

Thursday, July 11, 2013

Fastest way to open and close a socket

I had to replicate an issue in a proprietary application server which was reporting socket failure errors. Basically, any connection to that specific port, opened from monit for example, would cause the issue. The command below opens the socket, writes something and closes it. It helped me recreate the issue:
exec 3<>/dev/tcp/${HOST}/${PORT}; echo -e "Will you crash?\n" >&3; exec 3>&-

Saturday, July 06, 2013

bash stdout and stderr to file and console

Given a typical bash script which would print to stderr and stdout like:
$ cat 
#!/bin/bash -e
# writes to stderr and stdout then dies with exit 1 
echo "out"
echo "err" >&2
exit 1
Suppose you want to send "out" and "err" to a file and also to the console. You might think of doing something like:
#Not good as stderr and stdout are completely redirected (you will see no output on screen)
$ ./ &> results.log
But actually the below will be your only option as far as I can tell:
$ ./ > >(tee -a results.log) 2> >(tee -a results.log >&2)
Basically you send stdout and stderr to a couple of streams that use the 'tee' command to guarantee the content is sent to the console still. Note the last ">&2" which is necessary to avoid tee printing the stderr message to stdout.

Friday, July 05, 2013

Send mail through smarthost in Solaris

$ /opt/csw/bin/gsed -i 's/^DS.*/DS<your.smarthost>/' /etc/mail/
$ svcadm restart sendmail #/etc/init.d/sendmail restart is deprecated (Thanks Douglas Perry!)
$ echo "My Body" | mailx -s "`hostname` My subject" <recipient>

Tuesday, June 25, 2013

Cleaning up artifactory OSS

Artifactory OSS lacks, IMO, a clean way to help with cleaning up unneeded artifacts. There is also an issue with the algorithm for picking which artifacts to clean: you cannot go just by which artifacts are old, as some old artifacts are still needed and in use.

I have concluded that it is better to share with the team a wiki page (it can also be shared through any SCM like SVN; the important thing is that multiple people can edit the file collaboratively) listing the current artifacts, and let the team decide which ones should stay. They just need to remove from the list those they want to keep. After they confirm the job is done you can go ahead and delete the rest. Below is a step-by-step procedure to get this done:
  1. Running the below command will list all files from all repos in a file called artifactsToDelete. It uses the current date as "to" to make sure all artifacts are included no matter how old they are, then it extracts from the resulting json string (which contains no new lines === unformatted) the artifact urls. It then removes /api/storage from the resulting URL artifact so we can have the real URL that can be used for deletion purposes.
     curl -XGET -u user:password "http://<artifactory-host>/artifactory/api/search/creation?to=$[$(date +%s)*1000]" | sed 's/uri":"\([^"]*\)"/\n\1\n/g' | grep -o '^http.*/' | sed 's|/api/storage||' | sort | uniq > artifactsToDelete
    You could add something like the below to remove those artifacts corresponding to a given package/path:
    grep -v "com/sample"
    Or make sure the list includes only those from a certain package/path:
    grep "com/sample"
  2. Offer the list to the team on a wiki page or as an SCM resource. The team should delete the entries they are interested in keeping.
  3. Update artifactsToDelete with the filtered content, create a script to delete them and run it:
    $ vi ./ 
    #!/bin/bash -ex
    for URL in `cat artifactsToDelete`; do
      curl -XDELETE -u user:password "$URL"
    done
    $ chmod +x ./ 
    $ ./ 
  4. After it runs go to the Maintenance admin page and run the storage garbage collection.