Saturday, August 27, 2016

Stop Apache from rendering icons README file

If you try to access /icons/README or /icons/README.html on your domain and you get a page with content, then you know Apache is running on the server side and exposing some default content. How bad is that? Probably not a big deal, but removing it is mandatory when hardening Apache for most situations. Better not to give the public any chance to access a resource that should not be exposed. You can correct this with a simple POB recipe:
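A minimal sketch of such a recipe, assuming a Debian/Ubuntu Apache layout (the alias.conf path and the function name are assumptions, not part of the original recipe):

```shell
#!/bin/bash
# Sketch of a hardening step: comment out the "Alias /icons/" directive
# so Apache stops serving the default /icons/ content.
# The Debian/Ubuntu config path below is an assumption.
disable_icons_alias() {
  local conf="$1"
  # Prefix the Alias line with '#', keeping a .bak backup of the file
  sed -i.bak 's|^[[:space:]]*Alias /icons/|#&|' "$conf"
}

# On a real box you would then validate and reload:
#   disable_icons_alias /etc/apache2/mods-available/alias.conf
#   apache2ctl configtest && service apache2 reload
```

After reloading, /icons/README should no longer be served.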

Thursday, August 25, 2016

No Physical Windows Login Screen after accessing it via RDP?

Use the Windows logo key + P keyboard shortcut, which is intended to extend the desktop to multiple monitors. Why you need to do that, instead of just hitting a key or moving your mouse, is an excellent question. To me, it is a bug.

Monday, August 22, 2016

Show version number in jenkins job history

Everybody handles versioning in a different way, but if your Jenkins job sends the version to the output stream you can use the steps below to show that version in the build history. This is handy especially for test jobs, since you want to make sure that the tests passed for that specific version.

Let us assume that your job prints something like "Front-End Version: 1.2509". Here is how you show the version number in the Jenkins build history:
  1. Install the "Groovy Postbuild" plugin.
  2. Add a post-build action of type "Groovy Postbuild".
  3. Insert the script below under "Groovy Script" and save:
    def m = manager.getLogMatcher("^.*Front-End Version: (.*)\$")
    if (m != null && m.matches()) {
      def version = m.group(1)
      // to debug using the job logs:
      // manager.listener.logger.println("found version: " + version)
      manager.addShortText(version)
    }

  4. Run the job and you will get the version number next to the build.

Wednesday, August 03, 2016

Speeding up end to end (e2e) tests

How fast can you end-to-end test your application? It depends on how fast your application is.

You should parallelize your tests, which of course must be idempotent. This means your tests should not step on each other's toes. Ideally, running all your tests should take only as long as the longest scenario you are trying to assert.

There are several ways to parallelize the tests. For example, in web development with Protractor you can use {capabilities: {shardTestFiles: true, maxInstances: n}}. You should not use for n more than the number of processors minus 1. You can also use tools like Jenkins or custom scripts that spawn several test runs at the same time, but you will always face the limitations of the hardware in each node. However, you are clustering, right? So why bother about the hardware of each node? Here are some numbers from a test run in a 4-processor VM using Protractor and the direct web driver, configured to use 1, 2 and 4 maxInstances:
1 instance: 87 sec
Wed Aug  3 09:53:25 EDT 2016
Wed Aug  3 09:54:52 EDT 2016

2 instances: 73 sec
Wed Aug  3 09:59:41 EDT 2016
Wed Aug  3 10:01:54 EDT 2016

4 instances: 117 sec
Wed Aug  3 10:10:06 EDT 2016
Wed Aug  3 10:12:03 EDT 2016
Not big savings really. We know better. Imposing a limitation on how long each test should take should be your premise. From there you act: build tests that include just enough scenarios so they do not go beyond your acceptable time to run all e2e tests for your application, isolate them from each other, and distribute the load. Divide and conquer.
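The "custom scripts" option mentioned above can be sketched with plain shell job control; run_shard here is a placeholder for your real test runner (for example a protractor invocation pointing at a subset of spec files):

```shell
#!/bin/bash
# Spawn test shards in parallel and wait for all of them to finish.
run_shard() {
  echo "shard $1: running"
  sleep 1   # placeholder for the actual test run
  echo "shard $1: done"
}

start=$(date +%s)
for i in 1 2 3; do
  run_shard "$i" &   # background each shard
done
wait                 # block until every shard finishes
end=$(date +%s)
echo "elapsed: $((end - start))s"
```

Because the shards overlap, the elapsed time is roughly that of the slowest shard, not the sum; and just as in the numbers above, adding shards beyond the available processors stops paying off.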

I had an argument with somebody (I believe on Stack Overflow) over the efficiency of running browsers in the background (so-called headless mode) versus the foreground. When it comes to development, you should run them in the background most of the time so that your screen does not distract you from other tasks you are currently performing. When it comes to live testing it does not matter: there are no performance gains whatsoever in running your tests in the foreground or background. Of course, if you run in headless mode you can leverage servers instead of desktops, but it turns out that desktops allow you to easily debug what is going on in case you really need it: just RDP into one and you see what is happening there. We must remember that we are testing the user interface/experience because that is how any program in the world ultimately functions: top-down.

Trying headless mode is straightforward. Here is how to do it in Debian/Ubuntu, which I shared on Stack Overflow:
curl -k https://gist.githubusercontent.com/terrancesnyder/995250/raw/cdd1f52353bb614a5a016c2e8e77a2afb718f3c3/ephemeral-x.sh -o ~/ephemeral-x.sh
chmod +x ~/ephemeral-x.sh
date; ~/ephemeral-x.sh protractor … ; date
Here are my numbers for one instance running in headless mode. Compare these results with the previous GUI-based run: having the real browser up and running ended up with slightly better results. But I would not conclude from this that it is better. I would need to repeat the experiment several times before making such a statement, and I really do not have the time for it, but perhaps you do and want to report back your findings:
1 instance headless mode: 93 sec

Wed Aug  3 10:35:41 EDT 2016
Wed Aug  3 10:37:14 EDT 2016
Bottom line: stop trying to resolve "test inefficiencies" or departing from UAT because "e2e tests are slow". Face the fact that perception is reality and take care of the performance of your application: make it fast. The application code is not just the runtime code; it is also the test code. All code should be as efficient as it can possibly be. All user interactions included in a particular scenario or group of scenarios should be wrapped in a unique test, but only if they meet your test time objective (TTO). If the TTO = 1 minute then no test should take more than 1 minute. Then spend money on a cluster of machines to perform the tests in parallel and get everything tested in less than a minute. These machines can be spawned on demand of course, but then account for the startup time as part of your TTO. Humans should be driven by goals and not by tools' deficiencies.