Wednesday, April 30, 2014

Small and Medium tests should never fail in Continuous Integration (CI)

Small and Medium tests should never fail in Continuous Integration (CI). If they fail, then the developer is not testing locally before committing to the shared version control system.

Large tests are too expensive, and most likely you will need a cluster for them to run in a reasonable amount of time. They are better addressed in a shared environment.

If small and medium tests are run locally, why do we need to run them again in CI? That is a very good question. Are we really applying DRY?

Continuous integration (CI) makes technology debt irrelevant

Continuous integration (CI) makes technology debt irrelevant. Technology bugs are addressed right away to keep the CI pipeline happy. There is no need for business consent to open an application bug, no prioritization-related cost, and no user base penalties caused by instability.

I would go further and ask: why does refactoring need a special tech debt issue tracking number? The team has a velocity/throughput and just needs to strive to improve it. The team knows best which technology debt needs to be addressed urgently when a ticket related to the code in question pops up. There is no better way to make team members aware than a marker in the code (a TODO, perhaps?).

This shift from a ticketing system back to code annotations will help the team understand that "nice to have" items are equivalent to YAGNI and should be discarded. It will eliminate the operational cost of organizing issues that only the team really understands and about which business operations and analysts have nothing to say. Ultimately this will allow the team to deliver the Minimum Marketable Feature (MMF) with the best possible quality at the fastest possible rate.

Monday, April 28, 2014

List the canonical hexadecimal + ASCII contents of a file

To list the canonical hexadecimal + ASCII contents of a file, use the hexdump command:
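For example (a minimal sketch; the sample file and its contents are just for illustration):

```shell
# create a small sample file and dump it in canonical hex + ASCII form
printf 'hello\n' > /tmp/sample.txt
hexdump -C /tmp/sample.txt
# each output line shows the offset, up to 16 hex bytes, and an ASCII column
```

The `-C` flag is what selects the canonical hex + ASCII display.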

Sunday, April 27, 2014

API usability is no different from application usability

API usability is no different from application usability. A good application must be designed for the best possible user experience. An Application Programming Interface must be as well.

So next time you are creating an interface to expose to a consumer, regardless of whether that consumer is the leader of your country or a software developer, work *with* your consumer(s) to make sure you get it right.

Separation of Concerns (SoC) helps with that. Even if you have an all-star team where everybody works from A to Z in the SDLC, you might get the greatest results if you divide the work and rotate the team through the different concerns. You will naturally avoid violating SoC.

Saturday, April 26, 2014

Diversity in the team? Psychology is important

Diversity in the team? Psychology is important for everyone.

Certainly, as the saying goes, "when in Rome, do as the Romans do" is an important aspect for individuals. But how about a group of diverse individuals working as part of a team?

While the saying still applies, you cannot ignore the bit of tolerance you will need to cope with differences. Going more diverse brings strengths to the company, but without the right team psychology diversity can become a double-edged sword. Start by making sure the mission, vision and strategy are well understood. Accept the fact that everybody is different, know what to expect from each member, and encourage everybody to care about the common goal and put their differences aside.

Agile Interdependence: As a software engineer I want to read the founding fathers so that I know my rights and duties

What is agile interdependence? As a software engineer I want to read the founding fathers so that I know my rights and duties. You may be excited about a lot of languages and technologies, but without this social guidance you will not easily fit in as part of a team.

My recommendation: read, and make both IT and non-IT departments read, the following important documents:
  1. The Agile Manifesto
  2. The Declaration of Interdependence

Have a product vision before blaming the IT mission statement

I think a perfect mission statement for a software development team is "High Quality for predictable delivery".

And I believe, without a doubt, that if a team with that mindset in place is still not bringing value to the company, then the product vision is wrong.

Non Functional Requirements should be sticky: Quality, Usability, Performance and Capacity

Non Functional Requirements should be sticky. I argue that Quality, Usability, Performance and Capacity are the four you must keep an eye on as a priority. They define the success of any product, including software applications.

The application must be tested, and the tests have to be automated; otherwise quality cannot be guaranteed over time as the number of features to be tested goes up. Dr. William Edwards Deming's philosophy of quality control can be summarized in the ratio below, which should be interpreted as: quality increases when the ratio as a whole goes up, not when the focus is merely on eliminating cost:

Quality = (results of work efforts) / (total costs)

If you focus just on cutting costs, most likely you are pushing problems into the near future, when rework will be needed to correctly fix your product. Quality matters.

The application must be user friendly; it must do what the user expects with the minimum user effort. Every extra mouse action, voice command or keystroke matters. Usability matters.

The application is supposed to wait for the user, not the other way around. Performance matters.

The application must handle the expected load. How many users are expected to hit the new feature? Will the traffic be spontaneous because of a mailing campaign? Do not assume your system will handle any load. Do your math and avoid surprises. Capacity matters.
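As a back-of-envelope sketch of "doing your math" (the campaign size and click-through rate here are made-up numbers for illustration):

```shell
# hypothetical mailing campaign: 200,000 recipients, 10% of them
# clicking through within the first hour
users=200000
ctr_pct=10
clicks=$(( users * ctr_pct / 100 ))       # 20000 hits in one hour
echo "sustained load: $(( clicks / 3600 )) req/s"
# plan extra headroom: real traffic arrives in bursts, not evenly
```

Even a rough number like this tells you whether the feature needs capacity work before launch.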

Wednesday, April 23, 2014

On Agile: Minimum Marketable Feature (MMF) is key for the team survival

Minimum Marketable Feature (MMF) is key for the team's survival. This is especially true for small software development teams in non-software-centric companies.

Ask yourself if the issue you are addressing will have a direct impact on the life of someone who is not a software developer. If Business ends up stating that the feature results in a high Return On Investment (ROI) in a very short period of time, then you have created or contributed to a Marketable Feature.

Then comes the MMF: the feature that takes you the minimum possible time to develop while still getting the above reaction from Business.

If the team is not producing enough MMFs, most likely Business is actively looking at alternatives.

This is not a Manager concern; this is your concern as a team member, no matter what your position is. I would rather read a resume that states "I delivered 12 MMFs in a year" than "I saved a million dollars in a one-year-long project". The first statement denotes clearer and longer-term strategic thinking.

This is a great question to ask in an interview, as I proposed on LinkedIn, especially for those who are closer to C-level executives, as is the case for Project Managers:

What are the top 12 minimum marketable features your team produced during your best year as a PM?

Correct answer: mentions at least 12 MMFs, explaining the real, tangible value of each.

Incorrect answer: unable to mention at least 12 (one per month), or unable to provide a clear explanation of the projected or realized ROI for each of them.

Monday, April 21, 2014

Personal WIP limit directly impacts Cycle Time

Personal WIP limit directly impacts Cycle Time. It is far better to tell your friend "I am completing 'this'; when done I will help you out, and before the end of the day we will resolve your problem" than "I am completing 'this', but I will IM you when I have some free time to start addressing your issue; after all, the sooner we start looking into it the sooner we will resolve it".

Try limiting the amount of work you do at the same time and experience the satisfaction of getting things done faster. Know your limitations and free yourself from stress.

Productivity should be measured. One of the metrics used in Lean thinking is called cycle time: the time a unit of work takes from the moment we start working on it until the moment we are done with it.

Say t1 is the time issue A takes to be resolved and t2 is the time for issue B. If the issues are queued and handled one at a time, the average cycle time for the resolver is (t1+t2)/2. If, on the contrary, you try to address them both at once, your average cycle time can be as bad as (2t1+2t2)/2 = t1+t2, literally doubling your cycle time. You start issue A, which takes t1, and issue B, which takes t2, at the same time, switching between tasks in small time slots. If t1=t2 the worst case scenario is met: by the time you finish them both (assuming a perfect distribution of time slots and switching frequency) you will have burned 2*t1 of elapsed time to complete A and also 2*t1 to complete B. This means any new similar item will take twice as long as it would if you addressed it alone. Why sacrifice the delivery time of a unit of work that is completely independent of another?
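The arithmetic above can be checked quickly (taking, as in the worst case described, t1 = t2 = 4 hours):

```shell
# sequential work: each item's cycle time is its own duration
t1=4; t2=4
seq_avg=$(( (t1 + t2) / 2 ))        # average cycle time when items are queued
# perfect interleaving: each item's elapsed time doubles
multi_avg=$(( (2*t1 + 2*t2) / 2 ))
echo "queued: ${seq_avg}h average, multitasking: ${multi_avg}h average"
```

With equal task sizes the multitasking average is exactly double the queued average.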

Needless to say, this gets worse with WIP limits over 2. While a WIP limit of 2 is OK, I would argue that the second item should only be pulled if we know the bottleneck impeding us from moving on the first is strictly temporary. If the impediment will take too long, we had better negotiate the scope of the delivery or get help from the rest of the involved team to remove the impediment. That is how much respect I believe we should pay to personal WIP limits.

Having an exploratory meeting? Ask how the feature will be tested

Having an exploratory meeting? Ask how the feature will be tested.

Exploratory meetings should include the question 'how will this functionality be tested?' up front. Without answering that fundamental question, an exploratory meeting is setting the stage for yet another piece of functionality that is never completely tested.

A manual exploratory test should be done only once, and it should be documented so it can be automated from the User Interface. The User Interface is not just the perception of how well the application works. The User Interface *is* how the application works.

Test Coverage is secondary to defect ratio reduction

Code test coverage is secondary to defect ratio reduction. I see so much effort put into tracking "test coverage", only to find out that more code test coverage does not necessarily have the impact one would expect on defect ratio reduction. So, while I do not oppose test coverage metrics, I strongly believe they are secondary to defect ratio metrics.

Track bugs separately from every other kind of issue reported by users and by your existing automated tests. The ratio must go down if you have the proper test coverage in place. Test coverage, in other words, is obtained as a result of measuring the defect ratio and adjusting the system to have the correct tests; it is not the other way around, because test coverage measurement cannot guarantee a defect ratio reduction. Test coverage must increase in order to drive the defect ratio down. It is far simpler and more natural to build automated tests out of manual exploratory tests and measure the defect ratio than to rely on complicated algorithms that must test all variants of the intrinsics of the software to come up with a coverage percentage. You save time by concentrating on the defect ratio, with the added value that your software works as expected, even when a tool would report 99% test coverage.

Often overlooked are Marketing and Sales criteria, log files and user feedback. Not all tests rank the same: there is always a crucial part of the system that cannot fail without generating tangible cost, and some other part that could fail without such cost. Prioritizing what to test first is as important as prioritizing what to build first.

If we accept that any piece of software is ultimately a "process integration and reporting unit" that takes input variables and returns output variables, then we should agree that in 100% of cases we should know what output to expect for any combination of inputs. Clearly, automating such tests and making sure they do not fail is the correct first measurement to apply to our software. The more bugs we get, the less quality the code has, and that can be independent of any code test coverage reported by any tool.

I believe the real issue we face nowadays is that test is still a second thought in each phase of the SDLC. That is something we need to change.

If you are walking the agile path, if you have jumped on the lean wagon, you know that limiting WIP and knowing your cycle time is just the beginning. To achieve continuous improvement you must also plot how your defect ratio shrinks over time. Without that, test coverage won't matter. Perception is reality.

No matter how much test coverage you claim to have, if the software fails the user, it is failing the Business.

Sunday, April 20, 2014

On concurrency and idempotence: tests, just like production code, should be thread safe

On idempotence: tests, just like production code, should be thread safe. If they are not, they cannot scale, and you end up with "my build takes too long", then "developers skip tests", then "let us leave it to the CI engine to run the tests after commit", and finally a lot of waste generation in your value stream.

Regardless of the scale of your test (did I mention before that I like Google's test size definitions?), your tests must be concurrently idempotent. At a minimum, your suite must be concurrently idempotent.

Once your small and medium tests always pass no matter how other tests behave, you can safely rely on your testing framework *and* your build tool to make sure a test does not wait for another in order to finish.

Of course, sooner or later one machine might not be enough to run absolutely all small and medium tests for a big project. But then again, if your project is such a big monster and you have the right layered architecture, why are you building it completely every time you make a change? If you have no option other than going monolithic, then your suites should be able to run on different machines. This should be easy: you can literally trigger the build and execution of certain suites on different machines, relying on plain old remote SSH for example. If you depend on external systems (large tests), though, you will need a clustered environment for testing.
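A minimal sketch of the idea (the suite names and the runner are placeholders; in practice `run_suite` could shell out over SSH to a dedicated build machine, e.g. `ssh build-small './run-tests.sh small'`):

```shell
# run independent, concurrently idempotent suites in parallel and
# fail the build if any of them fails
run_suite() {
  echo "running suite: $1"
  true  # placeholder for the real suite runner (local or via ssh)
}
run_suite small &  pid1=$!
run_suite medium & pid2=$!
status=0
wait "$pid1" || status=1
wait "$pid2" || status=1
[ "$status" -eq 0 ] && echo "all suites passed"
```

Because the suites are concurrently idempotent, neither one has to wait for the other, so wall-clock time is bounded by the slowest suite instead of the sum of all of them.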

I think the software community still sees testing as a second-class citizen. While nobody argues about clusters, parallelization, thread-safe processes and scalability in general when it comes to production code, we still hear people looking for workarounds when it comes to testing. Testing is a serious step that, when done correctly, is thought through *at least* before the production code is written. How can you possibly expect to have clusters of machines to host production code and not clusters of them to run large tests?

Friday, April 18, 2014

Continuous improvement for people

Improving processes is crucial but without improving people a team cannot go that far.

Often overlooked by managers is the inherent cost of not improving the workforce's skills. The correct culture of a software development team starts with the necessary mentoring from senior members, continues with constant reading, listening and viewing (in one word: learning), and ends only for those who retire. For the team itself there is no end when it comes to improving.

Wednesday, April 16, 2014

Will heartbleed security vulnerability affect my business?

Will the heartbleed security vulnerability affect my business? As usual, the answer is "it depends".

The heartbleed vulnerability, which results from an OpenSSL bug in the handling of TLS/DTLS heartbeat extension packets, is easy to correct so that, moving forward, you are not vulnerable to *new attempts* to exploit it. In fact, all major Linux distros support automated security updates, and most likely your boxes are already patched with the latest OpenSSL fixes for the issue. However, do make sure you restart services like apache after the fix is applied. To test, you can use tools like heartbleeder for local and external services; if all you want to test is your SSL website, you can just go through SSLLabs. By the way, SSLLabs should be run often: SSL vulnerabilities are nothing new, and these guys do an excellent job of helping everybody know which "secured" (https) sites out there strongly protect user privacy and which ones do a poor job of it.
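As a quick first check, you can confirm which OpenSSL build your box is actually running (heartbleed, CVE-2014-0160, affects OpenSSL 1.0.1 through 1.0.1f):

```shell
# print the installed OpenSSL version; 1.0.1g, or a distro build
# patched after April 7, 2014, carries the heartbleed fix
openssl version
# remember: services linked against the library must be restarted, e.g.
# sudo service apache2 restart
```

The version string alone is not proof on distros that backport fixes, which is why running a dedicated checker such as heartbleeder or SSLLabs against the live service is still worthwhile.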

So we patched the vulnerability and the tools say we are not vulnerable anymore. We are done, right? Actually, we are not. What happens if someone already got, let us say, your site's private key thanks to an attack made before the fix? Any captured traffic could be decrypted (apparently only if your server was not using Perfect Forward Secrecy), your site might be a target for communication eavesdropping (unless you also revoke the old SSL certificate), and your site might have leaked some credentials. So what is next after OpenSSL is patched and services are restarted?

Revoke existing SSL Certificates

A quick look at some major institutions' websites tells me at the moment that, while the fix has been deployed, some of the certificates they use are still old. I can only assume those not changing certificates are not really using OpenSSL. Let us hope so.

Force Password Change

Nobody complains about changing their LAN passwords every so often, yet you see people hesitant to change web credentials with the same frequency. Guess what? Your local network is actually at lower risk than your web tier; do the math. This is a good opportunity to build, if you do not have it already, a way to force password changes on the fly whenever circumstances demand it. The heartbleed bug is such a situation. After you revoke the old certificates and start using new ones, you must change all of your users' passwords. Do not rely on them updating their passwords themselves: your authentication service should offer a "Force Password Change" capability.

Use two-factor authentication

A username and password are not enough. In addition to what the user *knows* (username and password), it should already be standard to check for something the user *has* (a phone number, a token generator, or even the simplest form, often ignored: the email address). Nowadays we should even be moving towards using what the user *is* (iris, fingerprint and other biometrics such as face and even voice) as well. At a minimum, make sure your users have the option to use multi-factor authentication.

Honest disclosure

Depending on the nature of your business, regulations, privacy laws, board decisions and many other factors, you will sooner or later end up reaching out to your customers to explain that there is a possibility their data was compromised. You should assess exactly what kind of data could have been stolen.

How long could my data have been compromised?

If you use OpenSSL on your website and you have not revoked your certificates, then anybody who might have stolen your private key in the past can still extract even more information. While I cannot say how long this was known in small circles of the hacker community, looking at the current Google certificate (Google co-authored the discovery), it appears they renewed their certificate no earlier than March 12, 2014. So all I can say is that even those who discovered the vulnerability were vulnerable for almost the same period as the rest of us. The longer you wait to take all the necessary steps, the longer your services and clients remain vulnerable.

Things you should know as an Internet user

A vulnerability in OpenSSL, used by most web servers around the globe, was disclosed on April 7th. The vulnerability allows an unauthenticated, remote attacker to retrieve otherwise protected/private information held in small portions of the server's memory. The retrieved memory can contain sensitive information such as private keys (the security certificate's secret key) and user credentials. Exploits for this vulnerability are currently publicly available.

A compromised certificate secret key could allow the decryption of otherwise secure network traffic, either previously captured or in real time. However, attackers would require specific positions in the network, which makes this exploit difficult for them.

A compromised private key could also be used to impersonate our portal service, leading to Man In The Middle (MITM), phishing and spoofing attacks. This also requires a privileged network position, so while not impossible, it is definitely a difficult exploit as well.

A compromised user/password combination could escalate to compromise other accounts by relying on other potential bugs at the application layer. However, strong layered infrastructure and application security should mitigate the extent of any possible exploit.

Since portions of memory might become available, other more sophisticated attacks could happen if OpenSSL were left unpatched. However, most of the important systems relying on OpenSSL were patched during this last week, making such an attack possible only if traffic was captured while the vulnerability was present and later analyzed with compromised private keys. Again, the probability of such an attack is really minimal, although certainly not zero.

Things you should do as a service provider

On April 7th a security advisory (CVE-2014-0160) recognized the bug in the OpenSSL project's code, and by April 8th most Linux distributions (at least) had been patched. If you use automated patching you got the fix early, which is good. Security patches should be applied automatically across all systems; most vendors guarantee they are well tested before delivery, so in the worst case there will be a brief business disruption for a good cause. Since that day you should have been working on revoking existing website certificates and planning a mandatory password change for all of your users.

You should take security very seriously. Have a person or third-party company in charge of security (call that person the Security Officer), with a minimum of five years of security experience and knowledge spanning operating systems, protocols, application development, networking and beyond. Adopt already standardized security and privacy management systems (like ISO 27001 and SSAE 16) to follow best practices. Best practices include (but are not limited to) keeping track of security changes, from firewall rule changes all the way up to fixing browser-related vulnerabilities. Only restricted personnel should have access to production systems. Use a multi-tiered security architecture that includes network intrusion detection, passwords stored encrypted and salted, sensitive information stored encrypted in databases, and encrypted communication channels between all external and internal servers, including email over TLS, SFTP, HTTPS and SSH. Use at least three layers of application security (Controllers, Services and the Data Access layer), train developers on best practices from OWASP, perform penetration tests on a test box regularly, and establish browser support policies prohibiting the use of old and probably unpatched client software. There is more to the list. You can find real-life examples and guidelines in this blog for a start, but do subscribe to security lists and follow closely the vulnerabilities related to the software you use. Have a roadmap with planned enhancements to increase the protection of private information.

Browse safely.

Tuesday, April 15, 2014

Are you developing a Product or participating in a Project? Projects are the tactic for product strategy

Are you developing a Product or participating in a Project? Projects are the tactic for product strategy, so you had better learn how each project affects the product(s) it relates to.

Just like in real life, a Product is the always-evolving result of some organized work.

A product has a lifecycle: it begins, and someday it will disappear to give way to something better. Nothing lasts forever.

The Product gets out into the market, is then maintained and enhanced, until its end of life (EOL) comes.

In all product phases there are components to build, fix and maintain; even retiring the Product from the market might require developing some components. These product components can be fixes to existing defects, simple cosmetic changes, or features that take one unit of work (for example, 8 hours' worth of overall work), a few units of work, or many units of work.

If units of work are to be calculated for absolutely every feature, you are probably generating too much waste. Lean thinking teaches us to triage, so save your time on estimations when you can. In those cases consider yourself to be simply "maintaining your product".

There are cases in which the units of work must be calculated: for example, a feature with a fixed delivery date is needed, it becomes clear the effort will exceed a certain amount of time, and there is absolutely no way to push small increments to production (an all-or-nothing situation). For these, or any other (questionable, perhaps) cases where it is decided to define a Project, there will be a need for an estimation meeting involving the required subject experts to decide which features will be grouped together (scope), how many *dedicated* people and materials will be needed (resources), and an agreed timeframe for completion (schedule). At that point, and only then, should we consider a "Project" to have been born.

Don't you know who is using your SQL schema? You should be able to grep your code

Don't you know who is using your SQL schema? If your answer is "I don't", then you have a big problem.

You can only answer this question with "Yes, I do" if you can grep your code.

Big enterprises have different development groups, many times distributed around the globe, but it would be unacceptable if in 2014 we still had code that does not go into a Version Control System.

Knowing the extent of your change will allow you to make the right decisions without breaking production. It will allow you to stop worrying about backward compatibility, to stop pushing refactorings into an unknown future, and ultimately to keep the developers on your team happy. In a competitive world, developers are realizing that a salary increase is just part of an equation in which the number of hours worked is also present. So if you complicate things, you are setting the whole enterprise up for failure.

Searching for the schema across all Version Control System repos in the company should not be complicated, and it will help you maintain clarity and simplicity in your code; you will come up with better naming conventions and, overall, a better design. Think lean in everything you do.
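A minimal sketch of such a search (the table name `customer_invoice` is hypothetical, and a throwaway directory stands in for the directory holding all of your repository checkouts):

```shell
# build a tiny stand-in "checkout" so the example is self-contained;
# in practice, point the grep at the root of all your repo checkouts
workdir=$(mktemp -d)
echo 'SELECT total FROM customer_invoice;' > "$workdir/report.sql"

# list every file, across every checkout, that touches the table
grep -r -l -i 'customer_invoice' "$workdir"
```

With bare repositories, `git grep` against each repo's HEAD achieves the same thing without needing working copies.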

Sunday, April 13, 2014

Got Continuous Product Delivery? Then you got DevOps.

Got Continuous Product Delivery? Then you got DevOps. That is my response to the question "Can you use ONE LINER to describe the benefit(s) DevOps adoption can bring to an IT organization?", so I thought I would post a quick statement about it.

I said in a previous post: "There has been a lot of talk in the last couple of years about the DevOps movement. To achieve agility in a software development team it is crucial to have an agile infrastructure team. Collaboration towards automation is the answer."

The reality is that the term DevOps might be new, but those of us who have achieved agility in product delivery (and not just release, sprint or feature delivery) did so because we already had a DevOps mindset.

CFEngine, Bcfg2, Puppet, Chef and others are tools that excel at aiding the automation goal, but the culture must be there first. Those of us who have helped teams change their culture know that the tools came as a result of necessity. It is great that the whole community talks so much about DevOps; it is too bad that some people still wonder whether the investment is worth it. Is the investment in security worth it? Is the investment in an Agile and Lean culture worth it? If they are, then the investment in a DevOps culture is also worth it.

Wednesday, April 09, 2014

On Java: Check if certificate is valid or authorized in the default keystore

If you use Java and self-signed certificates, you will need to maintain those in a keystore. Assuming you use the default cacerts file, you can verify trust with a simple request to the remote server using HttpClient from the httpcomponents library. There is an installation script that takes care of installing the tool and explains how to use it; it is all hosted as gists, including the code below. You should receive the status code; if you do not, most likely you will get an exception. In this case we get an exception because the self-signed certificate has not been authorized:
$ cd /tmp/http-test; java -cp ".:./httpcomponents-client-4.3.3/lib/*" GetHttpStatusCode
Exception in thread "main" javax.net.ssl.SSLHandshakeException: PKIX path building failed: unable to find valid certification path to requested target
	at ...
	at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(...)
	at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(...)
	at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(...)
	at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(...)
	at org.apache.http.impl.client.DefaultRequestDirector.execute(...)
	at org.apache.http.impl.client.AbstractHttpClient.doExecute(...)
	at org.apache.http.impl.client.CloseableHttpClient.execute(...)
	at org.apache.http.impl.client.CloseableHttpClient.execute(...)
	at org.apache.http.impl.client.CloseableHttpClient.execute(...)
	at GetHttpStatusCode.main(...)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: unable to find valid certification path to requested target
	at ...
	... 18 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
	at ...
	... 24 more

Friday, April 04, 2014

Lean is Clean without the C for Complexity

Clean methodologies, processes, architectures, code, etc. can still be complex. As the old saying from Seba Smith goes, "there are more ways than one to skin a cat", so people are tempted to pick one way that does the job and forget about *improvement*.

It is, though, the constant search for improvement that drives lean thinking. It is simplicity that ultimately brings the competitive advantage.

Thursday, April 03, 2014

Write HTML Email using Google Drive documents and Gmail

I have tested the procedure below by sending a table with colored cells. Whether all the markup generated by Google Documents is compatible with the webmail clients out there, like AOL, is a good question. However, given the resources Google has, I would say that if it looks good in Gmail, chances are it will look good in any other mail client.
  1. Create a new "Document" in Google Drive.
  2. Add headers, colors, tables etc.
  3. Select the content and copy it.
  4. Paste it in a new gmail email and send it.

Set execution bit in git
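You can set the executable bit on a tracked file directly in the git index. A self-contained sketch follows, using a throwaway repository and a hypothetical `run.sh`; in your own repo only the `git update-index` line is needed:

```shell
repo=$(mktemp -d)
cd "$repo" && git init -q .
echo 'echo hello' > run.sh
git add run.sh
# flip the execution bit in the index, regardless of the filesystem mode
git update-index --chmod=+x run.sh
git ls-files --stage run.sh   # mode 100755 confirms the bit is set
```

This is handy on filesystems that do not track the execute bit, or when `core.fileMode` is set to false.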

Wednesday, April 02, 2014

On Documentation: Who should write the user manual?

IT teams should take complete ownership of writing not only reference manuals but also the user manuals.

User manuals are nothing more than a description of how the application works. They are the starting point for User Acceptance Tests (UAT), so leaving this task to Operations makes no sense whatsoever.

Whoever is doing the business analysis (a BA, if you have that luxury, or otherwise the developers) should be responsible for them.