Tuesday, May 29, 2012

Building and Sharing VirtualBox VM for development purposes

Here is a handy script (tested so far on OS X, but hopefully with contributions it will support other operating systems) to build VirtualBox VMs from the command line and share them with your team.

It comes in handy when you share development environments within your team, as I have posted before. I have tested this with Ubuntu 12.04 server.

Here is how to call the script to build a VM with Ubuntu 12.04 LTS:
$ ./build-vm.sh  Ubuntu_64 ~/Downloads/ubuntu-12.04-server-amd64.iso devbox 20000
The script configures the VM with two network interfaces. The first uses NAT so the VM gets a connection to the Internet, and the second is configured as host-only so you can reach the VM from your local host only. The internal network is created as 192.168.56.0.
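I am not reproducing build-vm.sh here, but for the ISO case the VBoxManage calls behind a run like the one above would look roughly as below. This is a sketch: the memory size, the SATA controller name and the vboxnet0 host-only adapter are assumptions on my part; the variable values mirror the sample invocation.

```shell
# Sketch only: variables mirror the sample invocation above
NAME=devbox
OSTYPE=Ubuntu_64
ISO=~/Downloads/ubuntu-12.04-server-amd64.iso
DISK_MB=20000

VBoxManage createvm --name "$NAME" --ostype "$OSTYPE" --register
# nic1 NAT for Internet access, nic2 host-only (assumes the vboxnet0 adapter exists)
VBoxManage modifyvm "$NAME" --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0
VBoxManage createhd --filename "$NAME.vdi" --size "$DISK_MB"
VBoxManage storagectl "$NAME" --name "SATA" --add sata
VBoxManage storageattach "$NAME" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "$NAME.vdi"
VBoxManage storageattach "$NAME" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "$ISO"
VBoxManage startvm "$NAME"
```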

When you pass the Ubuntu iso file you are installing the operating system. After you finish the manual installation you will have to configure the second interface manually, as it needs a static IP (you want to easily connect to it via SSH):
$ sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback

# NAT network interface
auto eth0
iface eth0 inet dhcp

# Host-Only network interface
auto eth1
iface eth1 inet static
address 192.168.56.3
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
gateway 192.168.56.1
Bring down and up the network interfaces:
$ sudo ifdown eth0
$ sudo ifdown eth1
$ sudo ifup eth0
$ sudo ifup eth1
Note that depending on the options you chose during installation SSH might be missing and you will need to install it:
$ sudo apt-get install ssh
It is unavoidable that at some point you will start fresh; however, configuring network interfaces again and again does not sound like a great approach. Instead you can distribute the VirtualBox vdi file contained in the "VirtualBox VMs" directory. Before stopping the VM, run the command below to make sure everybody gets just eth0 and eth1 rather than eth2 and eth3 (the VirtualBox-assigned MAC addresses will for sure be different):
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
The receiver will just run the same command as before but pointing to the vdi file, for example:
$ ./build-vm.sh Ubuntu_64 ~/Desktop/devbox.vdi devbox 20000
If you get the message below after trying to restart networking or trying to bring up eth1, all you need to do is ignore it; it basically means one of the cards is already up, and apparently NAT and host-only share the same device:
RTNETLINK answers: File exists
Failed to bring up eth1
Same story if you get the below when trying to bring down the interface:
RTNETLINK answers: File exists
interface eth1 not configured
You will notice that ifconfig shows the eth1 interface with its IP, but the network is unreachable when you bring eth0 down. Go figure.

Monday, May 28, 2012

Web Vulnerability Scanners

UPDATE 20141203: I have posted recent findings about the security scanners mentioned in this post since a new open sourced test bed tool from Google called Firing Range has become available. With this tool we at least have a list of vulnerabilities we can test against to evaluate the vulnerability coverage of Web Application Scanners. Note that nikto is a Web Server scanner, which means it does not get into application logic vulnerabilities. When you are performing Penetration Testing, usually the first tier you will want to address is that HTTP(S) web URL that you have exposed to the outside. Please do your Business a favor and perform an SSL test as your very first step.

Low budget companies usually opt for relying on software developers to take care of this area. While that can be an apparently cheap path, and it is indeed better than nothing, be aware that this is like investing in a fund where no external accounting is performed: where is the audit, really?

A company risks too much if it decides to go without a BCP or DR plan, right? A company without a PenTest plan is risking as much or probably more.

PenTesting is not difficult when you know the Web protocols and languages. Most likely your developers, devops and sysadmins know them. The problem is that it is a time consuming task that demands not just knowing the protocols and how to use the tools, but also a lot of reading, research and community interaction. As usual, a certain passion for this job is a must-have for the team in charge.

Here are some guidelines for an in-house security team. Note that I strongly recommend reaching out to professional services in this area; however, at a minimum there should be a pentest done yearly, and if you ask me it should be integrated in your delivery pipeline. Every time you are ready to deploy you run automated large (AUAT/E2E) tests, don't you? So why not use a proxy like ZAP to inspect for vulnerabilities out of those tests?

I hope this will be useful for others starting in the PenTest arena. The site seclists.org has been my home page for some time now, and I can tell you the more you read the more you realize how little you know about protecting your applications.

Here is the list of the tools I have been using so far as Web Vulnerability Scanners. Note that they complement each other: I use them all because some of them will report issues the others won't. An extensive list of these tools is found on the OWASP site, and they even come compiled in Linux security distros like the BackTrack distro.
  1. SkipFish: After running the below command (customized via $site) several warnings/errors are reported starting at output_${site}/index.html. As usual some of them are false positives, but everything must be inspected. Sometimes big vulnerabilities lie in "info" level warnings:
    ./skipfish -S dictionaries/complete.wl -o output_${site} http://${site}
    
  2. Nikto: Installed with a simple apt-get in Ubuntu, you use the below to scan the web app. Press "v" once the app starts to get verbose information:
    nikto -h $site -o ~/Downloads/nikto-${site}.html
    
  3. w3af: Use just the w3af_gui providing the URL and checking the OWASP TOP 10 configuration for a start. Look at the results tab afterwards, even though from the command line you get all the information you need as well.
    git clone --depth 1 https://github.com/andresriancho/w3af.git
    cd w3af
    ./w3af_gui
    
  4. OWASP ZAP: Run an "active scan" just starting the app and pointing to your URL:
    ./zap.sh
    
    Run a "passive scan" out of your automated tests. This is the most powerful test of all, and while it can come out of manual interaction with your application, the real power of it comes out of your automated user acceptance tests (AUAT) AKA end-to-end (E2E) tests. This is yet another example of a must-have for UI automated testing. With that in place you can assert vulnerabilities related to common user interactions with the system. You need to turn on the proxy in your browser. Chrome uses the system-wide proxy capabilities, so for example in Ubuntu you will proceed as below to configure all HTTP traffic, including local, to go through the proxy; that way ZAP will be able to search for vulnerabilities.
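As a sketch, with ZAP listening on its default local proxy address (localhost:8080 here is an assumption; use whatever you configured in ZAP's local proxy options), the GNOME system-wide proxy in Ubuntu can be pointed at it like this:

```shell
# Route all system HTTP(S) traffic through ZAP's local proxy (assumed at localhost:8080)
gsettings set org.gnome.system.proxy mode 'manual'
gsettings set org.gnome.system.proxy.http host 'localhost'
gsettings set org.gnome.system.proxy.http port 8080
gsettings set org.gnome.system.proxy.https host 'localhost'
gsettings set org.gnome.system.proxy.https port 8080
# Make sure localhost is not excluded, otherwise local traffic bypasses ZAP
gsettings set org.gnome.system.proxy ignore-hosts "[]"

# When you are done, go back to no proxy
gsettings set org.gnome.system.proxy mode 'none'
```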

PenTest Weekend Conclusions

False positives: These are common for any of the tools you use. In some you will notice errors about PHP (modules.php) and ASP (members.asp) resources which do not exist, as you are running a J2EE app. Some false positives are related to redirections to the login form, for example.

No Silver Bullet: w3af was the only one detecting clickjacking threats this time. I am happy it found something, as my latest usage of the tool had almost told me I could live without it.

Proxy interceptors: Of those available in the market free as in beer, OWASP ZAP is the one that has given me the best results so far.

Clickjacking, XSS and CSRF Increased Browser Security with HTTP Response headers

There are at the moment some HTTP response headers that can help increase security in web applications. Below is an example of a J2EE filter that will bring extra protection against some Clickjacking, XSS and CSRF attacks. You should be able to send these headers (or variations of them according to your use case) from any programming language/web framework.

Of course not all browsers will support this (the newest versions of modern browsers do support X-FRAME-OPTIONS, for example), nor is this a recipe to completely protect your application against these types of attacks. There is a knowledge war on the web and those mastering the bigger amount of information will always win.
package com.nestorurquiza.web.filter;

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SecurityHeadersFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        HttpServletResponse httpServletResponse = (HttpServletResponse) response;
        httpServletResponse.setHeader("X-XSS-Protection", "1; mode=block");
        httpServletResponse.setHeader("X-FRAME-OPTIONS", "SAMEORIGIN");
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }

    @Override
    public void init(FilterConfig config) throws ServletException {
    }
}
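For completeness, the filter still has to be registered in web.xml. A minimal mapping that applies it to every request would look like the below (the filter-name value is arbitrary):

```xml
<filter>
    <filter-name>securityHeadersFilter</filter-name>
    <filter-class>com.nestorurquiza.web.filter.SecurityHeadersFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>securityHeadersFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```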

Saturday, May 26, 2012

Using Quartz, Camel and Spring for Distributed Service Orchestration

UPDATE/WARNING: The ProducerTemplate should be used as a Singleton. Regardless, it should be stopped to clean up resources. This means you should call the stop() method in ServletContextListener#contextDestroyed() if you have a Singleton ProducerTemplate. Injecting the Singleton is easy as you might have guessed (just define it inside the camelContext node as a "template" node and provide the "id" for injection). Our team has confirmed though that when loaded by Spring, the ProducerTemplate actually gets stopped when the bean is disposed, so there is no need to use the Listener. This is because CamelProducerTemplateFactoryBean, which implements DisposableBean, is used to get an instance of type ProducerTemplate when it is declared as camelContext/template[@id] in the XML. Please read http://camel.apache.org/why-does-camel-use-too-many-threads-with-producertemplate.html for more information.
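As a sketch, the singleton template declaration mentioned above would look like the below inside the Spring XML (the id value is up to you and becomes the qualifier for injection):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring" id="camelContext">
    <!-- Spring creates this singleton through CamelProducerTemplateFactoryBean
         (a DisposableBean), so it is stopped automatically on shutdown -->
    <template id="producerTemplate"/>
    <packageScan>
        <package>com.nestorurquiza.orchestration.camel.route</package>
    </packageScan>
</camelContext>
```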

Quartz is a powerful and popular Java Scheduler API. It allows simple timers or more complicated jobs a la Unix cron.

Camel is a Java Open Source API that implements Enterprise Integration Patterns (EIP).

Spring needs no presentation. Using Spring, teams can concentrate on delivering business features without knowing much about low level API implementations.

Distributed Computing is how you manage to scale horizontally through different servers and it is an important cost factor that cannot be underestimated when designing a solution.

Services are how you encapsulate functionality that responds to events like user interaction, system triggers, external API usage and more.

Orchestration is how you define the order in which your services are called, retry policies, message adaptation, and in general how you use Enterprise Integration Patterns to externalize service usability. Doing orchestration separated from your Services contributes to building loosely coupled Services, and that has an important consequence: there is no need for your Services to know about each other; being independent, you can reuse them with minimum effort without breaking previous functionality, and last but not least, separation of concerns can be easily enforced. Did I mention I believe separation of concerns is the most important concept behind a good architecture?

Camel allows you to define routes either in XML or in the Java DSL. I prefer the Java DSL, and this is definitely the main factor that made me decide for Camel and not for other possible solutions I was exposed to. Being able to debug with breakpoints is important for the agility of especially small teams.

In Camel you use Endpoints, which serve to consume or produce messages; the endpoints are connected by channels, and common tasks have solutions in place through the use of several components. If you need more logic you can always build your own component, or use generic implementations from beans and processors. You can start with the endpoints living in a single JVM, or you could distribute them using JMS, AMQP or other messaging alternatives to provide asynchronous behavior if your project demands so.

In this post I am presenting a proof of concept using Quartz 1.8.6 and Camel 2.9.2 (Camel 2.9.2 is incompatible with the latest Quartz 2.1.5) to provide Distributed Plain Old HTTP Service Orchestration. However Services can be POJOs managed or unmanaged by Spring; Camel will be able to interact with any Java and even non-Java code you currently have. So even though I talk about Plain Old GET/POST HTTP Services, the same is applicable to other kinds of Services. Camel is a routing engine: it will read messages (Exchange) from any Producer and will be able to produce messages for any Consumer.

I use XML for general configuration from Spring, then the DSL to define Camel routes. The code can be tested from JUnit and of course directly from a Servlet container like Tomcat or an application server like JBoss. I use the Quartz scheduler to make sure only one instance in a Servlet cluster (I have tested this with two Tomcat machines) starts the job at a time, Camel to define the route while aggregating information or using components like FTP or plain old HTTP Services responding JSON, and Spring Controllers that can activate a Camel route. I also show how to define a retry policy, and I do all that with some comments in the code.
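To give an idea of what such a route with a retry policy can look like in the Java DSL, here is a hypothetical sketch. It consumes from seda:queue.sample (the endpoint used in the sample trigger URLs in this post), and the target HTTP URL is a made-up placeholder:

```java
package com.nestorurquiza.orchestration.camel.route;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

/**
 * Hypothetical route for illustration. It is picked up by the packageScan
 * shown in the Spring section and consumes the messages CamelTriggerJob
 * sends to seda:queue.sample.
 */
@Component
public class SampleJsonServiceRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Retry policy: up to 3 redeliveries, 5 seconds apart
        onException(Exception.class)
            .maximumRedeliveries(3)
            .redeliveryDelay(5000)
            .handled(true)
            .log("Giving up after retries: ${exception.message}");

        from("seda:queue.sample")
            .to("http://localhost:8080/sample/service?ert=json") // made-up URL
            .convertBodyTo(String.class)
            .log("Service responded: ${body}");
    }
}
```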

Dependencies

        <!-- Camel -->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-core</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-http</artifactId>
            <version>${camel.version}</version>
            <exclusions>
                <exclusion>
                    <artifactId>geronimo-servlet_2.5_spec</artifactId>
                    <groupId>org.apache.geronimo.specs</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
          <groupId>org.apache.camel</groupId>
          <artifactId>camel-ftp</artifactId>
          <version>${camel.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.camel</groupId>
          <artifactId>camel-mail</artifactId>
          <version>${camel.version}</version>
        </dependency>
        <dependency>
          <groupId>org.apache.camel</groupId>
          <artifactId>camel-jackson</artifactId>
          <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-quartz</artifactId>
            <version>${camel.version}</version>
            <exclusions>
            <!-- If camel would support quartz 2 we would exclude it here
                <exclusion>
                    <artifactId>quartz</artifactId>
                    <groupId>org.quartz-scheduler</groupId>
                </exclusion>
            -->
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-stream</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-spring</artifactId>
            <version>${camel.version}</version>
            <exclusions>
             <exclusion>
              <artifactId>spring-context</artifactId>
              <groupId>org.springframework</groupId>
             </exclusion>
             <exclusion>
              <artifactId>spring-aop</artifactId>
              <groupId>org.springframework</groupId>
             </exclusion>
             <exclusion>
              <artifactId>spring-tx</artifactId>
              <groupId>org.springframework</groupId>
             </exclusion>
             <exclusion>
              <artifactId>camel-core</artifactId>
              <groupId>org.apache.camel</groupId>
             </exclusion>
            </exclusions>
        </dependency>
        <!-- Camel already includes the quartz version it can work with
        <dependency>
            <groupId>org.quartz-scheduler</groupId>
            <artifactId>quartz</artifactId>
            <version>2.1.5</version>
        </dependency>
        -->
        <!-- Camel Test -->
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-test</artifactId>
            <version>${camel.version}</version>
            <scope>test</scope>
        </dependency>

Quartz

At the core of this proof of concept is Quartz, which we have to set up to provide persistent jobs. We use a MySQL database to host the needed tables (the default for Camel 2.9.2 is Quartz 1.8.6):
cat /Users/nestor/Downloads/quartz-1.8.6/docs/dbTables/tables_mysql_innodb.sql | mysql -u root -proot nestorurquiza
In order to schedule jobs you need some settings in web.xml to include the Listener that will start the Quartz Scheduler when the application starts, or shut it down when the application is undeployed:
    
    <!-- Quartz Listener -->
    <!-- Uncomment the below to start Quartz in this instance -->
    <listener>
        <listener-class>com.nestorurquiza.utils.web.QuartzServletContextListener</listener-class>
    </listener>
Here is the Listener code:
package com.nestorurquiza.utils.web;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdScheduler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class QuartzServletContextListener implements ServletContextListener {
    private static final Logger log = LoggerFactory.getLogger(QuartzServletContextListener.class);

    @Override
    public void contextInitialized(ServletContextEvent event) {
        
        Scheduler scheduler = (StdScheduler) ApplicationServletContextListener.getBean("schedulerFactoryBean");
        try {
            scheduler.start();
        } catch (SchedulerException e) {
            throw new RuntimeException("Quartz failure:", e);
        }
        log.debug("Quartz Scheduler started");
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        Scheduler scheduler = (StdScheduler) ApplicationServletContextListener.getBean("schedulerFactoryBean");
        try {
            scheduler.shutdown();
        } catch (SchedulerException e) {
            log.error("Quartz failure:", e);
        }
        log.info("Quartz Scheduler destroyed");
    }
}
You will need an interface to manage the triggers for your scheduled jobs. For that you will use the Quartz API.

Here is an example of how to list all triggers per job in the system. It is just a Controller using BHUB that returns the information in JSON:
package com.nestorurquiza.web;

import java.text.ParseException;
import java.util.HashMap;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.camel.CamelContext;
import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.servlet.ModelAndView;

/**
 * A sample Controller which can be used to list, delete or schedule cron triggers which execute a Camel trigger job. 
 * Used to schedule when to trigger a particular Camel route. As we use Persistent quartz jobs your usual cluster of servlet containers like Tomcat can be used to distribute the workload performed from Camel.
 * 
 * Samples:
 * http://bhubdev.nestorurquiza.com/quartz/info?ert=json
 * http://bhubdev.nestorurquiza.com/quartz/addOrUpdateCamelCronTrigger?triggerName=timer1&triggerGroup=group1&jobName=job1&jobGroup=DEFAULT&cronExpression=00%2000%2012%20*%20*%20%3F%20*&endPoint=seda:queue.sample&ert=json
 * http://bhubdev.nestorurquiza.com/quartz/deleteCronTrigger?ert=json&triggerName=timer1&triggerGroup=group1
 * http://bhubdev.nestorurquiza.com/quartz/deleteJob?jobName=job1&jobGroup=DEFAULT&ert=json
 * 
 * @author nestor
 *
 */
@Controller
public class QuartzController extends RootController {
    @Autowired
    CamelContext camelContext;
    
    @Autowired
    SchedulerFactoryBean schedulerFactoryBean;
    
    @RequestMapping("/quartz/info")
    public ModelAndView info(HttpServletRequest request,
            HttpServletResponse response) throws SchedulerException {
        //Initialize the context (mandatory)
        ControllerContext ctx = new ControllerContext(request, response);
        init(ctx);

        
        ctx.setRequestAttribute("triggersInJob", retrieveTriggersInJobs());
        return getModelAndView(ctx, "justJsonNoJspNeeded", null);
    }
    
    @RequestMapping("/quartz/deleteJob")
    public ModelAndView deleteJob(HttpServletRequest request,
            HttpServletResponse response,
            @RequestParam(value = "jobName", required = true) String jobName,
            @RequestParam(value = "jobGroup", required = true) String jobGroup) throws SchedulerException, ParseException {
        //Initialize the context (mandatory)
        ControllerContext ctx = new ControllerContext(request, response);
        init(ctx);

        Scheduler scheduler = schedulerFactoryBean.getScheduler();
        scheduler.deleteJob(jobName, jobGroup);
        
        ctx.setRequestAttribute("triggersInJob", retrieveTriggersInJobs());
        
        return getModelAndView(ctx, "justJsonNoJspNeeded", null);
    }
    
    
    @RequestMapping("/quartz/deleteCronTrigger")
    public ModelAndView deleteCronTrigger(HttpServletRequest request,
            HttpServletResponse response,
            @RequestParam(value = "triggerName", required = true) String triggerName,
            @RequestParam(value = "triggerGroup", required = true) String triggerGroup) throws SchedulerException, ParseException {
        //Initialize the context (mandatory)
        ControllerContext ctx = new ControllerContext(request, response);
        init(ctx);

        //QuartzComponent quartz = camelContext.getComponent("quartz", QuartzComponent.class);
        //Scheduler scheduler = quartz.getScheduler();
        
        //SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
        //Scheduler scheduler = schedFact.getScheduler();
        
        Scheduler scheduler = schedulerFactoryBean.getScheduler();
        
        
        CronTrigger trigger = (CronTrigger) scheduler.getTrigger(triggerName, triggerGroup);
        if( trigger != null ) {
            scheduler.unscheduleJob(triggerName, triggerGroup);
        } 
        
        ctx.setRequestAttribute("triggersInJob", retrieveTriggersInJobs());

        
        return getModelAndView(ctx, "justJsonNoJspNeeded", null);
    }
    
    @RequestMapping("/quartz/addOrUpdateCamelCronTrigger")
    public ModelAndView addCronTrigger(HttpServletRequest request,
            HttpServletResponse response,
            @RequestParam(value = "triggerName", required = true) String triggerName,
            @RequestParam(value = "triggerGroup", required = true) String triggerGroup,
            @RequestParam(value = "jobName", required = true) String jobName,
            @RequestParam(value = "jobGroup", required = true) String jobGroup,
            @RequestParam(value = "cronExpression", required = true) String cronExpression,
            @RequestParam(value = "endPoint", required = false) String endPoint
            ) throws SchedulerException, ParseException {
        //Initialize the context (mandatory)
        ControllerContext ctx = new ControllerContext(request, response);
        init(ctx);

        //QuartzComponent quartz = camelContext.getComponent("quartz", QuartzComponent.class);
        //Scheduler scheduler = quartz.getScheduler();
        
        //SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
        //Scheduler scheduler = schedFact.getScheduler();
        
        Scheduler scheduler = schedulerFactoryBean.getScheduler();
        
        
        CronTrigger trigger = (CronTrigger) scheduler.getTrigger(triggerName, triggerGroup);
        JobDetail jobDetail = null;
        if( trigger != null ) {
            if(endPoint == null) {
                throw new SchedulerException("Parameter endPoint is mandatory when creating a brand new trigger.");
            }
        } else {
            trigger = new CronTrigger(triggerName, triggerGroup, jobName, jobGroup, cronExpression);
        }
        trigger.setJobName(jobName);
        trigger.setJobGroup(jobGroup);
        trigger.setCronExpression(cronExpression);
        
        jobDetail = scheduler.getJobDetail(jobName, jobGroup);
        boolean reschedule = false;
        if(jobDetail != null) {
            jobDetail.setJobClass(com.nestorurquiza.orchestration.camel.quartz.CamelTriggerJob.class);
            reschedule = true;
        } else {
            jobDetail = new JobDetail(jobName, jobGroup, com.nestorurquiza.orchestration.camel.quartz.CamelTriggerJob.class);
        }
        
        jobDetail.getJobDataMap().put("endPoint", endPoint);
        if(reschedule) {
            scheduler.addJob(jobDetail, true);
            scheduler.rescheduleJob(triggerName, triggerGroup, trigger); 
        } else {
            scheduler.scheduleJob(jobDetail, trigger);
        }
        
        ctx.setRequestAttribute("triggersInJob", retrieveTriggersInJobs());

        
        return getModelAndView(ctx, "justJsonNoJspNeeded", null);
    }
    
    private Map retrieveTriggersInJobs() throws SchedulerException {
        return retrieveTriggersInJob(null, null);
    }
    
    private Map retrieveTriggersInJob(String jobName, String jobGroup) throws SchedulerException {
        String[] jobGroups;
        Map triggersInJobs = new HashMap();
        
        int i;
        
        //QuartzComponent quartz = camelContext.getComponent("quartz", QuartzComponent.class);
        //Scheduler scheduler = quartz.getScheduler();
        
        //SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
        //Scheduler scheduler = schedFact.getScheduler();
        
        Scheduler scheduler = schedulerFactoryBean.getScheduler();
        
        jobGroups = scheduler.getJobGroupNames();
        for (i = 0; i < jobGroups.length; i++) {
           String jg = jobGroups[i];
           if (jobGroup == null || jg.equals(jobGroup)) {
               String[] jobNames = scheduler.getJobNames(jg);
               if(jobNames != null) {
                   for( String jn : jobNames ) {
                       //Add the job to the Map even if there are not triggers defined
                       if(triggersInJobs.get(jn) == null) {
                           triggersInJobs.put(jn, null);
                       }
                       if (jobName == null || jn.equals(jobName)) {
                           Trigger[] jobTriggers = scheduler.getTriggersOfJob(jn, jg);
                           triggersInJobs.put(jn, jobTriggers);
                       }
                   }
               }
           }
        }

        return triggersInJobs;
    }
}
As you can see from the Controller, I always schedule the job using the same job class, which is nothing but a Camel invoker:
package com.nestorurquiza.orchestration.camel.quartz;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.StatefulJob;

import com.nestorurquiza.utils.web.ApplicationServletContextListener;

/**
 * A Quartz job to invoke a Camel endPoint
 * @author nestor
 *
 */

public class CamelTriggerJob implements StatefulJob {
    
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        CamelContext camelContext = (CamelContext) ApplicationServletContextListener.getBean("camelContext");
        // Per the UPDATE/WARNING above, injecting a Singleton ProducerTemplate
        // from Spring is preferable to creating one per execution like this
        ProducerTemplate template = camelContext.createProducerTemplate();
        JobDataMap dataMap = context.getJobDetail().getJobDataMap();
        String endPoint = dataMap.getString("endPoint");
        template.sendBody(endPoint, "Activation from " + this.getClass().getName());
    }
    
}

Spring

Here are the relevant bits from the Spring Web application Context:
    ...
    ... http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
    ...
    <context:component-scan base-package="com.nestorurquiza.orchestration.camel.route" />
    <context:component-scan base-package="com.nestorurquiza.orchestration.camel.filter" />
    ...
    <bean id="schedulerFactoryBean"
        class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
        <property name="dataSource">
            <ref bean="nestorurquizaDataSource" />
        </property>
        <property name="autoStartup">
            <value>false</value>
        </property>
        <property name="applicationContextSchedulerContextKey">
            <value>applicationContext</value>
        </property>
        <property name="waitForJobsToCompleteOnShutdown">
            <value>true</value>
        </property>
        <property name="quartzProperties">
            <props>
                <prop key="org.quartz.scheduler.instanceName">nestorurquizaScheduler</prop>
                <prop key="org.quartz.scheduler.instanceId">AUTO</prop>
                <!-- ThreadPool -->
                <prop key="org.quartz.threadPool.class">org.quartz.simpl.SimpleThreadPool</prop>
                <prop key="org.quartz.threadPool.threadCount">5</prop>
                <prop key="org.quartz.threadPool.threadPriority">5</prop>
                <!-- Job store -->
                <prop key="org.quartz.jobStore.misfireThreshold">60000</prop>
                <prop key="org.quartz.jobStore.class">org.quartz.impl.jdbcjobstore.JobStoreTX</prop>
                <prop key="org.quartz.jobStore.driverDelegateClass">org.quartz.impl.jdbcjobstore.StdJDBCDelegate</prop>
                <prop key="org.quartz.jobStore.useProperties">false</prop>
                <prop key="org.quartz.jobStore.tablePrefix">QRTZ_</prop>
                <prop key="org.quartz.jobStore.isClustered">true</prop>
                <prop key="org.quartz.jobStore.clusterCheckinInterval">20000</prop>
                <prop key="org.quartz.jobStore.selectWithLockSQL">
                    SELECT * FROM {0}LOCKS UPDLOCK WHERE LOCK_NAME = ?
                </prop>
                <!-- Plugins -->
                <prop key="org.quartz.plugin.shutdownhook.class">
                    org.quartz.plugins.management.ShutdownHookPlugin
                </prop>
                <prop key="org.quartz.plugin.shutdownhook.cleanShutdown">true</prop>
                <prop key="org.quartz.plugin.triggHistory.class">
                    org.quartz.plugins.history.LoggingTriggerHistoryPlugin
                </prop>
                <prop
                    key="org.quartz.plugin.triggHistory.triggerFiredMessage">
                    Trigger {1}.{0} fired job {6}.{5} at: {4, date, HH:mm:ss MM/dd/yyyy}
                </prop>
                <prop
                    key="org.quartz.plugin.triggHistory.triggerCompleteMessage">
                    Trigger {1}.{0} completed firing job {6}.{5} at {4, date, HH:mm:ss
                    MM/dd/yyyy} with resulting trigger instruction code:
                    {9}
                </prop>
            </props>
        </property>
    </bean>
    
    <!-- Uncomment the below only if you use Quartz from Camel -->
    <!--  
    <bean id="quartz" class="org.apache.camel.component.quartz.QuartzComponent">
        <property name="scheduler">
            <ref bean="schedulerFactoryBean" />
        </property>
    </bean>
    -->
    
    <camelContext xmlns="http://camel.apache.org/schema/spring" id="camelContext">
        <packageScan>
            <package>com.nestorurquiza.orchestration.camel.route</package>
        </packageScan>
    </camelContext>
    ...
Here is the application context test file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:camel="http://camel.apache.org/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans 
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-2.5.xsd
           http://camel.apache.org/schema/spring 
           http://camel.apache.org/schema/spring/camel-spring.xsd
           ">
               
    <context:component-scan base-package="com.nestorurquiza.orchestration.camel.route" />
    <context:component-scan base-package="com.nestorurquiza.orchestration.camel.filter" />
     
    <camel:camelContext id="camel">
        <camel:package>com.nestorurquiza.orchestration.camel.route</camel:package>
    </camel:camelContext>
  
</beans>
Here is a Spring Controller that can be used to hit any Camel endPoint consumer:
package com.nestorurquiza.web;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.servlet.ModelAndView;

/**
 * A sample Controller showing how to send a simple message to any endPoint in the system
 * 
 * Example:
 * http://bhubdev.nestorurquiza.com/camel/publish?ert=json&endPoint=seda:queue.sample
 * 
 * @author nestor
 *
 */
@Controller
public class SampleCamelController extends RootController {

    @Autowired
    CamelContext camelContext;
    
    @RequestMapping("/camel/publish")
    public ModelAndView publish(HttpServletRequest request,
            HttpServletResponse response,
            @RequestParam(value = "endPoint", required = true) String endPoint) {
        //Initialize the context (mandatory)
        ControllerContext ctx = new ControllerContext(request, response);
        init(ctx);

        ProducerTemplate template = camelContext.createProducerTemplate();
        template.sendBody(endPoint, "Activation from " + this.getClass().getName());
        
        return getModelAndView(ctx, "justJsonNoJspNeeded", null);
    }
}
Here is how to test routes; this one exercises our first User Story (read the Camel section below for more):
package com.nestorurquiza.orchestration.camel.route;

import static org.junit.Assert.assertTrue;

import java.io.File;

import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractJUnit4SpringContextTests;

/**
 * A simple JUnit test capable of registering spring Singletons needed for our SampleCamelRouteBuilder
 * @author nestor
 *
 */
@ContextConfiguration(locations = {"classpath:/camelSpringContext.xml"})
public class SampleSpringCamelRouteBuilderTest extends AbstractJUnit4SpringContextTests {
    @Autowired
    private ApplicationContext applicationContext;
    
    @Produce(uri = "direct:start")
    protected ProducerTemplate template;
    
    @Configuration
    public static class ContextConfig {
        @Bean
        public RouteBuilder route() {
            return new SampleCamelRouteBuilder();
        }
    }
    
    @Test
    public void testRoute1() throws Exception {
        template.sendBody("seda:queue.sample", "Activation from JUnit");
        Thread.sleep(20000);
        File ulock = new File("/tmp/unlock");
        assertTrue("File not moved", ulock.exists());
    }

}
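The fixed Thread.sleep(20000) in the test above is fragile: it always waits the full 20 seconds even when the file appears earlier. A small stdlib-only polling wait (sketch below, class name hypothetical) returns as soon as the file exists and only burns the whole timeout on failure:

```java
import java.io.File;

public class WaitForFile {
    // Poll for the file until it exists or the timeout elapses.
    static boolean waitForFile(File file, long timeoutMsec) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMsec;
        while (System.currentTimeMillis() < deadline) {
            if (file.exists()) {
                return true;
            }
            Thread.sleep(100); // poll interval
        }
        return file.exists();
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("unlock-demo", ".tmp");
        f.deleteOnExit();
        System.out.println(waitForFile(f, 1000)); // file already exists -> true
    }
}
```

In the JUnit test the assertion would then become `assertTrue("File not moved", WaitForFile.waitForFile(ulock, 20000))`.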

Camel

You might be tempted to use the Quartz component directly from Camel, but at the time of this writing (and for the versions used in this proof of concept) your possibilities to reschedule jobs are limited. In any case, here is a sample route file that schedules a cron trigger; you will of course need to uncomment the relevant portion of the route. As it stands, the route just logs the routeId as soon as the route is triggered (direct:start):
package com.nestorurquiza.orchestration.camel.route;

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SampleSchedulerCamelRouteBuilder extends RouteBuilder {
    private static final String LOG_NAME = SampleSchedulerCamelRouteBuilder.class.getName();
    private static final Logger log = LoggerFactory.getLogger(LOG_NAME);
    
    @Override
    public void configure() {
        log.debug("Configuring SampleSchedulerCamelRouteBuilder routes");
        
        //It is better to use quartz api outside of camel at the moment http://camel.465427.n5.nabble.com/Configure-Camel-to-consume-externally-configured-quartz-jobs-td5712732.html#a5713361
        //from("quartz://group1/timer1?job.name=job1&stateful=true&trigger.repeatInterval=5000&trigger.repeatCount=0")
        //from("quartz://group1/timer2?job.name=job1&stateful=true&cron=00 50 14 * * ? *")
        
        from("direct:start")
        .log(LoggingLevel.DEBUG, LOG_NAME, "SampleSchedulerCamelRouteBuilder route ${id} triggered");
    }
}
Let us go through the following routes, which cover some basics of Orchestration. Note that we consume an HTTP service that accepts a Unix command and sends back JSON; this is done using a nodejs server that runs shell commands. This JSON services approach, by the way, is a great way to reuse Talend jobs.
package com.nestorurquiza.orchestration.camel.route;

import org.apache.camel.Exchange;
import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;
import org.apache.camel.processor.aggregate.AggregationStrategy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;

import com.nestorurquiza.orchestration.camel.ShellBeanService;
import com.nestorurquiza.orchestration.camel.ShellProcessor;
import com.nestorurquiza.orchestration.camel.ShellResponse;

public class SampleCamelRouteBuilder extends RouteBuilder {
    private static final String LOG_NAME = SampleCamelRouteBuilder.class.getName();
    private static final Logger log = LoggerFactory.getLogger(LOG_NAME);
    public static final int POLLING_DELAY_MSEC = 60 * 1000;
    
    @Autowired
    protected ApplicationContext applicationContext;
    
    @Override
    public void configure() {
        log.debug("Configuring SampleCamelRouteBuilder routes");
        //*** User Story# 1: Given the route seda:queue.sample is triggered; 
        //    When an error occurs 
        //    Then retry up to 5 times using exponential increments (2) for the delay for a maximum of 10 seconds
        //    When the route starts
        //    Then request the JSON GET HTTP Service, parse the response and throw an Exception if there are any errors
        //***
        
        //Solution 1.1: Using a direct route with noErrorHandler() encapsulating all the logic and calling it from the route that will perform the retries works as expected
        //The disadvantage is that in a chained Services process (Service Orchestration) several routes will be needed just to accomplish retries per each service that fails
        from("seda:queue.sample")
        .errorHandler(defaultErrorHandler()
                .log(log)
                .maximumRedeliveries(5)
                .backOffMultiplier(2)
                .useExponentialBackOff()
                .redeliveryDelay(1000)
                .maximumRedeliveryDelay(10000)
                .retryAttemptedLogLevel(LoggingLevel.WARN))
        .to("direct:direct.sample");
        //An intermediate entry point is needed        
        from("direct:direct.sample")
        .errorHandler(noErrorHandler()) 
        .log(LoggingLevel.DEBUG, LOG_NAME, "Processing with http, jackson and processor components in route ${id}")
        .to("http://localhost:8088/?cmd=ls%20/tmp/unlock")
        .unmarshal().json(JsonLibrary.Jackson, ShellResponse.class)
        .process(new ShellProcessor())
        .log(LoggingLevel.DEBUG, LOG_NAME, "Process successfully unlocked")
        .end();

        
        //Solution 1.2: A second solution is to build a bean that invokes each http service, analyze the response and throws Exception if error
        from("seda:queue.sample2")
        .log(LoggingLevel.DEBUG, LOG_NAME, "Processing with bean component in route ${id}")
        .setHeader("url", constant("http://localhost:8088/?cmd=ls%20/tmp/unlock")).bean(ShellBeanService.class)
        .log(LoggingLevel.DEBUG, LOG_NAME, "Process successfully unlocked")
        //That way we need only one route:
        //.setHeader("url", constant("http://localhost:8088/?cmd=runSecondService")).bean(ShellBeanService.class)
        //.setHeader("url", constant("http://localhost:8088/?cmd=runThirdService")).bean(ShellBeanService.class)
        //.log(LoggingLevel.DEBUG, LOG_NAME, "All Services in the route have executed successfully");
        .end();
         
        //*** User Story# 2: Given a recurrence of POLLING_DELAY_MSEC and a remote SFTP server directory to query for files:
        //    When polling time is reached 
        //    Then send an email with newer file names in the remoteSFTP directory
        //***
        
        //Solution 2: A sample polling implementation (uncomment to see it working): checks for files newer than POLLING_DELAY_MSEC and sends an email when it finds them, aggregating all of them into just one message
        //            Note this solution lacks a distributed computing approach. Quartz generating the polling plus a bean in regular routes looks like a stronger solution if more than one JVM is involved. 
        /*
        from("sftp://bhubint.nestorurquiza.com//home/admin?username=admin&password=pass&fastExistsCheck=true&delay=" + POLLING_DELAY_MSEC + "&filter=#fileAttributesFilter")
        .aggregate(new FileAttributesAggregationStrategy()).constant("all").completionTimeout(5000L)
        .to("log:" + LOG_NAME + "?level=DEBUG&showAll=true")
        .to("smtp://krms.sample.com?Subject=New Files&From=nestor@nestorurquiza.com&To=nestor@nestorurquiza.com")
        .end();
        */
       
    }
    
    private static class FileAttributesAggregationStrategy implements AggregationStrategy {

        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            String newCamelFileNameOnly = newExchange.getIn().getHeader("CamelFileNameOnly", String.class);
            if (oldExchange == null) {
                newExchange.getIn().setBody(newCamelFileNameOnly);
                return newExchange;
            }
            String oldListOfFileNames = oldExchange.getIn().getBody(String.class);
            String newListOfFileNames = oldListOfFileNames + ", " + newExchange.getIn().getHeader("CamelFileNameOnly", String.class);
            log.debug(newListOfFileNames);
            newExchange.getIn().setBody(newListOfFileNames);
            return newExchange;
        }
    }
    
}
The first user story has two solutions. To try it, invoke the route (from the CamelController or by scheduling a Quartz job), then create the file "/tmp/unlock" after it fails a couple of times; the route finishes successfully if you create the file at any point during the retries.
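As a rough illustration of what the error handler configured in the first route does, here is a stdlib-only sketch (class name hypothetical; this is not Camel's actual implementation) computing the redelivery delay sequence for redeliveryDelay(1000), backOffMultiplier(2), maximumRedeliveryDelay(10000) and maximumRedeliveries(5):

```java
import java.util.ArrayList;
import java.util.List;

public class BackoffSketch {
    // Delay before each redelivery attempt: start at `initial`, multiply by
    // `multiplier` each time, never exceeding `max`.
    static List<Long> redeliveryDelays(long initial, double multiplier, long max, int retries) {
        List<Long> delays = new ArrayList<>();
        long delay = initial;
        for (int i = 0; i < retries; i++) {
            delays.add(Math.min(delay, max));
            delay = (long) (delay * multiplier);
        }
        return delays;
    }

    public static void main(String[] args) {
        // Matches the settings used in the seda:queue.sample route above
        System.out.println(redeliveryDelays(1000, 2, 10000, 5)); // [1000, 2000, 4000, 8000, 10000]
    }
}
```

So the fifth attempt is capped at 10 seconds rather than the uncapped 16 seconds, which is why creating /tmp/unlock at any point during roughly the first 25 seconds lets the route recover.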

The second user story is an example of polling. Even though the FTP component is great, like many other components it has no persisted way of handling jobs, so you will be on your own if you deploy the routes to more than one server: they will compete with each other. A custom bean doing the checking with the jsch library, in a route triggered by a persisted Quartz job, is a better solution, at least until polling is supported in a distributed way by Camel.

There are several classes needed to support the two examples above. Here are all of them:
package com.nestorurquiza.orchestration.camel;

import org.apache.camel.Header;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.codehaus.jackson.map.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.nestorurquiza.utils.Utils;

public class ShellBeanService {
    private static final String LOG_NAME = ShellBeanService.class.getName();
    private static final Logger log = LoggerFactory.getLogger(LOG_NAME);
    
    public void process(@Header(value = "url") String url) throws Exception {
        HttpClient httpclient = new HttpClient();
        GetMethod method = new GetMethod(url);
        int responseCode = httpclient.executeMethod(method);
        log.debug(url + " responseCode: " + responseCode);
        byte[] responseStream = method.getResponseBody();

        ObjectMapper mapper = new ObjectMapper();
        ShellResponse shellResponse = (ShellResponse) mapper.readValue(responseStream, ShellResponse.class);
        
        if(shellResponse == null) {
            throw new Exception("No response from remote Shell Server");
        }
        String stderr = shellResponse.getStderr();
        if(!Utils.isEmpty(stderr)) {
            throw new Exception(stderr);
        }
        
    }
}

package com.nestorurquiza.orchestration.camel;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

import com.nestorurquiza.utils.Utils;

public class ShellProcessor implements Processor {
    
    public void process(Exchange exchange) throws Exception {
        ShellResponse shellResponse = exchange.getIn().getBody(ShellResponse.class);
        if(shellResponse == null) {
            throw new Exception("No response from remote Shell Server");
        }
        String stderr = shellResponse.getStderr();
        if(!Utils.isEmpty(stderr)) {
            throw new Exception(stderr);
        }
        
    }
}


package com.nestorurquiza.orchestration.camel;

public class ShellResponse {
    private String stdout;
    private String stderr;
    private String cmd;
    public String getStdout() {
        return stdout;
    }
    public void setStdout(String stdout) {
        this.stdout = stdout;
    }
    public String getStderr() {
        return stderr;
    }
    public void setStderr(String stderr) {
        this.stderr = stderr;
    }
    public String getCmd() {
        return cmd;
    }
    public void setCmd(String cmd) {
        this.cmd = cmd;
    }
    
}


package com.nestorurquiza.orchestration.camel.filter;

import org.apache.camel.component.file.GenericFile;
import org.apache.camel.component.file.GenericFileFilter;
import org.springframework.stereotype.Component;

import com.nestorurquiza.orchestration.camel.route.SampleCamelRouteBuilder;

@Component("fileAttributesFilter")
public class FileAttributesFilter implements GenericFileFilter {

    public boolean accept(GenericFile file) {
        return file.getLastModified() + SampleCamelRouteBuilder.POLLING_DELAY_MSEC > System.currentTimeMillis();
    }
}
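The filter's rule boils down to "accept only files modified within the last POLLING_DELAY_MSEC milliseconds". A stdlib-only sketch of that predicate (class and method names hypothetical, using java.io.File instead of Camel's GenericFile):

```java
import java.io.File;
import java.io.IOException;

public class NewerThanCheck {
    // Same test FileAttributesFilter performs: is the last-modified time
    // within `windowMsec` of now?
    static boolean isNewerThan(File file, long windowMsec) {
        return file.lastModified() + windowMsec > System.currentTimeMillis();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("poll-demo", ".txt");
        f.deleteOnExit();
        System.out.println(isNewerThan(f, 60_000)); // just created -> true
    }
}
```

Note that a nonexistent file reports lastModified() as 0, so it is rejected for any reasonable window.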

If you deploy the app on two servers pointing to the same database, you will notice Quartz activates the route on only one of them. Look at QuartzController for examples of how to use it.

The endPoint consumer can also be triggered from SampleCamelController, which uses a ProducerTemplate. Take a look at the class above for an example.

Camel Route Diagrams

Diagrams can be generated with a Maven plugin. You will need to check out (or update) the project:
$ git clone https://github.com/rmannibucau/diagram-generator-parent.git
$ cd diagram-generator-parent/
Then build it
$ mvn clean install
Include a section similar to the one below in your pom file and you should get your diagrams in the Maven javadoc directory. You should exclude the doc-files folder from being committed to your repository (after all, it is dynamically generated).
...
<build>
...
  <plugins>
...
            <plugin>
                <groupId>fr.rmannibucau</groupId>
                <artifactId>diagram-generator-maven-plugin</artifactId>
                <version>0.0.1-SNAPSHOT</version>
                <executions>
                    <execution>
                        <id>nestorurquiza-camel-routes</id>
                        <phase>site</phase>
                        <goals>
                            <goal>diagram</goal>
                        </goals>
                        <configuration>
                            <!-- or a qualified RouteBuilder name/a qualified package if you use java routes -->
                            <input>com.nestorurquiza.orchestration.camel.route</input>
                            <!-- default = false, true to show a window containing the diagram -->
                            <view>false</view>
                            <!-- default = 640  -->
                            <width>480</width>
                            <!-- default = 480 -->
                            <height>640</height>
                            <!-- default = target/diagram -->
                            <output>${basedir}/src/main/javadoc/com/nestorurquiza/orchestration/camel/route/doc-files</output>
                            <!-- default = camel -->
                            <type>camel</type>
                            <!-- default = xml, other values = { java  }-->
                            <fileType>java</fileType>
                            <!-- default = png, you can set jpg ... -->
                            <format>png</format>
                            <!-- true allows to resize icons, false force to keep their original size; default: true -->
                            <adjust>true</adjust>
                            <additionalClasspathElements>
                                <additionalClasspathElement>${basedir}/target/classes</additionalClasspathElement>
                            </additionalClasspathElements> 
                        </configuration>
                    </execution>
                </executions>
                
                <dependencies>
                    <dependency>
                        <!-- to use camel generator -->
                        <groupId>fr.rmannibucau</groupId>
                        <artifactId>camel-loader</artifactId>
                        <version>0.0.1-SNAPSHOT</version>
                    </dependency>
                    <!-- route dependencies if needed -->
                </dependencies>
            </plugin>
...
You can find several discussions about the correct use of this plugin on Nabble, but here is just what I did so far: I created a link in our documentation wiki pointing to the URL containing our javadocs, including the path to the generated images, for example file://localhost/Users/nestor/eclipse-workspace-test/nestorurquiza-app/target/site/apidocs/com/nestorurquiza/orchestration/camel/route/doc-files/. Note that the doc-files directory is mandatory, otherwise the images will not be added to the final documentation site. You can certainly add any subdirectory after it if you like.

Here is the output for our sample route:

Consuming Plain Old HTTP Services

Just in case you are new to my posts: I have written before about how to reuse Talend jobs through the use of a nodejs server. The client invokes a shell command via a GET request to nodejs and the JSON response is then processed.

In User Story #1 above we test the retry policy by issuing a command like "ls /tmp/unlock", which returns an error in the JSON response because the file does not exist. After some retries we manually create the file with "touch /tmp/unlock" and watch Camel correctly continue without retrying any more.

Camel invokes the nodejs HTTP service using http://localhost:8088/?cmd=ls%20/tmp/unlock and needs to parse the error, if any, to decide whether to retry:
{"stdout":"","stderr":"ls: /tmp/unlock: No such file or directory\n","cmd":"ls /tmp/unlock"}
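The decision the ShellProcessor makes on such a response is simply "fail if stderr is non-empty". Here is a stdlib-only sketch of that check (the real code uses Jackson; the regex and class name below are just for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StderrCheck {
    // Extracts the "stderr" field from the JSON response (simple quoted
    // values only; a real parser like Jackson handles the general case).
    private static final Pattern STDERR = Pattern.compile("\"stderr\"\\s*:\\s*\"([^\"]*)\"");

    static boolean commandFailed(String json) {
        Matcher m = STDERR.matcher(json);
        return m.find() && !m.group(1).trim().isEmpty();
    }

    public static void main(String[] args) {
        // Failing response: the file does not exist yet
        System.out.println(commandFailed(
            "{\"stdout\":\"\",\"stderr\":\"ls: /tmp/unlock: No such file or directory\\n\",\"cmd\":\"ls /tmp/unlock\"}")); // true
        // Successful response after "touch /tmp/unlock"
        System.out.println(commandFailed(
            "{\"stdout\":\"/tmp/unlock\\n\",\"stderr\":\"\",\"cmd\":\"ls /tmp/unlock\"}")); // false
    }
}
```

Throwing an Exception when this check fails is what feeds Camel's redelivery policy.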

Learning Camel

  1. Check out the source code and search, especially in the JUnit tests, for uses of the different components
  2. Search the mailing list, which is available from Nabble

Notes on Quartz and scheduling

Quartz ships with a utility class, org.quartz.TriggerUtils, that provides information about the stored triggers. Clearly this is handy for understanding what is scheduled in the system. JWatch is a promising project that works with Quartz 2.x to provide access to Quartz scheduling.

Google uses an implementation of RFC 2445 (the iCalendar standard) for their Calendar application, most likely based on the open source project google-rfc-2445. Quartz still does not support the standard. We should vote for such an addition, as a standard would clearly allow cool front-end projects to report on Quartz jobs and integrate them with calendar widgets.

As we have seen, we have built, using a Spring Controller, some basic functionality to schedule and list cron triggers. However, its output is still a little bit cryptic, as you can see below. A translation into plain spoken language is something you will need to do on your own, or better, share with us:
Object
autoLogoutTimeout:900000
contextPath:""
debug:false
errors:Object
info:Object
isAdvancedSearchAvailable:false
is_authenticated:true
messages:Object
module:"CLIENT"
pageTitle:"LOCAL"
sid:"4F0B046EA65A4E3295EDC94E53E14BFE.nestorurquiza-app1"
triggersInJob:Object
job1:Array[1]
0:Object
calendarName:null
cronExpression:"00 50 14 * * ? *"
description:null
endTime:null
expressionSummary:"seconds: 0 minutes: 50 hours: 14 daysOfMonth: * months: * daysOfWeek: ? lastdayOfWeek: false nearestWeekday: false NthDayOfWeek: 0 lastdayOfMonth: false years: * "
finalFireTime:null
fireInstanceId:null
fullJobName:"DEFAULT.job1"
fullName:"group1.timer1"
group:"group1"
jobDataMap:Object
jobGroup:"DEFAULT"
jobName:"job1"
key:Object
first:"timer1"
group:"group1"
name:"timer1"
second:"group1"
misfireInstruction:0
name:"timer1"
nextFireTime:1337626200000
previousFireTime:null
priority:5
startTime:1337540312000
timeZone:"America/New_York"
triggerListenerNames:Array[0]
volatile:false
warnings:Object
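Such a translation could start as simply as the sketch below, which renders the every-day case of a Quartz cron expression like "00 50 14 * * ? *" in plain language (class and method names hypothetical; real expressions need a full parser):

```java
public class CronDescriber {
    // Describes a Quartz cron expression of the form "ss mm hh * * ? *"
    // (fixed time, every day). Anything else is reported as unsupported.
    static String describe(String quartzCron) {
        String[] f = quartzCron.trim().split("\\s+");
        if (f.length >= 6 && "*".equals(f[3]) && "*".equals(f[4])) {
            return String.format("every day at %02d:%02d:%02d",
                    Integer.parseInt(f[2]),   // hours
                    Integer.parseInt(f[1]),   // minutes
                    Integer.parseInt(f[0]));  // seconds
        }
        return "unsupported expression: " + quartzCron;
    }

    public static void main(String[] args) {
        // The cronExpression from the dump above
        System.out.println(describe("00 50 14 * * ? *")); // every day at 14:50:00
    }
}
```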

Wednesday, May 16, 2012

Excel Workbook Encryption from Java

UPDATE 2014/11/10: This is now possible for XML-based formats like XLSX, as per http://poi.apache.org/encryption.html, but if you want to support encryption in the old non-XML XLS format your only free option is the below. Bad news: neither POI nor JExcel API supports Excel Workbook password protection so far.

When you think about it, though, if you are trying to automate the generation of password-protected Excel files, you will need to store the password you use to encrypt the workbook somewhere (granted, you are supposed to store that encrypted as well). So a workaround is to store not only the password but also an empty Excel Workbook protected with that same password. POI does allow access to a password-protected file, so you can load the protected empty workbook and do all manipulations on it. The result is an Excel file holding the expected encrypted content.

On the other hand, POI supports write password protection, so if that is what you are looking for, this code will do the trick:
package com.nestorurquiza.utils;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

public class XlsUtil {
    public static void passwordProtect( InputStream is, OutputStream os, String password ) throws IOException {
        HSSFWorkbook targetWorkbook = new HSSFWorkbook(is);
        int numberOfSheets = targetWorkbook.getNumberOfSheets();
        for (int i = 0; i < numberOfSheets; i++) {
            HSSFSheet sheet = targetWorkbook.getSheetAt(i);
            sheet.protectSheet(password);
        }
        targetWorkbook.write(os);
    }
    
    public static void main(String[] args) {
        try {
            InputStream is = new FileInputStream("/Users/nestor/Downloads/unprotected.xls");
            OutputStream os = new FileOutputStream(new File("/Users/nestor/Downloads/protected.xls"));
            passwordProtect(is, os, "testPassword");
            is.close();
            os.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}

Monday, May 14, 2012

On BPM, Workflow, Documentation and BA work

I am currently working on a proof of concept to introduce Camel as an Orchestration Engine in our BHUB implementation. While reading through some Camel posts I came across a thread about using Camel for Workflow.

The issues presented by the first poster are not unique to Camel; they apply to any approach that externalizes workflow logic. In simple words, debugging gets more complicated. Once again you will notice how experienced developers recommend putting the logic (the infamous if/else) in Java beans anyway.

While I see a lot of value in Camel on the Orchestration and Integration side, I see using Camel for Workflow as a difficult task, to say the least. Again, this is not unique to Camel. Apache OFBiz has gone from Workflow engines (state machines) to Messaging (albeit not Camel, just straight seda) to resolve the "externalization dilemma".

I believe we keep searching for a silver bullet instead of building better development processes. The fact that documentation is on the right side of the Agile Manifesto does not mean it is unnecessary. I believe the real problem is that Business dictates what has to be done and the developers implement it; later, Business wants to know what was implemented, and reverse engineering does not sound right, of course. As a consequence, the development team figures that externalizing, all the way to expensive BPM implementations (which I have used, and in fact built one from scratch using SCXML in the past), should resolve the problem, just to realize that debugging is affected and agility is compromised.

How about maintaining plain-spoken-language User Stories (not to say just English, as Business clearly speaks more than English) that are kept up to date as Business introduces new necessities? Is this just telling us we need a Business Analyst (BA) in the middle?

Thursday, May 10, 2012

annotations are not supported in -source 1.3 (use -source 5 or higher to enable annotations)

Maven 2's compiler plugin defaults to -source 1.3, so it fails to compile projects that use newer language features such as annotations:
annotations are not supported in -source 1.3 
(use -source 5 or higher to enable annotations)
This is easily addressed when you own the pom files. For example, to ensure Java 1.6 code compiles, you would use:
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
However, sometimes you are struggling with projects built for Maven 3, where the default is not 1.3 and so the original author has not added the above bits to the project pom(s). Here is what you have to do to compile such projects from Maven 2:
$ mvn clean install -Dmaven.compiler.source=1.6 -Dmaven.compiler.target=1.6

Saturday, May 05, 2012

Enterprise Security and Penetration Testers - SECaaS

Security in the Enterprise is such a big issue that bad security in place could literally bankrupt a company. This is nothing new, but in my years as a technologist I have seen how superficially the Penetration Tester's duties are taken.

To be a good Pen Tester you need to be a savvy engineer in the particular technology you are trying to exploit; then you must have a passion for breaking bits; finally, you must stay on top of the latest news, sniffing around IRC channels and networking with ethical, and where possible even real, hackers.

Let us take the example of a buffer overflow attack: you will most likely have to be a C/assembly programmer. Then comes ARP poisoning and you need to be a networking engineer; for CSRF or XSS, being a web developer is a must; SQL injection: DBA; viruses: helpdesk; and the list is too long to continue.

You can mitigate the risk using some tools, following best practices and so on, but if you are serious about security you know you must provide a deep analysis of your infrastructure and architecture implementation, deploy agents for monitoring, do penetration testing and a lot more. The tests need to go through all layers of your application stack and network infrastructure, and even through real people working as employees (to prevent social engineering). Then you should review your Penetration Testing Plan at least once a year. Security is not a second-class citizen; it is as important as your Disaster Recovery strategy: did you test absolutely all your backups?

This explains why so many companies nowadays offer these services. It is a hot market and will continue to be in the near future.

The Security Budget will be consumed either defending or facing the results of malicious attacks.

I believe Security as a Service (SECaaS) might be the answer for some companies with restricted budgets, and I can see some companies offering cloud services today moving into SECaaS in the near future.
