Tuesday, August 31, 2010

Populate MySQL Database with sample Data

How many times have you found yourself creating records from scratch locally just to get to the point in your program where you can actually debug what is going on?

If the whole team shares sample data in a populateDB.sql script, any developer can run it at any time.

The main problem we face is with foreign keys: we need to reference them, but we don't know their ids in advance.

Below is an unobtrusive MySQL script that creates a new employee referencing an existing office and a newly created department. Hopefully this is enough to get any data populated thanks to @variables and the last_insert_id() function.

You can pick any existing id:
SELECT @office_id:=id FROM office LIMIT 1;

You can insert a record and keep the id:
INSERT INTO `department` (name) VALUES ('Engineering');
SET @department_id = last_insert_id();

Then use the variables to insert the new record:
INSERT INTO `employee` (email, first_name, department_id, office_id) VALUES ('test@sample.com','Nestor', @department_id, @office_id);
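Putting the pieces together, a shared populateDB.sql could look like this minimal sketch (the table and column names are the examples used above):

```sql
-- populateDB.sql: reuse an existing office id
SELECT @office_id:=id FROM office LIMIT 1;

-- create a department and capture its generated id
INSERT INTO `department` (name) VALUES ('Engineering');
SET @department_id = last_insert_id();

-- reference both when creating the employee
INSERT INTO `employee` (email, first_name, department_id, office_id)
VALUES ('test@sample.com', 'Nestor', @department_id, @office_id);
```

Any developer can then load it with the mysql client against a local database.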

Sunday, August 29, 2010

LDAP with ApacheDS for Authentication

Regardless of which security options you are using, LDAP is the place to store user groups and credentials. I am not going to explain why, as the Web is full of explanations, but I will show here how to get ApacheDS working so you can start using LDAP for authentication purposes.

If you are doing Web Programming in Java I recommend Spring Security. I am providing an example of such integration.

1. Download and install ApacheDS (v1.5.7) and Apache Directory Studio (v1.5.3).

2. The DS server should start automatically. Telnet to localhost port 10389 (the default ApacheDS port) to test that it is running. To start the server in case it is not running:
OSX$ sudo launchctl load /Library/LaunchDaemons/org.apache.directory.server.plist
OSX$ sudo launchctl start org.apache.directory.server
LINUX$ sudo /etc/init.d/apacheds-1.5.7-default start
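The telnet check in step 2 can also be scripted; here is a minimal sketch using bash's built-in /dev/tcp redirection (no telnet required; the default port 10389 is assumed):

```shell
#!/bin/bash
# Succeeds only if something accepts a TCP connection on the default ApacheDS port
if (exec 3<>/dev/tcp/localhost/10389) 2>/dev/null; then
    echo "ApacheDS is listening on 10389"
else
    echo "Nothing is listening on 10389"
fi
```

Note that /dev/tcp is a bash feature, so run this with bash rather than a plain POSIX sh.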

3. To stop the server:
OSX$ sudo launchctl stop org.apache.directory.server
OSX$ sudo launchctl unload /Library/LaunchDaemons/org.apache.directory.server.plist
LINUX$ sudo /etc/init.d/apacheds-1.5.7-default stop

4. Download the ldif sample file from http://directory.apache.org/apacheds/1.5/15-about-the-sample-configurations-and-sample-directory-data.data/apache_ds_tutorial.ldif

5. Open Apache Directory Studio and select File | New | LDAP Browser | LDAP Connection. Name it "localhost", hostname=localhost, port=10389, use No Encryption. Hit "Check Network Parameter" and be sure the connection to the server is successful. Use bind user="uid=admin,ou=system" and bind password="secret" (the defaults). Hit "Check Authentication" and be sure it is successful.

6. Stop the server and add a new partition to server.xml:
OSX$ sudo vi /usr/local/apacheds-1.5.7/instances/default/conf/server.xml
LINUX$ sudo vi /var/lib/apacheds-1.5.7/default/conf/server.xml
...
<jdbmPartition id="sevenSeas" suffix="o=sevenSeas" />
</partitions>
...
7. You might have to close the localhost connection and open it again from Apache Directory Studio.

8. Assign a root entry to the partition. Right click on the DIT in the left panel and select "New Entry | Next | Select domain | RDN: o=sevenSeas". Pick "sevenSeas" for the "o" property.

9. Right click on the “o=sevenSeas” entry and import the file apache_ds_tutorial.ldif

10. Now you have “ou=groups” and “ou=people” below “o=sevenSeas”

11. Create a new user: right click "ou=people" | Add a New Entry from scratch | select "objectclass: inetOrgPerson" | Next | add RDN: cn=admin. You will need to provide the "sn" attribute (surname) as it is mandatory; set it to "admin", for example. In addition, set "userPassword" and "mail".

12. Create a new group: right click "ou=groups" | Add a New Entry from scratch | select "objectclass: groupOfUniqueNames" | Next | add RDN: cn=admin. At least one member must be placed inside the group; use "cn=admin,ou=people,o=sevenSeas".

13. Create a second user called "test" and make it a member of a new group called "user".
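Step 13 can equally be done by importing a small ldif fragment instead of clicking through the GUI (a sketch; the password and mail values are examples I chose here):

```ldif
dn: cn=test,ou=people,o=sevenSeas
objectclass: inetOrgPerson
objectclass: organizationalPerson
objectclass: person
objectclass: top
cn: test
sn: test
mail: test@sample.com
userPassword: secret

dn: cn=user,ou=groups,o=sevenSeas
objectclass: groupOfUniqueNames
objectclass: top
cn: user
uniquemember: cn=test,ou=people,o=sevenSeas
```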

14. Test your application. Below is a snippet of the Spring Security XML configuration:
<beans:bean id="customUserDetailsService"
    class="com.nestorurquiza.security.DummyForTokenBasedRememberMeServicesUserDetailsService">
</beans:bean>
<beans:bean id="customUserDetailsContextMapper"
    class="com.nestorurquiza.security.LdapUserDetailsContextMapper">
</beans:bean>

<ldap-server url="ldap://localhost:10389" manager-dn="uid=admin,ou=system"
    manager-password="secret" root="o=sevenSeas" />
<authentication-manager>
    <ldap-authentication-provider
        user-search-filter="mail={0}"
        user-search-base="ou=people,o=sevenSeas"
        user-context-mapper-ref="customUserDetailsContextMapper"
        group-search-base="ou=groups,o=sevenSeas" />
</authentication-manager>

15. Of course you will want to create your own "ou=groups" and "ou=people". You can do that from Active Directory as I already posted, or you can create the ldif file yourself (remember, it is just plain text!). Alternatively you can add groups and users manually through the GUI. Creating an ldif file is the easiest way:
dn: ou=people,o=MyCompany
objectclass: organizationalUnit
objectclass: top
description: User entries
ou: people

dn: ou=groups,o=MyCompany
objectclass: organizationalUnit
objectclass: top
description: User Group Entries
ou: groups

dn: CN=Nestor Urquiza,ou=people,o=MyCompany
sn: Urquiza
givenName: Nestor
mail: nurquiza@mycompany.com
uid: nurquiza
userPassword:
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: top

dn: CN=Tom Cat,ou=people,o=MyCompany
sn: Cat
givenName: Tom
mail: tcat@mycompany.com
uid: tcat
userPassword:
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: top

dn: cn=admin,ou=groups,o=MyCompany
description: Super User
objectclass: groupOfUniqueNames
objectclass: top
cn: admin
uniquemember: cn=Nestor Urquiza,ou=people,o=MyCompany

dn: cn=user,ou=groups,o=MyCompany
description: Regular User
objectclass: groupOfUniqueNames
objectclass: top
cn: user
uniquemember: cn=Nestor Urquiza,ou=people,o=MyCompany
uniquemember: cn=Tom Cat,ou=people,o=MyCompany

16. Secure ApacheDS. Besides using SSL, do not forget to disable anonymous access:
<defaultDirectoryService
    ...
    allowAnonymousAccess="false"
    ...>

Friday, August 27, 2010

LDAP import: From Microsoft Active Directory to ApacheDS

Migrating from Active Directory to ApacheDS is a question that comes up in forums every so often.

As the ldif format (ldf extension in the MS world) is plain text, you can use any tool that does search and replace. This is where Unix power tools come, as so often, to your rescue.

My original file came with fields I wanted to delete, others I wanted to rename, and even some missing fields. All of that can be done with sed (the stream editor).

Here is a fragment of the original Financing.ldf file:
dn: CN=Peter Pan,OU=Financing,DC=Sample,DC=com
changetype: add
sn: Pan
givenName: Peter
proxyAddresses: smtp:ppan@nl.com
proxyAddresses: X400:c=US;a= ;p=Sample;o=NL;s=NS;
proxyAddresses: smtp:peter.pan@sample.COM
proxyAddresses: smtp:pp@SAMPLE.COM
proxyAddresses: MS:SAMPLE/NL/NS
proxyAddresses: CCMAIL:PP at NL
proxyAddresses: SMTP:ppan@sample.com
sAMAccountName: ppan

Here is a fragment of the needed output format:
dn: CN=Peter Pan,ou=people,o=nl
sn: Pan
givenName: Peter
mail: ppan@sample.com
uid: ppan
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
objectclass: top

Here is the command-line statement that makes it happen. Please keep the backslash-newline sequences intact, as they are responsible for adding the new lines you need.

cat Financing.ldf | sed 's/OU=Financing,DC=Sample,DC=com/ou=people,o=nl/' | sed '/changetype.*/d' | sed '/proxyAddresses:.[^S].*/d' | sed 's/proxyAddresses:.SMTP:/mail: /' | sed 's/sAMAccountName/uid/' | sed 's/\(uid:.*\)/\1\
userPassword: \
objectclass: person\
objectclass: organizationalPerson\
objectclass: inetOrgPerson\
objectclass: top\
/g' > financing.ldif

Of course we are creating empty passwords here.
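To sanity-check the substitutions before running them on the real export, you can feed individual pipeline stages a sample line (values taken from the fragment above):

```shell
# Verify two of the rewrites on sample input lines
printf 'sAMAccountName: ppan\n' | sed 's/sAMAccountName/uid/'
# -> uid: ppan
printf 'proxyAddresses: SMTP:ppan@sample.com\n' | sed 's/proxyAddresses:.SMTP:/mail: /'
# -> mail: ppan@sample.com
```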

Wednesday, August 25, 2010

JPA JSR303 Spring Form Validation

In an ideal world you would declare your validation rules just once and reuse them from JavaScript in forms, from Java when binding your form, and of course when persisting to the database. Spring validators and JSR 303 address the second, and JPA the third. The first can be achieved with jQuery. But the question remains: "How can I declare my validations just once?"

JPA 2.0 supports JSR 303 annotations.

You can build your own annotations for custom validators. In addition you could use Spring Validators and a little bit of reflection:

import java.lang.annotation.Annotation;
import java.lang.reflect.Field;

import javax.persistence.Column;

import org.springframework.validation.Errors;
import org.springframework.validation.Validator;

public class MyClassValidator implements Validator {
 
    @Override
    public boolean supports(Class<?> clazz) {
        return MyClass.class.isAssignableFrom(clazz);
    }
 
    @Override
    public void validate(Object target, Errors errors) {
         
        Field[] fields = target.getClass().getDeclaredFields();
        for(Field field : fields) {
            boolean accessible = field.isAccessible();
            try {
                if(! accessible) {
                    field.setAccessible(true);
                }
                Annotation[] annotations = field.getAnnotations();
                for(Annotation annotation : annotations) {
                    //Avoiding cyclic references for @CustomTransient 
                    //@Column(nullable = false, length = 50)
                    if( annotation instanceof Column) {
                        Column column = (Column) annotation;
                         
                        if( String.class == field.getType() ){
                            String fieldValue = (String) field.get(target);
                            if( !column.nullable() && Utils.isEmpty(fieldValue) ) {
                                rejectValue(errors, field.getName(), "cannotBeNull");
                            } else if(fieldValue != null && fieldValue.length() > column.length()){
                                rejectValue(errors, field.getName(), "lengthMustBeLessThan", new Object[] {column.length()});
                            }
                        }
                    }
                }
            } catch (SecurityException e) {
                //Not interested in these cases
            } catch (IllegalArgumentException e) {
                e.printStackTrace();
            } catch (IllegalAccessException e) {
                e.printStackTrace();
            } finally {
                if(! accessible) {
                    field.setAccessible(false);
                }
            }
        }
    }
     
    private void rejectValue(Errors errors, String fieldName, String code) {
        rejectValue(errors, fieldName, code, null);
    }
  
    private void rejectValue(Errors errors, String fieldName, String code, Object[] params) {
        errors.rejectValue(fieldName, code, params, "?" + code + "?");
    }
}
The annotations can then be used from a taglib to provide in the front end the basic validations already available in the back end. That is perhaps the subject for another post.

Wednesday, August 18, 2010

Writing Agile Specifications

It all started with diagrams that very soon filled up with lots of wording, then just plain-English use cases, and finally a combination of wireframe diagrams and user stories.

Now we can deliver while happily checking things off the list on a daily basis.

It was all written years ago, but still so many of us were trying to get around the inevitable: you must INVEST in good user stories. No need to find culprits; let's get agile right from the beginning: the "specs".

Don’t get too extreme though. Nobody said you cannot extend the written portion a little bit so a whole use case makes sense when joining the different scenarios.

1. Do it your way: XP, Scrum or a combination, but make sure stakeholders know what you will be delivering. Make them participate in the creation of scenarios (user stories) and wireframes. Try (why not?) making them comfortable with the tools (simple tools, keep on reading) you use. You might be surprised to find that some of them can even provide a whole specification that is actually testable and implementable! If you have a Business Analyst (BA) then you are in luck; without any doubt s/he will need to become your "story teller" (quoting here the guy behind http://masterstoryteller.co.uk) and your "mock-up drawer".

2. Keep it as short as possible and do the job incrementally, but without losing the most important features, so the software can be used right from the first release in production.

3. Make sure the developers agree the scenarios are feasible and commit to getting them done in a reasonable amount of time. Without the right crew there is no agility possible.

Writing the Specs

1. Use a rich editor where you can put images on the left and text on the right. Google Documents is perfect for sharing, versioning, and... well, it is Google, you know. I just use a two-column table. Google allows you to insert the picture from the local file system, from a URL, or from a web search.

2. Draw a flat wireframe using paper and pencil. Yes just use an eraser for corrections ;-) I commonly use copies of a master template which contains things that are fixed for most of the site like the basic footer, content and header layout.

3. Take a picture of it and put it in the left column. I commonly use an Android phone and upload the picture directly to Picasa. From Google Documents you can import its URL as I said. And yes, you can resize it to fill half of your landscape page, or zoom in for more detail, as Google saves the whole raster image and not just the resized result.

4. Write your user story. Yes, try to make it short, but not shorter than needed! (Modifying Einstein's words here ;-) The most important part is that what you write, you can test. In a well-written user story you get the information a developer needs to provide behavioral/functional/integration tests, and at the same time what you as a BA need to confirm the delivered product meets the business requirements.

Below is a screen shot taken from Google Docs.

Monday, August 16, 2010

Improving user experience: Return Url

You are filling out a web form and suddenly you face a problem: you cannot find the option you need in the provided list. You soon realize you can add a new value there pretty easily, and you jump into a new UI screen. You save your new value and... you get lost when you realize your initial form has to be filled out again. Simply put: state is gone.

Here is a suggestion:
1. Use jQuery to build a returnUrl that includes the current state of the form, and provide it as a parameter when you navigate out of the current form. Here is an example that does that, assuming you assign to your "Add" anchor a class named "addAndReturn":
$(document).ready(function() {
    //
    // Change the link to include the returnUrl
    //
    $('.addAndReturn').click(function() {
        var form = $(this).parents('form:first');
        var queryString = form.serialize();
        // getting rid of the submission flag, in this case hidden field "submitted"
        queryString = queryString.replace(/submitted=[^&]*/, '');
        var returnUrl = window.location.pathname + '?' + queryString;
        $(this).attr("href", $(this).attr("href") + '?returnUrl=' + $.URLEncode(returnUrl));
        return true;
    });
});
2. Include a hidden field in the "Add" page. Showing here some JSTL (assuming JSP is used)
<input type="hidden" name="returnUrl" value="${param.returnUrl}">
3. Modify your controller's "add" action to look for the existence of "returnUrl". Change the relevant id in the returnUrl:
String replaceParamValueInUrl(String url, String param, String newValue) {
    if (url == null || Utils.isEmpty(param)) {
        return url;
    } else {
        return url.replaceFirst(param + "=[^&]*", param + "=" + newValue);
    }
}
4. Redirect to returnUrl.

Users will appreciate this.

Using POST

I argue all the time about allowing GET in forms. Let us not get into that argument, but rather discuss the alternative when using POST.

To use POST you will need the server session. However, session handling might get tedious, as you really want to make sure you clean the returnUrl parameter from the session. If you are using Spring MVC this is the time for you to look into flash attributes. You will need to pull the parameter value from the getParameter() method (make sure it is sanitized). Finally, you might want to force a POST when clicking the link, for which a jQuery plugin like jquery.postlink.js can help:
//
    //Change the link to include the returnUrl
    //
    $('.addAndReturn').click(function() {
        var form = $(this).parents('form:first');
        var queryString = form.serialize();
        //getting rid of the submission flag, in this case hidden field "submitted"
        queryString = queryString.replace(/submitted=[^&]*/, '');
        var returnUrl = window.location.pathname + '?' + queryString;
        $(this).attr("href", $(this).attr("href") + '&returnUrl=' + $.URLEncode(returnUrl));
        return true;
    });
    
    //
    // Change the link to perform a POST when clicked
    //
    $('.addAndReturn').postlink();

Saturday, August 14, 2010

JSON and cyclical references

JSON is a *lightweight* data exchange format that does not handle cyclical references between objects. So if you are using, for example, ORM techniques and you really need to serialize to JSON, then depending on the Java API you use you will find an error message like the one below (this one comes from the json-lib API):
net.sf.json.JSONException: There is a cycle in the hierarchy!
at net.sf.json.util.CycleDetectionStrategy$StrictCycleDetectionStrategy.handleRepeatedReferenceAsObject(CycleDetectionStrategy.java:97)

The solution for the above issue is to mark the offending field as transient. The problem is that you do not want to mark it as transient for just everything. So use annotations.

1. Declare the annotation type. RUNTIME retention is required so it can be seen via reflection at serialization time:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface CustomTransient {}
2. Mark your field as transient for your custom serialization
...
@CustomTransient
private Employee[] employees;
...

3. From your library, identify the callback that implements property filtering. Below is an example for json-lib:
jsonConfig = new JsonConfig();
jsonConfig.setJsonPropertyFilter(new PropertyFilter() {
    public boolean apply(Object source, String name, Object value) {
        try {
            Field field = source.getClass().getDeclaredField(name);
            Annotation[] annotations = field.getAnnotations();
            for (Annotation annotation : annotations) {
                if (annotation instanceof CustomTransient) {
                    return true;
                }
            }
        } catch (SecurityException e) {
            //Not interested in these cases
            e.printStackTrace();
        } catch (NoSuchFieldException e) {
            //Not interested in these cases
        }
        return false;
    }
});

Friday, August 13, 2010

MySQL Java Driven Metadata changes

When changes are made to an existing project very often database metadata needs to be changed as well.

Metadata migration scripts are useful SQL bits that are run to ensure a new version of the program will work as expected.

Data migration scripts are needed to prepopulate new or existing tables with predefined values.

Rollback scripts are SQL bits to be run in case the whole deployment goes wrong. They commonly affect metadata in a reverse way when compared to Migration scripts. If database changes are actually backward compatible (they do not break previous deployed program functionality) then there is no need for sql bits inside the rollback script, but still there is a rollback script which happens to do nothing.

The whole purpose of this post is to document what should be done but it is also a starting point to provide some kind of automation in the future.

At the time of this writing, reverse and forward database engineering in MySQL can be done using MySQL Workbench; however, you need to purchase the Standard Edition of the program (these features are disabled or absent in the Community Edition). Still, mysqldump comes to your rescue, and as it is a command-line tool the chances for automation are good.

Here are instructions to follow to provide migration/rollback scripts when you use JPA:

1. Create the tables from Java. You will probably need to uncomment the code below in the test persistence.xml and run the unit tests. In a real-world scenario you will need to comment and uncomment several files, as you might be using, for example, an in-memory database like HSQL for your JUnit tests. After the tables are created you should comment the property out again to avoid undesired data wipe-outs during your tests.

#vi src/test/resources/META-INF/persistence.xml
<!--<property name="hibernate.hbm2ddl.auto" value="create" />-->

2. Create variable DB_BASE and DB
DB_BASE=myDB
DB=$DB_BASE.sql

3. Get a local copy of the previously released metadata script. You do that by examining the "svn log" output and downloading directly from the tag, taking the revision number from there:
svn log http://subversion.sample.com/my-lib/tags/my-lib-1.0.0/src/main/resources/db/$DB
------------------------------------------------------------------------
r8509 | deploymentUser | 2010-08-11 14:42:40 -0400 (Wed, 11 Aug 2010) | 1 line

[maven-release-plugin] copy for tag my-lib.1.0.0
------------------------------------------------------------------------

4. Create the DB_VERSION and RELEASE variables
DB_VERSION=r8509
RELEASE=my-lib.1.0.0

5. Download the latest released metadata. Your local file will be something like r8509.myDB.sql.
svn export http://subversion.sample.com/my-lib/tags/my-lib-1.0.0/src/main/resources/db/$DB@$DB_VERSION $DB_VERSION.$DB

6. Get the current metadata
mysqldump --no-data -u root -proot $DB_BASE > $DB

7. compare the files:
diff $DB_VERSION.$DB $DB

8. Use the results to create migration_metadata.sql which will contain the sql bits needed to get from $DB_VERSION.$DB to $DB
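As a purely hypothetical illustration (the table and column are invented for this example), if the diff showed that the employee table gained a middle_name column, the migration and rollback scripts might contain:

```sql
-- migration_metadata.sql (hypothetical example)
ALTER TABLE employee ADD COLUMN middle_name VARCHAR(50) NULL;

-- rollback.sql (reverses the migration)
ALTER TABLE employee DROP COLUMN middle_name;
```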

9. Create migration_data.sql containing all the new data you want to push into the database. Again, even if there is nothing to include, make sure the file exists with empty content. Additionally, be sure you create data.sql, which is the equivalent of $DB but just to populate data.

10. Locally test that everything goes fine when creating the DB by hand using the previously released $DB and then applying migration_metadata.sql plus migration_data.sql.

10.1. Recreate local DB
mysql> drop database myDB;
mysql> create database myDB;

10.2. Be sure your variables are set. In our case:
$ DB_VERSION=r8509
$ DB_BASE=myDB
$ DB=$DB_BASE.sql

10.3. Get the tagged sql script
#svn export http://subversion.sample.com/my-lib/tags/my-lib-1.0.0/src/main/resources/db/$DB@$DB_VERSION $DB_VERSION.$DB

10.4. Run the below to test the first three scripts:
$ mysql -uroot -proot myDB < r8509.myDB.sql
$ mysql -u root -proot myDB < myDB_migration_metadata.sql
$ mysql -u root -proot myDB < myDB_data.sql

10.5. Make a change to data that will be affected by the migration_data.sql script
mysql> use myDB
mysql> select * from office;
mysql> delete from office where name = 'Miami';

10.6. Run the fourth script to verify it inserts back the deleted record(s)
$ mysql -u root -proot myDB < myDB_migration_data.sql
mysql> select * from office;

11. Include migration*.sql, data.sql, $DB and rollback.sql in your resources/db folder so they get tagged with the next release. If there is no need for rollback.sql / migration_data.sql, still create/update them with blank content.

12. Optional: you can use MySQL Workbench to graphically compare the metadata from the previous and current release, or in fact between any releases. This helps to check that tables, fields and relationships are generated as expected.

13. Open MySQL Workbench 5.2.26 or above and select “Model | Create a diagram out from Catalog Objects”. Organize your tables and print to pdf using the naming convention $RELEASE.$DB_VERSION.pdf. Publish them on a WIKI location so the whole team can see the underlying model.

14. Release the application.

15. Deploy the application in Staging: Run migration*.sql scripts. Test.

16. As above for production.

Wednesday, August 11, 2010

Email in Ubuntu Mediawiki

I had to spend some time today configuring email in an Ubuntu Mediawiki installation. Below are the commands and configurations that made this task a success:

#sudo pear upgrade --force http://pear.php.net/get/PEAR
#sudo pear install mail
#sudo pear install mail_mime
#sudo pear install Net_Socket
#sudo pear install Auth_SASL
#sudo pear install Net_SMTP
#sudo vi  /var/www/wiki/LocalSettings.php
...
#to be sure we can debug
error_reporting(E_ALL);
...
#point to the location of Mail.php which in this case was /usr/share/php/
ini_set( "include_path", ".:$IP:$IP/includes:$IP/languages:/usr/share/php/" );
..
#some mandatory settings
$wgEnableEmail      = true;
$wgEnableUserEmail  = true;
...
#SMTP settings. Yours might be different of course
$wgSMTP = array(
'host'     => "smtp.sample.com",
'IDHost'   => "sample.com",
'port'     => 25,
'auth'     => false
);
...
$wgEmergencyContact = "support@sample.com";
$wgNoReplyAddress = "no-reply@sample.com";
$wgPasswordSender = $wgNoReplyAddress;
...
$wgEnotifUserTalk      = true; # UPO
$wgEnotifWatchlist     = true; # UPO
$wgEmailAuthentication = true;
$wgEnotifMinorEdits = true;


Note that in order to get email notifications for pages you watch you must authorize your email and check the box "E-mail me when a page on my watchlist is changed". Just look for "E-mail confirmation" in your Preferences page.

Release and Deploy using Maven SVN Artifactory and Tomcat

Releasing and deploying a Java Web application (WAR application) involves several steps. I have been using Maven and Subversion for this task with very good results.

Preconditions
You need the subversion and maven2 (version 2.2.1 or above) packages installed (for the release server I use Linux as the OS).

IMO it is *not* good to release from your local machine if you are part of a team. I have released from my local machine from time to time, but I have found it way safer to always do it from the same server, where other developers can log in and follow exactly the same steps.

Preparing the environment

1. Modify the project pom.xml to specify the version number. It must end with the string “-SNAPSHOT”.
<version>0.0.1-SNAPSHOT</version>

2. Modify the project pom.xml to specify where to tag the code (svn tags folder).
...
<build>
...
<plugins>
<plugin>
<artifactId>maven-release-plugin</artifactId>
<version>2.0-beta-7</version>
<configuration>
<tagBase>
http://svn.sample.com/project/tags
</tagBase>
</configuration>
</plugin>

3. Modify the project pom.xml to specify where to upload the result of the build (final released product). Here is an example for Artifactory:
<distributionManagement>
<repository>
<id>central</id>
<name>libs-releases-local</name>
<url>
http://sample.com/artifactory/libs-releases-local
</url>
<uniqueVersion>false</uniqueVersion>
</repository>
</distributionManagement>

I have used also Archiva:
<distributionManagement>
<repository>
<id>archiva.internal</id>
<name>Internal Release Repository</name>
<url>
dav:http://archiva.sample.com/archiva/repository/internal/
</url>
<uniqueVersion>false</uniqueVersion>
</repository>
</distributionManagement>


4. Just in case you use Windows or Mac from the machine you will be releasing do yourself a favor and add the below to you pom.xml. This way the build will be using 'UTF-8' encoding to copy filtered resources.
<properties>
...
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
...
</properties>

5. Add the scm connection property to the pom.xml
<scm>
<connection>scm:svn:http://svn.sample.com/trunk/</connection>
</scm>

6. Edit your ~/.m2/settings.xml file, making sure the credentials for your Artifactory repository are correct. Also be sure to use the mirrors node to ensure you download dependencies only from your local repository. Here is an example:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
http://maven.apache.org/xsd/settings-1.0.0.xsd">
<mirrors>
<mirror>
<id>central</id>
<name>My Central Maven repo</name>
<url>http://sample.com/artifactory/libs-releases</url>
<mirrorOf>*</mirrorOf>
</mirror>
</mirrors>
<servers>
<server>
<id>central</id>
<username>myuser</username>
<password>mypassword</password>
</server>
</servers>
</settings>

Releasing

8. Check out the project (svn co). If it is already checked out, be sure there are no differences:
svn status -u

9. Prepare the release. The default options should be OK, as they ensure the pom.xml gets updated to the next version number. Of course, when there is a business decision to change the minor or major version numbers, you simply edit the pom.xml and commit the code to SVN before attempting to run release:prepare. Pay attention to dependencies: Maven will complain if you try to use snapshots, but if you insist it will allow you to point to them. So be sure you update your own jar dependencies, if any, forcing the project to use a release version. When you are done the project should be tagged, which you can confirm by navigating to the tagBase URL.
mvn release:prepare

10. Perform the release. This will download the project from the tag, build it and commit the final release to the artifactory repo.
mvn release:perform

Deploying

I have posted a question to the tomcat maven mojo plugin list asking for help on getting this process done the easiest possible way (using a maven plugin)

While also posting to the cargo-user mailing list (starting from the above link you can see the history) I have come to the conclusion that environments can get really different: clustering, sticky-session configurations for load balancing, symlinks to apply DRY, and so on. I still think bash (or Perl, Python, or any other language of your preference ;-) offers the flexibility to use Maven's power in literally any environment.

WAR deployment

If you rebuild to deploy even when only static resources change (CSS, JS, images, etc.), then you will be fine with just downloading the released WAR file from your Artifactory server, renaming it, and moving it to the Tomcat webapps folder. Below is a bash script that does exactly that:
TOMCAT_WEBAPP=/home/liferay/liferay-portal-5.2.3/tomcat-6.0.18/webapps
if [ $# -lt 1 ]
then
    echo "Usage - $0 warFileUrl"
    echo " Where warFileUrl is the WAR archive URL"
    exit 1
fi

#strip the path and the version number from the URL
URL=$1
LOCAL_FILE=${URL##*/}
NAME=${LOCAL_FILE%-*}
EXT=${LOCAL_FILE##*.}
DEST_FILE=$TOMCAT_WEBAPP/$NAME.$EXT

#delete the local file if present
if [ -e "$LOCAL_FILE" ]
then
    rm "$LOCAL_FILE"
fi

#download the file
wget "$URL"

#tomcat deploy
mv "$LOCAL_FILE" "$DEST_FILE"

Just invoke it as
deploy.sh http://sample.com/artifactory/libs-releases/com/sample/my-app-0.0.1.war
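The parameter expansions in the script can be traced with the sample URL above:

```shell
URL=http://sample.com/artifactory/libs-releases/com/sample/my-app-0.0.1.war
LOCAL_FILE=${URL##*/}   # strips the path: my-app-0.0.1.war
NAME=${LOCAL_FILE%-*}   # strips the trailing -version: my-app
EXT=${LOCAL_FILE##*.}   # keeps the extension: war
echo "$NAME.$EXT"       # my-app.war
```

So the WAR lands in webapps as my-app.war regardless of the released version number.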

Snapshot deployment

While this is something that can be done from a Continuum integration server, you might find business analysts trying to deploy different branches of your project from time to time. It is then a good idea to offer the possibility of deploying directly from an SVN URL. In that case the script above needs to be modified to allow a parameter with the SVN project URL (using type=warSvn or explodedSvn, for example). The script should then check out from the repository, build locally, and deploy to the server. That is exactly what I have done in a modified version of the script that I will be maintaining at my Google repo.

You can use tomcat-deploy so far for:
1. Deploy a WAR file from an artifactory, archiva or any other maven repo:
./tomcat-deploy.sh http://artifactory.sample.com/libs-releases-local/com/sample/my-app/1.1.5/my-app-1.1.5.war warRepo

2. Deploy a WAR file built from SVN. As the repository might be organized in different ways an extra parameter “projectName” must be provided.
./tomcat-deploy.sh http://subversion.sample.com/my-app/trunk/ warSvn my-app

3. Deploy a JAR file from SVN. In case you are wondering why you would need this, the answer is to be able to deploy snapshots that depend on snapshots when you are not publishing them into a maven repo. To be honest, rather than trusting published snapshots you are better off building them yourself at deployment time.
./tomcat-deploy.sh http://subversion.sample.com/my-lib/trunk/ jarSvn my-lib

Exploded Deployment

Exploded deployment has a unique advantage: look-and-feel changes can be pushed to the server without restarting the whole instance. Ideally your designers are forced to commit to SVN before they can actually see their changes, and that is precisely the motivation for extending the script above so it checks for a “type” of “explodedRepo” or “explodedSvn”. Basically this is a version of the above where the WAR is unzipped and then copied into tomcat's webapps folder. As you can imagine that is just a few extra lines of code ...

Front End Deployment

So you are using exploded deployment for something, right? The next step is to allow passing a type parameter of “frontend”, in which case the URL is actually an SVN URL which must end with the string “webapp”. The new content of the downloaded folder is then copied to tomcat/webapps/appname (using rsync of course). Again, a few extra lines of code ...

Thursday, August 05, 2010

Authentication request failed AuthenticationServiceException LDAP error code 32 NO_OBJECT

I spent a considerable amount of time today trying to get Spring Security with LDAP (Active Directory) working. The error below was showing up even though the Active Directory server was correctly authenticating the user.

2010-08-05 15:48:44,898 DEBUG [org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter] - <Authentication request failed: org.springframework.security.authentication.AuthenticationServiceException: [LDAP: error code 32 - 0000208D: NameErr: DSID-031001A8, problem 2001 (NO_OBJECT)

Looking deeper in the logs I saw a TRACE (not even an INFO level message):
Not granted any authorities

But I was a member of several groups! I added group-search-base with the same content as user-search-base (same root) to ldap-authentication-provider, and then I got authenticated and got my roles (Spring authorities, or in this case Active Directory groups) back from the server.

Below are my final settings (with sensitive values redacted):

<ldap-server url="ldap://domain.com:port" manager-dn="***"
    manager-password="***" root="OU=***,dc=***,dc=***" />

<authentication-manager>
  <ldap-authentication-provider
      user-search-filter="mail={0}"
      user-search-base="OU=***,dc=***,dc=***"
      user-context-mapper-ref="customUserDetailsContextMapper"
      group-search-base="OU=***,dc=***,dc=***"
      group-search-filter="***" />
</authentication-manager>
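To troubleshoot this outside of Spring, the same group search can be tried with the OpenLDAP ldapsearch client. This is a redacted command template, not runnable as-is: the *** values are the same placeholders as in the settings above, and the (member=...) filter shape is my assumption about a typical Active Directory group lookup:

```shell
# Bind as the manager DN and run the group search Spring would perform
# for a given user DN; you should see your group entries come back.
ldapsearch -H ldap://domain.com:port \
  -D "***" -w "***" \
  -b "OU=***,dc=***,dc=***" \
  "(member=CN=someuser,OU=***,dc=***,dc=***)"
```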

Tuesday, August 03, 2010

OSX Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

After installing MySQL from mysql-5.5.5-m3-osx10.6-x86_64.dmg I got:

Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

I did all I could to get MySQL to work: I reinstalled MySQL from different binaries, used MacPorts, changed permissions, edited configuration files and so on. Reinstalling OSX is not an option for me at the moment, so from now on I will need to start MySQL using the command below:

sudo /usr/local/mysql/bin/mysqld_safe &
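Since mysqld_safe returns immediately while the server is still coming up, a script that needs the database can wait for the socket file to appear first. This helper is just an illustration, not part of the original setup:

```shell
# Poll for a unix socket so follow-up commands don't race the server start.
wait_for_socket() {
  sock=$1
  tries=${2:-10}
  while [ "$tries" -gt 0 ]; do
    [ -S "$sock" ] && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# e.g.: sudo /usr/local/mysql/bin/mysqld_safe &
#       wait_for_socket /tmp/mysql.sock && mysql -u root
```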

In case you need to reinstall MySQL because the above does not work, run the commands below first to remove the old installation:
$ sudo rm -fR /usr/local/mysql*
$ sudo rm -rf /Library/StartupItems/MySQLCOM
$ sudo rm -rf /Library/PreferencePanes/My*
$ rm -rf ~/Library/PreferencePanes/My*
$ sudo rm -rf /Library/Receipts/mysql*
$ sudo rm -rf /Library/Receipts/MySQL*
$ sudo rm -rf /var/db/receipts/com.mysql.*
$ sudo vi /etc/hostconfig
# remove or delete the below line if present
#MYSQLCOM=-YES-

Finally, I recommend using version 5.1 instead of 5.5. In my experience it is more stable on OSX.
