Custom format in the Play Framework templating engine example

I have been looking into the Play Framework for rapid development of web services. The clean MVC separation and the simple templating system are very attractive: they let you focus on just what is needed, instead of spending a lot of time writing glue code and juggling concepts that should be handled by the framework.

The template engine in Play supports Html, Xml, Txt and Js. When writing web APIs, I usually want control over the content type used, either for adding entirely new media types or for versioning the API. The documentation at http://playframework.com/documentation/2.2.x/ScalaCustomTemplateFormat describes how to add a custom format to the template engine, and it is reasonably good. A few points still required trial and error, though, because the exact files to put things into were not described, and the example shows how to add Html as if it were not already supported, so it was not possible to just take the code and verify that it worked.

This post contains the actual files added to and updated in a fresh Play 2.2.2 app to add CSV support. See the documentation page referenced above for details and explanations.

Model classes

Although not strictly part of the CSV support, the model classes are used to demonstrate that things work, and to provide a record type on which to base the concrete template example:

app/models/Record.scala

package models

case class Record(foo: String, bar: String)

object Record {
  def sampleRecords = List(Record("f1","b1"), Record("f2","b2"), Record("f3","b3"))
}

Format definition

The meat of the new format is in the format implementation classes. I have put these in app/views/Csv.scala as that seemed appropriate.
The format consists of the classes Csv and CsvFormat, extending the appropriate classes as indicated by the official documentation.
The Csv companion object contains helpers for the CsvFormat object.

app/views/Csv.scala

package views

import play.api.http.ContentTypeOf
import play.api.mvc.Codec
import play.api.templates.BufferedContent
import play.templates.Format

// Result type holding rendered CSV content.
class Csv(buffer: StringBuilder) extends BufferedContent[Csv](buffer) {
  val contentType = Csv.contentType
}

object Csv {
  val contentType = "text/csv"
  // Lets Play pick the correct Content-Type header for actions returning Csv.
  implicit def contentTypeCsv(implicit codec: Codec): ContentTypeOf[Csv] = ContentTypeOf[Csv](Some(Csv.contentType))

  def apply(text: String): Csv = new Csv(new StringBuilder(text))

  def empty: Csv = new Csv(new StringBuilder)
}

object CsvFormat extends Format[Csv] {
  // Static template text is passed through unchanged.
  def raw(text: String): Csv = Csv(text)
  // Dynamic values are escaped by doubling embedded double quotes;
  // the quotes surrounding each field are written in the template itself.
  def escape(text: String): Csv = {
    val sb = new StringBuilder(text.length)
    text.foreach {
      case '"' => sb.append("\"\"")
      case c => sb += c
    }
    new Csv(sb)
  }
}
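
As a quick sanity check, the format helpers can be exercised directly, for example from the Play console. This is only a sketch; it assumes the rendered text is reachable through the body accessor that BufferedContent inherits from Content:

// Sketch: exercising the format helpers directly, e.g. from the Play console.
// Assumes the rendered text is exposed via body (from the Content trait).
val escaped = views.CsvFormat.escape("say \"hi\"")
println(escaped.body)        // say ""hi""
println(escaped.contentType) // text/csv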

Glue

To make the template compiler actually compile the new templates, we need to map the file extension to the format class. This is done by adding an entry to templatesTypes at the end of the top-level build.sbt.

build.sbt

name := "csv-play"

version := "1.0-SNAPSHOT"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache
)     

play.Project.playScalaSettings

templatesTypes += ("csv" -> "views.CsvFormat")

Putting it all together

The last part is to define an endpoint in the controller (in app/controllers/Application.scala), point a route at that endpoint (in conf/routes), and write a template to serve results (in app/views/records.scala.csv).

app/controllers/Application.scala

package controllers

import play.api._
import play.api.mvc._
import models.Record

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }

  def records = Action {
    Ok(views.csv.records(Record.sampleRecords))
  }

}

conf/routes

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET     /                           controllers.Application.index

GET     /records                    controllers.Application.records

# Map static resources from the /public folder to the /assets URL path
GET     /assets/*file               controllers.Assets.at(path="/public", file)

app/views/records.scala.csv

@(records: List[Record])"foo","bar"@for(r <- records) {
"@r.foo","@r.bar"}

The final result

Now everything is in place to test the new format. Run the app with play run in a separate terminal; using curl as the HTTP client, you should then see the following result:

$ curl -i http://localhost:9000/records
HTTP/1.1 200 OK
Content-Type: text/csv
Content-Length: 41

"foo","bar"
"f1","b1"
"f2","b2"
"f3","b3"

Notice that the Content-Type header is correctly set on the result rendered from the template.
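
In the same spirit, the compiled template can be exercised without going through HTTP, for example from the Play console; again just a sketch, with the same assumption about the body accessor as in the escaping check above:

// Sketch: calling the compiled template directly, e.g. from the Play console.
// views.csv.records is generated from app/views/records.scala.csv.
val csv: views.Csv = views.csv.records(models.Record.sampleRecords)
println(csv.contentType) // text/csv
println(csv.body)        // the same CSV as in the curl output above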

Conclusion

This post shows a simple and straightforward way to add a custom content type to be used in the templating system. With this approach, you get to write your format in the simple template language, and the endpoints in your controller do not have to do anything special to support the format.

There are other approaches that may be preferable in certain cases. If you have an XML-like format, you can use the Xml support in Play by putting an app/views/foo.scala.xml in the template directory, and then set a different content type explicitly in your controller, with something like:

...
  def records = Action {
    Ok(views.xml.records(Record.sampleRecords)).as("application/vnd.mycompany.api-v1+xml")
  }

The Txt format can of course be abused in a similar way. The big downside to this approach is that you must remember to set the content type on every result the application produces.
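
For illustration, a hypothetical endpoint abusing the Txt format like this could look like the sketch below; it assumes a plain-text template at app/views/records.scala.txt producing the same output as the CSV template above:

  def recordsViaTxt = Action {
    // Hypothetical: render a Txt template, then override the content type.
    Ok(views.txt.records(Record.sampleRecords)).as("text/csv")
  }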

Good luck, and happy content typing!


Mysterious Problem with JUnit in IntelliJ IDEA – with Mysterious Fix!

Coming back to work after the holidays, I dutifully fired up my favourite IDE to get to work. Full of energy and determination, I added a lot of code, and felt quite good about myself. The only thing left was running the unit tests, to verify that I had not broken anything.
The tests completed fast; too fast. Looking closer, the only output was “Process finished with exit code 0”, and in fact no tests had been run. I tried a few more times, but still nothing. Even after uninstalling IntelliJ, rebooting, and reinstalling a fresh copy of the IDE, the same (non)result kept mocking me: “Process finished with exit code 0”.

Completely stumped, I threw in the towel and searched the internet. Usually I’m less than impressed with answers on stackoverflow.com, but in this case there was actually a hint at the solution!

Apparently, there is something wrong with some cache somewhere. By going to the File menu and selecting “Invalidate Caches / Restart …”, testing should start working again.
And, as if by magic, it did! Right-clicking a package and selecting “Run Tests in …” started correctly, and lights started showing up beside each test case. And they were all green. And it was good.

I still don’t understand what was wrong, or what I did to fix it, but at least I have now added another tool to my arsenal of arcane knowledge.

The referenced question at Stack Overflow can be found here:
http://stackoverflow.com/questions/13157815/intellij-idea-sudenly-wont-recognize-tests-in-test-folder

Java Constructor Anti-Pattern

I have never liked the typical Java way of writing constructors for a “bean” class. The approach preferred by many is to have the constructor arguments map directly to the fields in the class, and to assign each in turn to the “this.”-prefixed name.

I have always preferred to make the constructor arguments short one- or two-letter names that you assign to the fields. This avoids the following problem, which I ran into today:

public class MetadataEntityDescriptor {
    private ContactPerson contactPerson;
    private IDPSSODescriptor idpSsoDescriptor;

    private MetadataEntityDescriptor(IDPSSODescriptor idpssoDescriptor,
                                     ContactPerson contactPerson) {
        this.idpSsoDescriptor = idpSsoDescriptor;
        this.contactPerson = contactPerson;
    }

    public static final class Builder {
        private IDPSSODescriptor idpssoDescriptor;
        private ContactPerson contactPerson;

        public Builder setIdpssoDescriptor(IDPSSODescriptor idpssoDescriptor) {
            this.idpssoDescriptor = idpssoDescriptor;
            return this;
        }

        public Builder setContactPerson(ContactPerson contactPerson) {
            this.contactPerson = contactPerson;
            return this;
        }

        public MetadataEntityDescriptor build() {
            return new MetadataEntityDescriptor(idpssoDescriptor, contactPerson);
        }
    }

    public Element asXml() {
        return new Element("EntityDescriptor", Constants.NAMESPACE_METADATA)
            .addContent(idpSsoDescriptor.asXml())
            .addContent(contactPerson.asXml());
    }
}

Why does the asXml() method here throw a NullPointerException? The code looks good and even compiles. But notice the subtle capitalization mismatch in the constructor: the first parameter is named idpssoDescriptor, while the assignment reads this.idpSsoDescriptor = idpSsoDescriptor. The right-hand side resolves to the field itself, so the constructor performs a silent self-assignment, the argument is never stored, and the field stays null. Fortunately my IDE caught it, but I did not see the warning until I had scratched my head a few times over the strange NPE.

Now, I will admit that an argument FOR having complete argument names is that it looks nicer in the IDE when the argument popup help appears. But to me that is not enough; I’d rather have shorter and less error-prone code such as this:

public class MetadataEntityDescriptor {
    private ContactPerson contactPerson;
    private IDPSSODescriptor idpSsoDescriptor;

    private MetadataEntityDescriptor(IDPSSODescriptor isd,
                                     ContactPerson cp) {
        idpSsoDescriptor = isd;
        contactPerson = cp;
    }

    ...
}

And finally, this is of course a great argument for moving to Scala, where the equivalent code leaves no room for this error:

class MetadataEntityDescriptor(var contactPerson: ContactPerson,
                               var idpSsoDescriptor: IDPSSODescriptor) {
  def asXml = <EntityDescriptor>
      { idpSsoDescriptor.asXml }
      { contactPerson.asXml }
  </EntityDescriptor>
}

Using personal git with cvs

If you are stuck using CVS in your day job, you may still get some mileage out of Git. I will not discuss the CVS-Git round-trip tools, but rather how you can use Git as a simple personal workflow enhancement.

First time setup

After installing git, remember to set your name and email in the global settings: 

git config --global user.name "Your Name"
git config --global user.email your@email.com
git config --global core.excludesfile ~/.gitignore

See ‘man git-config’ for other cool settings… 

Using git locally

I mainly use Git as a local "convenience" of sorts alongside CVS. I check out a module from CVS, and then do the following inside the top-level directory: 

git init
git add .
git commit -m 'initial import'

NOTE: for this to work best, I have a global ~/.gitignore listing ‘CVS’ and ‘.cvsignore’, to avoid versioning the CVS metadata files in the local Git repository. 

Then, I can work with the files locally, and commit small changes to git, revert my changes, etc. Whenever I feel like integrating new changes from other people, I do a 

cvs update

I can then use ‘git status‘ and ‘git diff‘ to look at what has happened, and resolve conflicts etc.

If I have made local changes that are not yet in CVS, I commit them to CVS. If I have made many small changes in Git, they then end up as a nice "atomic" bigger commit to CVS.

Once CVS is up to date with my changes, I usually just commit everything that changed in CVS to Git with 

git commit -a -m 'cvs sync'

If I do this several times without intervening local check-ins, I use the --amend option to git commit, to just add the new changes to the previous commit.

Using local branches and rebasing on CVS head

For the "advanced" user, it is very cool to do your own development on a branch, and just leave master in sync with CVS. 

git branch my_branch
git checkout my_branch
... work ...
git commit ...whatever...

I then switch back to master to sync with cvs, and then rebase the branch on head 

git checkout master
cvs update
git commit -a -m 'cvs sync'
git checkout my_branch
git rebase master
(possibly fix merge conflicts and...) git rebase --continue

Now the branch is rewritten as if it had been started from the new tip of master, so future merges back to CVS are easier.