Sunday, October 4, 2015

Using Jira's REST API with Scala and Lift-Json

Before we started writing our little Jira Git Stats Collector, we wondered whether it was actually worth the trouble to write a separate tool just to print some Git stats, but it turns out that writing it was as much fun as interpreting the information it extracted. Since providing and consuming APIs is what the internet is all about (from a developer's point of view, that is), I thought I would share a bit about how elegantly REST APIs can be consumed with Scala.

There are multiple JSON libraries available (in Java as well as in Scala), and each of them has its strengths and weaknesses. In the Scala world most people would probably go for Argonaut or Spray, or use ScalaJson if they are on Play. Our use case was simple enough to try something we hadn't used before: Lift-Json. For maximum type safety one would usually create case classes conforming to the responses provided by the API, but since this was an ad-hoc project and we were only going to use exactly one field of the response, we lowered ourselves into the dark waters of AST traversal and type casts ;)

Jira's REST API responses are easy to understand and allow for simple parsing and automation.
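
For illustration, a search response looks roughly like this, abridged to the parts we care about (the issue keys are made up):

{
  "startAt": 0,
  "maxResults": 50,
  "total": 2,
  "issues": [
    { "key": "ISSUE-123", ... },
    { "key": "ISSUE-456", ... }
  ]
}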

The Use Case

  • Provide one or more Epics as input
  • Extract all issue keys belonging to each epic from the REST API
  • Extract all subtasks belonging to each returned issue key
  • Return the combination of the three lists for further processing

Preparation

Epics

The issue type Epic in Jira is not implemented as a standard feature but as a custom field which is created when you install the Jira Agile plugin. This means that, depending on which custom fields you had before you installed the plugin (either created by yourself or by other plugins), the Epic field's id will vary. In our case this id was cf[10147], which we found easily by using the advanced issue search in Jira, typing "epic" and looking at the autocomplete popup.
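
With that id, a JQL query returning all issues belonging to a given epic looks like this (EPIC-1 is a made-up epic key):

cf[10147] = EPIC-1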

Authentication

Jira's REST API supports HTTP Basic Authentication, so it is easy enough to get authorized by sending a username and password in the correct format within the HTTP request. We wrote a little helper object to supply the required header:

package io.sourcy.jirastatscollector

import java.util.Base64

object HttpBasicAuth {
  private val BASIC = "Basic"
  val AUTHORIZATION = "Authorization"

  private def encodeCredentials(username: String, password: String): String =
    new String(Base64.getEncoder.encode((username + ":" + password).getBytes))

  def getHeader(username: String, password: String): String = BASIC + " " + encodeCredentials(username, password)
}

SSL

Since our use case was purely internal we decided to allow for disabling SSL verification:

import java.security.SecureRandom
import java.security.cert.X509Certificate

import javax.net.ssl._

private object NoSsl {
  def disableSslChecking(): Unit = {
    HttpsURLConnection.setDefaultSSLSocketFactory(NoSsl.socketFactory)
    HttpsURLConnection.setDefaultHostnameVerifier(NoSsl.hostVerifier)
  }

  private def trustAllCerts = Array[TrustManager] {
    new X509TrustManager() {
      override def getAcceptedIssuers: Array[X509Certificate] = null

      override def checkClientTrusted(x509Certificates: Array[X509Certificate], s: String): Unit = {}

      override def checkServerTrusted(x509Certificates: Array[X509Certificate], s: String): Unit = {}
    }
  }

  def socketFactory: SSLSocketFactory = {
    val sc = SSLContext.getInstance("SSL")
    sc.init(null, trustAllCerts, new SecureRandom())
    sc.getSocketFactory
  }

  def hostVerifier: HostnameVerifier = new HostnameVerifier() {
    override def verify(s: String, sslSession: SSLSession): Boolean = true
  }
}

Implementation

The implementation itself is surprisingly straightforward. First we need a way to run search queries against the API so we can search for epics:

  private def runJql(jql: String): JValue = {
    NoSsl.disableSslChecking()
    // Settings.jiraUrl is expected to already contain the search endpoint
    // and trailing '?', e.g. https://jira.example.com/rest/api/2/search?
    val connection = new URL(Settings.jiraUrl + "jql=%s".format(jql)).openConnection
    connection.setRequestProperty(HttpBasicAuth.AUTHORIZATION, HttpBasicAuth.getHeader(Settings.jiraUser, Settings.jiraPassword))
    parse(Source.fromInputStream(connection.getInputStream).mkString)
  }

JSON data is represented as an AST in Lift-Json, so we need a method to extract an issue key from a node in the AST. In this case the actual type of a list of key-value pairs (as in {key=ISSUE-123, key=ISSUE-456}) is a List[Tuple2], so we extract the List using a type cast and only use each tuple's second value.

  private def extractIssuesFromJValue(values: JsonAST.JValue#Values): List[String] =
    values.asInstanceOf[List[(String, String)]].map(tuple => tuple._2)

Then we need a way to extract an Issue's subtasks:

  private def extractSubTasks(issue: String): List[String] = {
    val values = (runJql("parent=%s".format(issue)) \ "issues" \ "key").values
    // when an issue has no subtasks the lookup does not produce a list and the
    // cast inside extractIssuesFromJValue fails, so we fall back to an empty list
    val subTasks = try extractIssuesFromJValue(values) catch {
      case e: ClassCastException => List()
    }
    issue :: subTasks
  }

...and a way to extract an Epic's child issues:

  private def extractChildIssues(epic: String): List[String] =
    epic :: extractIssuesFromJValue((runJql(Settings.epicCustomField + "=%s".format(epic)) \ "issues" \ "key").values)

In the end we just need to piece it all together:

  def extractAllIssues(epics: Seq[String]): Seq[String] =
    epics.flatMap(epic => extractChildIssues(epic).flatMap(issue => extractSubTasks(issue)))

This will return

  • the originally provided Epic(s)
  • all of the Epics' child issues
  • all issues' subtasks

flattened into one Seq.
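
A call with a couple of made-up epic keys would look like this:

val allIssues = extractAllIssues(Seq("EPIC-1", "EPIC-2"))
// e.g. Seq("EPIC-1", "ISSUE-123", "ISSUE-124", ..., "EPIC-2", ...)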

Take a look at the GitHub repository if you want to see more :)

Friday, October 2, 2015

Book Recommendation: Functional and Reactive Domain Modeling

I have recently finished reading what is currently available of Debasish Ghosh's brilliant book Functional and Reactive Domain Modeling, which is published via Manning's MEAP program. I have been doing functional programming with Scala for about two years now, after almost 15 years of working around side effects with OOP and other paradigms, and have also inhaled other fantastic books like Functional Programming in Scala by Rúnar Bjarnason and Paul Chiusano, but what makes Functional and Reactive Domain Modeling so outstanding for me is its extremely practical and applicable approach to the more advanced (at least for me) topics of functional programming.

And no, I don't get paid by Manning for this, just in case you were wondering :D

Wednesday, July 8, 2015

The wonderful world of Java 8's default interface methods

For those of you with experience in Scala, Java 8's default methods in interfaces are no big news. For Java-only folks, my guess is that some may have wondered why an interface (in its essence a contract between a client and a library) would need to contain implementation details.

I think one of the greatest benefits of default methods is that functionality can be composed by implementing interfaces, thereby using them as mixins, or traits as they are called in Scala.
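
As a quick aside for the Scala-inclined, here is a rough sketch of the same idea as a trait (all names are made up): one abstract member, one concrete member, mixed into a class.

trait Greeter {
  def name: String                      // abstract, like an abstract interface method
  def greet: String = s"Hello, $name!"  // concrete, like a default method
}

case class Person(name: String) extends Greeter

// Person("Ada").greet returns "Hello, Ada!"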

Today I would like to share an exemplary use case for default methods when working with tree structures.

Suppose you have the following two classes somewhere in your project. One might be used for the internal representation of your binary-tree data; the other one is used as a DTO for a REST API and also needs to be able to handle non-binary trees.

public class BinaryTree {  
    private final String name;
    private final BinaryTree left;
    private final BinaryTree right;

    public BinaryTree(final String name, final BinaryTree left, final BinaryTree right) {
        this.name = name;
        this.left = left;
        this.right = right;
    }

    public String getName() {
        return name;
    }

    public BinaryTree getLeft() {
        return left;
    }

    public BinaryTree getRight() {
        return right;
    }
}
public class SomeTree {  
    private final String name;
    private final Set<SomeTree> children;

    public SomeTree(final String name, final Set<SomeTree> children) {
        this.name = name;
        this.children = children;
    }

    public String getName() {
        return name;
    }

    public Set<SomeTree> getChildren() {
        return children;
    }
}

When working with tree structures we often need to find nodes, so usually we implement methods that recursively or iteratively locate the node in question and return it. Since our two tree representations are structurally different, we might think that we need to write a separate node-finding method for each of our two types.

Fortunately, we can use Java 8's default methods in interfaces to write just one implementation and mix the interface into both our classes. Note that this could also have been done using an abstract base class for both our tree classes, but that would mean tighter coupling and worse maintainability, since any class in Java can only extend one other class, but can implement multiple interfaces.

Thus we create a generic implementation that enables us to find nodes in a tree:

public interface FindableNode {

    String getName();

    <T extends FindableNode> Set<T> getChildren();

    default <T extends FindableNode> Optional<T> findByName(final String name) {
        // check this node itself first, then search the subtree below it
        if (getName().equalsIgnoreCase(name)) {
            return Optional.of((T) this);
        }
        return findByName(getChildren(), name);
    }

    default <T extends FindableNode> Optional<T> findByName(final Set<T> nodes, final String name) {
        // look for a match among the given nodes before descending one level deeper
        final Optional<T> matchingNode = nodes.stream()
                .filter(node -> node.getName().equalsIgnoreCase(name))
                .findAny();
        if (matchingNode.isPresent()) {
            return matchingNode;
        }
        return (Optional<T>) nodes.stream()
                .map(c -> findByName(c.getChildren(), name))
                .filter(Optional::isPresent)
                .map(Optional::get)
                .findFirst();
    }
}

Now all we need to do is mix our new interface into our two tree classes and provide the required abstract methods getName() and getChildren():

public class BinaryTree implements FindableNode {  
    private final String name;
    private final BinaryTree left;
    private final BinaryTree right;

    public BinaryTree(final String name, final BinaryTree left, final BinaryTree right) {
        this.name = name;
        this.left = left;
        this.right = right;
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public Set<BinaryTree> getChildren() {
        final ImmutableSet.Builder<BinaryTree> builder = ImmutableSet.<BinaryTree>builder();
        if (left != null) {
            builder.add(left);
        }
        if (right != null) {
            builder.add(right);
        }
        return builder.build();
    }


    public BinaryTree getLeft() {
        return left;
    }

    public BinaryTree getRight() {
        return right;
    }
}

public class SomeTree implements FindableNode {  
    private final String name;
    private final Set<SomeTree> children;

    public SomeTree(final String name, final Set<SomeTree> children) {
        this.name = name;
        this.children = children;
    }

    @Override
    public String getName() {
        return name;
    }

    @Override
    public Set<SomeTree> getChildren() {
        return children;
    }
}

We can check that our construct does what we expect using a unit test:

public class FindableNodeTest {

    @Test
    public void testFindBranchByNameWithSomeTree() {
        final Optional<SomeTree> result = createSomeTree().findByName("branch2");
        assertThat(result.isPresent(), is(true));
        assertThat(result.get().getName(), is("branch2"));
        assertThat(result.get().getChildren().size(), is(2));
    }

    @Test
    public void testFindLeafByNameWithSomeTree() {
        final Optional<SomeTree> result = createSomeTree().findByName("leaf3");
        assertThat(result.isPresent(), is(true));
        assertThat(result.get().getName(), is("leaf3"));
        assertThat(result.get().getChildren().size(), is(0));
    }

    @Test
    public void testFindBranchByNameWithBinaryTree() {
        final Optional<BinaryTree> result = createBinaryTree().findByName("branch2");
        assertThat(result.isPresent(), is(true));
        assertThat(result.get().getName(), is("branch2"));
        assertThat(result.get().getChildren().size(), is(2));
    }

    @Test
    public void testFindLeafByNameWithBinaryTree() {
        final Optional<BinaryTree> result = createBinaryTree().findByName("leaf3");
        assertThat(result.isPresent(), is(true));
        assertThat(result.get().getName(), is("leaf3"));
        assertThat(result.get().getChildren().size(), is(0));
    }

    private SomeTree createSomeTree() {
        final SomeTree leaf1 = new SomeTree("leaf1", ImmutableSet.of());
        final SomeTree leaf2 = new SomeTree("leaf2", ImmutableSet.of());
        final SomeTree leaf3 = new SomeTree("leaf3", ImmutableSet.of());
        final SomeTree leaf4 = new SomeTree("leaf4", ImmutableSet.of());
        final SomeTree branch1 = new SomeTree("branch1", ImmutableSet.of(leaf1, leaf2));
        final SomeTree branch2 = new SomeTree("branch2", ImmutableSet.of(leaf3, leaf4));
        return new SomeTree("root", ImmutableSet.of(branch1, branch2));
    }

    private BinaryTree createBinaryTree() {
        final BinaryTree leaf1 = new BinaryTree("leaf1", null, null);
        final BinaryTree leaf2 = new BinaryTree("leaf2", null, null);
        final BinaryTree leaf3 = new BinaryTree("leaf3", null, null);
        final BinaryTree leaf4 = new BinaryTree("leaf4", null, null);
        final BinaryTree branch1 = new BinaryTree("branch1", leaf1, leaf2);
        final BinaryTree branch2 = new BinaryTree("branch2", leaf3, leaf4);
        return new BinaryTree("root", branch1, branch2);
    }
}

Note that Java 9 will introduce private interface methods, which might enable us to hide the default <T extends FindableNode> Optional<T> findByName(final Set<T> nodes, final String name) method from the world, thus making the interface even more concise and clear. Since I haven't tried Java 9 yet, I'm not sure if it would really work :D

Also, we could get rid of the type casts inside FindableNode using an inner class with a type parameter, but for the sake of simplicity I deliberately left the casts in.

Happy Hacking!

Monday, July 6, 2015

Playing with the Java 8 Collectors API

I recently came across a problem that looked like it would be a walk in the park for the Java 8 Collectors API. A short glance at the API doc, with its myriad of angle brackets, one-letter type parameters and predefined Collectors, promised a type-safe solution just waiting to be discovered.

I did indeed find a solution to the problem quite quickly, but was not quite happy with the clumsy way it looked. This blog post is meant to share my findings while trying to solve as simple a task as the one represented by the following unit test:

public class MyCollectorsTest {  
    private List<KeyValuePair> createKeyValuePairs() {
        return new ImmutableList.Builder<KeyValuePair>()
                .add(new KeyValuePair("java", "christoph"))
                .add(new KeyValuePair("java", "susanne"))
                .add(new KeyValuePair("scala", "susanne"))
                .add(new KeyValuePair("java", "martin"))
                .add(new KeyValuePair("java", "thomas"))
                .add(new KeyValuePair("java", "armin"))
                .add(new KeyValuePair("scala", "armin"))
                .build();
    }

    @Test
    public void testGroupByKeysAndJoinValues() {
        final Map<String, String> result = new MyCollectors().groupByKeysAndJoinValues(createKeyValuePairs());
        assertThat(result.size(), is(2));
        assertThat(result.get("java"), is("armin, christoph, martin, susanne, thomas"));
        assertThat(result.get("scala"), is("armin, susanne"));
    }
}

Version 1

    // Version 1: use groupingBy, get entrySet and collect it to a map, sorting the values in the values function
    public Map<String, String> groupByKeysAndJoinValuesVersion1(final List<KeyValuePair> tuples) {
        return tuples.stream()
                .collect(groupingBy(KeyValuePair::getTheKey))
                .entrySet()
                .stream()
                .collect(toMap(Map.Entry::getKey, this::sortAndJoin1));
    }

    private String sortAndJoin1(final Map.Entry<String, List<KeyValuePair>> e) {
        return e.getValue().stream()
                .map(KeyValuePair::getTheValue)
                .sorted()
                .collect(joining(", "));
    }

Well... it works, but it feels kind of cumbersome to have to pick the values out of each entry inside the toMap() function only to get them sorted. Also, I wasn't happy with the fact that sortAndJoin1() - as its name suggests - does more than one thing. Let's keep trying:

Version 2

    // Version 2: the same as version 1 but implemented with nested collectors
    public Map<String, String> groupByKeysAndJoinValuesVersion2(final List<KeyValuePair> tuples) {
        return tuples.stream()
                .collect(
                        groupingBy(
                                KeyValuePair::getTheKey,
                                mapping(
                                        KeyValuePair::getTheValue,
                                        collectingAndThen(toList(), this::sortAndJoin2)
                                )
                        )
                );
    }

    private String sortAndJoin2(final List<String> stringList) {
        return stringList.stream().sorted().collect(joining(", "));
    }

Oooook... this is basically the same code as above, but implemented with nested, or downstream, collectors. To be perfectly honest, I think that having downstream collectors available is great, but the syntax is rather awkward and it takes some weird formatting to make it readable at all. Furthermore, this solution did not solve the problem I had with version 1: a separate method for sorting and joining. Yes, I know I can nest lambdas until the cows come home, but eventually someone else might need to read (and understand) my code, and I don't want them to curse me when they have nightmares from it ;)

Anyway, since I'm always sure there's a better way to do anything, I kept on trying and eventually came up with the following:

Version 3

    // Version 3: collect to TreeSet thus sorting the values.
    public Map<String, String> groupByKeysAndJoinValuesVersion3(final List<KeyValuePair> tuples) {
        return tuples.stream()
                .collect(
                        groupingBy(
                                KeyValuePair::getTheKey,
                                mapping(
                                        KeyValuePair::getTheValue,
                                        collectingAndThen(
                                                toCollection(TreeSet::new),
                                                (theSet) -> theSet.stream().collect(joining(", "))
                                        )
                                )
                        )
                );
    }

Using the mapping collector and a TreeSet, I could remove the need to manually sort the collection before joining it. Still only a small victory, given that the code looks complicated even though it doesn't do much.

I thought I could avoid it, but it seemed the only way to get readable code was to write my own Collector. So, with a little bit of help from IntelliJ completing the type arguments for me, I set out in a last desperate attempt to achieve concise and readable code, removing all the details from the call site:

Version 4

    // Version 4: use custom collector to hide sorting and joining.
    public Map<String, String> groupByKeysAndJoinValuesVersion4(final List<KeyValuePair> tuples) {
        return tuples.stream()
                .collect(groupingBy(KeyValuePair::getTheKey, new KeyValuePairSetStringCollector()));
    }

    private static class KeyValuePairSetStringCollector implements Collector<KeyValuePair, Set<String>, String> {
        @Override
        public Supplier<Set<String>> supplier() {
            return TreeSet::new;
        }

        @Override
        public BiConsumer<Set<String>, KeyValuePair> accumulator() {
            return (strings, keyValuePair) -> strings.add(keyValuePair.getTheValue());
        }

        @Override
        public BinaryOperator<Set<String>> combiner() {
            return (keyValuePairs, keyValuePairs2) -> {
                keyValuePairs.addAll(keyValuePairs2);
                return keyValuePairs;
            };
        }

        @Override
        public Function<Set<String>, String> finisher() {
            return (set) -> set.stream().collect(joining(", "));
        }

        @Override
        public Set<Characteristics> characteristics() {
            return new HashSet<>();
        }
    }

The collector can be rewritten with a bit less boilerplate using the Collector.of() factory method:

    private static Collector<KeyValuePair, Set<String>, String> toKeyValuePairSet() {
        final Supplier<Set<String>> supplier = TreeSet::new;
        final BiConsumer<Set<String>, KeyValuePair> accumulator = 
            (strings, keyValuePair) -> strings.add(keyValuePair.getTheValue());
        final BinaryOperator<Set<String>> combiner = (keyValuePairs1, keyValuePairs2) -> {
            keyValuePairs1.addAll(keyValuePairs2);
            return keyValuePairs1;
        };
        final Function<Set<String>, String> finisher = (set) -> set.stream().collect(joining(", "));
        return Collector.of(supplier, accumulator, combiner, finisher);
    }

It may depend on the case at hand, but here the custom Collector arguably pollutes the call site's code the least.

Conclusion

The Collectors API is definitely powerful; I do have my honest doubts, however, that its more advanced features will gain much popularity outside of library code. Maybe it was intended to be that way, I don't know. Java 8 gives the developer much better tools to transform and modify data than its previous versions did, but it is still a far cry from being as comfortable or intuitive as other languages.

In the end I decided I needed to know how this would look in Scala.

The Final Version

In Scala this is basically a three-liner:

keyValuePairs.groupBy(_.theKey).collect {  
  case (key: String, values: List[KeyValuePair]) =>
    (key, values.map(_.theValue).sorted.mkString(", "))
}
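
Applied to the key-value pairs from the unit test above, this yields:

Map(
  "java"  -> "armin, christoph, martin, susanne, thomas",
  "scala" -> "armin, susanne"
)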

Happy Hacking :)

Friday, June 19, 2015

Meet us at June Scala Meetup@Vienna

We will attend this month's Scala Meetup in Vienna, Austria. The program looks very nice, as always.

  • How Scala supports Clean Architecture by Sebastian Nozzi
  • Doctus by Wolfgang Wagner
  • What's Hot in Scala Frameworks by Radim Pavlíček

See you there :)

Wednesday, May 27, 2015

Porting a Scala Play 2.3 application with Slick 2.1.0 to Play 2.4 and Slick 3.0.0

We recently ported a smallish Scala web application written in Play 2.3 and Slick 2.1.0 to Play 2.4.0 and Slick 3.0.0 and would like to share our experiences. The Play 2.4 migration guide covers many issues, but it still took us some time to figure everything out.

Bumping all versions

A.k.a.: the easy part.

First we edit build.sbt:

scalaVersion := "2.11.6"  
libraryDependencies ++= Seq(  
  ...
  "com.typesafe.slick" %% "slick" % "3.0.0",
  "com.github.tminglei" %% "slick-pg" % "0.9.0",/* enum support, you might not need that */
  "com.typesafe.play" %% "play-slick" % "1.0.0",
  "com.typesafe.play" %% "play-slick-evolutions" % "1.0.0",
  "org.postgresql" % "postgresql" % "9.4-1201-jdbc41",
  "org.slf4j" % "slf4j-nop" % "1.7.12"
)

Also: remove jdbc and anorm from your libraryDependencies.

Then project/plugins.sbt:

addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.0")

Then project/build.properties:

sbt.version=0.13.8  

Finally, do an sbt update clean compile and watch your carefully crafted codebase blow up in your face in a jumble of compile errors you wish you could unsee!

Play changes

Disclaimer: this is not an official instruction manual on porting Play apps, I am just sharing our own experiences :>

Missing implicit Messages

[error] ... could not find implicit value for parameter messages: play.api.i18n.Messages
[error] Messages("registration.email.registration.subject", queueInfo.event.eventName),
[error] ^

This error hit us quite hard, because it means that anything that uses Messages() has to have access to an implicit value of type Messages.

Fixing it meant that we had to

  • add (implicit messages: Messages) to every template that used Messages(), which meant that we had to
  • have every controller which made use of Messages or used views that made use of Messages implement the I18nSupport trait, which meant we had to
  • change all controllers from object to class and add the @Inject() annotation, which meant we had to
  • change the routesGenerator to InjectedRoutesGenerator in build.sbt

At that moment we started feeling like Jack :D

So then, in build.sbt we used:

routesGenerator := InjectedRoutesGenerator
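
After all those changes, a controller ends up looking roughly like this (a sketch; EventsController and the events template are made-up names):

import javax.inject.Inject
import play.api.i18n.{I18nSupport, MessagesApi}
import play.api.mvc.{Action, Controller}

class EventsController @Inject()(val messagesApi: MessagesApi) extends Controller with I18nSupport {
  // I18nSupport derives an implicit Messages from the implicit request,
  // so templates declaring (implicit messages: Messages) keep working
  def list = Action { implicit request =>
    Ok(views.html.events())
  }
}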

When we finally realized that our routing wasn't actually broken at all, but that we were hitting an IntelliJ bug which caused the parsing/highlighting in the routes file to fail, happiness returned to our faces and we went on to Akka.

We also used Messages in Akka actors for sending emails, so we had to get those pesky implicits in there too.

So our case classes used for messaging changed from

case class RegistrationMessage(queueInfo: QueueInfo)  

to

sealed trait RegMessage {  
  val messages: Messages
}

case class RegistrationMessage(queueInfo: QueueInfo)(implicit val messages: Messages) extends RegMessage 

and our Actors themselves changed from something like

  override def receive = {
    case RegistrationMessage(queueInfo) =>

to

override def receive = {  
    case message: RegMessage => message match {
      case RegistrationMessage(queueInfo) =>
        implicit val messages = message.messages

The world of Actors, Controllers and Templates made sense again, so we could move on.

Logging

Log configuration in application.conf is deprecated, so just create a new file conf/logback.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>  
<configuration>

    <conversionRule conversionWord="coloredLevel" converterClass="play.api.Logger$ColoredLevel"/>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>${application.home}/logs/application.log</file>
        <encoder>
            <pattern>%date [%level] from %logger in %thread - %message%n%xException</pattern>
        </encoder>
    </appender>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%coloredLevel %logger{15} - %message%n%xException{10}</pattern>
        </encoder>
    </appender>

    <appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="FILE"/>
    </appender>

    <appender name="ASYNCSTDOUT" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT"/>
    </appender>

    <logger name="play" level="INFO"/>
    <logger name="application" level="DEBUG"/>

    <!-- Off these ones as they are annoying, and anyway we manage configuration ourself -->
    <logger name="com.avaje.ebean.config.PropertyMapLoader" level="OFF"/>
    <logger name="com.avaje.ebeaninternal.server.core.XmlConfigLoader" level="OFF"/>
    <logger name="com.avaje.ebeaninternal.server.lib.BackgroundThread" level="OFF"/>
    <logger name="com.gargoylesoftware.htmlunit.javascript" level="OFF"/>

    <logger name="slick.jdbc.JdbcBackend.statement" level="DEBUG"/>

    <root level="WARN">
        <appender-ref ref="ASYNCFILE"/>
        <appender-ref ref="ASYNCSTDOUT"/>
    </root>

</configuration>

Slick Changes

Firstly, the configuration format in application.conf changed:

This is actually not so much a change in Slick, but since the Slick documentation advises using the new Typesafe Config, I think it can be mentioned here. We somehow couldn't get Play evolutions to work with the Typesafe Config way of configuring the db, so we used the standard Slick way, which worked perfectly fine.

Before:

db.default.driver = org.postgresql.Driver  
db.default.url = "jdbc:postgresql://localhost/ea"  
db.default.user = "ea"  
db.default.password = "secret" 

After:

# Database configuration
# ~~~~~
slick.dbs.eaDB.driver="slick.driver.PostgresDriver$" # You must provide the required Slick driver!  
slick.dbs.eaDB.db.driver=org.postgresql.Driver  
slick.dbs.eaDB.db.url="jdbc:postgresql://localhost:5432/ea"  
slick.dbs.eaDB.db.user=ea  
slick.dbs.eaDB.db.password="secret"  
slick.dbs.eaDB.db.numThreads = 10  
slick.dbs.eaDB.db.connectionTimeout = 5000  
slick.dbs.eaDB.db.validationTimeout = 5000

#play.evolutions.db.eaDB.autoApply=true
play.evolutions.db.eaDB.enabled=true ## probably not necessary but we like being explicit  
play.evolutions.db.eaDB.autoCommit=false  

Secondly, in Slick 2.1.0 you would usually define database-related methods like so:

  def findById(id: Int)(implicit s: Session): Option[EventType] =
    filter(_.eventTypeId === id).firstOption

or

  def findById(id: Int): Option[EventType] = {
    DB.withSession { implicit s: Session =>
      filter(_.eventTypeId === id).firstOption
    }
  }

Slick 3.0.0 comes with a new, composable and entirely asynchronous API that returns Futures for everything. I love it! It lets you do things like this:

val deleteAction = Tiles.delete  
val loadAction = Tiles ++= extractTilesFromDump(new FileInputStream(dumpFile))

val futureResult = db.run(deleteAction.zip(loadAction).transactionally)  
futureResult.onSuccess { case a => println(s"Successfully deleted ${a._1} and imported ${a._2.get} rows") }  
futureResult.onFailure { case a => println(s"Failed to import: $a") }

We didn't want to change all our controllers to accommodate this change right away though, so as a first step we modified our database classes to keep the same method signatures by hiding the asynchronous nature of the new API:

EaDB.scala:

object EaDB {  
  private val eadb: String = "eaDB"
  private val dbConfig = DatabaseConfigProvider.get[JdbcProfile](eadb)(Play.current)

  def result[R](a: DBIOAction[R, NoStream, Nothing]): R = Await.result(dbConfig.db.run(a), 1 second)

  def async[R](a: DBIOAction[R, NoStream, Nothing]): Future[R] = dbConfig.db.run(a)
}

Note that we had connection leaks when using the old Database.forConfig method of acquiring a connection.

EventType.scala:

  def findById(id: Int): Option[EventType] =
    EaDB.result(filter(_.eventTypeId === id).result.headOption)

Note that firstOption was changed to headOption; the same goes for first, which is now head.
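
As a rough cheat sheet (query standing in for any Slick query):

// Slick 2.1 (blocking, implicit Session)  ->  Slick 3.0 (action you pass to db.run)
// query.firstOption                       ->  query.result.headOption
// query.first                             ->  query.result.head
// query.list                              ->  query.result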

Thirdly, some of the old, lower-level APIs have been deprecated:

[warn] ... method list in trait Invoker is deprecated: Invoker convenience features will be removed. Invoker is intended for low-level JDBC use only.
[warn]     for (row <- q(event.eventId).list if currentPosition == -1) {
[warn]                                  ^
[warn] two warnings found

So this

import scala.slick.jdbc.{GetResult, StaticQuery => Q}  
...
implicit val resultMapping = GetResult[(Int, Participant)](r =>  
  (r.<<, Participant(r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<)))
val q = Q[Int, (Int, Participant)] + "select row_number() over() rn, a.* from (select * from participant where event_id = ? order by ts asc) a"  
...
for (row <- q(event.eventId).list if currentPosition == -1) {  
...

became

implicit val resultMapping = GetResult[(Int, Participant)](r =>  
  (r.<<, Participant(r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<, r.<<)))
val queryAction =
  // careful: the sql interpolator does not apply stripMargin, so the literal
  // must not contain | margin characters
  sql"""select row_number() over() rn, a.* from
        (select * from participant where
         event_id = ${event.eventId} order by ts asc) a""".as[(Int, Participant)]

val result = EaDB.result(queryAction)  
...
for (row <- result if currentPosition == -1) {  
...

Note the neat sql interpolator, which does the parameter binding for you, all without question marks. (uuuuh!)

Finally, we do want to use Slick 3's awesome powers of asynchronicity in some places.

To that end we change our database code from

object Tiles extends TableQuery(new Tiles(_)) {  
  def list(): Seq[Tile] = {
    EaDB.result(sortBy(_.sortOrder).result) // remember? we used Await.result in there, so this blocks!
  }
}

to

object Tiles extends TableQuery(new Tiles(_)) {  
  def list(): Future[Seq[Tile]] = {
    EaDB.async(sortBy(_.sortOrder).result) // here we just call db.run
  }
}

and our controller from

class TilesResource @Inject()(val messagesApi: MessagesApi) extends Controller with I18nSupport {  
  def list() = Action { implicit rs =>
    ...
    Ok(Json.toJson(Tiles.list()))
  }
}

to

class TilesResource @Inject()(val messagesApi: MessagesApi) extends Controller with I18nSupport {  
  def list() = Action.async { implicit rs => // note the .async here
    ...
    Tiles.list().map { result => Ok(Json.toJson(result)) }
  }
}

Voilà! Play 2.4 and Slick 3.0.0!

Thursday, May 21, 2015

Jira Git Stats Collector utility on GitHub

We have just pushed a very small utility tool named Jira Git Stats Collector to GitHub. You're welcome to check it out :)

Tuesday, May 19, 2015

Meet us at this month's Scala Meetup@Vienna

We will attend tomorrow's Scala Meetup in Vienna, Austria. The program looks absolutely delicious :)

Java 8: Getting rid of checked Exceptions

The scenario

  • you have some DAO that can enrich a DTO with different pieces of information
  • the call site can specify which information options should be added to the DTO
  • there are many different types of options that could potentially be added
  • you hate switch/case but you love Java 8 streams
  • you want to play with Java 8

The problem

  • we want to collect enrichment methods in a Map and execute them dynamically
  • our enrichment methods throw SQLException and (for some crazy reason) cannot be changed
  • we still want (for some other crazy reason) to propagate any SQLException up to the call site

The solution

  • we create a @FunctionalInterface representing something that consumes a T and throws a SQLException
  • we allow that interface to transform itself into a Function that takes a T and returns an Optional<SQLException>

Thus we can remove the checked exception that would otherwise have stopped us from using method references in options.stream().

import java.sql.SQLException;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

import com.google.common.collect.ImmutableMap;

import static java.util.Arrays.asList;

public class SomeDao<T> {  
    private final Map<With, OptionHandler<T>> optionHandlers;

    public SomeDao() {
        this.optionHandlers = ImmutableMap.of(
                With.THIS, this::enrichWithThis,
                With.THAT, this::enrichWithThat
        );
    }

    public void enrich(final T someDto, final With... options)
            throws SQLException {
        final Optional<SQLException> e = asList(options).stream()
                .map(optionHandlers::get)
                .map(OptionHandler::toFunction)
                .map(function -> function.apply(someDto))
                .filter(Optional::isPresent)
                .map(Optional::get)
                .findAny(); // this will short circuit execution
                            // in case a SQLException occurs

        if (e.isPresent()) {
            throw e.get();
        }
    }

    @FunctionalInterface
    private interface OptionHandler<T> {
        void accept(final T t) throws SQLException;

        default Function<T, Optional<SQLException>> toFunction() {
            return argument -> {
                try {
                    accept(argument);
                    return Optional.empty();
                } catch (SQLException e) {
                    return Optional.of(e);
                }
            };
        }
    }

    public enum With {
        THIS, THAT
    }

    private void enrichWithThis(final T dto) throws SQLException {
        // something
    }

    private void enrichWithThat(final T dto) throws SQLException {
        // something
    }
}

The call site would typically look something like this:

 
  private final SomeDao<MyDto> someDao;
  ...

  // enrich() propagates any SQLException, so the caller must declare or handle it
  private void prepareMyDto(final MyDto myDto) throws SQLException {
    someDao.enrich(myDto, With.THIS, With.THAT);
  }