Tag Archives: Scala

Marathon Maven plugin from HolidayCheck

At HolidayCheck we are using both Docker and Apache Mesos with Mesosphere’s Marathon. For our development teams creating services with Maven, we therefore wanted to set up easy-to-use configurations bound to the standard Maven lifecycle phases.

Starting with Docker Maven plugin

Several existing plugins can wire a Java/Scala project for Docker. We chose Spotify’s Docker Maven plugin because it best suited our needs, was easy to integrate and allowed us to use Git commit hashes (in addition to standard artifact versions) in Docker image names for uniqueness.

Marathon Maven plugin usage example

You can have a look at the plugin on GitHub:
https://github.com/holidaycheck/marathon-maven-plugin
or start using it in your project right away:

<plugin>
     <groupId>com.holidaycheck</groupId>
     <artifactId>marathon-maven-plugin</artifactId>
     <version>0.0.1</version>
     <configuration>
         <image>${docker-image-prefix}/${project.build.finalName}:${project.version}</image>
         <marathonHost>${marathon.protocol}://${marathon.host}:${marathon.port}</marathonHost>
     </configuration>
     <executions>
         <execution>
             <id>processConfig</id>
             <phase>install</phase>
             <goals>
                 <goal>processConfig</goal>
             </goals>
         </execution>
         <execution>
             <id>deploy</id>
             <phase>deploy</phase>
             <goals>
                 <goal>deploy</goal>
             </goals>
         </execution>
     </executions>
</plugin>

To interact with Marathon the plugin uses the Marathon Java API client. In the processConfig goal the plugin takes a marathon.json file, located by default in the project’s root directory, which might look like this one:

{
  "id": "/my-service-1",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "docker-registry.your.org/my-team/my-service",
      "network": "BRIDGE",
      "portMappings": [
        {"containerPort": 7070},
        {"containerPort": 7071}
      ]
    }
  },
  "env": {
    "PATH_PREFIX": "/my-service",
    "_JAVA_OPTIONS": "-Xms64m -Xmx128m -XX:MaxPermSize=64m"
  },
  "instances": 1,
  "cpus": 0.5,
  "mem": 256,
  "healthChecks": [
    {
      "protocol": "HTTP",
      "portIndex": 0,
      "path": "/my-service/v1.0/healthcheck",
      "gracePeriodSeconds": 3,
      "intervalSeconds": 10,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 5
    }
  ]
}

The plugin replaces container/docker/image in this file with the value provided in the plugin configuration, evaluated with all used variables taken into account, e.g. docker-registry.your.org/my-team/my-service-1.0.
The result is put into the project’s target directory by default. It is then picked up by the deploy goal and submitted to Marathon’s API endpoint, with some minor handling of whether the app already exists in the cluster or not.

Docker and Marathon plugins join forces

As mentioned earlier, the Marathon plugin goes well with the Docker plugin, mostly because we can bind the two plugins together, hook them into the proper Maven lifecycle phases and (very important in our scenario) reuse the Git commit hash, determined earlier by the Docker plugin for the Docker image name, in the Marathon plugin’s configuration:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.2.6</version>
    <configuration>
        <imageName>${docker-image-prefix}/${project.build.finalName}:${project.version}-${gitShortCommitId}</imageName>
    </configuration>
    <executions>
        <execution>
            <id>build</id>
            <phase>install</phase>
            <goals>
                <goal>build</goal>
            </goals>
            <configuration>
                <maintainer>HolidayCheck</maintainer>
                <baseImage>${docker-registry}/lightweight/oracle-java-7-debian</baseImage>
                <cmd>java -jar /${project.name}.jar</cmd>
                <exposes>
                    <expose>7070</expose>
                </exposes>
                <resources>
                    <resource>
                        <targetPath>/</targetPath>
                        <directory>${project.build.directory}</directory>
                        <include>${project.build.finalName}.jar</include>
                    </resource>
                </resources>
            </configuration>
        </execution>
        <execution>
            <id>push</id>
            <phase>deploy</phase>
            <goals>
                <goal>push</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>com.holidaycheck</groupId>
    <artifactId>marathon-maven-plugin</artifactId>
    <version>0.0.1</version>
    <configuration>
        <image>${docker-image-prefix}/${project.build.finalName}:${project.version}-${gitShortCommitId}</image>
        <marathonHost>${marathon.protocol}://${marathon.host}:${marathon.port}</marathonHost>
    </configuration>
    <executions>
        <execution>
            <id>processConfig</id>
            <phase>install</phase>
            <goals>
                <goal>processConfig</goal>
            </goals>
        </execution>
        <execution>
            <id>deploy</id>
            <phase>deploy</phase>
            <goals>
                <goal>deploy</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Now simply type:

mvn clean deploy

and you should have it deployed and running on your target environment.

Have a good time using the Marathon Maven plugin! If you are willing to contribute, your pull requests are welcome.

An Easy Way to Measure Method Calls in a Java or Scala Application

The following describes how to measure certain method calls in a Scala application, even under production load.

To collect execution times for analysis, e.g. for locating performance problems in existing Java™ or Scala applications, you can use JETM, a library offering execution measurement. Its overhead is small compared to the Java™ Virtual Machine Profiling Interface (JVMPI) or the Java™ Virtual Machine Tool Interface (JVMTI) and the related profiler extensions, so the risk of slowing down the application in a production environment is also small.

Here we use the programmatic approach to performance monitoring, with an HttpConsoleServer and JMX support.

In the Maven pom.xml include the dependency

<dependency>
  <groupId>fm.void.jetm</groupId>
  <artifactId>jetm</artifactId>
</dependency>

for the core measurement functionality and

<dependency>
  <groupId>fm.void.jetm</groupId>
  <artifactId>jetm-optional</artifactId>
</dependency>

for the output in an HTTP console. (For version information see e.g. http://repo1.maven.org/maven2/fm/void/jetm/)

Within a singleton create a nested monitor (the "true" parameter) with the default ExecutionTimer and Aggregator:
BasicEtmConfigurator.configure(true)

Start an EtmMonitor with
val etmMonitor = EtmManager.getEtmMonitor
etmMonitor.start()

Start an HttpConsoleServer with
val server: HttpConsoleServer = new HttpConsoleServer(etmMonitor)
server.setListenPort(Config.JETMMonitoring.port)
server.start()

Config.JETMMonitoring.port: the port is made configurable using

com.typesafe.config.ConfigFactory

For further information see https://github.com/typesafehub/config.
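
A minimal sketch of what such a Config object could look like (the object layout and the configuration path jetm.monitoring.port are assumptions; only ConfigFactory and the port value come from the text above):

import com.typesafe.config.ConfigFactory

object Config {
  private val config = ConfigFactory.load()

  object JETMMonitoring {
    // reads e.g. "jetm.monitoring.port = 40001" from application.conf
    val port: Int = config.getInt("jetm.monitoring.port")
  }
}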

Register an MBean for JMX support:
val mbeanServer: MBeanServer = ManagementFactory.getPlatformMBeanServer

if (mbeanServer != null) {
  val objectName = new ObjectName("etm:service=PerformanceMonitor")
  // register the EtmMonitor using EtmMonitorMBean
  try {
    mbeanServer.registerMBean(new EtmMonitorMBean(etmMonitor, "com.holidaycheck.mpg"), objectName)
  } catch {
    case e: Exception => // e.g. log the failure and continue without JMX support
  }
}

Keep in mind that you have to take care of stopping the measuring yourself, e.g. in a shutdown hook.
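
A small sketch of how this could look, assuming the etmMonitor and server values from above are still in scope:

// stop the console and the measurement on JVM shutdown
sys.addShutdownHook {
  server.stop()
  etmMonitor.stop()
}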

Mix in the measure call via a trait, e.g. named JETM, that owns a reference to the monitor (private val monitor = EtmManager.getEtmMonitor()):

def measure[T](name: String)(op: => T): T = {
  if (!JETM.monitor.isCollecting()) return op

  val point = JETM.monitor.createPoint(jetmPrefix + name)
  try {
    op
  } finally {
    point.collect()
  }
}

(jetmPrefix is the canonical name of the class that mixes in the trait).
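
For reference, a minimal sketch of how the surrounding trait and its companion might be laid out; only the names JETM, monitor and jetmPrefix come from the description above, the rest is an assumption:

import etm.core.configuration.EtmManager

object JETM {
  // shared monitor reference used by measure(...)
  private val monitor = EtmManager.getEtmMonitor
}

trait JETM {
  // canonical name of the mixing-in class plus a "#" separator,
  // matching the measurement point names shown in the console output below
  private lazy val jetmPrefix = getClass.getCanonicalName + "#"

  // def measure[T](name: String)(op: => T): T = ... (as shown above)
}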

Within the class that contains the call to be measured, e.g. OfferMetaDataMap, use

class OfferMetaDataMap(...) extends ... with JETM {

  def aMethodCallToMeasure = {
    measure("Get") {
      /** basic method body */
    }
  }

}

“Get” is the label of the measured method. In the HTTP console this will show up like this:

|-----------------------------------------------------------------|---|---------|--------|--------|---------|
| Measurement Point                                               | # | Average | Min    | Max    | Total   |
|-----------------------------------------------------------------|---|---------|--------|--------|---------|
| com.holidaycheck.mpg.service.actors.cache.OfferMetaDataMap#Get  | 4 | 3.556   | 1.029  | 6.075  | 14.224  |

The measured data is accessible via JMX or via http://[application’s url]:[configuredPort]/index.

For further information see http://jetm.void.fm/doc.html; for instance, about persistent aggregation see http://jetm.void.fm/howto/aggregation_persistence.html.

Use case of Akka system’s event bus: Logging of unhandled messages

Akka is a toolkit for building concurrent applications on the JVM using the Actor model and relying on asynchronous message passing.

An actor sends a message to another actor, which handles the message in its receive method if the message type is matched there. Look at the Akka API and the Akka documentation for detailed information.
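
As a toy illustration (the actor and the messages are made up for this example), the following receive only matches Ping; anything else sent to the actor ends up unhandled:

import akka.actor.Actor

case object Ping
case object Pong

class EchoActor extends Actor {
  override def receive = {
    case Ping => sender() ! Pong // handled
    // any other message is not matched here and becomes an unhandled message
  }
}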

If the receiver has no matching case for the message type, the message cannot be handled, i.e. the message is not expected programmatically. Such an unhandled message is published as an UnhandledMessage(msg, sender, recipient) to the actor system’s event stream.
If the configuration parameter akka.actor.debug.unhandled = 'on' is set, it is converted into a Debug message. See: UntypedActor API, in: Akka Documentation v2.3.7, URL: http://doc.akka.io/docs/akka/2.3.7/java/untyped-actors.html (visited: 2014/11/24).

That’s fine with the configuration akka.loglevel = "DEBUG", but on “INFO” level there is no warning at all.

To log unhandled messages, and thus to know about such unexpected messages at all, you can subscribe an actor to the system’s event stream for the channel akka.actor.UnhandledMessage. This is done e.g. by
system.eventStream.subscribe(system.actorOf(Logger.props()), classOf[UnhandledMessage])

object Logger {
  def props() = Props(new Logger)

  val name = "UnhandledMessageLogger"
}

class Logger extends Actor with ActorLogging {

  /** logs on warn level the message and the original recipient (sender is deadLetters) */
  override def receive = {
    case UnhandledMessage(msg, _, recipient) =>
      log.warning(s"Unhandled: $msg to $recipient")
  }

}

This logger actor removes the dependency on akka.loglevel = “DEBUG”. In the example above the information about unhandled messages is logged via Akka’s built-in ActorLogging, but it can be logged to an application-specific logging component as well.
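
A sketch of such a variant (the class name and the use of SLF4J here are assumptions, not part of the original setup):

import akka.actor.{Actor, Props, UnhandledMessage}
import org.slf4j.LoggerFactory

object Slf4jUnhandledLogger {
  def props() = Props(new Slf4jUnhandledLogger)
}

class Slf4jUnhandledLogger extends Actor {
  // application-specific logger instead of Akka's ActorLogging
  private val log = LoggerFactory.getLogger(classOf[Slf4jUnhandledLogger])

  override def receive = {
    case UnhandledMessage(msg, _, recipient) =>
      log.warn(s"Unhandled: $msg to $recipient")
  }
}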

Testing REST-clients with “Jersey Test Framework” in specs2

In a “Microservice Architecture” it is likely that one service depends on another one. So you normally implement some kind of client in the service that connects to the other service. When writing unit tests you can then mock this client and make it behave as you expect. But how do you test the client itself? For that you need to mock the other service and make it respond as you expect.

For writing tests against a Jersey server there is the “Jersey Test Framework”. There you normally use the Jersey application as the system under test (SUT). But you can easily use this framework to run the application as a mocked service and test the client against it.

We are mainly writing our code in Scala and therefore writing our tests in specs2. That’s the reason why I wanted to find a way to use JerseyTest easily in my specs2 tests. Inspired by the project specs2-embedmongo I created a FragmentsBuilder that injects an embedded Jersey service into a spec.

trait EmbedService extends FragmentsBuilder {
  self: SpecificationLike =>

  //Override to configure the application
  def configure(): Application

  // Create a JAX-RS web target whose URI refers to the Jersey application
  def target(): WebTarget = {
    client.target(tc.getBaseUri)
  }

  private lazy val tc = {
    val baseUri = UriBuilder.fromUri("http://localhost/").port(8080).build()
    val context = DeploymentContext.builder(configure()).build
    val tcf = new GrizzlyTestContainerFactory
    tcf.create(baseUri, context)
  }

  private lazy val client = {
    ClientBuilder.newClient()
  }

  override def map(fs: => Fragments) = startService ^ fs ^ stopService

  private def startService() = {
    Step({ tc.start() })
  }

  private def stopService() = {
    Step({ tc.stop() })
  }

}

To use it you only have to mix in this trait and configure your expected behavior as a Jersey application.

class EmbedServiceSpec extends Specification with EmbedService {

  override def configure(): Application = {
    new ResourceConfig(classOf[HelloResource])
  }

  "Embed service" should {
    "be able to return 'Hello World!' on a GET request" in {
      val hello = target().path("hello").request().get(classOf[String])
      hello must be equalTo("Hello World!")
    }
  }

}

@Path("hello")
class HelloResource {
  @GET
  def getHello() = "Hello World!"
}

The last thing you have to do is inject the WebTarget into the client you want to test.

class HelloResourceClientSpec extends Specification with EmbedService {

  override def configure(): Application = {
    new ResourceConfig(classOf[HelloResource])
  }

  val sut = new HelloResourceClient(target())

  "HelloResourceClient" should {
    "return 'Hello World!' on getData" in {
      val hello = sut.getData()
      hello must be equalTo("Hello World!")
    }
  }

}

class HelloResourceClient(target: WebTarget) {
  def getData() = {
    target.path("hello").request().get(classOf[String])
  }
}

That’s it. Have fun writing clients from now on!

HolidayCheck’s Journey with Scala and Akka

I was asked by Heiko Seeberger to provide some more information about our work at HolidayCheck using Typesafe’s technologies Scala and Akka. As you may know, we came from a classical LAMP (MySQL, PHP) development stack and turned our company into one that uses Java/Scala on the backend and CoffeeScript/Node.js on the frontend. All this within 10 months (taking into account that the first class of pages went live at the end of 2012). Not that bad…

So here is the Typesafe blogpost (or directly to the case study).