HolidayCheck craftsmen at SoCraTes Tenerife

Last week our developers Tobias Pflug, Alexander Schmidt, Robert Jacob, Robert Mißbach and Roberto Bez went to the SoCraTes Conference on Tenerife. Admittedly that is an unusual place for a conference, but that is exactly what makes the difference. Being somewhere isolated, like in a lonely hotel near the beach, gave us a very relaxed and comfortable feeling while talking with other great people about software craftsmanship and a lot of other interesting topics.

The Venue: Tenerife

After a very windy landing on Tenerife, we went to our venue, the lovely Hotel Aguamarina Golf, located not too far from the airport. It gave us enough space for all the sessions we needed, for example next to the pool or on the terrace.

The Hotel Aguamarina Golf

What is the craftsmanship conference about?

SoCraTes Canaries is in fact a craftsmanship retreat for open-minded craftspeople who strive to improve their craft and the software industry as a whole, organised by the local Software Craftsmanship community in the Canary Islands.


It is a totally community-focused Open Space, without a predefined agenda and without any speakers known before the event starts. Session proposals are presented each morning during the event itself:

Session planning in the morning

After the proposals were presented, we had five different locations to choose from: classic rooms with projectors, but also spots near the pool or on the terrace, with a great view of the Atlantic.


Discussion about costs of CI and CD


Talks & Discussions

Just to name some topics we discussed:

Jan-Simon Wurst proposed a discussion about Git Flow vs. trunk-based development, trying to find out how other developer teams work and which might be the better solution. But as always – there is no single perfect way, but a lot of right ones!

Robert Jacob showed us HolidayCheck's continuous deployment pipeline with Mesos and Docker, facing a lot of interesting questions about this much-hyped topic.

Tobias Pflug gave us a deeper insight into his Vim skills, presenting some of his favorite plugins. Very cool stuff!

Roberto Bez initiated a discussion about distributed teams, sharing his own experience and showing a lot of interest in how other companies currently deal with non-co-located teams. For some it might work, for others it does not – but one outcome was clear: it is not always easy!

After two days full of interesting discussions, there was also time in the evenings to enjoy a beer together.

To sum up, we went back home with a lot of new motivation, which we can hopefully use to become better craftsmen in our daily work!

A special thanks to Carlos Blé and all the other organizers for the great conference. We are already looking forward to visiting you again in 2016!

How we do Agile Intro Workshops at HolidayCheck


Why do we do Agile Intro Workshops

When you work in a web-driven company with more than 200 employees in one location – plus dozens more in distributed offices – the need for a common basic understanding of agile product development is obvious.
Our product development consists of roughly eight teams, all contributing to different parts of the web platform plus the native mobile apps.
So communication and alignment go far beyond the dedicated product development teams – basically every department has a smaller or bigger stake in the agile teams.
That leads to the need for a basic common understanding of Agile and how we live it at HolidayCheck.

For whom do we do them

The target groups are always thoroughly mixed and come from different departments – this way we ensure that people come together and form a completely new virtual team during the workshop. Afterwards they are released back to their native teams and their well-known area, able to bring in and spread the learnings immediately.

How do we do them

We set a timebox of 90 minutes and have a theoretical part and a practical part – a simulated sprint with the goal of building something that is ready to use within a few minutes.

The theoretical part covers basic knowledge about Agile, Scrum and Kanban:

  • The need for Agile product development
  • The roles and artifacts
  • The specific team constellation here at HolidayCheck
  • Agile Tools we use at HolidayCheck

And now comes the fun part! We use LEGO bricks to simulate a sprint with the goal to build a house, a garden and a car.
After about 45 minutes the audience is asked to form one or two agile teams themselves – they are given roles they need to fulfill as best they can, even if their real role is completely different (believe me, software developers always want to build stuff instead of just telling the team where to head).
Roles we use are Product Owner, Developers, UX Designer, Test Engineer.


The Scrum Master role is filled by us, the moderators. If the teams are rather small, with around four people, we only use the Product Owner and Developer roles to keep things simple and moving forward.
We use small time boxes to simulate

  • A sprint planning session (2mins)
  • The sprint itself (5mins)
  • A sprint review (2mins)
  • A sprint retrospective (1min)

The teams apply the given info instantly and also try to prioritise and deal with the limited time.
This gives them a feeling of how real teams have to focus and organise themselves.
Many people think that building a house from LEGO bricks with a whole team is super easy in five minutes – trust me: experience shows that many struggle to finish in time, while others do really well.


What do you think? Please comment and tell us your experiences!

An Easy Way to Measure Method Calls in a Java or Scala Application

The following describes how to measure certain method calls in a Scala application, even under production load.

To collect execution times for analysis – e.g. for locating performance problems in existing Java™ or Scala applications – you can use JETM, a library offering execution measurement. Its overhead is small compared to the Java™ Virtual Machine Profiling Interface (JVMPI) or the Java™ Virtual Machine Tool Interface (JVMTI) and the related profiler extensions, so the risk of slowing down the application in a production environment is also small.

We use the programmatic approach of performance monitoring, with an HttpConsoleServer and JMX support.

In the Maven pom.xml include the JETM core dependency for the basic measurement functionality, plus the JETM optional dependency for the output in an HttpConsole. (For version information see the JETM documentation.)

Within a singleton, create a nested monitor (the ”true” parameter) with the default ExecutionTimer and Aggregator. Then obtain the EtmMonitor with

val etmMonitor = EtmManager.getEtmMonitor

and start an HttpConsoleServer with

val server: HttpConsoleServer = new HttpConsoleServer(etmMonitor)

The console port is configurable – in our case via Config.JETMMonitoring.port. See the JETM documentation for further information.

Register an MBean for JMX support:

val mbeanServer: MBeanServer = ManagementFactory.getPlatformMBeanServer

if (mbeanServer != null) {

  val objectName = new ObjectName("etm:service=PerformanceMonitor")
  // register the EtmMonitor using EtmMonitorMBean
  try {
    mbeanServer.registerMBean(new EtmMonitorMBean(etmMonitor, "com.holidaycheck.mpg"), objectName)
  } catch {
    case e: JMException =>
      // registration failed – log it and continue without JMX support
  }
}

Keep in mind that you have to take care of stopping the measurement, e.g. in a shutdown hook.
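One way to wire that up is a JVM shutdown hook. The sketch below is self-contained; the comment marks where the real teardown calls (stopping the EtmMonitor and the HTTP console) would go:

```scala
import java.util.concurrent.atomic.AtomicBoolean

object MonitoringLifecycle {
  private val stopped = new AtomicBoolean(false)

  // Idempotent teardown – in the real application this is where the
  // EtmMonitor and the HttpConsoleServer would be stopped.
  def stopMonitoring(): Unit =
    if (stopped.compareAndSet(false, true)) {
      // etmMonitor.stop(); console server teardown ...
    }

  def isStopped: Boolean = stopped.get()

  // Run the teardown once when the JVM exits.
  sys.addShutdownHook(stopMonitoring())
}
```

The AtomicBoolean guard makes it safe to call the teardown both explicitly and from the hook.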

Mix in the measure call via a trait, e.g. named JETM, that owns a reference to the monitor ( private val monitor = EtmManager.getEtmMonitor() ):

def measure[T](name: String)(op: => T): T = {
  if (!JETM.monitor.isCollecting()) return op

  val point = JETM.monitor.createPoint(jetmPrefix + name)
  try {
    op
  } finally {
    point.collect()
  }
}
(jetmPrefix is the canonical name of the class that mixes in the trait).
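Stripped of the JETM specifics, measure is just a call-by-name wrapper that times a block and records the duration under a name. A self-contained sketch (the Timings object is a stand-in for the real monitor):

```scala
import scala.collection.mutable

// Stand-in for the JETM monitor: collects (name, elapsed-nanoseconds) pairs.
object Timings {
  val recorded = mutable.ListBuffer.empty[(String, Long)]
}

// Times the call-by-name block `op`, records the duration under `name`
// and returns op's result unchanged – the finally block runs even if op throws.
def measure[T](name: String)(op: => T): T = {
  val start = System.nanoTime()
  try op
  finally Timings.recorded += ((name, System.nanoTime() - start))
}

val answer = measure("Get") { 21 * 2 }
```

Because `op` is call-by-name, the block is only evaluated inside the try, so exceptions still get their time recorded before propagating.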

Within the class that contains the call to be measured – e.g. OfferMetaDataMap – use:

class OfferMetaDataMap(...) extends ... with JETM {

  def aMethodCallToMeasure = {
    measure("Get") {
      /** basic method body */
    }
  }
}


“Get” is the name of the measured method. In the HttpConsole it will appear like this:

| Measurement Point | # | Average | Min | Max | Total |
| com.holidaycheck.mpg.service.actors.cache.OfferMetaDataMap#Get | 4 | 3.556 | 1.029 | 6.075 | 14.224 |

The measured data is accessible via JMX or via http://[application's url]:[configuredPort]/index.


For further information – for instance about persistent aggregation – see the JETM documentation.

Continuous Improvement

Continuous Improvement Culture at HolidayCheck

In this blog post we will explain how HolidayCheck is creating a continuous improvement culture across our IT and Product Development departments.

A lot of companies talk about this topic, but few of them actually share it with the rest of the world. This is where HolidayCheck wants to be different. We want to share our experiences with all of you in order to help you improve your own Agile implementation. So let's start from the beginning.

Some months ago I had the opportunity to be a beta reader of a fantastic book called “Lean Change Management”, written by Jason Little. This book is about agile change management and how we can create change in our companies in a very effective way. Highly recommended.

At HolidayCheck we are trying out one of Jason's tools, called the Experiment Board. Jason's explanation for the term is simple: he feels the word “change” can be quite disruptive, so he calls it an “experiment”.

Another reason for this name is that by calling changes experiments, you adopt an approach that accepts that you cannot know everything upfront – which is what usually happens in our companies.

The first step in using this tool is to create a hypothesis – after all, experiments start with a hypothesis. The hypothesis is an idea of something that we want to improve at HolidayCheck.

To help with this task (Hypothesis creation) Jason created a template:

We hypothesize by <implementing this change>

We will <solve this problem>

Which will have <these benefits>

as measured by <this measurement>

So mapping it now to HolidayCheck we can have something like this:

We hypothesize by “fixing the team server instability”

We will “never fail a sprint because of team server problems”

Which will “allow us to release new features every sprint”

As measured by “the number of successful releases over the next 10 sprints”

There are several ways to successfully tackle a hypothesis. These “ways” are called “options”. The next step is to brainstorm several possible options that could solve the problem stated in our hypothesis.

Each option has a value and a cost associated with it, so the trick is to select the options with the biggest value and the smallest required effort/cost.
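The selection rule can be sketched in a few lines – the option names and numbers below are made up purely for illustration:

```scala
final case class ImprovementOption(name: String, value: Int, effort: Int)

// Rank options by value-to-effort ratio, best first.
def prioritise(options: List[ImprovementOption]): List[ImprovementOption] =
  options.sortBy(o => -o.value.toDouble / o.effort)

val ranked = prioritise(List(
  ImprovementOption("Add a staging server", value = 8, effort = 5),
  ImprovementOption("Automate the deployment", value = 9, effort = 3),
  ImprovementOption("Rewrite the build scripts", value = 4, effort = 8)
))
```

A high-value, low-effort option ends up at the head of the list, which is the one to try first.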

The selected options are the ones that we will implement in the near future. At HolidayCheck we define weekly options, which allows us to get fast feedback about what we are doing.

With the pre-selected options at hand we can then choose which options we want to implement during the next week.

When we are done with an option we review what was achieved and check whether it caused the desired outcome. If so, we are done and our experiment succeeded. If not, we can create a new option based on what we learnt.

In order to implement this process, we created a Kanban board to track all the changes that we are trying out in our company. Every week we come together and discuss with each other what we learnt from these experiments and how they are actually improving our company.

Continuous Improvement

At the moment we are trying this approach with the Scrum Masters, but the plan is to expand it to the rest of the organization. I believe this is a fantastic way to improve companies.

Every Scrum Master has complete freedom to create hypotheses, generate different options and drive the whole improvement experiment.

My vision as Agile Coach is to create an environment where these boards are spread in different parts of the company and everyone in the whole company can pick up different topics to improve.

Can you imagine a company that provides a framework for every single employee to implement daily improvements allowing him or her to create a culture of continuous improvement?

I can, this company is called HolidayCheck.

I am an Agile Coach at HolidayCheck, and I am preparing something great for you: an “Agile Retrospectives Program” that will help you get better and better at your continuous improvement efforts.


Use case of Akka system’s event bus: Logging of unhandled messages


Akka is a toolkit for building concurrent applications on the JVM using the Actor model and relying on asynchronous message passing.

An actor sends a message to another actor, which handles the message in its receive method in case the message type is matched. See the Akka API and Akka documentation for detailed information.

If the receiver has no matching case for the message type, the message cannot be handled, i.e. the message is programmatically not expected. Such an unhandled message is published as an UnhandledMessage(msg, sender, recipient) to the actor system’s event stream.
If the configuration parameter akka.actor.debug.unhandled is set to 'on', it is converted into a Debug log message. Confer: UntypedActor API, in: Akka Documentation v2.3.7 (visited: 2014/11/24).

That’s fine with the configuration akka.loglevel = "DEBUG", but on “INFO” level there is no warning.
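For reference, the two settings mentioned above go into application.conf like this:

```
akka {
  # unhandled messages are only visible on DEBUG level
  loglevel = "DEBUG"
  # publish UnhandledMessage events as Debug log messages
  actor.debug.unhandled = on
}
```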

To log unhandled messages – and that means to even know about such unexpected messages at all – you can subscribe an actor to the system’s event stream for the channel classOf[UnhandledMessage]. This is done e.g. by
system.eventStream.subscribe(system.actorOf(Logger.props()), classOf[UnhandledMessage])


object Logger {
  def props() = Props(new Logger)

  val name = "UnhandledMessageLogger"
}

class Logger extends Actor with ActorLogging {

  /** logs the message and the original recipient on warn level (the sender is deadLetters) */
  override def receive = {
    case UnhandledMessage(msg, _, recipient) =>
      log.warning(s"Unhandled: $msg to $recipient")
  }
}

This logger actor removes the dependency on akka.loglevel = “DEBUG”. In the example above the information about unhandled messages is logged via Akka’s built-in ActorLogging, but it can be logged to an application-specific logging component as well.
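To make the subscribe/publish mechanics concrete without pulling in Akka, here is a plain-Scala analogue of an event stream with per-class channels – a deliberate simplification of what system.eventStream does, not Akka's actual implementation:

```scala
import scala.collection.mutable

final case class UnhandledMessage(msg: Any, sender: String, recipient: String)

// Simplified event stream: handlers subscribe per event class,
// publish delivers an event to every handler registered for its class.
class EventStream {
  private val subscribers = mutable.Map.empty[Class[_], mutable.ListBuffer[Any => Unit]]

  def subscribe(channel: Class[_])(handler: Any => Unit): Unit =
    subscribers.getOrElseUpdate(channel, mutable.ListBuffer.empty) += handler

  def publish(event: Any): Unit =
    subscribers.getOrElse(event.getClass, Nil).foreach(_(event))
}

val stream = new EventStream
val logged = mutable.ListBuffer.empty[String]

// Analogous to: system.eventStream.subscribe(loggerRef, classOf[UnhandledMessage])
stream.subscribe(classOf[UnhandledMessage]) {
  case UnhandledMessage(msg, _, recipient) => logged += s"Unhandled: $msg to $recipient"
  case _                                   => ()
}

stream.publish(UnhandledMessage("ping", "deadLetters", "worker"))
```

Events of classes nobody subscribed to are simply dropped, which mirrors how an unsubscribed channel on the real event stream stays silent.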

How to find a vision for your team


What is a team vision

A vision is a long term goal – a declaration of the team’s future.

During one of the last retrospectives in our team I was quite excited about the possibility to form a team vision.

Basically a well formulated sentence which would then be hung up in our team room – visible for all internals and externals to drop by.

Why is it important

The team vision should serve as common ground for our day-to-day work, help with the decisions we have to make, and also differentiate us from other teams with respect to what they do and the paths they follow.

So much for my optimistic theory.

How does it work

The method I tried to apply to form a team vision was as follows: at the beginning, the whole team was asked to answer the question:

Why is a team vision important for us?

That should establish some basic common ideas and get everyone up to speed for the next part.
It worked pretty well: the team rapidly came up with statements like ‘Motivation’, ‘Focus’, ‘Group Identity’, ‘Solidarity’, ‘Code of Behaviour’.

In the next part, the team was asked to form a team vision statement – a statement in which they believe and which they support, and which in turn represents them.
It should follow this pattern:

For (target organization)
Who (statement of the need or opportunity)
The (team name) is a (team classification, category)
That (team singularity, compelling reason for the team existence).

Unlike (current alternative without the team)
Our team (statement of primary differentiation).


This turned out to be much harder than expected, and I will tell you why:

It was not that the team did not believe in their strengths and abilities – it was much more the fact that it was nearly impossible to look at the company’s vision and extract a meaningful vision statement for the team. That blocked everyone from coming up with something round and smooth.

The team was forced to come up with something to fit this predefined pattern – a template that was just not the right one to apply.
Before things got more and more complicated and artificial, we decided not to make up something that is not suitable for the team and – even worse in the end – makes them uncomfortable when talking about it.

As you can imagine, this felt quite like a failed approach for me as Scrum Master of the team. However, having let this experience sink in for a while, it was the correct consequence chosen by the team.

What if something had been published that would serve as a burden rather than something motivational for everyone?
The team gets along as a team quite well – each one supports the others when stuck, we have some rituals and some truly common ground we base our work on.
The vision remains unspoken for now – still it is felt day to day when working in the team. Maybe it needs just some more time before a vision itself is “ready to release” – and therefore visual for everyone.

Why did it fail

What I learned from this experience is that trying to push something like a team vision (statement) is not easy at all.
The reason behind it needs to be explained very well and in detail first. And even then you have to hit the critical slot for the right timing.
This means that – instead of pushing something ad hoc – it may emerge naturally during a coffee talk or a team event at just the right point in time, and then feel right.
When the time is right. Stay patient. Inspect and adapt 😉

Did you like the post? Send me feedback. I would love to hear from you!

The Theory of Responsive Websites

Responsive Webdesign

Here at HolidayCheck responsive web design is a big challenge, although we’re all big fans.

I recently joined the company as a Frontend Developer and I have been a big fan of responsive web design for years. The latest article I wrote about the subject is called the Theory of Responsive Webdesign. (Currently only in German, sorry)

Here you have a small teaser…

There is a big difference between responsive websites and good responsive websites, and of course it takes much more than a few CSS improvements. Principles like mobile first are pretty hyped these days, but the challenge is not only a technical one.

Consider losing 80% of the available space of a desktop screen and trying to fit everything important into the mobile version. Designers have to rethink their concepts to provide a good user experience on small devices. Then, step by step, details can be added (progressive enhancement). One of the main benefits of this principle is that adding details is much easier than removing them from an already bloated site.

You don’t have to develop everything yourself, because there are a lot of frameworks out there – ranging from small ones that only provide a set of CSS classes for a responsive grid, to all-inclusive frameworks that provide a full set of components like buttons, tables and forms.

One of the biggest and most discussed topics in RWD is performance. As there is only one website, even the smallest device has to load the entire HTML code. Images can also become an annoying mess, because loading high-resolution images on a slow device can kill the loading time. There are some approaches, such as partial loading, to resolve these problems, but as almost everywhere, there is a lot of room for improvement.

You can read this and much more about responsive design in the full article on heise. In the next few weeks I am going to write about the practical part of RWD – stay tuned!

Testing REST-clients with “Jersey Test Framework” in specs2

In a “Microservice Architecture” there will likely be the case that one service depends on another. So you normally implement some kind of client in the service that connects to the other service. When writing unit tests you can mock this client and make it behave as you expect. But how do you test the client itself? For that you need to mock the other service and make it respond as you expect.

For writing tests against a Jersey server there is the “Jersey Test Framework”. Normally you use the Jersey application as the system under test (SUT), but you can easily use this framework to run the application as a mocked service and test the client against it.

We mainly write our code in Scala and therefore write our tests in specs2. That's the reason why I wanted to find a way to use JerseyTest easily in my specs2 tests. Inspired by the project specs2-embedmongo, I created a FragmentsBuilder that injects an embedded Jersey service into a spec.

trait EmbedService extends FragmentsBuilder {
  self: SpecificationLike =>

  // Override to configure the application
  def configure(): Application

  // Create a JAX-RS web target whose URI refers to the Jersey application
  def target(): WebTarget = client.target(tc.getBaseUri)

  private lazy val tc = {
    val baseUri = UriBuilder.fromUri("http://localhost/").port(8080).build()
    val context = DeploymentContext.builder(configure()).build
    val tcf = new GrizzlyTestContainerFactory
    tcf.create(baseUri, context)
  }

  private lazy val client = ClientBuilder.newClient()

  override def map(fs: => Fragments) = startService ^ fs ^ stopService

  private def startService() = Step(tc.start())

  private def stopService() = Step(tc.stop())
}

To use it, you only have to mix in this trait and configure your expected behavior as a Jersey application.

class EmbedServiceSpec extends Specification with EmbedService {

  override def configure(): Application =
    new ResourceConfig(classOf[HelloResource])

  "Embed service" should {
    "be able to return 'Hello World!' on a GET request" in {
      val hello = target().path("hello").request().get(classOf[String])
      hello must be equalTo("Hello World!")
    }
  }
}

@Path("hello")
class HelloResource {
  @GET
  def getHello() = "Hello World!"
}

The last thing you have to do is inject the WebTarget into the client you want to test.

class HelloResourceClientSpec extends Specification with EmbedService {

  override def configure(): Application =
    new ResourceConfig(classOf[HelloResource])

  val sut = new HelloResourceClient(target())

  "HelloResourceClient" should {
    "return 'Hello World!' on getData" in {
      val hello = sut.getData()
      hello must be equalTo("Hello World!")
    }
  }
}

class HelloResourceClient(target: WebTarget) {
  def getData() = target.path("hello").request().get(classOf[String])
}
That's it. Have fun writing clients from now on!

Profiling MongoDB with Logstash and Kibana

We’re using MongoDB as storage for our web pages. Because many different background processes use this database, it is difficult to find the reason for high CPU or IO load on the server. MongoDB has some built-in tools to investigate the current behavior (like mongotop, mongostat or db.currentOp()), but we missed a single easy-to-use tool that can be used by every developer and shows statistics over a specific period.
