
Keeping C.A.L.M.S. at HolidayCheck

In the last article, “Adapting DevOps culture with C.A.L.M.S.”, I described the C.A.L.M.S. model and showed its importance and usefulness for a proper adoption of DevOps culture.

At HolidayCheck we believe in DevOps culture and try to follow it on a daily basis. As a DevOps Engineer, I’m a member of developer teams and support them with infrastructure and system engineering. Working with several different scrum teams in this company, I have noticed that some adopt DevOps culture better than others. The most recent team I had a chance to work with did it surprisingly well, and I would like to share my thoughts about the last six months of working with them.

Six months ago I was assigned to a newly formed squad with one focus: to integrate an external user-handling solution with our platform. At first, it sounds simple. However, if you have three different platforms written in different tech stacks – some in the middle of a migration, some considered legacy, lacking documentation or people who know how they work – it can be more difficult than one might think. Apart from writing our own services, we were also meant to work with code owned by other teams. That meant sending pull requests and asking people to review and accept them. It’s also worth mentioning that there had already been one attempt to rewrite the user-handling modules. It was very painful and was never finished, which made user handling the least pleasant area to develop in the IT department.
On top of that, the new team was assembled from people who hadn’t worked with each other before, taken from other teams and from an external outsourcing company. It all made me a bit skeptical about this project.

Surprisingly, I noticed that every team member independently brought elements of DevOps culture to the team. People had a strong sense of ownership and a willingness to make a change despite user handling’s known reputation. Every sprint they focused on minimizing work in progress to deliver as much as possible, even if that meant their own tasks were not delivered. But what I liked most was the lack of fear of stepping out of one’s comfort zone and doing things one was not specialized in.

The code we started to develop was kept as simple as possible, allowing people to take over development in case of someone else’s unexpected absence. Automation was also kept lean. We chose Jenkins 2.x as our build/deployment server, set up a hook on the whole GitHub repository and agreed that every repo and every branch would have a Jenkinsfile describing a pipeline. Although I was the one to set up the initial tool, the whole process of building and improving the pipeline was quickly taken over by all team members, who adjusted it to their needs while I provided support when needed. Demands, expectations, and pushiness were replaced by pairing, contributing, and supporting each other to have things ready sooner.

We remembered to stay lean. With a continuous delivery pipeline, every change merged to the release branch was immediately deployed to production. We also focused on the absolute MVP so we could go live and handle at least some test traffic, which gave us very important feedback about possible improvements. Pull requests to other teams’ repositories were prepared and posted in advance. As a result, our changes to other teams’ code were already deployed to production, not interfering with current functionality and waiting for the moment to take over the user-handling flow.
As we were a pretty small team – three developers, a product owner and a devop – we tried to keep meetings brief. People didn’t need reminding to prepare for refinements or plannings, so the time needed to work out an agreement was also short.

Before going live we started defining metrics and aggregating logs in one place. We had to put in some extra effort to automatically pull logs from the provider, but that paid off with detailed user monitoring and the ability to cross-check error logs with events delivered by a third-party site. As a result, we got multiple dashboards and log filters analyzing almost every aspect of the running application: from pure system metrics like resource consumption, latency, and uptime, to detailed information about user behavior, with the ability to trace errors back a few requests to better understand the context. After exposing the new login to live traffic, every 5xx error was immediately alerted on the team’s Slack channel and, thanks to the gathered links and dashboards, we could identify a root cause within a few minutes.

I saved one more surprise for the end. Although the company policy was to have co-located teams, due to a shortage of personnel our team was partly distributed. Apart from me, all team members were sitting in the same room in Munich, DE, while I was working from the office in Poznań, PL. Due to other responsibilities I also could not allocate more than 60% of my time to this team.
Our internal communication, sharing opinions and ideas, was so good that most of the time I didn’t feel excluded at all. To be honest, working with them, even as a remote devop, was more enjoyable to me than working with some other teams co-located in one room.

Now that our goal is achieved and I am switching teams once again, I have decided to look back at the last 12 sprints and try to learn from them. And of course to share with you.

Was it really that candy-sweet all the time? Of course not – we had our problems, starting with me not being 100% in the team. I regret that I could not get more involved in coding. Sometimes, especially in the middle of the project, our meetings were too long and seemed pointless to me. Our CI pipeline crashed several times, blocking the whole development process and causing a lot of tension. We depended on other teams which sometimes weren’t willing to help us because it was not compliant with their OKRs. It all happened more often than I’d like, but I’m happy that we worked it out together and fixed things instead of pointing fingers.

What I personally learned from it was that:

  • the proper mindset is an absolute foundation for good DevOps culture
  • having a smaller team of engineers inclined to be full-stack means it’s better at self-managing and does not suffer in case of someone being suddenly absent
  • automation should be lean and constantly improved. Don’t put too much overhead on it at the beginning.
  • we should treat our applications as our own piece of the production cake, equip them with a number of useful metrics and extract knowledge from them
  • ideas for improving the technological process should not be turned down by product people, as they lead to greater delivery speed at the end of the day

I hope that I can take this knowledge and use it in the new project I’m about to join.

Adapting DevOps culture with C.A.L.M.S.

DevOps is still quite a buzzword. There are already plenty of articles describing what it is and what it isn’t. I think we can agree that it’s a culture, a way of working. I’m also sure that most of us have a general impression of what it should look like: development and operations working together, breaking down silos, delivering faster, automating, etc. All of these are important and true, but still only a partial description. So I started looking for a more complete one and found a very interesting model describing the culture. It’s C.A.L.M.S.

C.A.L.M.S. is an acronym for five major points describing a DevOps culture. Let’s have a quick look at them:

C – Culture
This is something you cannot implement. First, you should start with people having a proper mindset and it should concern ALL team members. Everyone should be focused on a common goal and help others achieve it whether it’s within your specialization area or not. Stepping out of your comfort zone and leaning towards becoming a full-stack engineer is encouraged.

A – Automation
We want to do as little boring stuff as possible. Therefore everything that can be automated should be automated. That means not only writing scripts for testing and deployment but also adopting the idea of programmable infrastructure, with everything written down, versioned, and automatically managed.

L – Lean
Automating everything can be a pitfall that overcomplicates the infrastructure. Therefore engineers should focus on keeping everything minimal, yet useful. That doesn’t concern only automation – code deployments to the production environment should be small and frequent, and the applications being developed simple and easy to understand. It also applies to team size: larger teams find it more difficult to agree on something.

M – Measurement
Frequent releases give great flexibility but also can put the production environment in danger. That’s why a developed application should be equipped with useful metrics and monitored in real time. In case of problems the team can be notified quickly and is able to develop a fix. Teams can also monitor how new features influence user behavior.

S – Sharing
Sharing is essential for improving the communication flow and making people work together. Therefore it’s important to share ideas, experiences, thoughts: inside the team, among teams, and even outside the company.

What I like most about this model is how these points interact with each other. Automation should always be lean and robust. Providing an automated CI/CD pipeline helps teams to stay lean. While setting up monitoring it’s better to choose only valuable metrics and set up handy dashboards and alerts. The metrics can be shared among teams to set up a more complex application analysis tool that would automatically provide some wider context into the data we collect, which can be automatically analyzed and trigger lean changes in features …
The foundation for all these things is Culture. In my opinion that’s the most difficult point of all five. Without it, the other four points are just minor improvements to everyday work.

If you liked this article and would like to read about how this model applies to the team I used to work with, please let me know by leaving a comment.

How we stopped worrying and learned to use Kotlin in our Android App

Kotlin is a relatively new programming language that is becoming more and more popular among Android developers. Its creators, JetBrains – also responsible for the IntelliJ platform that Android Studio is built on (and other great IDEs) – officially define it as a “statically typed programming language for the JVM, Android and the browser”.

Kotlin at HolidayCheck

After the release of Kotlin in a stable version, the Android team at HolidayCheck decided to finally give it a go. We implemented a significant feature of the Android app (the booking module) over the course of three months, learned from our mistakes and enjoyed every single moment of the Kotlin experiment. We can safely say it went well and we’re not looking back.

Here are the top 5 reasons why we recommend everyone try it in their Android apps.

NullPointerException hell is gone

This might not be the most important reason, but it resonates so strongly with every Java developer that it had to be put first. Kotlin is type-safe and null-safe. If you use it correctly, NPEs will not happen – it is guaranteed by the language. References that might contain a null value must be explicitly marked as nullable; everything else must have a value, and the compiler makes sure of it. This is built into the type system, no Optional<T> needed.
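
For instance, here is a minimal sketch of how nullable types behave (the `safeLength` helper is made up for illustration):

```kotlin
// A '?' in the type marks a reference as nullable; everything else
// is guaranteed non-null by the compiler.
fun safeLength(s: String?): Int = s?.length ?: 0  // safe call + Elvis default

fun main() {
    val sure: String = "HolidayCheck"   // can never hold null
    var maybe: String? = null           // may hold null
    // sure.length compiles; maybe.length alone would be a compile error.
    println(safeLength(sure))           // 12
    println(safeLength(maybe))          // 0
    maybe = "HC"
    println(safeLength(maybe))          // 2
}
```

Trying to pass a `String?` where a `String` is expected is rejected at compile time, which is exactly where the NPE hell ends.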

Lambda expressions & collections

Lambda expressions were one of the most important additions to Java 8 – now that it’s finally available for Android they’re not such a game changer, but still – Kotlin adds its own lambda support without the need for Java 8 or external libraries for Android.

Lambdas alone can greatly reduce boilerplate code, but their use with collections shows their real power in expressiveness and conciseness. Simple mapping and filtering of collections is a touch of functional programming that every modern application needs.
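
A small sketch of what that looks like (the `Hotel` class and ratings are invented for the example):

```kotlin
// A made-up domain class for the example.
data class Hotel(val name: String, val rating: Double)

// filter/sortedByDescending/map each take a lambda; 'it' is the implicit parameter.
fun topHotelNames(hotels: List<Hotel>, minRating: Double): List<String> =
    hotels.filter { it.rating >= minRating }   // keep only well-rated hotels
          .sortedByDescending { it.rating }    // best first
          .map { it.name }                     // project to names

fun main() {
    val hotels = listOf(
        Hotel("Seaside Inn", 4.5),
        Hotel("Airport Motel", 2.1),
        Hotel("Grand Plaza", 4.8)
    )
    println(topHotelNames(hotels, 4.0))  // [Grand Plaza, Seaside Inn]
}
```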

Simplified and more powerful syntax

Operator overloading and extension functions greatly improve the expressiveness of the code – no static helper classes are needed for simple calculations performed on your custom objects. Setting text on your TextView, or hiding it when the text is empty, becomes as simple as writing one function shared among all TextViews, instead of polluting view code with logic and if statements.
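
A TextView extension needs an Android project, but the same two features can be sketched on plain types (both the extension function and the `Price` class below are hypothetical examples, not from our app):

```kotlin
// Extension function: adds a method to String without a static helper class.
fun String.orPlaceholder(placeholder: String = "n/a"): String =
    if (isBlank()) placeholder else this

// Operator overloading: '+' defined for a small custom value class.
data class Price(val cents: Int) {
    operator fun plus(other: Price) = Price(cents + other.cents)
}

fun main() {
    println("".orPlaceholder())        // n/a
    println("Kotlin".orPlaceholder())  // Kotlin
    println(Price(150) + Price(250))   // Price(cents=400)
}
```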

As a matter of fact, if in Kotlin is an expression, so it can return a value. The same applies to the when expression, an improved switch.
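
A quick illustration of both expressions (the rating labels here are made up):

```kotlin
// 'when' as an expression: each branch yields a value.
fun describeRating(rating: Int): String = when (rating) {
    in 1..2 -> "poor"
    3       -> "average"
    in 4..5 -> "good"
    else    -> "unknown"
}

fun main() {
    val rating = 5
    // 'if' as an expression, assigned directly to a val.
    val verdict = if (rating >= 4) "recommend" else "skip"
    println(describeRating(rating))  // good
    println(verdict)                 // recommend
}
```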

Default function parameters? Check. No semicolons required? Check. String interpolation? Check, check, check.
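
All three in one tiny (made-up) function:

```kotlin
// Default parameter value and string interpolation with $ — and no semicolons.
fun greet(name: String, greeting: String = "Hello") = "$greeting, $name!"

fun main() {
    println(greet("HolidayCheck"))             // Hello, HolidayCheck!
    println(greet("world", greeting = "Hi"))   // Hi, world! (named argument)
}
```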

You don’t need to go all in

Greenfield projects are rare; most of our day-to-day work is about maintaining software and building features on top of an existing codebase. Some projects (even those in the worst tech debt imaginable) cannot be migrated (even to the most promising programming language ever) at the cost of stopping development. We wouldn’t do that either, but fortunately it’s not necessary. You can migrate old code, or write only new code in Kotlin, and everything works fine with Java, because the bytecode is JVM-compatible.

A great way to dive into Kotlin development in your Android app is to pick one single Activity in your project and (re)write it in Kotlin. If anything goes wrong, you can always go back to Java, even for single, specific classes.

Java Interoperability

Of course the code of your beloved Android app doesn’t exist in a vacuum. What about all those Java libraries that are not (and probably never will be) ported to Kotlin? Fear not – they don’t need to be. You can simply call Java code from Kotlin and vice versa. Thanks to that, Kotlin can be easily adopted in existing code, hopefully phasing out Java in the future.
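
For example, plain Java APIs from the JDK can be called directly – here `java.time` from Java 8, in a small invented helper:

```kotlin
import java.time.LocalDate
import java.time.temporal.ChronoUnit

// ChronoUnit.DAYS.between is a Java static method, called straight from Kotlin.
fun nights(checkIn: LocalDate, checkOut: LocalDate): Long =
    ChronoUnit.DAYS.between(checkIn, checkOut)

fun main() {
    println(nights(LocalDate.of(2017, 7, 1), LocalDate.of(2017, 7, 8)))  // 7
}
```

No wrappers or conversion layers are involved; the Java types are used as-is.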

This list is of course not exhaustive; Kotlin has many other features. In case this doesn’t convince you, here’s a sneak peek of the upcoming changes in version 1.1.

Get started right away

This tutorial shows how you can start with Kotlin, migrating single classes one by one. If anything goes wrong, first make sure you have the latest Android Studio and Kotlin plugin installed.

Bonus: Anko library

Anko is a DSL for Android written in Kotlin. It greatly simplifies development, both in terms of the code we must write and its complexity, getting rid of many points of friction with the Android SDK and Java quirks. Be aware though – it’s specific to Android and can potentially introduce yet another level of complexity if you’re just starting out with a migration to Kotlin.

Android Runtime Permissions


Permissions are used to restrict access to sensitive resources, data and hardware features. Android 6.0 Marshmallow (SDK level 23) introduced the concept of Runtime Permissions, changing many aspects of managing them in Android. The change aims to greatly improve user experience, and it seems the Android team did just that. On the other hand, developers need to incorporate those changes into their apps, which might not look that easy at first glance.

Traditional permission model

  • installing an application requires all of its permissions to be accepted by the user
  • the only way to revoke them is to uninstall the application (an all-or-nothing approach)
  • new permissions prevent apps from auto-updating; they need additional review by the user [1]
  • required permissions are declared in the Android Manifest, and the application can assume that if it is installed, it has all of them granted

New runtime permission model

Now, every permission falls into one of three protection levels:

  • normal – applications get them by default at the time of installation (for instance, Internet connectivity)
  • dangerous – must be granted explicitly by the user
  • signature – custom permissions based on signing certificates that must match in order to request them, typically used by applications from the same developer to exchange private, proprietary data

In addition to protection levels, the new concept of permission groups has been introduced:

  • a few separate permissions (like the ones we are used to) may be grouped together into one group
  • the user implicitly grants us all permissions from the group that the requested permission belongs to
  • if we ask for more than one permission in a group, they are granted/denied automatically based on the user’s decision regarding the group

Don’t assume that if you possess a permission from a particular group, you also have all the other permissions from that group – always ask for the specific ones, because the current behavior might change in the future.

What all of that means for your users is that they can just install any given app from the Play Store and start using it right away (no questions asked during installation). Permissions falling into the normal protection level are granted by default, and those should be enough for a typical app to function, at least at the beginning. The process of asking for permissions is pushed back to the last possible moment – the moment the user wants to perform an action requiring a dangerous permission; for example, taking a photo would be briefly interrupted by a system dialog with a permission request. Finally, the user can grant or revoke [2] any permission at any time from the system settings, without uninstalling the app.

App permissions can be fine-tuned separately app by app

When the user revokes permissions, our application’s process gets killed.

All is fine for the user, but the life of an ordinary Android developer has just got slightly more complicated. We’re now required to take extra care and prepare for a few scenarios, making sure that we:

  • declare required permissions in the Android Manifest as usual
  • check on the fly that we have actually been granted the permissions required to perform given actions
  • disable certain UI controls, or indicate in other ways that the application could perform those actions if it had those permissions
  • are prepared for permission revocation at any given time
  • make a clear distinction between permissions vital to the application and optional ones
  • show disclaimers and the reasoning behind requests up front, or in context as they are needed

Asking for Runtime Permissions

All features are backported to both Support Library v4 and v13.

1. Opt-in for Runtime Permissions

In order to opt in to Runtime Permissions, you need to build your application with Target SDK 23 or higher. If you’re using Gradle (which you should be), make sure it’s set in build.gradle:

compileSdkVersion 23
targetSdkVersion 23

2. Declare permissions in AndroidManifest

No changes here, declare needed permissions in AndroidManifest.xml as usual:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="io.github.adamjodlowski.playground">

    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>

    <application>...</application>

</manifest>

3. Make sure to check if permissions have been granted and ask for them if necessary

System window pops up to get user's approval

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, new String[]{ Manifest.permission.ACCESS_FINE_LOCATION }, 123);
        } else {
            Log.d("PLAYGROUND", "Permission was already granted");
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == 123) {
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                Log.d("PLAYGROUND", "Permission has been just granted");
            } else {
                Log.d("PLAYGROUND", "Permission has been denied or request cancelled");
            }
        }
    }
}

As you can see, we use a few new methods from the Runtime Permissions API. They are fairly straightforward, but there are a few things to note:

  • you can request many permissions at once, passing an array to the ActivityCompat#requestPermissions() method
  • you need to make sure the code is prepared for the brief interruption introduced by the request, and that it correctly receives the result in onRequestPermissionsResult(); the mechanism is the same as onActivityResult(), which you should already be familiar with
  • you can check approval/denial at the single-permission level – notice that the response contains the requested permissions and separate results for each of them

You are required to check the permission status every time you might need it – don’t assume that you got a permission once and it’s available forever.

Additional considerations

I suppose you’ve noticed that after the first denial, the user is given the option not to be bothered by our request anymore. This is virtually our last chance to explain the reasoning behind the request. In order to do that gracefully, the API contains a method that returns true if we previously requested a particular permission but the user denied the request:

if (ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.ACCESS_FINE_LOCATION)) {
	// the request has been denied previously, show explanation why you need this, then request permission again!
} else {
	// ask without explanations
}

Please note that this method returns false if the user has already checked the “Never ask again” checkbox.

Asking a second time, the user can permanently dismiss this dialog for our app

There’s another feature introduced in SDK 23. You can ask for a permission only if the platform supports runtime permissions, so you can optionally provide new features but not request new permissions from users on older platforms upon auto-update of an existing installed app. All we need is a declaration in the Android Manifest:

<uses-permission-sdk-23 android:name="android.permission.ACCESS_FINE_LOCATION"/>

This permission behaves as a runtime permission, and the request for it appears only on Android 6.0 and newer. For users on older platforms, nothing changes.

A few points to keep in mind

  • apps compiled with Target SDK < 23 are treated as usual
  • many basic permissions are granted by default
  • you still need to declare them in the Manifest
  • you must check every time you need them (Activity#onCreate() is sufficient)

Pro tip: Android Studio parses annotations and warns you if you try to make an API call but don’t request the needed permissions.


  1. This leads to developers requesting more permissions than they really need, in order to prevent already installed applications from bothering the user again when new app features arrive. This is bad practice.
  2. This doesn’t mean legacy apps are going to crash like crazy after you revoke permissions, throwing exceptions all over the place. What really happens is that, on Android 6.0+, API calls that have been restricted return empty data sets or default values, as if there were no data available.

Content Freaks & Mob programming adventure

In HolidayCheck’s Poznań IT department we work in teams that consist of frontend and backend developers. We pay a lot of attention to knowledge sharing within a team in order to have a common understanding of our architecture and of why we’ve chosen one solution over another. For a long time we had well-balanced sprints, meaning a few frontend and a few backend tasks in each sprint. Recently, however, due to our business strategy and the migration process of our platform, more and more tasks have become frontend-focused.

We didn’t want our backend devs to get bored! We wanted to use the potential of the whole team, which meant involving backend devs in frontend tasks. We started looking for solutions that can be helpful in learning new technology stack. One of the things that came to mind was pair programing, which is a great idea for sharing knowledge, but… our scrum master proposed to try MOB programming… We thought: “Why not!? We are an agile team, so why not to try something different and learn !?” 😉

MOB programming, in a nutshell, is a scaled-up version of pair programming with more than two developers. In our case it was four: two frontend and two backend guys. We prepared one of our conference rooms to be the headquarters of our MOB programming session. We used a big TV as an external monitor – just to feel more comfortable and not squeeze four people in front of a tiny laptop screen for hours 🙂 A second laptop was used for research, so as not to disturb the main flow of coding.


We’ve picked one frontend story, which was not very complex, but great for learning the basics. At the beginning of our MOB  session, backend part of our team had a feeling that the story is just a piece of cake … but it wasn’t. Backend guys haven’t had any experience with our current frontend stack and didn’t have a clue about how complex and tricky the story  might be. After we’ve discussed how we would implement the story, we were ready to code! Development process looked a little bit different from pair programming. One of the developers was a “driver”, sat in front of the laptop and owned the keyboard. He was coding what was discussed and agreed on by everyone.   The session revolved around frontend guys suggesting how the story should be done, which was then being discussed by everyone. From time to time, we also had a discussion regarding more general programming ideas like testing approach etc. The MOB session was divided into short iterations  (15-20 min) in which each of us became the driver, which allowed previous driver to focus on research and discussion.

Good parts:

  • sharing knowledge
  • new angle of looking at solutions that we are using on daily basis
  • rise of team spirit and collaboration
  • backend guys started to like coding on frontend stack

Other observations:

  • the session slowed down the progress of product features in the current sprint, but that was expected and accounted for, as we believe it will enable us to be faster in future
  • frontend guys would love to do a similar session with a backend story 🙂
  • a whole day is too long to develop in that style
  • MOB approach shouldn’t be abused! 🙂

In summary, although we had certain reservations before we started, MOB programming turned out to be a really great experience for us! We think we will be using it in the future as a means of fast learning and knowledge sharing… Maybe the frontend guys will learn & love Scala? 🙂 Time will tell… 🙂

Marathon Maven plugin from HolidayCheck

At HolidayCheck we use both Docker and Apache Mesos with Mesosphere’s Marathon. For our development teams creating services with Maven, we therefore wanted to set up easy-to-use configurations bound to the standard Maven lifecycle phases.

Starting with Docker Maven plugin

As wiring a Java/Scala project for Docker is possible with multiple existing plugins, we chose Spotify’s Docker Maven plugin, as it best suited our needs, was easy to integrate and allowed us to use Git commit hashes (aside from standard artifact versions) in Docker image names for uniqueness.

Marathon Maven plugin usage example

You can have a look at the plugin on GitHub:
https://github.com/holidaycheck/marathon-maven-plugin
or even start using it in your project right away:

<plugin>
     <groupId>com.holidaycheck</groupId>
     <artifactId>marathon-maven-plugin</artifactId>
     <version>0.0.1</version>
     <configuration>
         <image>${docker-image-prefix}/${project.build.finalName}:${project.version}</image>
         <marathonHost>${marathon.protocol}://${marathon.host}:${marathon.port}</marathonHost>
     </configuration>
     <executions>
         <execution>
             <id>processConfig</id>
             <phase>install</phase>
             <goals>
                 <goal>processConfig</goal>
             </goals>
         </execution>
         <execution>
             <id>deploy</id>
             <phase>deploy</phase>
             <goals>
                 <goal>deploy</goal>
             </goals>
         </execution>
     </executions>
</plugin>

To interact with Marathon the plugin uses the Marathon Java API client. In the processConfig goal the plugin takes the marathon.json file, located by default in the root project directory, which might look like this:

{
  "id": "/my-service-1",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "docker-registry.your.org/my-team/my-service",
      "network": "BRIDGE",
      "portMappings": [
        {"containerPort": 7070},
        {"containerPort": 7071}
      ]
    }
  },
  "env": {
    "PATH_PREFIX": "/my-service",
    "_JAVA_OPTIONS": "-Xms64m -Xmx128m -XX:MaxPermSize=64m"
  },
  "instances": 1,
  "cpus": 0.5,
  "mem": 256,
  "healthChecks": [
    {
      "protocol": "HTTP",
      "portIndex": 0,
      "path": "/my-service/v1.0/healthcheck",
      "gracePeriodSeconds": 3,
      "intervalSeconds": 10,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 5
    }
  ]
}

The plugin then replaces container/docker/image with the value provided in the configuration, evaluated with all variables resolved, e.g. docker-registry.your.org/my-team/my-service-1.0.
The result is put into the project’s target directory by default. It is then picked up by the deploy goal and submitted to Marathon’s API endpoint, with some minor handling of whether the app already exists in the cluster or not.

Docker and Marathon plugins join forces

As mentioned earlier, the Marathon plugin goes well with the Docker plugin: we can bind the two together, hook them into the proper Maven lifecycle phases and (very important in our scenario) reuse the Git commit hash detected by the Docker plugin for the image name in the Marathon plugin’s configuration:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.2.6</version>
    <configuration>
        <imageName>${docker-image-prefix}/${project.build.finalName}:${project.version}-${gitShortCommitId}</imageName>
    </configuration>
    <executions>
        <execution>
            <id>build</id>
            <phase>install</phase>
            <goals>
                <goal>build</goal>
            </goals>
            <configuration>
                <maintainer>HolidayCheck</maintainer>
                <baseImage>${docker-registry}/lightweight/oracle-java-7-debian</baseImage>
                <cmd>java -jar /${project.name}.jar</cmd>
                <exposes>
                    <expose>7070</expose>
                </exposes>
                <resources>
                    <resource>
                        <targetPath>/</targetPath>
                        <directory>${project.build.directory}</directory>
                        <include>${project.build.finalName}.jar</include>
                    </resource>
                </resources>
            </configuration>
        </execution>
        <execution>
            <id>push</id>
            <phase>deploy</phase>
            <goals>
                <goal>push</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>com.holidaycheck</groupId>
    <artifactId>marathon-maven-plugin</artifactId>
    <version>0.0.1</version>
    <configuration>
        <image>${docker-image-prefix}/${project.build.finalName}:${project.version}-${gitShortCommitId}</image>
        <marathonHost>${marathon.protocol}://${marathon.host}:${marathon.port}</marathonHost>
    </configuration>
    <executions>
        <execution>
            <id>processConfig</id>
            <phase>install</phase>
            <goals>
                <goal>processConfig</goal>
            </goals>
        </execution>
        <execution>
            <id>deploy</id>
            <phase>deploy</phase>
            <goals>
                <goal>deploy</goal>
            </goals>
        </execution>
    </executions>
</plugin>
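The processConfig goal works on a Marathon app definition. A minimal marathon.json sketch is shown below; the field names follow Marathon's app definition format, the values are placeholders, and the exact templating rules should be checked in the plugin's README:

```json
{
  "id": "/my-service",
  "instances": 2,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/my-service:1.0.0-abc1234",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 7070 }]
    }
  }
}
```

With the configuration above, the image field ends up matching the <image> value configured for the plugin, so the app definition always refers to the Docker image that was just built and pushed.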

Now simply type:

mvn clean deploy

and you should have it deployed and running on your target environment.

Have a good time using the Marathon Maven plugin! If you would like to contribute, pull requests are welcome.

HolidayCheck craftsmen at SoCraTes Tenerife

Last week our developers Jan-Simon Wurst, Tobias Pflug, Alexander Schmidt, Robert Jacob, Robert Mißbach and Roberto Bez went to the SoCraTes Conference on Tenerife. Of course that is kind of an unusual place for a conference, but exactly that makes the difference. Being somewhere isolated, in a lonely hotel near the beach, gave us a very relaxed and comfortable feeling while talking with other great people about software craftsmanship and a lot of other interesting topics.

The Venue: Tenerife

After a very windy landing on Tenerife, we went to our venue, the lovely Hotel Aguamarina Golf, located not too far away from the airport. It gave us enough space for all the sessions we needed, for example next to the pool or on the terrace.

The Hotel Aguamarina Golf

What is the craftsmanship conference about?

SoCraTes Canaries is in fact a craftsmanship retreat for open-minded craftspeople who strive to improve their craft and the software industry as a whole, organised by the local Software Craftsmanship community in the Canary Islands.

Tenerife-Hotel-View

It is a totally community-focused Open Space, without a predefined agenda and without any speakers known before the event starts. Proposals were presented during the event itself in the morning:

Session planning in the morning

After the proposals were presented, we had five different locations to choose from: classic rooms with projectors, but also spots near the pool or on the terrace, with a great view of the Atlantic Ocean.


Discussion about costs of CI and CD

 

Talks & Discussions

Just to name some topics we discussed:

Jan-Simon Wurst proposed a discussion about Git Flow vs. trunk-based development, trying to find out how other developer teams work and which might be the better solution. But as always: there is no perfect way, just a lot of right ones!

Robert Jacob showed us HolidayCheck's continuous deployment pipeline with Mesos and Docker, facing a lot of interesting questions about this much-hyped topic.

Tobias Pflug gave us a deeper insight into his Vim skills, presenting some of his favorite plugins. Very cool stuff!

Roberto Bez initiated a discussion about distributed teams, sharing his own experience and with a lot of interest in how other companies currently deal with non-co-located teams. For some it might work, for others it does not, but one outcome was clear: it is not always easy!

After two days full of interesting discussions, there was also time in the evenings to enjoy a beer together.

To sum up, we went back home with a lot of new motivation, which we can hopefully use to become better craftsmen in our daily work!

A special thanks to Carlos Blé and all the other organizers for the great conference. We are already looking forward to visiting again in 2016!

How we do Agile Intro Workshops at HolidayCheck

legopic1

Why do we do Agile Intro Workshops

When you are situated in a web-driven company with more than 200 employees in one location, plus dozens more in distributed offices, the need for a common basic understanding of agile product development is obvious.
Our product development consists of +/- 8 teams, all contributing to different parts of the web platform plus native mobile apps.
So we experience communication and alignment that go far beyond the dedicated product dev teams: basically all departments have a smaller or bigger stake in the agile teams.
That leads us to the need for a basic common understanding of Agile and how we live it at HolidayCheck.

For whom do we do them

The target groups are always completely mixed and come from different departments. This way we ensure that people come together and form a completely new virtual team during the workshop. Afterwards they return to their native teams and their well-known area, able to bring in and spread the learnings immediately.

How do we do them

We set a timebox of 90 minutes and have a theoretical part and a practical part: a simulated sprint with the goal to build something ready to use within very few minutes.

The theoretical part covers basic knowledge about Agile, Scrum and Kanban:

  • The need for Agile product development
  • The roles and artifacts
  • The specific team constellation here at HolidayCheck
  • Agile Tools we use at HolidayCheck

And now comes the fun part! We use LEGO bricks to simulate a sprint with the goal to build a house, a garden and a car.
After about 45 minutes the audience is asked to form 1-2 agile teams themselves. They are given roles they need to fulfill as best they can, even if their real role is completely different (believe me, software developers always want to build stuff instead of just telling the team where to head).
Roles we use are Product Owner, Developers, UX Designer, Test Engineer.

legopic4

The Scrum Master role is filled by us, the moderators. If the teams are rather small, with +/- 4 people, we only use Product Owner and Developers to keep it simpler and moving forward.
We use small time boxes to simulate

  • A sprint planning session (2mins)
  • The sprint itself (5mins)
  • A sprint review (2mins)
  • A sprint retrospective (1min)

legopic2

The teams apply the given info instantly and also try to prioritise and deal with the limited time.
This gives them a feeling of how real teams have to focus and organise themselves.
Many people think that building a LEGO house with a whole team in 5 minutes is super easy. Trust me: experience shows that many struggle to finish in time, while others do really great.

legopic3

What do you think? Please comment and tell us your experiences!

An Easy Way to Measure Method Calls in a Java or Scala Application

The following describes how to measure certain method calls in a Scala application, even under production load.

To collect execution times for analysis, e.g. for locating performance problems in existing Java™ or Scala applications, you can use JETM, a library offering execution measurement. The overhead is small compared to the Java™ Virtual Machine Profiling Interface (JVMPI) or the Java™ Virtual Machine Tool Interface (JVMTI) and the related profiler extensions. Thus the risk of slowing down the application in a production environment is also small.

Here we use the programmatic approach to performance monitoring, with an HttpConsoleServer and JMX support.

In maven pom.xml include the dependency

<dependency>
  <groupId>fm.void.jetm</groupId>
  <artifactId>jetm</artifactId>
</dependency>

for the core measurement functionality and

<dependency>
  <groupId>fm.void.jetm</groupId>
  <artifactId>jetm-optional</artifactId>
</dependency>

for the output in an HttpConsole. (For version information see e.g. http://repo1.maven.org/maven2/fm/void/jetm/)

Within a singleton, create a nested monitor (the "true" parameter) with default ExecutionTimer and Aggregator via

BasicEtmConfigurator.configure(true)

Start an EtmMonitor with

val etmMonitor = EtmManager.getEtmMonitor
etmMonitor.start()

Start an HttpConsoleServer with
val server: HttpConsoleServer = new HttpConsoleServer(etmMonitor)
server.setListenPort(Config.JETMMonitoring.port)
server.start()

The port (Config.JETMMonitoring.port above) can be made configurable by using

com.typesafe.config.ConfigFactory

For further information see https://github.com/typesafehub/config
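A sketch of a corresponding application.conf entry (the key name is hypothetical; Config.JETMMonitoring above is an application-specific wrapper around such a value):

```
jetm-monitoring {
  port = 40001
}
```

A value like this could then be read with ConfigFactory.load().getInt("jetm-monitoring.port").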

Register an MBean for JMX support:

val mbeanServer: MBeanServer = ManagementFactory.getPlatformMBeanServer

if (mbeanServer != null) {

  val objectName = new ObjectName("etm:service=PerformanceMonitor")
  // register EtmMonitor using EtmMonitorMBean
  try {
    mbeanServer.registerMBean(new EtmMonitorMBean(etmMonitor, "com.holidaycheck.mpg"), objectName)
  } catch {
    case e: JMException => // e.g. log the failed registration
  }
}

Keep in mind that you have to take care of stopping the measuring e.g. on shutdown hook.
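In Scala this can be done with a shutdown hook; a minimal sketch, assuming the etmMonitor and server values from above are in scope:

```scala
// stop the HTTP console and the monitor when the JVM shuts down
sys.addShutdownHook {
  server.stop()
  etmMonitor.stop()
}
```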

Mix in the measure call via a trait, e.g. named JETM, that owns a reference to the monitor (private val monitor = EtmManager.getEtmMonitor()):

def measure[T](name: String)(op: => T): T = {
  if (!JETM.monitor.isCollecting()) return op

  val point = JETM.monitor.createPoint(jetmPrefix + name)
  try {
    op
  } finally {
    point.collect()
  }
}

(jetmPrefix is the canonical name of the class that mixes in the trait).

Within the class that contains the call to be measured, e.g. OfferMetaDataMap, use

class OfferMetaDataMap(...) extends ... with JETM {

  def aMethodCallToMeasure = {

    measure("Get") {
      /** basic method body */
    }
  }

}

"Get" is the label of the measured method. In the HttpConsole this will appear like:

|----------------------------------------------------------------|---|---------|-------|-------|--------|
| Measurement Point                                              | # | Average | Min   | Max   | Total  |
|----------------------------------------------------------------|---|---------|-------|-------|--------|
| com.holidaycheck.mpg.service.actors.cache.OfferMetaDataMap#Get | 4 | 3.556   | 1.029 | 6.075 | 14.224 |

The measured data is accessible via JMX or via http://[application’s url]:[configuredPort]/index.

 

For further information see http://jetm.void.fm/doc.html; persistent aggregation, for instance, is covered at http://jetm.void.fm/howto/aggregation_persistence.html

Use case of Akka system’s event bus: Logging of unhandled messages


Akka is a toolkit for building concurrent applications on the JVM using the Actor model and relying on asynchronous message passing.

An actor sends a message to another actor, which handles the message in its receive method in case the message type is matched there. Look at the Akka API and Akka documentation for detailed information.

If the receiver has no matching case for the message type, the message cannot be handled, i.e. the message is programmatically not expected. Such an unhandled message is published as an UnhandledMessage(msg, sender, recipient) to the actor system's event stream.
If the configuration parameter akka.actor.debug.unhandled = "on" is set, it is converted into a Debug message. Confer: UntypedActor API, in: Akka Documentation v2.3.7, URL: http://doc.akka.io/docs/akka/2.3.7/java/untyped-actors.html (visited: 2014/11/24).

That’s fine for the configuration akka.loglevel = "DEBUG", but on "INFO" level there is no warning.
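For reference, these are the standard Akka settings involved; a sketch of the relevant part of application.conf:

```
akka {
  # on INFO level, unhandled messages produce no log output at all
  loglevel = "INFO"
  # converts UnhandledMessage events to Debug log events,
  # but only takes effect together with loglevel = "DEBUG"
  actor.debug.unhandled = on
}
```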

To log unhandled messages, and thus to even know about such unexpected occurrences of messages, you can subscribe an actor to the system's event stream for the channel akka.actor.UnhandledMessage. This is done e.g. by

system.eventStream.subscribe(system.actorOf(Logger.props()), classOf[UnhandledMessage])

 

object Logger {
  def props() = Props(new Logger)

  val name = "UnhandledMessageLogger"
}

class Logger extends Actor with ActorLogging {

  /** logs on warn level the message and the original recipient (sender is deadLetters) */
  override def receive = {
    case ua@UnhandledMessage(msg, _, recipient) =>
      log.warning(s"Unhandled: $msg to $recipient")
  }

}

This logger actor bypasses the dependency on akka.loglevel = "DEBUG". In the example above, the information about unhandled messages is logged via Akka's built-in ActorLogging, but it can be logged to an application-specific logging component as well.