Profiling MongoDB with Logstash and Kibana

We’re using MongoDB as storage for our web pages. Because many different background processes use this database, it’s difficult to find the cause of high CPU or IO load on the server. MongoDB has some built-in tools to investigate current behavior (like mongotop, mongostat or db.currentOp()), but we were missing a single easy-to-use tool that every developer can use and that shows statistics over a specific period.

We decided to use Kibana, because we were already using Kibana and Logstash for log file analysis.

Getting relevant data from MongoDB

MongoDB writes profiling measurements into the system.profile collection and its log file. Profiling is activated by the following command on the MongoDB console:
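The command block itself did not survive in this copy of the article; given the 50 ms threshold described below, the standard mongo-shell call to enable slow-operation profiling is:

```javascript
// Profiling level 1 = log only slow operations; second argument is the slowms threshold
db.setProfilingLevel(1, 50)
```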


This activates the slow log for all queries that take more than 50ms to process. We decided to use the log file output for further processing, because it’s easier for Logstash to access and it combines the slow logs from all databases. The slow log entries look like this:

Mon Feb 10 14:40:30.398 [conn781] query basedata.v1.hc.content.core.Hotel query: { id: "396dff92-66b8-353b-87ed-4c034a27ad52" } ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:8131 nreturned:1 reslen:2467 11ms

The official description of MongoDB profiling can be found in the MongoDB manual.

Writing slow logs into Elasticsearch

Logstash is the preferred tool for preparing data for Kibana. It receives messages from many different inputs, extracts the important information and writes it to different outputs. A configuration that writes the slow log unchanged into Elasticsearch would be:

input {
  file {
    path => "/var/log/mongodb/mongodb.log"
    type => "mongodb"
  }
}

output {
  elasticsearch {
    cluster => "elasticsearch"
  }
}

This works, but the message is saved as one big string without any structure. Queries are now possible in Kibana, but no sorting by elapsed time or number of scanned documents.

That’s why we need a filter between input and output that extracts the relevant terms and numbers from the message. Logstash offers a built-in grok filter for such tasks. It extracts data from a string with a simple syntax (easier than regex) and saves it into individual fields.

After some trial and error (hint: test Logstash with the stdin input and stdout output), the resulting configuration looks like this:

filter {
  grok { pattern => ["(?m)%{GREEDYDATA} \[conn%{NUMBER:mongoConnection}\] %{WORD:mongoCommand} %{NOTSPACE:mongoDatabase} %{WORD}: \{ %{GREEDYDATA:mongoStatement} \} %{GREEDYDATA} %{NUMBER:mongoElapsedTime:int}ms"] }
  grok { pattern => [" cursorid:%{NUMBER:mongoCursorId}"] }
  grok { pattern => [" ntoreturn:%{NUMBER:mongoNumberToReturn:int}"] }
  grok { pattern => [" ntoskip:%{NUMBER:mongoNumberToSkip:int}"] }
  grok { pattern => [" nscanned:%{NUMBER:mongoNumberScanned:int}"] }
  grok { pattern => [" scanAndOrder:%{NUMBER:mongoScanAndOrder:int}"] }
  grok { pattern => [" idhack:%{NUMBER:mongoIdHack:int}"] }
  grok { pattern => [" nmoved:%{NUMBER:mongoNumberMoved:int}"] }
  grok { pattern => [" nupdated:%{NUMBER:mongoNumberUpdated:int}"] }
  grok { pattern => [" keyUpdates:%{NUMBER:mongoKeyUpdates:int}"] }
  grok { pattern => [" numYields: %{NUMBER:mongoNumYields:int}"] }
  grok { pattern => [" locks\(micros\) r:%{NUMBER:mongoReadLocks:int}"] }
  grok { pattern => [" locks\(micros\) w:%{NUMBER:mongoWriteLocks:int}"] }
  grok { pattern => [" nreturned:%{NUMBER:mongoNumberReturned:int}"] }
  grok { pattern => [" reslen:%{NUMBER:mongoResultLength:int}"] }
}

The first line extracts all the common data: connection, command, database/collection, statement and elapsed time. The first GREEDYDATA skips the timestamp (Logstash adds its own anyway) and the second GREEDYDATA skips the block with the additional profiling info. That profiling info is split into useful fields by the fourteen grok filters below. This is necessary because the provided data differs from query to query. With the latest version of the grok filter, all of these could be merged into a single filter.

All values are saved into fields with the prefix “mongo”. The “:int” is important for sorting and aggregation inside Kibana (otherwise the values are saved as strings).

The result of the transformation of the log message mentioned above looks like this:

{
  "mongoConnection": "781",
  "mongoCommand": "query",
  "mongoDatabase": "basedata.v1.hc.content.core.Hotel",
  "mongoStatement": "id: \"396dff92-66b8-353b-87ed-4c034a27ad52\"",
  "mongoElapsedTime": 11,
  "mongoNumberToReturn": 1,
  "mongoNumberToSkip": 0,
  "mongoNumberScanned": 1,
  "mongoKeyUpdates": 0,
  "mongoReadLocks": 8131,
  "mongoNumberReturned": 1,
  "mongoResultLength": 2467
}
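The grok extraction can be sanity-checked outside Logstash as well. A rough Python sketch of the same parsing, applied to the sample log line from above (the regexes and the `counter` helper are our own approximation, not part of the actual setup):

```python
import re

# Sample slow-log line from the post
line = ('Mon Feb 10 14:40:30.398 [conn781] query '
        'basedata.v1.hc.content.core.Hotel query: '
        '{ id: "396dff92-66b8-353b-87ed-4c034a27ad52" } '
        'ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 '
        'locks(micros) r:8131 nreturned:1 reslen:2467 11ms')

# Rough Python equivalent of the first (common-data) grok pattern
main = re.search(
    r'\[conn(?P<conn>\d+)\] (?P<cmd>\w+) (?P<db>\S+) \w+: '
    r'\{ (?P<stmt>.*) \} .* (?P<ms>\d+)ms$', line)

# The per-counter grok filters map to simple key:value searches;
# a counter that is absent for this query type yields None
def counter(name):
    m = re.search(r' %s:(\d+)' % name, line)
    return int(m.group(1)) if m else None

print(main.group('cmd'), int(main.group('ms')), counter('nscanned'))  # → query 11 1
```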

Adding a dashboard to Kibana



Besides the standard filter for the timestamp, we added a filter on the @type to keep the dashboard clean of other logs. It’s easy to add other filters here, e.g. to restrict the dashboard to a single database collection.



The incoming messages can be labeled and colored into different groups with the help of queries. We split them into four performance classes, depending on the response time. The defined queries are (from green to red):

  • @fields.mongoElapsedTime:[0 TO 100}
  • @fields.mongoElapsedTime:[100 TO 200}
  • @fields.mongoElapsedTime:[200 TO 500}
  • @fields.mongoElapsedTime:[500 TO *]
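The same four classes can also be reproduced when post-processing exported log data; a minimal sketch (the color labels are our own shorthand for the four classes, not dashboard terminology):

```python
def performance_class(elapsed_ms):
    """Map a query's elapsed time (ms) to one of the four dashboard classes."""
    if elapsed_ms < 100:
        return 'green'   # [0 TO 100}
    if elapsed_ms < 200:
        return 'yellow'  # [100 TO 200}
    if elapsed_ms < 500:
        return 'orange'  # [200 TO 500}
    return 'red'         # [500 TO *]

print(performance_class(11))  # → green
```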



A histogram is used to display the total time consumed by the queries. By choosing “total” in the “Mode” drop-down and adding “@fields.mongoElapsedTime” in the “Value Field”, the elapsed times are summed up for every measuring point. This gives a more realistic picture than just counting queries.
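The difference between the two histogram modes can be illustrated with a toy example (hypothetical per-minute buckets, not real data): two buckets with the same query count can consume very different amounts of total time.

```python
from collections import defaultdict

# Hypothetical parsed slow-log entries: (minute bucket, elapsed ms)
entries = [('14:40', 11), ('14:40', 320), ('14:41', 95), ('14:41', 95)]

count = defaultdict(int)  # Kibana "count" mode
total = defaultdict(int)  # Kibana "total" mode over mongoElapsedTime
for bucket, ms in entries:
    count[bucket] += 1
    total[bucket] += ms

print(dict(count))  # → {'14:40': 2, '14:41': 2}  -- both buckets look equal
print(dict(total))  # → {'14:40': 331, '14:41': 190}  -- total time tells more
```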



For a quick overview of the worst queries, three tables were added to the dashboard. They are just normal tables with a fixed descending sort, truncated to ten rows and without any headers or pagination. The following sorts are interesting:

  • @fields.mongoElapsedTime: Slowest queries
  • @fields.mongoResultLength: Biggest result sets
  • @fields.mongoNumberScanned: Missing indexes
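The same ranking is easy to sketch in Python for exported data (hypothetical entries; `worst` is our own helper, not a Kibana feature):

```python
# Hypothetical parsed entries as dicts, as produced by the grok filters
entries = [
    {'stmt': 'a', 'mongoElapsedTime': 11,  'mongoResultLength': 2467,  'mongoNumberScanned': 1},
    {'stmt': 'b', 'mongoElapsedTime': 740, 'mongoResultLength': 512,   'mongoNumberScanned': 90000},
    {'stmt': 'c', 'mongoElapsedTime': 120, 'mongoResultLength': 80000, 'mongoNumberScanned': 3},
]

def worst(entries, field, n=10):
    """Fixed descending sort, truncated to n rows -- like the dashboard tables."""
    return sorted(entries, key=lambda e: e[field], reverse=True)[:n]

print([e['stmt'] for e in worst(entries, 'mongoElapsedTime')])    # → ['b', 'c', 'a']
print([e['stmt'] for e in worst(entries, 'mongoResultLength')])   # → ['c', 'a', 'b']
print([e['stmt'] for e in worst(entries, 'mongoNumberScanned')])  # → ['b', 'c', 'a']
```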



Complete dashboard with some more (unnecessary) visual elements.


  • Because we only log queries slower than 50ms, the statistics are slightly misleading. Many fast queries are not visible, although they could be worth looking at, too. It’s possible to log all queries, but we would not recommend it for a production system.
  • The Logstash, Kibana and Elasticsearch versions we use are quite old. We will update them in the coming months.

5 thoughts on “Profiling MongoDB with Logstash and Kibana”

  1. Brilliant… been working on this for a day now… then I found this… thanks for your efforts… works a charm.

  2. Thanks for this! Quite useful but wanted to mention

    Also, would it be possible for you to export and gist your dashboard config?

  3. Whoops, last comment got cut off there.

    “Wanted to mention” – the pattern setting is deprecated and should be updated to:

    match => { "message" => "your pattern here" }

  4. grok {
    match => [ 'message', '(?m)%{GREEDYDATA} \[conn%{NUMBER:mongoConnection}\] %{WORD:mongoCommand} %{NOTSPACE:mongoDatabase} %{WORD}: \{ %{GREEDYDATA:mongoStatement} \} %{GREEDYDATA} %{NUMBER:mongoElapsedTime:int}ms',
    'message', 'cursorid:%{NUMBER:mongoCursorId}',
    'message', 'ntoreturn:%{NUMBER:mongoNumberToReturn:int}',
    'message', 'ntoskip:%{NUMBER:mongoNumberToSkip:int}',
    'message', 'nscanned:%{NUMBER:mongoNumberScanned:int}',
    'message', 'scanAndOrder:%{NUMBER:mongoScanAndOrder:int}',
    'message', 'idhack:%{NUMBER:mongoIdHack:int}',
    'message', 'nmoved:%{NUMBER:mongoNumberMoved:int}',
    'message', 'nupdated:%{NUMBER:mongoNumberUpdated:int}',
    'message', 'keyUpdates:%{NUMBER:mongoKeyUpdates:int}',
    'message', 'numYields: %{NUMBER:mongoNumYields:int}',
    'message', 'locks\(micros\) r:%{NUMBER:mongoReadLocks:int}',
    'message', 'locks\(micros\) w:%{NUMBER:mongoWriteLocks:int}',
    'message', 'nreturned:%{NUMBER:mongoNumberReturned:int}',
    'message', 'reslen:%{NUMBER:mongoResultLength:int}'
    ] }
