JMH Gradle Plugin

This plugin integrates the JMH micro-benchmarking framework with Gradle.

Usage

To enable the plugin, add the following to your build.gradle file:

build.gradle
plugins {
    id 'me.champeau.gradle.jmh' version '0.2.0'
}
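
If your Gradle version does not support the plugins DSL, the plugin can instead be applied with the legacy buildscript syntax. A minimal sketch, assuming the artifact coordinates follow the plugin id above and the plugin is resolved from the Gradle Plugin Portal:

build.gradle
buildscript {
    repositories {
        maven {
            url 'https://plugins.gradle.org/m2/'
        }
    }
    dependencies {
        // assumed coordinates for version 0.2.0 of the plugin
        classpath 'me.champeau.gradle:jmh-gradle-plugin:0.2.0'
    }
}

apply plugin: 'me.champeau.gradle.jmh'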

Configuration

The plugin integrates easily into an existing project thanks to a dedicated configuration and source set. In particular, benchmark source files are expected to be found in the src/jmh directory:

src/jmh
     |- java       : java sources for benchmarks
     |- resources  : resources for benchmarks
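
For example, a minimal benchmark placed under src/jmh/java could look like the following; the package and class names are purely illustrative:

src/jmh/java/org/sample/MyBenchmark.java
package org.sample;

import org.openjdk.jmh.annotations.Benchmark;

public class MyBenchmark {

    @Benchmark
    public double measureSqrt() {
        // return the computed value so JMH consumes it and dead-code elimination is avoided
        return Math.sqrt(12345.6789);
    }
}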

The plugin creates a jmh configuration that you should use if your benchmark files depend on a third-party library. For example, if you want to use commons-io, you can add the dependency like this:

build.gradle
dependencies {
    jmh 'commons-io:commons-io:2.4'
}

The plugin uses JMH 1.3.2 by default. You can change the JMH version in the jmh block:

build.gradle
jmh {
    jmhVersion = '1.3.2'
}

The jars

  • org.openjdk.jmh:jmh-core

  • org.openjdk.jmh:jmh-generator-annprocess

will be automatically added to the list of library dependencies for the jmh configuration.

Tasks

The plugin adds several tasks to the project:

  • jmhClasses : compiles the benchmarks

  • jmhJar : builds the JMH jar containing the JMH runtime and your compiled benchmark classes

  • jmh : executes the benchmarks

The jmh task is the main task and depends on the others, so it is generally sufficient to execute this task:

gradle jmh

Configuration options

By default, all benchmarks are executed and the results are written to $buildDir/reports/jmh. You can change various options through the jmh configuration block. All configuration options apart from include are unset, meaning they fall back to the default JMH values:

build.gradle
jmh {
   include = 'some regular expression' // include pattern (regular expression) for benchmarks to be executed
   exclude = 'some regular expression' // exclude pattern (regular expression) for benchmarks to be executed
   iterations = 10 // Number of measurement iterations to do.
   benchmarkMode = 'thrpt' // Benchmark mode. Available modes are: [Throughput/thrpt, AverageTime/avgt, SampleTime/sample, SingleShotTime/ss, All/all]
   batchSize = 1 // Batch size: number of benchmark method calls per operation. (some benchmark modes can ignore this setting)
   fork = 2 // How many times to fork a single benchmark. Use 0 to disable forking altogether
   failOnError = false // Should JMH fail immediately if any benchmark experiences an unrecoverable error?
   forceGC = false // Should JMH force GC between iterations?
   jvm = 'myjvm' // Custom JVM to use when forking.
   jvmArgs = 'Custom JVM args to use when forking.'
   jvmArgsAppend = 'Custom JVM args to use when forking (append these)'
   jvmArgsPrepend = 'Custom JVM args to use when forking (prepend these)'
   humanOutputFile = project.file("${project.buildDir}/reports/jmh/human.txt") // human-readable output file
   resultsFile = project.file("${project.buildDir}/reports/jmh/results.txt") // results file
   operationsPerInvocation = 10 // Operations per invocation.
   benchmarkParameters = [:] // Benchmark parameters.
   profilers = [] // Use profilers to collect additional data.
   timeOnIteration = '1s' // Time to spend at each measurement iteration.
   resultFormat = 'CSV' // Result format type (one of CSV, JSON, NONE, SCSV, TEXT)
   synchronizeIterations = false // Synchronize iterations?
   threads = 4 // Number of worker threads to run with.
   threadGroups = [2,3,4] // Override thread group distribution for asymmetric benchmarks.
   timeUnit = 'ms' // Output time unit. Available time units are: [m, s, ms, us, ns].
   verbosity = 'NORMAL' // Verbosity mode. Available modes are: [SILENT, NORMAL, EXTRA]
   warmup = '1s' // Time to spend at each warmup iteration.
   warmupBatchSize = 10 // Warmup batch size: number of benchmark method calls per operation.
   warmupForks = 0 // How many warmup forks to make for a single benchmark. 0 to disable warmup forks.
   warmupIterations = 1 // Number of warmup iterations to do.
   warmupMode = 'INDI' // Warmup mode for warming up selected benchmarks. Warmup modes are: [INDI, BULK, BULK_INDI].
   warmupBenchmarks = ['.*Warmup'] // Warmup benchmarks to include in the run in addition to already selected. JMH will not measure these benchmarks, but only use them for the warmup.

   zip64 = true // Use ZIP64 format for bigger archives
   jmhVersion = '1.3.2' // Specifies JMH version
   includeTests = false // Include test sources in the generated JMH jar, i.e. use it when benchmarks depend on the test classes.
}
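
In practice only a handful of these options are usually set. A minimal sketch, with purely illustrative values:

build.gradle
jmh {
    include = '.*MyBenchmark.*'
    warmupIterations = 2
    iterations = 5
    fork = 1
    benchmarkMode = 'avgt'
    timeUnit = 'us'
}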

Dependency on project files

The jmh plugin makes it easy to benchmark existing sources without having to create a separate project. This is why you must put your benchmark files into src/jmh/java instead of src/main/java: by default, the jmh task depends on your main source set.

A dependency on the test source set can be added by setting the includeTests property to true inside the jmh block.
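
For instance, to compile the benchmarks against the test classes as well:

build.gradle
jmh {
    includeTests = true
}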

Using JMH Gradle Plugin with Shadow Plugin

Optionally, the Shadow plugin can be used to perform the actual creation of the JMH jar. The Shadow plugin is configured for the JMH jar via the jmhJar block. For example:

build.gradle
jmhJar {
  append('META-INF/spring.handlers')
  append('META-INF/spring.schemas')
  exclude 'LICENSE'
}
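
Note that the Shadow plugin itself has to be applied to the build for the jmhJar block above to take effect; the version below is only an illustration:

build.gradle
plugins {
    id 'com.github.johnrengelman.shadow' version '1.2.2'
}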
