
Releases: confluentinc/confluent-kafka-go

v1.7.0

11 May 08:20
13ae115

confluent-kafka-go is based on librdkafka v1.7.0, see the librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

Enhancements

  • Experimental Windows support (by @neptoess).
  • The produced message headers are now available in the delivery report
    Message.Headers if the Producer's go.delivery.report.fields
    configuration property is set to include headers, e.g.:
    "go.delivery.report.fields": "key,value,headers"
    This comes at a performance cost and is thus disabled by default.
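
A minimal sketch of the new option (broker address, topic name, and header contents are placeholders, not part of the release):

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder
		// Include headers in delivery reports (off by default for performance).
		"go.delivery.report.fields": "key,value,headers",
	})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	topic := "mytopic" // placeholder
	deliveryChan := make(chan kafka.Event, 1)
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("payload"),
		Headers:        []kafka.Header{{Key: "trace-id", Value: []byte("abc123")}},
	}, deliveryChan)
	if err != nil {
		panic(err)
	}

	// With the option above, the delivery report carries the produced headers.
	m := (<-deliveryChan).(*kafka.Message)
	fmt.Printf("delivered with headers: %v\n", m.Headers)
}
```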

Fixes

  • AdminClient.CreateTopics() previously did not accept the default value (-1)
    of ReplicationFactor without an explicit ReplicaAssignment; this is now
    fixed.

v1.6.1

11 Mar 11:15
ef84d2e

v1.6.1 is a feature release:

  • KIP-429: Incremental consumer rebalancing - see cooperative_consumer_example.go
    for an example of how to use the new incremental rebalancing consumer.
  • KIP-480: Sticky producer partitioner - increase throughput and decrease
    latency by sticking to a single random partition for some time.
  • KIP-447: Scalable transactional producer - a single transaction producer can
    now be used for multiple input partitions.
  • Add support for go.delivery.report.fields by @kevinconaway
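
As a sketch of the cooperative rebalancing setup (broker address, group id, and topic are placeholders; see cooperative_consumer_example.go for the full version):

```go
c, err := kafka.NewConsumer(&kafka.ConfigMap{
	"bootstrap.servers":             "localhost:9092", // placeholder
	"group.id":                      "mygroup",        // placeholder
	"partition.assignment.strategy": "cooperative-sticky",
})
if err != nil {
	log.Fatal(err)
}

// With a cooperative strategy, rebalance events describe only the
// incremental changes; apply them with IncrementalAssign/IncrementalUnassign.
err = c.SubscribeTopics([]string{"mytopic"}, func(c *kafka.Consumer, ev kafka.Event) error {
	switch e := ev.(type) {
	case kafka.AssignedPartitions:
		return c.IncrementalAssign(e.Partitions)
	case kafka.RevokedPartitions:
		return c.IncrementalUnassign(e.Partitions)
	}
	return nil
})
```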

Fixes

  • For dynamically linked builds (-tags dynamic) there was previously a possible conflict
    between the bundled librdkafka headers and the system installed ones. This is now fixed. (@KJTsanaktsidis)

confluent-kafka-go is based on and bundles librdkafka v1.6.1, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.5.2

05 Nov 12:09
8bd1e41

confluent-kafka-go v1.5.2

v1.5.2 is a maintenance release with the following fixes and enhancements:

  • Bundles librdkafka v1.5.2 - see release notes for all enhancements and fixes.
  • Documentation fixes

confluent-kafka-go is based on librdkafka v1.5.2, see the
librdkafka release notes
for a complete list of changes, enhancements, fixes and upgrade considerations.

v1.4.2

07 May 07:06
bb5bb31

confluent-kafka-go v1.4.2

v1.4.2 is a maintenance release:

  • The bundled librdkafka directory (kafka/librdkafka) is no longer pruned by Go mod vendor import.
  • Bundled librdkafka upgraded to v1.4.2, highlights:
    • System root CA certificates should now be picked up automatically on most platforms
    • Fix produce/consume hang after partition goes away and comes back,
      such as when a topic is deleted and re-created (regression in v1.3.0).

librdkafka v1.4.2 changes

See the librdkafka v1.4.2 release notes for changes to the bundled librdkafka included with the Go client.

v1.4.0

08 Apr 15:32
00f7f54

confluent-kafka-go v1.4.0

  • Added Transactional Producer API and full Exactly-Once-Semantics (EOS) support.
  • A prebuilt version of the latest version of librdkafka is now bundled with the confluent-kafka-go client. A separate installation of librdkafka is NO LONGER REQUIRED or used.
  • Added support for sending client (librdkafka) logs to Logs() channel.
  • Added Consumer.Position() to retrieve the current consumer offsets.
  • The Error type now has additional attributes, such as IsRetriable() to deem if the errored operation can be retried. This is currently only exposed for the Transactional API.
  • Removed support for Go < 1.9

Transactional API

librdkafka and confluent-kafka-go now have complete Exactly-Once-Semantics (EOS) functionality, supporting the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0) and full producer transaction support (in this release).
This enables developers to create Exactly-Once applications with Apache Kafka.

See the Transactions in Apache Kafka page for an introduction and check the transactions example for a complete transactional application example.
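
The basic shape of the new API is sketched below (broker address, transactional.id, and topic are placeholders; the transactions example linked above is the authoritative reference):

```go
package main

import (
	"context"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder
		"transactional.id":  "txn-app-1",      // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	ctx := context.Background()
	// Fence out older producer instances with the same transactional.id.
	if err := p.InitTransactions(ctx); err != nil {
		log.Fatal(err)
	}
	if err := p.BeginTransaction(); err != nil {
		log.Fatal(err)
	}

	topic := "mytopic" // placeholder
	_ = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("in a transaction"),
	}, nil)

	if err := p.CommitTransaction(ctx); err != nil {
		// Abortable errors mean the transaction must be aborted and retried.
		if kErr, ok := err.(kafka.Error); ok && kErr.TxnRequiresAbort() {
			_ = p.AbortTransaction(ctx)
		} else {
			log.Fatal(err)
		}
	}
}
```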

Bundled librdkafka

The confluent-kafka-go client now comes with batteries included: prebuilt versions of librdkafka for the most popular platforms are bundled, so you no longer need to install or manage librdkafka separately.

Supported platforms are:

  • Mac OSX
  • glibc-based Linux x64 (e.g., RedHat, Debian, etc) - lacks Kerberos/GSSAPI support
  • musl-based Linux x64 (Alpine) - lacks Kerberos/GSSAPI support

These prebuilt librdkafka builds include all features (e.g., SSL, compression), except that the Linux builds lack Kerberos/GSSAPI support due to libsasl2 dependencies.
If you need Kerberos support, or you are running on a platform for which no prebuilt librdkafka is available (see above), you will need to install librdkafka separately (preferably through the Confluent APT and RPM repositories) and build your application with -tags dynamic to disable the built-in librdkafka and instead link your application dynamically against the system library.
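
For example, on a Debian/Ubuntu-style system (package name and manager vary by distro) the dynamic build might look like:

```shell
# Install the librdkafka development package from the system/Confluent repos,
# then build against it instead of the bundled prebuilt library.
apt-get install -y librdkafka-dev
go build -tags dynamic ./...
```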

librdkafka v1.4.0 changes

Full librdkafka v1.4.0 release notes.

Highlights:

  • KIP-98: Transactional Producer API
  • KIP-345: Static consumer group membership (by @rnpridgeon)
  • KIP-511: Report client software name and version to broker
  • SASL SCRAM security fixes.

v1.3.0

17 Dec 20:36

confluent-kafka-go v1.3.0

  • Purge messages API (by @khorshuheng at GoJek).

  • ClusterID and ControllerID APIs.

  • Go Modules support.

  • Fixed memory leak on calls to NewAdminClient(). (discovered by @gabeysunda)

  • Requires librdkafka v1.3.0 or later

librdkafka v1.3.0 changes

Full librdkafka v1.3.0 release notes.

  • KIP-392: Fetch messages from closest replica/follower (by @mhowlett).
  • Experimental mock broker to make application and librdkafka development testing easier.
  • Fixed consumer_lag in stats when consuming from broker versions <0.11.0.0 (regression in librdkafka v1.2.0).

v1.1.0

15 Jul 13:38

confluent-kafka-go v1.1.0

  • OAUTHBEARER SASL authentication (KIP-255) by Ron Dagostini (@rondagostino) at StateStreet.
  • Offset commit metadata (@damour, #353)
  • Requires librdkafka v1.1.0 or later

Noteworthy librdkafka v1.1.0 changes

Full librdkafka v1.1.0 release notes.

  • SASL OAUTHBEARER support (by @rondagostino at StateStreet)
  • In-memory SSL certificates (PEM, DER, PKCS#12) support (by @noahdav at Microsoft)
  • Pluggable broker SSL certificate verification callback (by @noahdav at Microsoft)
  • Use Windows Root/CA SSL Certificate Store (by @noahdav at Microsoft)
  • ssl.endpoint.identification.algorithm=https (off by default) to validate the broker hostname matches the certificate. Requires OpenSSL >= 1.0.2.
  • Improved GSSAPI/Kerberos ticket refresh

Upgrade considerations

  • Windows SSL users will no longer need to specify a CA certificate file/directory (ssl.ca.location), librdkafka will load the CA certs by default from the Windows Root Certificate Store.
  • SSL peer (broker) certificate verification is now enabled by default (disable with enable.ssl.certificate.verification=false)
  • %{broker.name} is no longer supported in sasl.kerberos.kinit.cmd since kinit refresh is no longer executed per broker, but per client instance.

SSL

New configuration properties:

  • ssl.key.pem - client's private key as a string in PEM format
  • ssl.certificate.pem - client's public key as a string in PEM format
  • enable.ssl.certificate.verification - enable (default)/disable OpenSSL's builtin broker certificate verification.
  • ssl.endpoint.identification.algorithm - set to https to verify the broker's hostname against its certificate (disabled by default).

The private key data is now securely cleared from memory after its last use.
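
A sketch of how these properties could be combined in a ConfigMap (broker address and PEM contents are placeholders):

```go
cfg := &kafka.ConfigMap{
	"bootstrap.servers": "broker:9093", // placeholder
	"security.protocol": "ssl",
	// In-memory PEM strings instead of ssl.key.location / ssl.certificate.location.
	"ssl.key.pem":         "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----",
	"ssl.certificate.pem": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
	// Verify the broker hostname against its certificate (off by default).
	"ssl.endpoint.identification.algorithm": "https",
}
```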

Enhancements

  • Bump message.timeout.ms max value from 15 minutes to 24 days (@sarkanyi)

Fixes

  • SASL GSSAPI/Kerberos: Don't run kinit refresh for each broker, just per client instance.
  • SASL GSSAPI/Kerberos: Changed sasl.kerberos.kinit.cmd to first attempt ticket refresh, then acquire.
  • SASL: Proper locking on broker name acquisition.
  • Consumer: max.poll.interval.ms now correctly handles blocking poll calls, allowing a longer poll timeout than the max poll interval.

v1.0.0

29 Mar 09:09

confluent-kafka-go v1.0.0

This release adds support for librdkafka v1.0.0, featuring the EOS Idempotent Producer, Sparse connections, KIP-62 - max.poll.interval.ms support, zstd, and more.

See the librdkafka v1.0.0 release notes for more information and upgrade considerations.

Go client enhancements

  • Now requires librdkafka v1.0.0.
  • A new IsFatal() function has been added to KafkaError to help the application differentiate between temporary and fatal errors. Fatal errors are currently only triggered by the idempotent producer.
  • Added kafka.NewError() to make it possible to create error objects from user code / unit test (Artem Yarulin)
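
A minimal sketch of using IsFatal() in a producer event loop (assumes p is a running *kafka.Producer):

```go
for e := range p.Events() {
	if err, ok := e.(kafka.Error); ok {
		if err.IsFatal() {
			// Fatal errors (currently raised only by the idempotent
			// producer) are unrecoverable: the producer instance must
			// be terminated and re-created.
			log.Fatalf("fatal error: %v", err)
		}
		// Non-fatal errors are generally informational and retriable.
		log.Printf("transient error: %v", err)
	}
}
```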

Go client fixes

  • Deprecate the use of default.topic.config. Topic configuration should now be set on the standard ConfigMap.
  • Reject delivery.report.only.error=true on producer creation (#306)
  • Avoid use of "Deprecated: " prefix (#268)
  • PartitionEOF must now be explicitly enabled through enable.partition.eof
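
For the default.topic.config deprecation, topic-level properties now go directly on the top-level ConfigMap, e.g. (broker address is a placeholder):

```go
// Before (deprecated):
//   &kafka.ConfigMap{"default.topic.config": kafka.ConfigMap{"acks": "all"}}
// After: set topic configuration on the standard ConfigMap.
cfg := &kafka.ConfigMap{
	"bootstrap.servers": "localhost:9092", // placeholder
	"acks":              "all",
}
```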

Make sure to check out the Idempotent Producer example

v0.11.6

25 Oct 07:46

Admin API

This release adds support for the Topic Admin API (KIP-4):

  • Create and delete topics
  • Increase topic partition count
  • Read and modify broker and topic configuration
  • Requires librdkafka >= v0.11.6
results, err := a.CreateTopics(
	ctx,
	// Multiple topics can be created simultaneously
	// by providing additional TopicSpecification structs here.
	[]kafka.TopicSpecification{{
		Topic:             "mynewtopic",
		NumPartitions:     20,
		ReplicationFactor: 3}})

More examples.

Fixes and enhancements

  • Make sure plugins are set before other configuration options (#225, @dtheodor)
  • Fix metadata memory leak
  • Clone config before mutating it in NewProducer and NewConsumer (@vlad-alexandru-ionescu)
  • Enable Error events to be emitted from librdkafka errors, e.g., ErrAllBrokersDown (#200)

v0.11.4

28 Mar 17:08

Announcements

  • This release drops support for Golang < 1.7

  • Requires librdkafka v0.11.4 or later

Message header support

Support for Kafka message headers has been added (requires broker version >= v0.11.0).

When producing messages, pass a []kafka.Header list:

        err = p.Produce(&kafka.Message{
                TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
                Value:          []byte(value),
                Headers:        []kafka.Header{{Key: "myTestHeader", Value: []byte("header values are binary")}},
        }, deliveryChan)

Message headers are available to the consumer as Message.Headers:

	msg, err := c.ReadMessage(-1)
	if err != nil {
		fmt.Printf("%% Consumer error: %v\n", err)
		continue
	}
	fmt.Printf("%% Message on %s:\n%s\n", msg.TopicPartition, string(msg.Value))
	if msg.Headers != nil {
		fmt.Printf("%% Headers: %v\n", msg.Headers)
	}

Enhancements

  • Message Headers support
  • Close event channel when consumer is closed (#123 by @czchen)
  • Added ReadMessage() convenience method to Consumer
  • producer: Make events channel size configurable (@agis)
  • Added Consumer.StoreOffsets() (#72)
  • Added ConfigMap.Get() (#26)
  • Added Pause() and Resume() APIs
  • Added Consumer.Committed() API
  • Added OffsetsForTimes() API to Consumer and Producer
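
A sketch of the new Consumer.StoreOffsets() in an at-least-once flow (broker address, group id, and error handling elided/placeholder):

```go
c, err := kafka.NewConsumer(&kafka.ConfigMap{
	"bootstrap.servers": "localhost:9092", // placeholder
	"group.id":          "mygroup",        // placeholder
	// Store offsets explicitly; only stored offsets are auto-committed.
	"enable.auto.offset.store": false,
})
if err != nil {
	log.Fatal(err)
}

msg, err := c.ReadMessage(-1)
if err == nil {
	// Process msg first, then mark the next offset for the auto-committer.
	_, err = c.StoreOffsets([]kafka.TopicPartition{{
		Topic:     msg.TopicPartition.Topic,
		Partition: msg.TopicPartition.Partition,
		Offset:    msg.TopicPartition.Offset + 1,
	}})
}
```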

Fixes

  • Static builds should now work on both OSX and Linux (#137, #99)
  • Update error constants from librdkafka
  • Enable produce.offset.report by default (unless overridden)
  • move test helpers that need testing pkg to _test.go file (@gwilym)
  • Build and run-time checking of librdkafka version (#88)
  • Remove gotos (@jadekler)
  • Fix Producer Value&Key slice referencing to avoid cgo pointer checking failures (#24)
  • Fix Go 1.10 build errors (drop pkg-config --static ..)