Merge branch 'main' into alter_comment
elasticmachine committed Feb 15, 2024
2 parents 8b8f0e3 + 4b21d67 commit 1feed58
Showing 28 changed files with 573 additions and 283 deletions.
6 changes: 6 additions & 0 deletions docs/changelog/105365.yaml
@@ -0,0 +1,6 @@
pr: 105365
summary: Fix bug in `rule_query` where `text_expansion` errored because it was not
rewritten
area: Application
type: bug
issues: []
165 changes: 127 additions & 38 deletions docs/reference/security/fips-140-compliance.asciidoc
@@ -8,59 +8,75 @@ government computer security standard used to approve cryptographic modules.
{es} offers a FIPS 140-2 compliant mode and as such can run in a FIPS 140-2
configured JVM.

IMPORTANT: The JVM bundled with {es} is not configured for FIPS 140-2. You must
configure an external JDK with a FIPS 140-2 certified Java Security Provider.
Refer to the {es}
https://www.elastic.co/support/matrix#matrix_jvm[JVM support matrix] for
supported JVM configurations. See https://www.elastic.co/subscriptions[subscriptions] for required licensing.

Compliance with FIPS 140-2 requires using only FIPS approved / NIST recommended cryptographic algorithms. Generally this can be done by the following:

- Installation and configuration of a FIPS certified Java security provider.
- Ensuring the configuration of {es} is FIPS 140-2 compliant as documented below.
- Setting `xpack.security.fips_mode.enabled` to `true` in `elasticsearch.yml`. Note - this setting alone is not sufficient to be compliant
with FIPS 140-2.

[discrete]
=== Configuring {es} for FIPS 140-2

Detailed instructions for the configuration required for FIPS 140-2 compliance are beyond the scope of this document. It is the responsibility
of the user to ensure compliance with FIPS 140-2. {es} has been tested with the specific configuration described below; however, other
configurations are possible to achieve compliance.

The following is a high-level overview of the required configuration:

* Use an externally installed Java installation. The JVM bundled with {es} is not configured for FIPS 140-2.
* Install a FIPS certified security provider .jar file(s) in {es}'s `lib` directory.
* Configure Java to use a FIPS certified security provider (xref:java-security-provider[see below]).
* Configure {es}'s security manager to allow use of the FIPS certified provider (xref:java-security-manager[see below]).
* Ensure the keystore and truststore are configured correctly (xref:keystore-fips-password[see below]).
* Ensure the TLS settings are configured correctly (xref:fips-tls[see below]).
* Ensure the password hashing settings are configured correctly (xref:fips-stored-password-hashing[see below]).
* Ensure the cached password hashing settings are configured correctly (xref:fips-cached-password-hashing[see below]).
* Configure `elasticsearch.yml` to use FIPS 140-2 mode, see (xref:configuring-es-yml[below]).
* Verify the security provider is installed and configured correctly (xref:verify-security-provider[see below]).
* Review the upgrade considerations (xref:fips-upgrade-considerations[see below]) and limitations (xref:fips-limitations[see below]).

[discrete]
[[java-security-provider]]
==== Java security provider

Detailed instructions for installation and configuration of a FIPS certified Java security provider are beyond the scope of this document.
Specifically, a FIPS certified
https://docs.oracle.com/en/java/javase/17/security/java-cryptography-architecture-jca-reference-guide.html[JCA] and
https://docs.oracle.com/en/java/javase/17/security/java-secure-socket-extension-jsse-reference-guide.html[JSSE] implementation is required
so that the JVM uses FIPS validated implementations of NIST recommended cryptographic algorithms.

Elasticsearch has been tested with Bouncy Castle's https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2.4/bc-fips-1.0.2.4.jar[bc-fips 1.0.2.4]
and https://repo1.maven.org/maven2/org/bouncycastle/bctls-fips/1.0.17/bctls-fips-1.0.17.jar[bctls-fips 1.0.17].
Please refer to the https://www.elastic.co/support/matrix#matrix_jvm[JVM support matrix] for details on which combinations of JVM and
security provider are supported in FIPS mode. {es} does not ship with a FIPS certified provider. It is the responsibility of the user
to install and configure the security provider to ensure compliance with FIPS 140-2. Using a FIPS certified provider ensures that only
approved cryptographic algorithms are used.
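As a sketch, the tested Bouncy Castle artifacts linked above can be fetched and placed in {es}'s `lib` directory (the install path here is illustrative and depends on your installation type):

[source,sh]
--------------------------------------------------
cd /usr/share/elasticsearch/lib  # hypothetical archive install path; adjust for your system
curl -LO https://repo1.maven.org/maven2/org/bouncycastle/bc-fips/1.0.2.4/bc-fips-1.0.2.4.jar
curl -LO https://repo1.maven.org/maven2/org/bouncycastle/bctls-fips/1.0.17/bctls-fips-1.0.17.jar
--------------------------------------------------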

To configure {es} to use additional security provider(s), set {es}'s <<set-jvm-options, JVM property>> `java.security.properties` to point to a file
(https://raw.githubusercontent.com/elastic/elasticsearch/main/build-tools-internal/src/main/resources/fips_java.security[example]) in {es}'s
`config` directory. This file should contain the necessary configuration to instruct Java to use the FIPS certified security provider,
and the FIPS certified security provider must be configured with the lowest order.
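For illustration, a minimal such file might look like the following. This is a sketch assuming Bouncy Castle's BCFIPS/BCJSSE providers; adapt the class names and ordering to your certified provider:

[source,properties]
--------------------------------------------------
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
security.provider.3=SUN
--------------------------------------------------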

[discrete]
[[java-security-manager]]
==== Java security manager

All code running in {es} is subject to the security restrictions enforced by the Java security manager.
The security provider you have installed and configured may require additional permissions in order to function correctly. You can grant these permissions by providing your own
https://docs.oracle.com/javase/8/docs/technotes/guides/security/PolicyFiles.html#FileSyntax[Java security policy].

To configure {es}'s security manager, set the JVM property `java.security.policy` to point to a file
(https://raw.githubusercontent.com/elastic/elasticsearch/main/build-tools-internal/src/main/resources/fips_java.policy[example]) in {es}'s
`config` directory with the desired permissions. This file should contain the necessary configuration for the Java security manager
to grant the required permissions needed by the security provider.
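A sketch of such a policy file follows. The permissions shown are illustrative, not exhaustive; the exact set depends on your provider, so consult its documentation:

[source,text]
--------------------------------------------------
grant {
    // illustrative permissions a FIPS provider may require
    permission java.security.SecurityPermission "putProviderProperty.BCFIPS";
    permission java.security.SecurityPermission "putProviderProperty.BCJSSE";
    permission java.lang.RuntimePermission "getProtectionDomain";
};
--------------------------------------------------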

Apart from setting `xpack.security.fips_mode.enabled`, a number of security
related settings need to be configured accordingly in order to be compliant
and able to run {es} successfully in a FIPS 140-2 configured JVM.

[discrete]
[[keystore-fips-password]]
@@ -78,6 +94,7 @@ Note that when the keystore is password-protected, you must supply the password
when Elasticsearch starts.
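For example, an existing keystore can be password-protected with the `elasticsearch-keystore` tool (the relative path assumes an archive installation):

[source,sh]
--------------------------------------------------
bin/elasticsearch-keystore passwd
--------------------------------------------------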

[discrete]
[[fips-tls]]
==== TLS

SSLv2 and SSLv3 are not allowed by FIPS 140-2, so `SSLv2Hello` and `SSLv3` cannot
@@ -172,6 +189,78 @@ hashes using non-compliant algorithms will be discarded and the new
ones will be created using the algorithm you have selected.

[discrete]
[[configuring-es-yml]]
==== Configure {es} elasticsearch.yml

* Set `xpack.security.fips_mode.enabled` to `true` in `elasticsearch.yml`. This setting makes some internal
configuration FIPS 140-2 compliant and enables some additional verification.

* Set `xpack.security.autoconfiguration.enabled` to `false`. This disables the automatic configuration of security settings;
users must ensure that the security settings are configured correctly for FIPS 140-2 compliance. This is only applicable to new installations.

* Set `xpack.security.authc.password_hashing.algorithm` appropriately; see xref:fips-stored-password-hashing[above].

* Configure other relevant security settings, for example TLS for the transport and HTTP interfaces (not explicitly covered here or in the example below).

* Optional: Set `xpack.security.fips_mode.required_providers` in `elasticsearch.yml` to ensure the required security providers are in use (8.13+);
see xref:verify-security-provider[below].

[source,yaml]
--------------------------------------------------
xpack.security.fips_mode.enabled: true
xpack.security.autoconfiguration.enabled: false
xpack.security.fips_mode.required_providers: ["BCFIPS", "BCJSSE"]
xpack.security.authc.password_hashing.algorithm: "pbkdf2_stretch"
--------------------------------------------------

[discrete]
[[verify-security-provider]]
==== Verify the security provider is installed

To verify that the security provider is installed and in use, take either of the following steps:

* Verify the required security providers are configured with the lowest order in the file pointed to by `java.security.properties`.
For example, `security.provider.1` has a lower order than `security.provider.2`.

* Set `xpack.security.fips_mode.required_providers` in `elasticsearch.yml` to the list of required security providers (8.13+).
This setting ensures that the correct security providers are installed and configured; if they are not, {es} will fail to start.
`["BCFIPS", "BCJSSE"]` are the values to use for Bouncy Castle's FIPS certified JCE and JSSE providers.
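You can also print the security providers the running JVM has actually loaded, in precedence order. The following is a minimal standalone sketch; run it with the same `java.security.properties` setting your {es} nodes use, and in a correctly configured FIPS setup the FIPS provider(s) should appear first:

```java
import java.security.Provider;
import java.security.Security;

// Prints the installed Java security providers in precedence order.
public class ListSecurityProviders {
    public static void main(String[] args) {
        Provider[] providers = Security.getProviders();
        for (int i = 0; i < providers.length; i++) {
            // order 1 is consulted first when resolving algorithms
            System.out.println((i + 1) + ": " + providers[i].getName());
        }
    }
}
```

On a stock (non-FIPS) JVM this lists the default providers such as `SUN`; after configuration it should list your FIPS provider first.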

[discrete]
[[fips-upgrade-considerations]]
=== Upgrade considerations
include::fips-java17.asciidoc[]

[IMPORTANT]
====
Some encryption algorithms may no longer be available by default in updated FIPS 140-2 security providers.
Notably, Triple DES and PKCS1.5 RSA are now discouraged and https://www.bouncycastle.org/fips-java[Bouncy Castle] now
requires explicit configuration to continue using these algorithms.
====

If you plan to upgrade your existing cluster to a version that can run in
a FIPS 140-2 configured JVM, we recommend first performing a rolling
upgrade to the new version in your existing JVM and making all necessary
configuration changes in preparation for running in FIPS 140-2 mode. You can then
perform a rolling restart of the nodes, starting each node in a FIPS 140-2 JVM.
During the restart, {es}:

- Upgrades <<secure-settings,secure settings>> to the latest, compliant format.
A FIPS 140-2 JVM cannot load previous format versions. If your keystore is
not password-protected, you must manually set a password. See
<<keystore-fips-password>>.
- Upgrades self-generated trial licenses to the latest FIPS 140-2 compliant format.

If your {subscriptions}[subscription] already supports FIPS 140-2 mode, you
can elect to perform a rolling upgrade while at the same time running each
upgraded node in a FIPS 140-2 JVM. In this case, you would need to also manually
regenerate your `elasticsearch.keystore` and migrate all secure settings to it,
in addition to the necessary configuration changes outlined above, before
starting each node.

[discrete]
[[fips-limitations]]
=== Limitations

Due to the limitations that FIPS 140-2 compliance enforces, a small number of
13 changes: 5 additions & 8 deletions docs/reference/security/fips-java17.asciidoc
@@ -1,10 +1,7 @@
ifeval::["{release-state}"=="released"]
{es} 8.0+ requires Java 17 or later. {es} 8.13+ has been tested with https://www.bouncycastle.org/java.html[Bouncy Castle]'s Java 17
https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616[certified] FIPS implementation and is the
recommended Java security provider when running {es} in FIPS 140-2 mode.
Note - {es} does not ship with a FIPS certified security provider and requires explicit installation and configuration.
Alternatively, consider using {ess} in the
https://www.elastic.co/industries/public-sector/fedramp[FedRAMP-certified GovCloud region].
endif::[]
@@ -12,18 +12,18 @@
import org.elasticsearch.TransportVersions;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.core.Releasables;
import org.elasticsearch.search.DocValueFormat;
import org.elasticsearch.search.aggregations.AggregationReduceContext;
import org.elasticsearch.search.aggregations.AggregatorReducer;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.InternalMultiBucketAggregation;
import org.elasticsearch.search.aggregations.bucket.FixedMultiBucketAggregatorsReducer;
import org.elasticsearch.search.aggregations.support.SamplingContext;
import org.elasticsearch.xcontent.XContentBuilder;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Objects;
@@ -248,50 +248,32 @@ protected AggregatorReducer getLeaderReducer(AggregationReduceContext reduceCont

return new AggregatorReducer() {

final FixedMultiBucketAggregatorsReducer<Bucket> reducer = new FixedMultiBucketAggregatorsReducer<>(
reduceContext,
size,
getBuckets()
) {

@Override
protected Bucket createBucket(Bucket proto, long docCount, InternalAggregations aggregations) {
return new Bucket(proto.format, proto.keyed, proto.key, proto.from, proto.to, docCount, aggregations);
}
};

@Override
public void accept(InternalAggregation aggregation) {
InternalBinaryRange binaryRange = (InternalBinaryRange) aggregation;
reducer.accept(binaryRange.getBuckets());
}

@Override
public InternalAggregation get() {
return new InternalBinaryRange(name, format, keyed, reducer.get(), metadata);
}

@Override
public void close() {
Releasables.close(reducer);
}
};
}
@@ -10,8 +10,10 @@

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.core.Releasables;
import org.elasticsearch.search.aggregations.AggregationReduceContext;
import org.elasticsearch.search.aggregations.AggregatorReducer;
import org.elasticsearch.search.aggregations.AggregatorsReducer;
import org.elasticsearch.search.aggregations.InternalAggregation;
import org.elasticsearch.search.aggregations.InternalAggregations;
import org.elasticsearch.search.aggregations.bucket.InternalSingleBucketAggregation;
@@ -20,8 +22,6 @@
import org.elasticsearch.xcontent.XContentBuilder;

import java.io.IOException;
import java.util.Map;

public class InternalRandomSampler extends InternalSingleBucketAggregation implements Sampler {
@@ -79,24 +79,28 @@ protected InternalSingleBucketAggregation newAggregation(String name, long docCo
protected AggregatorReducer getLeaderReducer(AggregationReduceContext reduceContext, int size) {
return new AggregatorReducer() {
long docCount = 0L;
final AggregatorsReducer subAggregatorReducer = new AggregatorsReducer(reduceContext, size);

@Override
public void accept(InternalAggregation aggregation) {
docCount += ((InternalSingleBucketAggregation) aggregation).getDocCount();
subAggregatorReducer.accept(((InternalSingleBucketAggregation) aggregation).getAggregations());
}

@Override
public InternalAggregation get() {
InternalAggregations aggs = subAggregatorReducer.get();
if (reduceContext.isFinalReduce() && aggs != null) {
SamplingContext context = buildContext();
aggs = InternalAggregations.from(aggs.asList().stream().map(agg -> agg.finalizeSampling(context)).toList());
}

return newAggregation(getName(), docCount, aggs);
}

@Override
public void close() {
Releasables.close(subAggregatorReducer);
}
};
}

