diff --git a/website/content/api-docs/features/consistency.mdx b/website/content/api-docs/features/consistency.mdx index 4cbf9037ef9f..8feee9a889d4 100644 --- a/website/content/api-docs/features/consistency.mdx +++ b/website/content/api-docs/features/consistency.mdx @@ -33,12 +33,12 @@ followers may have a slightly outdated, or "stale", view of Consul's state. If a read request is handled by the current leader, the response is guaranteed to be fully _consistent_ (as up-to-date as possible). If the same request were handled by a follower, the response may be less consistent: -based on a _stale_ (outdated) copy of the leader's state. +based on a _stale_ (outdated) copy of the leader's state. Consistency is highest if the response comes from the leader. But ensuring only the leader can respond to the request prevents spreading read request load across all Consul servers. -The consistency mode controls which Consul servers can repond to read requests, +The consistency mode controls which Consul servers can respond to read requests, enabling control over this inherent trade-off between consistency and performance. ## Available Consistency Modes @@ -56,7 +56,7 @@ Each HTTP API endpoint documents its support for the three read consistency mode - `default` - [Consul HTTP API queries use `default` mode by default](#consul-http-api-queries). - It is strongly consistent in almost all cases. + It is strongly consistent in almost all cases. However, there is a small window in which a new leader may be elected during which the old leader may respond with stale values. The trade-off is fast reads but potentially stale values. @@ -108,7 +108,7 @@ per consistency mode and the relative trade-offs between level of consistency an ### Cross-Datacenter Request Behavior When making a request across federated Consul datacenters, requests are forwarded from -a local server to any remote server. Once in the remote datecenter, the request path +a local server to any remote server. Once in the remote datacenter, the request path is the same as a [local request with the same consistency mode](#intra-datacenter-request-behavior). The following diagrams show the cross-datacenter request paths when Consul servers in datacenters are [federated either directly or via mesh gateways](/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). @@ -214,7 +214,7 @@ Services can still override this default on a per-request basis by [specifying a supported consistency mode as a query parameter in the request](#overriding-a-request-s-consistency-mode). To configure Consul servers that receive service discovery requests to use `stale` -consistency mode unless overriden, +consistency mode unless overridden, set [`discovery_max_stale`] to a value greater than zero in their agent configuration. The `stale` consistency mode will be used by default unless the data is sufficiently stale: its Raft log's index is more than [`discovery_max_stale`] indices behind the leader's. @@ -275,4 +275,4 @@ semantics as `stale` consistency mode but different trade offs. 
This behavior is [`dns_config.allow_stale`]: /docs/agent/options#allow_stale) [`dns_config.max_stale`]: /docs/agent/options#max_stale -[`discovery_max_stale`]: /docs/agent/options#discovery_max_stale \ No newline at end of file +[`discovery_max_stale`]: /docs/agent/options#discovery_max_stale diff --git a/website/content/api-docs/index.mdx b/website/content/api-docs/index.mdx index cdb940c8ed4c..70646f3b1018 100644 --- a/website/content/api-docs/index.mdx +++ b/website/content/api-docs/index.mdx @@ -14,7 +14,7 @@ The Consul HTTP API is a RESTful interface that allows you leverage Consul funct Use the following API endpoints to configure and connect your services. - [`/catalog`](/api-docs/catalog): Register and deregister nodes, services, and health checks. -- [`/health`](/api-docs/health): Query node health when health checks are enabled. +- [`/health`](/api-docs/health): Query node health when health checks are enabled. - [`/query`](/api-docs/query): Create and manage prepared queries in Consul. Prepared queries allow you to register a complex service query and send it later. - [`/coordinate`](/api-docs/coordinate): Query the network coordinates for nodes in the local datacenter and Consul servers in the local datacenter and remote datacenters. @@ -29,15 +29,15 @@ The following endpoints are specific to service mesh: The following API endpoints give you control over access to services in your network and access to the Consul API. -- [`/acl`](/api-docs/acl): Create and manage tokens that authenticate requests and authorize access to resources in the network. We recommend enabling access control lists (ACL) to secure access to the Consul API, UI, and CLI. +- [`/acl`](/api-docs/acl): Create and manage tokens that authenticate requests and authorize access to resources in the network. We recommend enabling access control lists (ACL) to secure access to the Consul API, UI, and CLI. - [`/connect/intentions`](/api-docs/connect/intentions): Create and manage service intentions. ## Observe your network Use the following API endpoints enable network observability. -- [`/status`](/api-docs/status): Debug your Consul datacenter by returning low-level Raft information about Consul server peers. -- [`/agent/metrics`](/api-docs/agent#view-metrics): Retrieve metrics for the most recent finished intervals. For more information about metrics, refere to [Telemetry](/docs/agent/telemetry). +- [`/status`](/api-docs/status): Debug your Consul datacenter by returning low-level Raft information about Consul server peers. +- [`/agent/metrics`](/api-docs/agent#view-metrics): Retrieve metrics for the most recent finished intervals. For more information about metrics, refer to [Telemetry](/docs/agent/telemetry). ## Manage consul @@ -54,6 +54,6 @@ The following API endpoints help you manage Consul operations. The following API endpoints enable you to dynamically configure your services. -- [`/event`](/api-docs/event): Start a custom event that you can use to build scripts and automations. +- [`/event`](/api-docs/event): Start a custom event that you can use to build scripts and automations. - [`/kv`](/api-docs/kv): Add, remove, and update metadata stored in the Consul KV store. - [`/session`](/api-docs/session): Create and manage [sessions](/docs/dynamic-app-config/sessions) in Consul. You can use sessions to build distributed and granular locks to ensure nodes are properly writing to the Consul KV store. 
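For example, a minimal sketch of exercising the KV endpoint against a local agent (assuming the default HTTP address `127.0.0.1:8500` and a placeholder key name), including a read that opts into the `stale` consistency mode described above:

```shell-session
# Write a placeholder key (assumes a local agent on the default HTTP port)
$ curl --request PUT --data 'hello' 'http://127.0.0.1:8500/v1/kv/my-app/config'
# Read it back; the stale query parameter lets any server answer, not just the leader
$ curl 'http://127.0.0.1:8500/v1/kv/my-app/config?stale'
```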
diff --git a/website/content/api-docs/query.mdx b/website/content/api-docs/query.mdx index 77aa76dfbb2a..521719eabcf6 100644 --- a/website/content/api-docs/query.mdx +++ b/website/content/api-docs/query.mdx @@ -87,7 +87,7 @@ populate the query before it is executed. All of the string fields inside the empty string. - `${agent.segment}` - the network segment of the agent that - initiated the query. This varaible can be used with the `NodeMeta` field to limit the results + initiated the query. This variable can be used with the `NodeMeta` field to limit the results of a query to service instances within its own network segment: ```json diff --git a/website/content/docs/api-gateway/configuration/gatewayclass.mdx b/website/content/docs/api-gateway/configuration/gatewayclass.mdx index 70684e73080c..dc366d4c0500 100644 --- a/website/content/docs/api-gateway/configuration/gatewayclass.mdx +++ b/website/content/docs/api-gateway/configuration/gatewayclass.mdx @@ -48,7 +48,7 @@ Defines an API object that references additional configurations required by the | --- | --- | --- | --- | | `group` | Specifies the Kubernetes group that the `parametersRef` is a member of.
The value must always be `api-gateway.consul.hashicorp.com`.
The `parametersRef.group` is always the same across all deployments of Consul API Gateway. | String | Required | | `kind` | Specifies the type of Kubernetes object that the `parametersRef` configuration defines.
The value must always be `GatewayClassConfig`.
This `parametersRef.kind` is always the same across all deployments of Consul API Gateway. | String | Required | -| `name` | Specfies a name for the `GatewayClassConfig` object. | String | Required | +| `name` | Specifies a name for the `GatewayClassConfig` object. | String | Required | ### description diff --git a/website/content/docs/api-gateway/configuration/routes.mdx b/website/content/docs/api-gateway/configuration/routes.mdx index dced0cfa042b..be711b32eef7 100644 --- a/website/content/docs/api-gateway/configuration/routes.mdx +++ b/website/content/docs/api-gateway/configuration/routes.mdx @@ -108,7 +108,7 @@ The `rules` field contains a list of objects that define behaviors for network t * [`backendRefs`](#rules-backendrefs): Specifies which backend services the `Route` references when processing traffic. * [`filters`](#rules-filters): Specifies which operations Consul API Gateway performs when traffic goes through the `Route`. -* [`matches`](#rules-matches): Deterines which requests Consul API Gateway processes. +* [`matches`](#rules-matches): Determines which requests Consul API Gateway processes. Rules are optional. diff --git a/website/content/docs/api-gateway/upgrades.mdx b/website/content/docs/api-gateway/upgrades.mdx index 24c29fde17be..87a7993152c2 100644 --- a/website/content/docs/api-gateway/upgrades.mdx +++ b/website/content/docs/api-gateway/upgrades.mdx @@ -169,7 +169,7 @@ If you have any active `ReferencePolicy` resources, you will receive output simi ## Upgrade to v0.3.0 from v0.2.0 or lower -Consul API Gateway v0.3.0 introduces a change for people upgrading from lower versions. Gateways with `listeners` with a `certificateRef` defined in a different namespace now require a [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) that explicitly allows `Gateways` from the gateway's namesapce to use `certificateRef` in the `certificateRef`'s namespace. +Consul API Gateway v0.3.0 introduces a change for people upgrading from lower versions. Gateways with `listeners` with a `certificateRef` defined in a different namespace now require a [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) that explicitly allows `Gateways` from the gateway's namespace to use `certificateRef` in the `certificateRef`'s namespace. ### Requirements diff --git a/website/content/docs/connect/config-entries/exported-services.mdx b/website/content/docs/connect/config-entries/exported-services.mdx index 60a247e43854..32f730240b3e 100644 --- a/website/content/docs/connect/config-entries/exported-services.mdx +++ b/website/content/docs/connect/config-entries/exported-services.mdx @@ -20,7 +20,7 @@ You can configure the settings defined in the `exported-services` configuration ## Requirements -- A 1.11.0+ Consul Enteprise binary or a 1.13.0+ Consul OSS binary. +- A 1.11.0+ Consul Enterprise binary or a 1.13.0+ Consul OSS binary. - **Enterprise Only**: A corresponding partition that the configuration entry can export from. For example, the `exported-services` configuration entry for a partition named `frontend` requires an existing `frontend` partition. 
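For illustration, a minimal sketch of an `exported-services` configuration entry for a partition named `frontend` (the exported service name and the consuming partition below are placeholders, not values taken from this page):

```json
{
  "Kind": "exported-services",
  "Name": "frontend",
  "Services": [
    {
      "Name": "billing-api",
      "Consumers": [
        {
          "Partition": "backend"
        }
      ]
    }
  ]
}
```

Applying an entry like this, for example with `consul config write`, makes the listed service discoverable by downstream services in the consuming partition.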
## Usage diff --git a/website/content/docs/connect/transparent-proxy.mdx b/website/content/docs/connect/transparent-proxy.mdx index 82d98f60cb8b..c03d397eb13e 100644 --- a/website/content/docs/connect/transparent-proxy.mdx +++ b/website/content/docs/connect/transparent-proxy.mdx @@ -15,9 +15,9 @@ This topic describes how to use Consul’s transparent proxy feature, which allo When transparent proxy is enabled, Consul is able to perform the following actions automatically: -- Infer the location of upstream services using service intentions. -- Redirect outbound connections that point to KubeDNS through the proxy. -- Force traffic through the proxy to prevent unauthorized direct access to the application. +- Infer the location of upstream services using service intentions. +- Redirect outbound connections that point to KubeDNS through the proxy. +- Force traffic through the proxy to prevent unauthorized direct access to the application. The following diagram shows how transparent proxy routes traffic: @@ -28,7 +28,7 @@ When transparent proxy is disabled, you must manually specify the following conf * Explicitly configure upstream services by specifying a local port to access them. * Change application to access `localhost:`. * Configure applications to only listen on the loopback interface to prevent unauthorized traffic from bypassing the mesh. - + The following diagram shows how traffic flows through the mesh without transparent proxy enabled: ![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png) @@ -42,10 +42,10 @@ Your network must meet the following environment and software requirements to us * Transparent proxy is available for Kubernetes environments. * Consul 1.10.0+ * Consul Helm chart 0.32.0+. If you want to use the Consul CNI plugin to redirect traffic, Helm chart 0.48.0+ is required. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for additional information. -* [Service intentions](/docs/connect/intentions) must be configured to allow communication between intended services. -* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. If you are using the `modprobe` Linux utility, for example, issue the following command: +* [Service intentions](/docs/connect/intentions) must be configured to allow communication between intended services. +* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. If you are using the `modprobe` Linux utility, for example, issue the following command: - `$ modprobe ip_tables` + `$ modprobe ip_tables` ~> **Upgrading to a supported version**: Always follow the [proper upgrade path](/docs/upgrading/upgrade-specific/#transparent-proxy-on-kubernetes) when upgrading to a supported version of Consul, Consul on Kubernetes (`consul-k8s`), and the Consul Helm chart. @@ -78,7 +78,7 @@ kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true" ``` #### Individual service -Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to eanble transparent proxy on the Pod for each service. The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service: +Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to enable transparent proxy on the Pod for each service. 
The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service: ```yaml apiVersion: v1 @@ -130,19 +130,19 @@ spec: ### Enable the Consul CNI plugin -By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. +By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the Kubernetes kubelet, it already has the elevated privileges necessary to configure the network. Additionally, you do not need to specify annotations that automatically overwrite Kubernetes HTTP health probes when the plugin is enabled (see [Overwrite Kubernetes HTTP health probes](#overwrite-kubernetes-http-health-probes)). -The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information. +The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information. ### Traffic redirection -There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh. +There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh. -Alternatively, you can enable the Consul CNI plugin to handle traffic redirection. Because the Kubernetes kubelet runs CNI plugins, the Consul CNI plugin has the necessary privileges to apply routing tables in the network. +Alternatively, you can enable the Consul CNI plugin to handle traffic redirection. Because the Kubernetes kubelet runs CNI plugins, the Consul CNI plugin has the necessary privileges to apply routing tables in the network. -Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies. +Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies. 
#### Exclude inbound ports @@ -177,8 +177,8 @@ The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/docs/k8s/ #### Exclude outbound CIDR blocks -The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation -defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values. +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation +defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values. In the following example, services in the `3.3.3.3/24` IP range are not redirected through the transparent proxy: @@ -194,8 +194,8 @@ In the following example, services in the `3.3.3.3/24` IP range are not redirect #### Exclude user IDs -The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation -defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values. +The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation +defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values. In the following example, services with the IDs `4444 ` and `44444 ` are not redirected through the transparent proxy: @@ -215,7 +215,7 @@ In the following example, services with the IDs `4444 ` and `44444 ` are not red By default, `connect-inject` is disabled. As a result, Consul on Kubernetes uses a mechanism for traffic redirection that interferes with [Kubernetes HTTP health probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). This is because probes expect the kubelet to reach the application container on the probe's endpoint. Instead, traffic is redirected through the sidecar proxy. As a result, health probes return errors because the kubelet does not encrypt that traffic using a mesh proxy. -There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` annotation to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. Refer to the [Consul on Kubernetes installation instructions](/docs/k8s/installation/install) for additional information. +There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` annotation to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. Refer to the [Consul on Kubernetes installation instructions](/docs/k8s/installation/install) for additional information. 
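As a sketch of the first method, the setting can be supplied through the Consul Helm chart values (the surrounding `connectInject` values are assumptions based on a typical transparent proxy deployment, not taken from this page):

```yaml
connectInject:
  enabled: true
  transparentProxy:
    # Enable transparent proxy by default and rewrite HTTP probes to point at the sidecar proxy
    defaultEnabled: true
    defaultOverwriteProbes: true
```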
#### Overwrite Kubernetes HTTP health probes @@ -225,9 +225,9 @@ Refer to [Kubernetes Health Checks in Consul on Kubernetes](/docs/k8s/connect/he ### Dial services across Kubernetes cluster -If your [Consul servers are federated between Kubernetes clusters](/docs/k8s/installation/multi-cluster/kubernetes), -then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the -[consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. +If your [Consul servers are federated between Kubernetes clusters](/docs/k8s/installation/multi-cluster/kubernetes), +then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. The following example configures the service to dial an upstream service called `my-service` in datacenter `dc2` on port `1234`: ```yaml @@ -235,27 +235,27 @@ The following example configures the service to dial an upstream service called ``` If your Consul cluster is deployed to a [single datacenter spanning multiple Kubernetes clusters](/docs/k8s/deployment-configurations/single-dc-multi-k8s), -then you must configure services in one Kubernetes cluster to explicitly dial a service in another Kubernetes cluster using the -[consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. +then you must configure services in one Kubernetes cluster to explicitly dial a service in another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. The following example configures the service to dial an upstream service called `my-service` in another Kubernetes cluster on port `1234`: ```yaml "consul.hashicorp.com/connect-service-upstreams": "my-service:1234" ``` -You do not need to configure services to explicitlly dial upstream services if your Consul clusters are connected with a [peering connection](/docs/connect/cluster-peering). +You do not need to configure services to explicitly dial upstream services if your Consul clusters are connected with a [peering connection](/docs/connect/cluster-peering). ## Usage -When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) -or Pod IP addresses is redirected through the proxy. You must use a selector to bind Kubernetes Services to Pods as you define Kubernetes Services in the mesh. -The Kubernetes Service name must match the Consul service name to use KubeDNS. This is the default behavior unless you have applied the `consul.hashicorp.com/connect-service` +When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) +or Pod IP addresses is redirected through the proxy. You must use a selector to bind Kubernetes Services to Pods as you define Kubernetes Services in the mesh. +The Kubernetes Service name must match the Consul service name to use KubeDNS. This is the default behavior unless you have applied the `consul.hashicorp.com/connect-service` Kubernetes annotation to the service pods. 
The annotation overrides the Consul service name. Consul configures redirection for each Pod bound to the Kubernetes Service using `iptables` rules. The rules redirect all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. Consul configures the proxy to route traffic to the appropriate upstream services based on [service intentions](/docs/connect/config-entries/service-intentions), which address the upstream services using KubeDNS. -In the following example, the Kubernetes service selects `sample-app` application Pods so that they can be reached within the mesh. +In the following example, the Kubernetes service selects `sample-app` application Pods so that they can be reached within the mesh. @@ -285,5 +285,5 @@ Note that when dialing individual instances, Consul ignores the HTTP routing rul ## Known Limitations - Deployment configurations with federation across or a single datacenter spanning multiple clusters must explicitly dial a service in another datacenter or cluster using annotations. - + - When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol. diff --git a/website/content/docs/ecs/manual/secure-configuration.mdx b/website/content/docs/ecs/manual/secure-configuration.mdx index c888e5cba36a..ec80419b39e1 100644 --- a/website/content/docs/ecs/manual/secure-configuration.mdx +++ b/website/content/docs/ecs/manual/secure-configuration.mdx @@ -189,7 +189,7 @@ In the `-config` option, the following fields are required: The following binding rule is used to associate a service identity with each token created by successful login to this instance of the auth method. The service identity name is taken from the -`consul.hashicorp.com.service-name` tag from the authenticaing IAM role identity. +`consul.hashicorp.com.service-name` tag from the authenticating IAM role identity. #### Create Binding Rule @@ -271,7 +271,7 @@ consul acl auth-method create \ | -------------------------------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `-partition` | string | The Consul Enterprise admin partition in which the auth method is created. | | `-namespace-rule-selector` | string | When this expression evaluates to true during login, the `-namespace-rule-bind-namespace` is applied. As shown, it evaluates to true when the `consul.hashicorp.com.namespace` tag is non-empty on the task IAM role. | -| `-namespace-rule-bind-namespace` | string | This expression is evaluted to determine the namespace where the token is created during login. As shown, it uses the namespace from the `consul.hashicorp.com.namespace` tag on the task IAM role. | +| `-namespace-rule-bind-namespace` | string | This expression is evaluated to determine the namespace where the token is created during login. As shown, it uses the namespace from the `consul.hashicorp.com.namespace` tag on the task IAM role. | | `IAMEntityTags` | list | Must include `consul.hashicorp.com.namespace` to enable use of this tag in binding rules. 
| ## Secret storage diff --git a/website/content/docs/ecs/terraform/install.mdx b/website/content/docs/ecs/terraform/install.mdx index 45399f6c3821..faebcb4b13b4 100644 --- a/website/content/docs/ecs/terraform/install.mdx +++ b/website/content/docs/ecs/terraform/install.mdx @@ -22,7 +22,7 @@ The following procedure describes the general workflow: 2. [Run Terraform](#running-terraform) to deploy the resources in AWS -If you want to operate Consul in production environments, follow the instructions in the [Secure Configuration](/docs/ecs/terraform/secure-configuration) documentation. The instructions describe how to enable ACLs and TLS and gossip encyption, which provide network security for production-grade deployments. +If you want to operate Consul in production environments, follow the instructions in the [Secure Configuration](/docs/ecs/terraform/secure-configuration) documentation. The instructions describe how to enable ACLs and TLS and gossip encryption, which provide network security for production-grade deployments. ## Requirements diff --git a/website/content/docs/enterprise/admin-partitions.mdx b/website/content/docs/enterprise/admin-partitions.mdx index cfe962ff9477..e1635bede7b6 100644 --- a/website/content/docs/enterprise/admin-partitions.mdx +++ b/website/content/docs/enterprise/admin-partitions.mdx @@ -58,7 +58,7 @@ The partition in which [`proxy-defaults`](/docs/connect/config-entries/proxy-def ### Cross-partition Networking -You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `exported-services` configuration entry in the partition where the services are registered. Refer to the [`exported-services` documentation](/docs/connect/config-entries/exported-services) for details. Additionally, the requests made by dowstream applications must have the correct DNS name for the Virtual IP Service lookup to occur. Service Virtual IP lookups allow for communications across Admin Partitions when using Transparent Proxy. Refer to the [Service Virtual IP Lookups for Consul Enterprise](/docs/discovery/dns#service-virtual-ip-lookups-for-consul-enterprise) for additional information. +You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `exported-services` configuration entry in the partition where the services are registered. Refer to the [`exported-services` documentation](/docs/connect/config-entries/exported-services) for details. Additionally, the requests made by downstream applications must have the correct DNS name for the Virtual IP Service lookup to occur. Service Virtual IP lookups allow for communications across Admin Partitions when using Transparent Proxy. Refer to the [Service Virtual IP Lookups for Consul Enterprise](/docs/discovery/dns#service-virtual-ip-lookups-for-consul-enterprise) for additional information. ## Requirements @@ -72,12 +72,12 @@ Your Consul configuration must meet the following requirements to use admin part All Consul clients must be able to initiate Gossip, HTTPS, and RPC connections to the servers. All servers must also be able to initiate Gossip connections to the clients. 
-For Consul on Kubernetes, a dedicated `partition` Kubernetes `LoadBalancer` service is deployed to allow communication from clients to servers for admin partitions support (refer to [Kubernetes Requirements](#kubernetes-requirements) for additional information). +For Consul on Kubernetes, a dedicated `partition` Kubernetes `LoadBalancer` service is deployed to allow communication from clients to servers for admin partitions support (refer to [Kubernetes Requirements](#kubernetes-requirements) for additional information). For other runtimes, refer to the documentation for your infrastructure environment for instructions on how to allow communication on the following ports: -- 8300 (RPC) +- 8300 (RPC) - 8301 (Gossip) -- 443 (HTTPS API requests) +- 443 (HTTPS API requests) ### Security Configurations @@ -107,11 +107,11 @@ One of the primary use cases for admin partitions is for enabling a service mesh - The helm chart for consul-k8s v0.39.0 or greater. - Consul 1.11.1-ent or greater. - A designated Kubernetes `LoadBalancer` service must be exposed on the Consul server cluster. This enable the following communication channels to the Consul servers: - - RPC on port 8300 + - RPC on port 8300 - Gossip on port 8301 - - HTTPS API requests on port 443 API requests + - HTTPS API requests on port 443 API requests - Mesh gateways must be deployed as a Kubernetes `LoadBalancer` service on port 443 across all Kubernetes clusters. -- Cross-partition networking must be implemented as described in [Cross-Partition Networking](#cross-partition-networking). +- Cross-partition networking must be implemented as described in [Cross-Partition Networking](#cross-partition-networking). ## Usage @@ -128,7 +128,7 @@ The following procedure will result in an admin partition in each Kubernetes clu Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernetes-requirements) before proceeding. 1. Verify that your VPC is configured to enable connectivity between the pods running Consul clients and servers. Refer to your virtual cloud provider's documentation for instructions on configuring network connectivity. -1. Set environment variables to use with shell commands. +1. Set environment variables to use with shell commands. ```shell-session $ export HELM_RELEASE_SERVER=server @@ -136,24 +136,24 @@ Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernet $ export SERVER_CONTEXT= $ export CLIENT_CONTEXT= ``` - + 1. Create the license secret in server cluster. ```shell-session - $ kubectl create --context ${SERVER_CONTEXT} ns consul + $ kubectl create --context ${SERVER_CONTEXT} ns consul $ kubectl create secret --context ${SERVER_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic ``` -1. Create the license secret in the workload client cluster. This step must be repeated for every additional workload client cluster. +1. Create the license secret in the workload client cluster. This step must be repeated for every additional workload client cluster. ```shell-session - $ kubectl create --context ${CLIENT_CONTEXT} ns consul + $ kubectl create --context ${CLIENT_CONTEXT} ns consul $ kubectl create secret --context ${CLIENT_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic ``` - + #### Install the Consul server cluster -1. Set your context to the server cluster. +1. Set your context to the server cluster. 
```shell-session $ kubectl config use-context ${SERVER_CONTEXT} @@ -237,13 +237,13 @@ Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernet ```shell-session $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-partitions-acl-token --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - ``` - -#### Install the workload client cluster - + +#### Install the workload client cluster + 1. Switch to the workload client clusters: ```shell-session - $ kubectl config use-context ${CLIENT_CONTEXT} + $ kubectl config use-context ${CLIENT_CONTEXT} ``` 1. Create the workload configuration for client nodes in your cluster. Create a configuration for each admin partition. @@ -307,19 +307,19 @@ Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernet 1. Install the workload client clusters: ```shell-session - $ helm install ${HELM_RELEASE_CLIENT} hashicorp/consul --version "0.43.0" --create-namespace --namespace consul --values client.yaml + $ helm install ${HELM_RELEASE_CLIENT} hashicorp/consul --version "0.43.0" --create-namespace --namespace consul --values client.yaml ``` ### Verifying the Deployment You can log into the Consul UI to verify that the partitions appear as expected. -1. Set your context to the server cluster. +1. Set your context to the server cluster. ```shell-session - $ kubectl config use-context ${SERVER_CONTEXT} + $ kubectl config use-context ${SERVER_CONTEXT} ``` - + 1. If ACLs are enabled, you will need the partitions ACL token, which can be read from the Kubernetes secret. The token is an encoded string that must be decoded in base64, e.g.: ```shell-session diff --git a/website/content/docs/k8s/annotations-and-labels.mdx b/website/content/docs/k8s/annotations-and-labels.mdx index 1c45a12c5721..a24d7412a1d1 100644 --- a/website/content/docs/k8s/annotations-and-labels.mdx +++ b/website/content/docs/k8s/annotations-and-labels.mdx @@ -80,7 +80,7 @@ The following Kubernetes resource annotations could be used on a pod to control "consul.hashicorp.com/connect-service-upstreams":"[service-name]:[port]:[optional datacenter]" ``` - Namespace (requires Consul Enterprise 1.7+): Upstream services may be running in different a namespace. Place - the upstream namespace after the service name. For additional details about configuring the injector, refer to + the upstream namespace after the service name. For additional details about configuring the injector, refer to [Consul Enterprise Namespaces](#consul-enterprise-namespaces) . ```yaml annotations: @@ -202,7 +202,7 @@ The following Kubernetes resource annotations could be used on a pod to control - `consul.hashicorp.com/sidecar-proxy-memory-request` - Override the default memory request. - `consul.hashicorp.com/consul-envoy-proxy-concurrency` - Override the default envoy worker thread count. This should be set low for sidecar - usecases and can be raised for edge proxies like gateways. + use cases and can be raised for edge proxies like gateways. - `consul.hashicorp.com/consul-sidecar-` - Override default resource settings for the `consul-sidecar` container. @@ -233,12 +233,12 @@ The following Kubernetes resource annotations could be used on a pod to control - `consul.hashicorp.com/service-metrics-port` - Set the port where the Connect service exposes metrics. - `consul.hashicorp.com/service-metrics-path` - Set the path where the Connect service exposes metrics. 
- `consul.hashicorp.com/connect-inject-mount-volume` - Comma separated list of container names to mount the connect-inject volume into. The volume will be mounted at `/consul/connect-inject`. The connect-inject volume contains Consul internals data needed by the other sidecar containers, for example the `consul` binary, and the Pod's Consul ACL token. This data can be valuable for advanced use-cases, such as making requests to the Consul API from within application containers. -- `consul.hashicorp.com/consul-sidecar-user-volume` - JSON objects as specified by the [Volume pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core), that define volumes to add to the Envoy sidecar. +- `consul.hashicorp.com/consul-sidecar-user-volume` - JSON objects as specified by the [Volume pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core), that define volumes to add to the Envoy sidecar. ```yaml annotations: "consul.hashicorp.com/consul-sidecar-user-volume": "[{\"name\": \"secrets-data\", \"hostPath\": "[{\"path\": \"/mnt/secrets-path\"}]"}]" ``` -- `consul.hashicorp.com/consul-sidecar-user-volume-mount` - JSON objects as specified by the [Volume mount pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core), that define volumeMounts to add to the Envoy sidecar. +- `consul.hashicorp.com/consul-sidecar-user-volume-mount` - JSON objects as specified by the [Volume mount pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core), that define volumeMounts to add to the Envoy sidecar. ```yaml annotations: "consul.hashicorp.com/consul-sidecar-user-volume-mount": "[{\"name\": \"secrets-store-mount\", \"mountPath\": \"/mnt/secrets-store\"}]" diff --git a/website/content/docs/lambda/index.mdx b/website/content/docs/lambda/index.mdx index d8540285e228..90b716ae41c6 100644 --- a/website/content/docs/lambda/index.mdx +++ b/website/content/docs/lambda/index.mdx @@ -24,7 +24,7 @@ to automatically synchronize Lambda functions into Consul. Lambda functions can also be manually registered into Consul when using Lambda registrator is not possible. See the [Registration page](/docs/lambda/registration) for more information -about registring Lambda functions into Consul. +about registering Lambda functions into Consul. ### Invoking Lambda Functions from Consul Service Mesh diff --git a/website/content/docs/lambda/registration.mdx b/website/content/docs/lambda/registration.mdx index e289475cf3fa..9fe9ba0da5c9 100644 --- a/website/content/docs/lambda/registration.mdx +++ b/website/content/docs/lambda/registration.mdx @@ -88,11 +88,11 @@ You can deploy the Lambda registrator to your environment to automatically regis The registrator runs as a Lambda function that is invoked by AWS EventBridge. Refer to the [AWS EventBridge documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) for additional information. -EventBridge invokes the registrator using either [AWS CloudTrail](https://docs.aws.amazon.com/lambda/latest/dg/logging-using-cloudtrail.html) to syncronize with Consul in real-time or in [scheduled intervals](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html). 
+EventBridge invokes the registrator using either [AWS CloudTrail](https://docs.aws.amazon.com/lambda/latest/dg/logging-using-cloudtrail.html) to synchronize with Consul in real-time or in [scheduled intervals](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html). CloudTrail events typically synchronize updates, registration, and deregistration within one minute, but events may occasionally be delayed. -Scheduled events fully synchronize functions betwen Lambda and Consul to prevent entropy. By default, EventBridge triggers a full sync every five minutes. +Scheduled events fully synchronize functions between Lambda and Consul to prevent entropy. By default, EventBridge triggers a full sync every five minutes. The following diagram shows the flow of events from EventBridge into Consul: diff --git a/website/content/docs/nia/configuration.mdx b/website/content/docs/nia/configuration.mdx index e4a39d0b09a0..be282a4ab5f1 100644 --- a/website/content/docs/nia/configuration.mdx +++ b/website/content/docs/nia/configuration.mdx @@ -216,7 +216,7 @@ The default health check is an [HTTP check](/docs/discovery/checks#http-interval ## High availability -Add a `high_availability` block to your configuration to enable CTS to run in high availability mode. Refer to [Run Consul-Terrform-Sync with High Availability](/docs/nia/usage/run-ha) for additional information. The `high_availability` block contains the following configuration items. +Add a `high_availability` block to your configuration to enable CTS to run in high availability mode. Refer to [Run Consul-Terraform-Sync with High Availability](/docs/nia/usage/run-ha) for additional information. The `high_availability` block contains the following configuration items. ### High availability cluster @@ -229,7 +229,7 @@ The `cluster` parameter contains configurations for the cluster you want to oper #### High availability cluster storage -The `high_availability.cluster.storage` object contains the following configurations. +The `high_availability.cluster.storage` object contains the following configurations. | Parameter | Description| Required | Type | | --------- | ---------- | -------- | ------| diff --git a/website/content/docs/release-notes/consul/v1_13_x.mdx b/website/content/docs/release-notes/consul/v1_13_x.mdx index 0c6be7329617..03a08c57be8d 100644 --- a/website/content/docs/release-notes/consul/v1_13_x.mdx +++ b/website/content/docs/release-notes/consul/v1_13_x.mdx @@ -13,15 +13,15 @@ description: >- - **Transparent proxying through terminating gateways**: This version adds egress traffic control to destinations outside of Consul's catalog, such as APIs on the public internet. Transparent proxies can dial [destinations defined in service-defaults](/docs/connect/config-entries/service-defaults#destination) and have the traffic routed through terminating gateways. For more information, refer to the [terminating gateway](/docs/connect/gateways/terminating-gateway#terminating-gateway-configuration) documentation. -- **Enables TLS on the Envoy Prometheus endpoint**: The Envoy prometheus endpoint can be enabled when `envoy_prometheus_bind_addr` is set and then secured over TLS using new CLI flags for the `consul connect envoy` command. These commands are: `-prometheus-ca-file`, `-prometheus-ca-path`, `-prometheus-cert-file` and `-prometheus-key-file`. 
The CA, cert, and key can be provided to Envoy by a Kubernetes mounted volume so that Envoy can watch the files and dynamically reload the certs when the volume is updated. +- **Enables TLS on the Envoy Prometheus endpoint**: The Envoy prometheus endpoint can be enabled when `envoy_prometheus_bind_addr` is set and then secured over TLS using new CLI flags for the `consul connect envoy` command. These commands are: `-prometheus-ca-file`, `-prometheus-ca-path`, `-prometheus-cert-file` and `-prometheus-key-file`. The CA, cert, and key can be provided to Envoy by a Kubernetes mounted volume so that Envoy can watch the files and dynamically reload the certs when the volume is updated. -- **UDP Health Checks**: Adds the ability to register service discovery health checks that periodically send UDP datagrams to the specified IP/hostname and port. Refer to [UDP checks](/docs/discovery/checks#udp-interval). +- **UDP Health Checks**: Adds the ability to register service discovery health checks that periodically send UDP datagrams to the specified IP/hostname and port. Refer to [UDP checks](/docs/discovery/checks#udp-interval). ## What's Changed -- Removes support for Envoy 1.19.x and adds suport for Envoy 1.23. Refer to the [Envoy Compatibility matrix](/docs/connect/proxies/envoy) for more details. +- Removes support for Envoy 1.19.x and adds support for Envoy 1.23. Refer to the [Envoy Compatibility matrix](/docs/connect/proxies/envoy) for more details. -- The [`disable_compat_19`](/docs/agent/config/config-files#telemetry-disable_compat_1.9) telemetry configuration option is now removed. In Consul versions 1.10.x through 1.11.x, the config defaulted to `false`. In version 1.12.x it defaulted to `true`. Before upgrading you should remove this flag from your config if the flag is being used. +- The [`disable_compat_19`](/docs/agent/config/config-files#telemetry-disable_compat_1.9) telemetry configuration option is now removed. In Consul versions 1.10.x through 1.11.x, the config defaulted to `false`. In version 1.12.x it defaulted to `true`. Before upgrading you should remove this flag from your config if the flag is being used. ## Upgrading @@ -31,7 +31,7 @@ For more detailed information, please refer to the [upgrade details page](/docs/ The following issues are know to exist in the 1.13.0 release: - Consul 1.13.1 fixes a compatibility issue when restoring snapshots from pre-1.13.0 versions of Consul. Refer to GitHub issue [[GH-14149](https://github.com/hashicorp/consul/issues/14149)] for more details. -- Consul 1.13.0 and Consul 1.13.1 default to requiring TLS for gRPC communication with Envoy proxies when auto-encrypt and auto-config are enabled. In environments where Envoy proxies are not already configured to use TLS for gRPC, upgrading Consul 1.13 will cause Envoy proxies to disconnect from the control plane (Consul agents). A future patch release will default to disabling TLS by default for GRPC communication with Envoy proxies when using Service Mesh and auto-config or auto-encrypt. Refer to GitHub issue [[GH-14253](https://github.com/hashicorp/consul/issues/14253)] and [Service Mesh deployments using auto-config and auto-enrypt](https://www.consul.io/docs/upgrading/upgrade-specific#service-mesh-deployments-using-auto-encrypt-or-auto-config) for more details. +- Consul 1.13.0 and Consul 1.13.1 default to requiring TLS for gRPC communication with Envoy proxies when auto-encrypt and auto-config are enabled. 
In environments where Envoy proxies are not already configured to use TLS for gRPC, upgrading to Consul 1.13 will cause Envoy proxies to disconnect from the control plane (Consul agents). A future patch release will disable TLS by default for gRPC communication with Envoy proxies when using Service Mesh and auto-config or auto-encrypt. Refer to GitHub issue [[GH-14253](https://github.com/hashicorp/consul/issues/14253)] and [Service Mesh deployments using auto-config and auto-encrypt](https://www.consul.io/docs/upgrading/upgrade-specific#service-mesh-deployments-using-auto-encrypt-or-auto-config) for more details. ## Changelogs