Peripli/istio-broker-proxy

Overview

Plugin scenario
[architecture diagram]

Standalone scenario
[architecture diagram]

Data Plane

Data Flow

Traffic from a consumer app to a managed service is routed via an Istio sidecar proxy, an Istio egress gateway, and an Istio ingress gateway.

  1. The Istio sidecar proxy captures all traffic originating in the application container.
  2. For each managed service, there is a K8s service that consumer apps can address.
  3. For TCP traffic destined to the IP of such a K8s service, the sidecar originates TLS to the egress gateway on port 443, presenting the Istio-provided certificate and using <service-name>.<producer-id> (e.g. pinger.istio.cf.sapcloud.io) as SNI.
  4. The egress gateway terminates TLS, presenting the Istio-provided certificate.
  5. The egress gateway originates TLS with a certificate whose subject alternative name (SAN) is set to a global cluster identifier (the consumer id), using <service-name>.<producer-id> as SNI.
  6. The traffic is routed via the public internet.
  7. The Istio ingress gateway terminates TLS with a certificate whose SAN is set to a global cluster identifier (the producer id).
  8. The Istio ingress gateway routes traffic with the correct SAN and SNI host to the service instance.
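
A minimal sketch of the K8s service from step 2, using the hypothetical pinger example and the port 1234 that also appears in the binding examples below; the name and exact shape of the generated service are illustrative. The ClusterIP assigned to this service is the virtual IP the sidecar matches on in step 3:

apiVersion: v1
kind: Service
metadata:
  name: pinger            # illustrative; one such service exists per endpoint
spec:
  type: ClusterIP         # the assigned ClusterIP is the VIP the sidecar dispatches on
  ports:
  - name: tcp-pinger
    port: 1234            # illustrative port of the managed service
    protocol: TCP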

Configuration Resources

Consumer Side

  1. A ServiceEntry: one Kubernetes service for the consumer to address (one per endpoint). It defines a virtual IP, which is used to distinguish and dispatch services on the sidecar.
  2. A VirtualService bound to the mesh gateway: routes traffic from the sidecar to the egress gateway (destinationSubnets points to the VIP of the Kubernetes service (1) created in the cluster).
  3. A DestinationRule sets up (Istio-)mutual TLS between sidecar and egress gateway and sets the SNI to <service-name>.<producer-id> so that the request can be dispatched on the egress gateway.
  4. A Gateway on egress gateway port 443 describes the listener on the egress gateway and is configured to use the default certificates for (Istio-)mutual TLS.
  5. Another VirtualService on the egress gateway: routes traffic to the provider's public endpoint <service-name>.<system-domain> (see note 1). This is the public endpoint visible on the internet; on AWS, an elastic load balancer (e.g. pinger.istio.cf.dev01.aws.istio.sapcloud.io). The host name is resolved via a wildcard DNS entry (here: *.istio.cf.dev01.aws.istio.sapcloud.io).
  6. A DestinationRule sets up mTLS with the provider's public endpoint and sets the SNI to <service-name>.<producer-id> so that the request can be dispatched on the provider side. Moreover, the expected SAN is set to the provider's global cluster identifier (the producer id).

1) Note that system-domain and provider-id/producer-id are equal in our scenario (istio.cf.<landscape-domain>).
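
A minimal sketch of resources (2) and (3), again using the hypothetical pinger example; the resource names, the VIP, and the egress gateway host are illustrative, not the proxy's actual generated output:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pinger-sidecar-to-egress
spec:
  hosts:
  - pinger                                  # the Kubernetes service from (1)
  gateways:
  - mesh                                    # applies to the sidecars, not a gateway
  tcp:
  - match:
    - destinationSubnets:
      - 10.0.0.42/32                        # VIP of the Kubernetes service from (1)
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: pinger-egress-mtls
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL                    # (Istio-)mutual TLS between sidecar and egress gateway
      sni: pinger.istio.cf.sapcloud.io      # <service-name>.<producer-id>

The SNI set on the DestinationRule is what allows the single listener described by the Gateway (4) to tell the different services apart on the egress gateway.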

Provider Side

  1. A Gateway sets up mTLS with consumer clusters: it configures the provider's own certificate and requires a client certificate.
  2. A VirtualService that dispatches on the SNI host to the correct ServiceEntry.
  3. A ServiceEntry that describes the endpoint on which the real service is reachable.
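
A minimal sketch of the provider-side resources, with illustrative names, certificate paths, host names, and addresses; the subjectAltNames entry stands in for the consumer's global cluster identifier:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: pinger-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls-pinger
      protocol: TLS
    hosts:
    - pinger.istio.cf.sapcloud.io           # <service-name>.<producer-id>
    tls:
      mode: MUTUAL                          # present own certificate, require a client certificate
      serverCertificate: /etc/certs/cert-chain.pem   # illustrative paths
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
      subjectAltNames:
      - <consumer-id>                       # only this consumer cluster is accepted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pinger-dispatch
spec:
  hosts:
  - pinger.istio.cf.sapcloud.io             # dispatched on the SNI host via the Gateway above
  gateways:
  - pinger-gateway
  tcp:
  - route:
    - destination:
        host: pinger.service-fabrik         # host of the ServiceEntry below
        port:
          number: 1234
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: pinger-endpoint
spec:
  hosts:
  - pinger.service-fabrik                   # illustrative internal host name
  ports:
  - number: 1234
    name: tcp
    protocol: TCP
  resolution: STATIC
  endpoints:
  - address: 10.11.12.13                    # address where the real service is reachable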

Control Plane

A user request to create/bind/unbind/delete a service is routed through a chain of Open Service Broker proxy implementations.

  1. The consumer-side Istio broker proxy adds Istio metadata to the request.
  2. The Service Fabrik creates services/bindings.
  3. The provider-side Istio broker proxy adds Istio metadata to the response.
  4. Both Istio broker proxies configure Istio to set up routing according to the data plane description above.

Binding Service

A binding response is translated in four steps, shown by the example payloads below: the backing broker returns the original credentials (1), the provider side adds the reachable endpoints (2), adapt_credentials is called with mappings from those endpoints to the consumer-local Kubernetes services (3), and the result is the translated credentials the consumer app finally sees (4).

1 Credentials

{
    "credentials": {
        "host": "my-cf-service",
        "port": 1234,
        "uri": "http://my-cf-service:1234"
    }
}

2 Credentials and Endpoints

{
    "credentials": {
        "host": "my-cf-service",
        "port": 1234,
        "uri": "http://my-cf-service:1234"
    },
    "endpoints": [
        { "host": "my-cf-service", "port": 1234 }
    ]
}

3 Call adapt_credentials

{
    "credentials": {
        "host": "my-cf-service",
        "port": 1234,
        "uri": "http://my-cf-service:1234"
    },
    "endpoint_mappings": [
        {
            "source": { "host": "my-cf-service", "port": 1234 },
            "target": { "host": "my-k8s-service", "port": 6789 }
        }
    ]
}

4 Translated Credentials

{
    "credentials": {
        "host": "my-k8s-service",
        "port": 6789,
        "uri": "http://my-k8s-service:6789"
    }
}

Misc

Install the pre-commit hook:

cd .git/hooks
ln -s ../../hooks/pre-commit pre-commit

The hook will

  • Call go fmt

istio-broker

Forwards all requests to the Service Fabrik.

Steps to deploy:

  • Deploy the app:
  cf push istio-broker
  • Delete the Service Fabrik broker (there might be services that have to be deleted first).
  cf delete-service-broker service-fabrik-broker
  • Create a new service broker with the credentials of the service fabrik and the URL of the pushed app. The credentials are found in deployments/service-fabrik/credentials.yml as credentials.broker.user and credentials.broker.password.
cf create-service-broker istio-broker <user> <password> https://istio-broker.cfapps.<landscape-domain>
cf create-security-group istio-broker-service-fabrik sec_group.json
  • List services using cf service-access and enable services using cf enable-service-access.

  • Check that services are available

cf marketplace

Steps to validate:

  • Use service broker
cf service-brokers
cf create-service postgresql v9.4-dev mydb
cf delete-service mydb
  • Check tracking
curl https://istio-broker.cfapps.<landscape-domain>/info

Integration tests

Running the integration tests in a K8s cluster requires setting up a Cloud Foundry backend with a separate example broker. For detailed information on how to do this, please open an issue.
