
Chat Service

This sample code demonstrates a gRPC-Web service intended to be used with chat-front. The Docker containers built from this source are publicly available on GCR (you can skip the build steps and just deploy the containers). This project is licensed under the terms of the MIT license; the author assumes no responsibility for its use.

Target Audience

The target audience for this demo is engineers who want to try gRPC-Web in their browser-based applications.

Prerequisites

This demo assumes that you have a working Kubernetes cluster with Istio set up and kubectl command line access to it. The author works exclusively in a Linux environment, so some commands may need to be altered.

All three of the major cloud providers have a managed Kubernetes offering. To the best of my knowledge (at the current time), only Google GKE offers managed Istio along with it. All of these examples were tested running in a managed GKE cluster with Istio enabled.

Code was written/tested using Eclipse 2018-12 and relies on the included Gradle build script (running with the Buildship Eclipse plug-in).

Description of chat-svc

chat-svc is not production quality; it is meant to demonstrate the basics of gRPC-Web. It includes one server streaming RPC and multiple unary RPCs. The important gRPC code is housed in com.kurtspace.cncf.svc.ChatService, whereas the "datastore" logic is located in com.kurtspace.cncf.store.ChatStore. ChatStore is a memory-only store, so if you install into a k8s cluster you can only run one instance of the pod.
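For a sense of what the server streaming code looks like, a grpc-java handler follows the pattern sketched below. This is a sketch only: ChatServiceGrpc, StatusRequest, and StatusUpdate stand in for the classes generated from chat.proto, and pendingUpdatesFor is a hypothetical helper.

import io.grpc.stub.StreamObserver;

public class ChatService extends ChatServiceGrpc.ChatServiceImplBase {

    @Override
    public void watchStatuses(StatusRequest request, StreamObserver<StatusUpdate> responseObserver) {
        // A server streaming rpc may push any number of messages before completing.
        for (StatusUpdate update : pendingUpdatesFor(request)) {
            responseObserver.onNext(update);
        }
        responseObserver.onCompleted();
    }
}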

Keep in mind that every restart of the service will clear out any stored data (mostly just the names/statuses of the people).

A logical update to the code would be to use some sort of streaming storage backend, but that would complicate code whose intent is to demonstrate gRPC-Web.

Running locally

The back end relies on files generated from the proto files (in the src/main/proto/ folder). Any change to the proto files (and such changes should be kept in sync with chat-front) requires the generated classes to be rebuilt. To run with chat-front, the Envoy proxy needs to be running alongside chat-svc.

Dependencies

This app uses Gradle to manage dependencies via the build.gradle file. With the Buildship plugin in Eclipse you can just right-click on the project and choose "Gradle>Refresh Gradle Project" from the menu.

Generating Protobuf/gRPC classes

Protobuf/gRPC classes can be generated by executing the Gradle task generateProto (generateProto also runs as part of the build task). If using the Buildship plugin in Eclipse you can find the task under ChatSvc>grpc>generateProto, or you can run Gradle from the command line:

gradle generateProto
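For reference, generateProto comes from the com.google.protobuf Gradle plugin. The relevant portion of a build.gradle wired up this way looks roughly like the following sketch; the version numbers are placeholders, so check the actual build.gradle.

plugins {
    id 'java'
    id 'com.google.protobuf' version '0.8.8'   // placeholder version
}

protobuf {
    protoc {
        artifact = 'com.google.protobuf:protoc:3.6.1'          // placeholder version
    }
    plugins {
        grpc {
            artifact = 'io.grpc:protoc-gen-grpc-java:1.18.0'   // placeholder version
        }
    }
    generateProtoTasks {
        // Run the grpc plugin on every generateProto task, producing the service stubs.
        all()*.plugins { grpc {} }
    }
}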

Starting the app

The gRPC-generated code (along with its transitive dependencies) includes a Netty server, so nothing else is needed to run the app. All you need to do is run com.kurtspace.cncf.app.ChatAppServer, which starts a server on port 9000. The port can be changed in that class, but changing it will require other changes when building/deploying.
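The bootstrap amounts to little more than the following sketch (an approximation of what ChatAppServer does, not a copy of it):

import io.grpc.Server;
import io.grpc.ServerBuilder;

public class ChatAppServer {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(9000)    // the port referenced throughout this README
            .addService(new ChatService())             // the service from com.kurtspace.cncf.svc
            .build()
            .start();
        server.awaitTermination();                     // block until the server is shut down
    }
}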

Testing the Service

The service can now be tested using a gRPC client. BloomRPC is an adequate client for testing (though it is lacking compared to its REST-client counterparts). Using a standalone client you can test directly against the running service (no need for the proxy).

With BloomRPC you need to import chat.proto (located in src/main/proto). Each time you change the proto, you will need to reload it in BloomRPC. In the address bar, enter the full host and port: localhost:9000. There is a simple Ping RPC that takes no arguments and returns nothing; it is suitable for checking whether the service is working.
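If you would rather test from code than from a GUI client, a plain grpc-java client can call Ping directly. A minimal sketch, assuming the generated stub class is ChatServiceGrpc and that Ping uses an empty request message named PingRequest (check the classes produced by generateProto for the real names):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class PingCheck {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 9000)
            .usePlaintext()   // the demo service runs without TLS
            .build();
        // Assumed generated names; substitute whatever generateProto actually emits.
        ChatServiceGrpc.ChatServiceBlockingStub stub = ChatServiceGrpc.newBlockingStub(channel);
        stub.ping(PingRequest.getDefaultInstance());
        System.out.println("Ping succeeded");
        channel.shutdownNow();
    }
}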

Setting up the Envoy Proxy

Envoy has filters for correctly handling gRPC-Web out to the browser. (Native gRPC is HTTP/2 everywhere; gRPC-Web has to deal with the current state of browsers, which do not give enough control over the connection or even a way to force HTTP/2.) The Envoy config is located in envoy.yaml.
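The key piece is the http_filters list on the listener; in the standard gRPC-Web setup it looks something like this sketch (filter names taken from the stock grpc-web example config of this Envoy era; the actual envoy.yaml may differ):

http_filters:
- name: envoy.grpc_web   # translates gRPC-Web requests to native gRPC
- name: envoy.cors       # handles the browser's CORS preflight
- name: envoy.router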

On Linux you can find your local IPs by running the following (you'll need to figure out which one you are currently using):

ip addr

Use that IP address to update the following section of envoy.yaml (the address appears on the last line of the file):

  clusters:
  - name: chat_service
    connect_timeout: 0.25s
    type: static
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: 192.168.86.22, port_value: 9000}}]

I've tried 127.0.0.1 and that didn't work as I expected; I suspect it has more to do with how the Netty server binds to the address(es), but my knowledge of networking is limited in this area.

If you changed the port in the Java service, this is one of the places where you will need to change it as well.

Building/Running the Proxy

The following Dockerfile is all you need to build the Envoy proxy:

FROM envoyproxy/envoy:latest

COPY envoy.yaml /etc/envoy/envoy.yaml

CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log

Build the image:

docker build . -t chat-proxy

Then you can run it with this docker command:

docker run -p 9095:9095 -d --name=my-chat-proxy chat-proxy

Testing the Proxy

The proxy can be tested the same way the Java service was tested, using a gRPC client; in this case, use port 9095.

Connecting to chat-front

At this point, all is ready for connecting to the React app chat-front. I'll pause now and wait for you to do that.

Packaging and Deploying

The included Gradle build script does all the heavy lifting for packaging up the service and pushing it to a container registry. No Docker runtime is needed to do this.

Packaging and Pushing

The Gradle script uses Jib to both package and push the container to the container registry in one (fast) step. By default the script pushes to GCR and authenticates using docker-credential-gcr, but it can push to any registry and authenticate in other ways; see the Jib documentation.

You can execute the Jib task via Gradle Tasks (ChatSvc>jib>jib) or on the command line:

gradle jib
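Inside build.gradle, the Jib configuration boils down to something like this sketch; the image name is a placeholder, so point it at your own project/registry:

jib {
    to {
        image = 'gcr.io/my-gcp-project/chat-svc'   // placeholder image name
        credHelper = 'gcr'                         // authenticate via docker-credential-gcr
    }
}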

Deploying to Kubernetes

The first thing you need is a gateway. If you are installing into an existing cluster, you may already have one available, in which case you don't need to install the gateway.yaml file here (though you will need to change the gateway references in the VirtualService in both chat-svc and chat-front). Assuming you need the gateway, edit gateway.yaml to set the correct host:

    hosts:
    - "chat.kurtspace.com"

You can now apply the yaml:

kubectl apply -f gateway.yaml

The chat-svc.yaml will create four k8s objects: a Deployment, a Service, an EnvoyFilter, and a VirtualService. You will again need to change the host in the VirtualService to match your domain. Also, if you changed the port on the Netty server, you will need to change it in several places in this file.
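The Deployment and Service halves of the file are conventional. Stripped down, they look something like this sketch (the image name is a placeholder; note that replicas stays at 1 because ChatStore is memory-only):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat-svc
spec:
  replicas: 1                # ChatStore is in-memory, so only one pod can run
  selector:
    matchLabels:
      app: chat-svc
  template:
    metadata:
      labels:
        app: chat-svc        # matched by the EnvoyFilter's workloadLabels below
    spec:
      containers:
      - name: chat-svc
        image: gcr.io/my-gcp-project/chat-svc   # placeholder image name
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: chat-svc
spec:
  selector:
    app: chat-svc
  ports:
  - name: grpc-chat          # the grpc- prefix tells Istio to treat the port as gRPC
    port: 9000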

The gRPC-Web magic happens in the combination of the EnvoyFilter and the VirtualService.

EnvoyFilter:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: chat-svc-grpc-web-filter
spec:
  workloadLabels:
    app: chat-svc
  filters:
  - listenerMatch:
      listenerType: SIDECAR_INBOUND
      listenerProtocol: HTTP
    insertPosition:
      index: FIRST
    filterType: HTTP
    filterName: "envoy.grpc_web"
    filterConfig: {}

This applies the native envoy.grpc_web filter to the Envoy sidecar. The same configuration appears in the local Envoy config, in the http_filters section shown earlier.

VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: chat-svc
  namespace: default
  labels:
    app: chat-svc
spec:
  gateways:
  - chat-gateway
  hosts:
  - chat-svc
  - chat.kurtspace.com
  http:
  - match:
    - uri:
        prefix: /v1.chat.ChatService
    route:
    - destination:
        host: chat-svc
        port:
          number: 9000
    corsPolicy:
      allowOrigin:
      - '*'
      allowCredentials: true
      allowHeaders:
      - grpc-timeout
      - content-type
      - keep-alive
      - user-agent
      - cache-control
      - content-transfer-encoding
      - custom-header-1
      - x-accept-content-transfer-encoding
      - x-accept-response-streaming
      - x-user-agent
      - x-grpc-web
      allowMethods:
      - POST
      - GET
      - OPTIONS
      - PUT
      - DELETE
      exposeHeaders:
      - custom-header-1
      - grpc-status
      - grpc-message

There are a couple of items in the VirtualService to call out. First, the uri prefix matches the package plus service name from the proto file (see the sketch below). Next, gRPC-specific headers are added to the corsPolicy in both allowHeaders and exposeHeaders. As an aside, I believe only the POST and OPTIONS methods are needed in allowMethods, but I include the others anyway.
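For example, the /v1.chat.ChatService prefix implies declarations along these lines in chat.proto:

// chat.proto (sketch of the relevant declarations only)
syntax = "proto3";

package v1.chat;

service ChatService {
  // rpc Ping(...) returns (...); and the other rpc's
}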

We can now apply the config:

kubectl apply -f chat-svc.yaml

Testing the Service

The service can be tested the same way the local services were tested. Again, if you are using BloomRPC, don't forget to include the port in the URL string.

Putting it all Together

You can now deploy chat-front; if all is correct, you should have a working chat app that demonstrates gRPC-Web.

Cleaning up Kubernetes

You can remove all of the k8s objects by doing a delete with the chat-svc.yaml and gateway.yaml (if it was used) files:

kubectl delete -f chat-svc.yaml
kubectl delete -f gateway.yaml
