
Libnetwork Remote Network Driver Design

What is Kuryr

Kuryr implements a libnetwork remote network driver and maps its calls to OpenStack Neutron. It works as a translator between libnetwork's Container Network Model (CNM) and Neutron's networking model.

Goal

Through Kuryr, any Neutron plugin can be used as a libnetwork backend with no additional effort. Because the Neutron APIs are vendor agnostic, every Neutron plugin can provide the networking backend for Docker with a small plugging snippet similar to the one it has in Nova.

Kuryr also takes care of binding one end of a veth pair to a network interface on the host, e.g., a Linux bridge, an Open vSwitch datapath and so on.

Kuryr Workflow - Host Networking

Kuryr resides on each host that runs Docker containers and serves the APIs required by the libnetwork remote network driver. Kuryr plans to use the new Neutron "Adding tags to resources" feature to map between Neutron resource IDs and Docker IDs (UUIDs).

  1. libnetwork discovers Kuryr via plugin discovery mechanism
    • During this process libnetwork makes an HTTP POST call on /Plugin.Activate and examines whether the plugin is a network driver (a minimal activation handler is sketched after this list)
  2. libnetwork registers Kuryr as a remote driver
  3. A user makes requests against libnetwork with the network driver specifier for Kuryr
    • i.e., --driver=kuryr or -d kuryr for the Docker CLI
  4. libnetwork makes API calls against Kuryr
  5. Kuryr receives the requests and calls Neutron APIs with Neutron client
  6. Kuryr receives the responses from Neutron and composes the responses for libnetwork
  7. Kuryr returns the responses to libnetwork
  8. libnetwork stores the returned information to its key/value datastore backend
    • the key/value datastore is abstracted by libkv
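
As an illustration of the discovery handshake in step 1, below is a minimal sketch of how a remote driver could answer the activation call. The Flask application, handler names and port number are illustrative assumptions rather than Kuryr's actual code; only the endpoint paths come from the Docker plugin API.

    # Minimal sketch of the libnetwork plugin handshake (assumption: a
    # Flask-based driver listening where a .spec file under
    # /usr/lib/docker/plugins or /etc/docker/plugins points).
    from flask import Flask, jsonify

    app = Flask(__name__)


    @app.route('/Plugin.Activate', methods=['POST'])
    def plugin_activate():
        # Advertise that this plugin implements the network driver API.
        return jsonify(Implements=['NetworkDriver'])


    @app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
    def get_capabilities():
        # Host-local scope: state is kept per host, as in this workflow.
        return jsonify(Scope='local')


    if __name__ == '__main__':
        app.run('0.0.0.0', 23750)  # port is an illustrative choice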

Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking

  1. A user creates a network foo

    $ sudo docker network create --driver=kuryr foo
    286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364
    

    This makes an HTTP POST call on /NetworkDriver.CreateNetwork with the following JSON data.

    {
        "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
        "IPv4Data": [{
            "Pool": "172.18.0.0/16",
            "Gateway": "172.18.0.1/16",
            "AddressSpace": ""
        }],
        "IPv6Data": [],
        "Options": { "com.docker.network.generic": {}}
    }
    

    The Kuryr remote network driver will then generate a Neutron API request to create an underlying Neutron network. When the Neutron network has been created, the Kuryr remote network driver will generate an empty success response to the docker daemon. Kuryr tags the Neutron network with the NetworkID from docker.
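
    The translation could look roughly like the following sketch, which uses python-neutronclient. The credentials, the network naming scheme and the use of the client's add_tag call (Neutron tags extension) are illustrative assumptions, not Kuryr's actual implementation.

    # Sketch only: create the backing Neutron network for a Docker network
    # and tag it with the Docker NetworkID so later calls can find it.
    from neutronclient.v2_0 import client as neutron_client

    # Placeholder credentials; a real deployment would load these from
    # configuration or use a keystoneauth session.
    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://127.0.0.1:5000/v2.0')


    def create_network(docker_network_id):
        network = neutron.create_network(
            {'network': {'name': 'kuryr-net-' + docker_network_id[:8],
                         'admin_state_up': True}})['network']
        # Requires the Neutron tags extension; a real driver also has to
        # respect Neutron's tag length limit for the 64-character Docker ID.
        neutron.add_tag('networks', network['id'], docker_network_id)
        return network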

  2. A user launches a container against network foo

    $ sudo docker run --net=foo -itd --name=container1 busybox
    78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703
    

    This makes an HTTP POST call on /NetworkDriver.CreateEndpoint with the following JSON data.

    {
        "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
        "Interface": {
            "AddressIPv6": "",
            "MacAddress": "",
            "Address": "172.18.0.2/16"
        },
        "Options": {
            "com.docker.network.endpoint.exposedports": [],
            "com.docker.network.portmap": []
        },
        "EndpointID": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd"
    }
    

    The Kuryr remote network driver then generates a Neutron API request to create a Neutron subnet and a port with fields matching the Interface in the request. Kuryr needs to create the subnet dynamically because it has no information about the interface IP until the endpoint is created.

    The following steps are taken:

    1. On endpoint creation, Kuryr examines whether there is already a subnet whose CIDR corresponds to the requested Address or AddressIPv6.
    2. If such a subnet exists, Kuryr reuses it without creating a new one; otherwise it creates a new subnet with the given CIDR.
    3. Kuryr creates a port, assigns the requested IP address to it and associates the port with the subnet chosen or created in (2).
    4. Kuryr tags the Neutron subnet and port with EndpointID.

    For the subnet creation described in (2) above, Kuryr grabs the allocation pool greedily by not specifying allocation_pool: without it, Neutron allocates all IP addresses in the range of the subnet CIDR, as described in Neutron's API reference.
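
    A rough sketch of steps 1-4, again with python-neutronclient (the neutron client handle, the tags extension and the exact list filters are assumptions for illustration, not Kuryr's actual code):

    # Sketch only: reuse or create the subnet matching the requested
    # address, create a port with that fixed IP, and tag both resources.
    import ipaddress


    def create_endpoint(neutron, net_id, endpoint_id, address):
        iface = ipaddress.ip_interface(address)      # e.g. 172.18.0.2/16
        cidr = str(iface.network)                    # -> 172.18.0.0/16

        # Steps 1-2: reuse a subnet with a matching CIDR or create one.
        subnets = neutron.list_subnets(network_id=net_id,
                                       cidr=cidr)['subnets']
        if subnets:
            subnet = subnets[0]
        else:
            # No allocation_pools given, so Neutron hands the whole CIDR
            # range to this subnet (the greedy behaviour described above).
            subnet = neutron.create_subnet(
                {'subnet': {'network_id': net_id,
                            'ip_version': iface.version,
                            'cidr': cidr}})['subnet']

        # Step 3: create the port with the requested fixed IP.
        port = neutron.create_port(
            {'port': {'network_id': net_id,
                      'fixed_ips': [{'subnet_id': subnet['id'],
                                     'ip_address': str(iface.ip)}]}})['port']

        # Step 4: tag both resources with the Docker EndpointID.
        neutron.add_tag('subnets', subnet['id'], endpoint_id)
        neutron.add_tag('ports', port['id'], endpoint_id)
        return port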

    When the Neutron port has been created, the Kuryr remote driver returns a response to the Docker daemon containing the interface information it populated, e.g., the MAC address of the Neutron port (https://github.com/docker/libnetwork/blob/master/docs/remote.md#create-endpoint):

    {
        "Interface": {"MacAddress": "08:22:e0:a8:7d:db"}
    }
    

    On receiving the success response, libnetwork makes an HTTP POST call on /NetworkDriver.Join with the following JSON data.

    {
        "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
        "SandboxKey": "/var/run/docker/netns/052b9aa6e9cd",
        "Options": null,
        "EndpointID": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd"
    }
    

    Kuryr connects the container to the corresponding Neutron network by performing the following steps:

    1. Generate a veth pair
    2. Connect one end of the veth pair to the container (which is running in a namespace that was created by Docker)
    3. Perform a Neutron-port-type-dependent VIF binding to the corresponding Neutron port using the VIF binding layer.

    After the VIF-binding is completed, the Kuryr remote network driver generates a response to the Docker daemon as specified in the libnetwork documentation for a join request. (https://github.com/docker/libnetwork/blob/master/docs/remote.md#join)
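
    Steps 1 and 2 above boil down to creating a veth pair and moving one end into the namespace named by SandboxKey. The sketch below drives the iproute2 CLI from Python; the interface naming scheme and the symlink that exposes Docker's netns file to `ip netns` are illustrative assumptions, not Kuryr's actual binding code.

    # Sketch only: plug one end of a veth pair into the container netns.
    import os
    import subprocess


    def plug_veth(sandbox_key, endpoint_id):
        host_if = 'tap' + endpoint_id[:8]       # stays on the host
        container_if = 't_c' + endpoint_id[:8]  # moved into the container

        # Step 1: generate a veth pair.
        subprocess.check_call(['ip', 'link', 'add', host_if,
                               'type', 'veth', 'peer', 'name', container_if])

        # Step 2: Docker keeps its netns files under /var/run/docker/netns/,
        # which `ip netns` does not scan, so link the SandboxKey into
        # /var/run/netns/ and move the peer end into that namespace.
        ns_name = os.path.basename(sandbox_key)    # e.g. 052b9aa6e9cd
        os.makedirs('/var/run/netns', exist_ok=True)
        os.symlink(sandbox_key, '/var/run/netns/' + ns_name)
        subprocess.check_call(['ip', 'link', 'set', container_if,
                               'netns', ns_name])

        # Step 3 (VIF binding) then attaches host_if to the Neutron port's
        # backend, e.g. a Linux bridge or the Open vSwitch datapath.
        subprocess.check_call(['ip', 'link', 'set', host_if, 'up'])
        return host_if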

  3. A user requests information about the network

    $ sudo docker network inspect foo
     {
         "name": "foo",
         "id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
         "scope": "local",
         "driver": "kuryr",
         "ipam": {
             "driver": "default",
             "config": [
                 {}
             ]
         },
         "containers": {
             "78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703": {
                 "endpoint": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd",
                 "mac_address": "02:42:c0:a8:7b:cb",
                 "ipv4_address": "172.18.0.2/24",
                 "ipv6_address": ""
             }
         }
     }
    
  4. A user connects one more container to the network

    $ sudo docker network connect foo container2
     d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646
    
    $ sudo docker network inspect foo
     {
         "name": "foo",
         "id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
         "scope": "local",
         "driver": "kuryr",
         "ipam": {
             "driver": "default",
             "config": [
                 {}
             ]
         },
         "containers": {
             "78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703": {
                 "endpoint": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd",
                 "mac_address": "02:42:c0:a8:7b:cb",
                 "ipv4_address": "172.18.0.2/24",
                 "ipv6_address": ""
             },
             "d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646": {
                 "endpoint": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc",
                 "mac_address": "02:42:c0:a8:7b:cc",
                 "ipv4_address": "172.18.0.3/24",
                 "ipv6_address": ""
             }
         }
     }
    
  5. A user disconnects a container from the network

    $ CID=d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646
    $ sudo docker network disconnect foo $CID
    

    This makes an HTTP POST call on /NetworkDriver.Leave with the following JSON data.

    {
        "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
        "EndpointID": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc"
    }
    

    The Kuryr remote network driver removes the VIF binding between the container and the Neutron port, and generates an empty response to the Docker daemon.

    Then libnetwork makes an HTTP POST call on /NetworkDriver.DeleteEndpoint with the following JSON data.

    {
        "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
        "EndpointID": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc"
    }
    

    The Kuryr remote network driver generates a Neutron API request to delete the associated Neutron port. If the port's subnet is left empty, Kuryr also deletes the subnet object via the Neutron API. It then generates an empty response to the Docker daemon: {}
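
    A sketch of that cleanup with python-neutronclient (the tag-based port lookup is an assumption for illustration, consistent with the earlier examples):

    # Sketch only: delete the Neutron port tagged with the EndpointID and
    # remove its subnet when no other port still uses it.
    def delete_endpoint(neutron, endpoint_id):
        for port in neutron.list_ports(tags=endpoint_id)['ports']:
            subnet_ids = {ip['subnet_id'] for ip in port['fixed_ips']}
            network_id = port['network_id']
            neutron.delete_port(port['id'])

            for subnet_id in subnet_ids:
                still_used = any(
                    ip['subnet_id'] == subnet_id
                    for other in neutron.list_ports(
                        network_id=network_id)['ports']
                    for ip in other['fixed_ips'])
                if not still_used:
                    neutron.delete_subnet(subnet_id)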

  6. A user deletes the network

    $ sudo docker network rm foo
    

    This makes an HTTP POST call on /NetworkDriver.DeleteNetwork with the following JSON data.

    {
        "NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364"
    }

    The Kuryr remote network driver generates a Neutron API request to delete the corresponding Neutron network. When the Neutron network has been deleted, the Kuryr remote network driver generates an empty response to the Docker daemon: {}
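
    A corresponding sketch, again assuming the tag-based lookup used in the earlier examples:

    # Sketch only: find the Neutron network tagged with the Docker
    # NetworkID and delete it.
    def delete_network(neutron, docker_network_id):
        for network in neutron.list_networks(
                tags=docker_network_id)['networks']:
            neutron.delete_network(network['id'])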
    

Mapping between the CNM and Neutron's Networking Model

Kuryr communicates with Neutron via the Neutron client and bridges libnetwork and Neutron by translating between their networking models. The following table depicts the current mapping between the libnetwork and Neutron models:

libnetwork   Neutron
----------   ----------------------
Network      Network
Sandbox      Subnet, Port and netns
Endpoint     Port

libnetwork's Sandbox and Endpoint can be mapped to Neutron's Subnet and Port; however, from the user's perspective the Sandbox is not directly visible, and the Endpoint is the only visible and editable resource entity that can be attached to containers. The Sandbox automatically manages the information exposed by the Endpoint behind the scenes.