Connecting Containers to Faucet

One of the installation paths for Faucet is to run it inside a Docker container, where it can happily control an OpenFlow network of both physical and virtual switches. But what about using Faucet to control the network of the Docker containers themselves? You might be asking yourself why you would even want to do that. One motivation is to be able to restrict L2 connectivity between containers, just like Faucet lets you do with a hardware switch. In a hybrid world where containers, virtual machines, and bare metal servers all have their own MAC and IP addresses on the same network, having a single centralized controller for routing, access control, monitoring, and management of that network is an appealing option.

Docker networking is a complex beast and has gone through several iterations over the years. One of the current patterns that seems to have stuck is the use of Docker Networking Drivers. These drivers are particularly nice because they let developers write their own plugins for creating networks that containers attach to, without having to hack, wrap, or otherwise change Docker directly.

One of the early examples of such a driver came from Weaveworks, whose plugin inspired other developers and companies to start writing their own plugins integrating the variations of networking that folks might want to attach containers to. One such group was the folks behind gopher-net, who wrote an Open vSwitch network plugin for Docker.

Hey, now we're on to something: Faucet already works with Open vSwitch! After taking a poke at the existing code base, now approaching five years of being stale, the right path forward seemed to be forking the project, modernizing the packages, and making it Faucet friendly. So that's exactly what we did. Introducing dovesnap.

With dovesnap, connecting containers to a Faucet-controlled network is a breeze. Dovesnap uses docker-compose to build and start the containers for Open vSwitch itself and for the plugin that interfaces with Docker. The following are the basic steps to get it up and running:

First, make sure you have the Open vSwitch kernel module loaded:

$ sudo modprobe openvswitch
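
If you want to double-check that the module actually loaded, lsmod should list it:

$ lsmod | grep openvswitch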

Next, clone the repo, start the service, and verify that it is up:

$ git clone https://github.com/cyberreboot/dovesnap
$ cd dovesnap
$ docker-compose up -d --build
$ docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                    PORTS               NAMES
2048530e312a        cyberreboot/dovesnap   "dovesnap -d"            11 minutes ago      Up 30 seconds                                 dovesnap_plugin_1_115df2016fd9
ddbd1fcd6148        openvswitch:2.13.0     "/usr/bin/supervisord"   11 minutes ago      Up 30 seconds (healthy)                       dovesnap_ovs_1_d0a0cee6f49d
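
If anything looks off, the plugin container's logs are a good first place to check (the container name will differ on your machine; this one comes from the docker ps output above):

$ docker logs dovesnap_plugin_1_115df2016fd9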

Now create a network with our new ovs driver (note that you can create as many of these as you like, as long as they don't have overlapping IP space):

$ docker network create mynet -d ovs -o ovs.bridge.mode=nat -o ovs.bridge.dpid=0x1 -o ovs.bridge.controller=tcp:127.0.0.1:6653 --subnet 172.12.0.0/24 --ip-range=172.12.0.8/29 --gateway=172.12.0.1
2ce5e00010331ec9115afd0adfc972a3beb530d3086bea20932e5edc85cfa4de
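
At any point you can confirm the network is registered and using the ovs driver:

$ docker network ls --filter driver=ovs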

In this example we included a lot of options, but the only ones actually required are the following (a minimal example is sketched just below):
-d ovs (for the driver)
-o ovs.bridge.controller=tcp:127.0.0.1:6653 (so the bridge can connect to Faucet)
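
For instance, a bare-bones network using just those two options could look like the following sketch (the name "minimalnet" is made up for illustration, and Docker will pick the IP space for you; the bridge will come up in the default "flat" mode described next):

$ docker network create minimalnet -d ovs -o ovs.bridge.controller=tcp:127.0.0.1:6653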

By default, creating a network with this driver will use "flat" mode, which means no NAT, and you'll need to connect a network interface as a port on the OVS bridge for routing (i.e. -o ovs.bridge.add_ports=enx0023569c0114). For this example we chose "nat" mode, which uses NAT and doesn't require routing out another interface. Additionally, we supplied the DPID for the bridge, making it easy to add to the Faucet config. Lastly, we set the subnet, IP range, and gateway it should use.
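
As a sketch, a "flat" mode network with an uplink port might look something like this (the interface name is the example one from above, and the DPID and subnet here are made up, so substitute your own; check the dovesnap README for the exact options it supports):

$ docker network create flatnet -d ovs -o ovs.bridge.mode=flat -o ovs.bridge.dpid=0x2 -o ovs.bridge.controller=tcp:127.0.0.1:6653 -o ovs.bridge.add_ports=enx0023569c0114 --subnet 192.168.10.0/24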

Finally, with our network created, we can start containers attached to it:

$ docker run -td --net=mynet busybox
205e47b076195513a54b98ea78c4c449c5ac403371508e457a6631f64c0c3596
$ docker run -td --net=mynet busybox
8486e5a8dd8df0969a25b934bac8a3ebd7c040a91816403a2e3ddb067e559aa3

Once they are started, we can inspect the network to see all of our settings, as well as the IPs that were allocated to the two containers we just created:

$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "2ce5e00010331ec9115afd0adfc972a3beb530d3086bea20932e5edc85cfa4de",
        "Created": "2020-05-21T13:23:45.13078807+12:00",
        "Scope": "local",
        "Driver": "ovs",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.12.0.0/24",
                    "IPRange": "172.12.0.8/29",
                    "Gateway": "172.12.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "205e47b076195513a54b98ea78c4c449c5ac403371508e457a6631f64c0c3596": {
                "Name": "cool_hawking",
                "EndpointID": "3190227dd6ae96568c18101cbbd55c11b0b9d62fd907784d29b64cc8df7ce1b7",
                "MacAddress": "",
                "IPv4Address": "172.12.0.8/24",
                "IPv6Address": ""
            },
            "8486e5a8dd8df0969a25b934bac8a3ebd7c040a91816403a2e3ddb067e559aa3": {
                "Name": "vibrant_haibt",
                "EndpointID": "7154d47c2db8820990538c9a673a319c0c0c4afd32e18d68fa878f1ce5a723b0",
                "MacAddress": "",
                "IPv4Address": "172.12.0.9/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "ovs.bridge.controller": "tcp:127.0.0.1:6653",
            "ovs.bridge.dpid": "0x1",
            "ovs.bridge.mode": "nat"
        },
        "Labels": {}
    }
]
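
If all you want is the container-to-address mapping, docker network inspect's Go template support can trim that output down, e.g.:

$ docker network inspect mynet -f '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{println}}{{end}}'
cool_hawking: 172.12.0.8/24
vibrant_haibt: 172.12.0.9/24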

If we exec into one of the containers, we can also verify that the network matches what we expect:

$ docker exec -ti 8486e5a8dd8d ifconfig
eth0      Link encap:Ethernet  HWaddr DE:BC:94:F9:63:33
          inet addr:172.12.0.9  Bcast:172.12.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:726 (726.0 B)  TX bytes:0 (0.0 B)
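
Assuming Faucet is running and knows about this datapath (see the faucet.yaml below), the two containers should also be able to reach each other; a quick ping using the addresses from the inspect output is an easy way to check:

$ docker exec -ti 8486e5a8dd8d ping -c 3 172.12.0.8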

Here's a minimal example faucet.yaml that connects the OVS bridge to Faucet. We've supplied a range of 10 ports on the same native VLAN, which the first 10 containers attached to the network will be assigned to.

dps:
  docker-ovs:
    dp_id: 0x1
    hardware: Open vSwitch
    interfaces:
      0xfffffffe:
        native_vlan: 100
    interface_ranges:
      1-10:
        native_vlan: 100

Note: the 0xfffffffe interface is only needed for the "nat" mode of our plugin with OVS.
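
Since the original motivation was restricting connectivity between containers, here's a rough sketch of how a Faucet ACL could be layered on top of that config. The ACL name and the rule itself (dropping ICMP between containers) are only for illustration; see the Faucet documentation for the full ACL syntax:

acls:
  block-icmp:
    - rule:
        dl_type: 0x800
        ip_proto: 1
        actions:
          allow: 0
    - rule:
        actions:
          allow: 1
dps:
  docker-ovs:
    dp_id: 0x1
    hardware: Open vSwitch
    interfaces:
      0xfffffffe:
        native_vlan: 100
    interface_ranges:
      1-10:
        native_vlan: 100
        acls_in: [block-icmp]

After editing faucet.yaml, Faucet can pick up the change without a restart by being sent a SIGHUP.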

Now you can monitor and control the network your containers use with Faucet!