Update (2018-03-22): Since I wrote this document back in 2014, Docker has developed the macvlan network driver. That gives you a supported mechanism for direct connectivity to a local layer 2 network. I’ve written an article about working with the macvlan driver.
This article discusses four ways to make a Docker container appear on a local network. These are not suggested as practical solutions, but are meant to illustrate some of the underlying network technology available in Linux.
If you were actually going to use one of these solutions as anything other than a technology demonstration, you might look to the pipework script, which can automate many of these configurations.
Goals and Assumptions
In the following examples, we have a host with address 10.12.0.76 on the 10.12.0.0/21 network. We are creating a Docker container that we want to expose as 10.12.0.117.
I am running Fedora 20 with Docker 1.1.2. This means, in particular, that my `util-linux` package is recent enough to include the `nsenter` command. If you don’t have that handy, there is a convenient Docker recipe to build it for you at jpetazzo/nsenter on GitHub.
A little help along the way
In this article we will often refer to the PID of a docker container. In order to make this convenient, drop the following into a script called `docker-pid`, place it somewhere on your PATH, and make it executable:
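A minimal sketch of such a script (it simply wraps `docker inspect`):

```
#!/bin/sh
# Print the PID of the container given by name or ID
exec docker inspect --format '{{ .State.Pid }}' "$@"
```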
This allows us to conveniently get the PID of a docker container by name or ID:
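For example (the PID shown here is, of course, illustrative):

```
$ docker-pid web
22041
```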
In a script called `docker-ip`, place the following:
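Again, a minimal sketch along the same lines:

```
#!/bin/sh
# Print the IP address assigned to the container on the default bridge
exec docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@"
```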
And now we can get the ip address of a container like this:
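For example (the address shown is illustrative):

```
$ docker-ip web
172.17.0.4
```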
Using NAT
This uses the standard Docker network model combined with NAT rules on your host to redirect inbound traffic to/outbound traffic from the appropriate IP address.
Assign our target address to your host interface:
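Assuming your host interface is `em1`, something like:

```
ip addr add 10.12.0.117/21 dev em1
```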
Start your docker container, using the `-p` option to bind exposed ports to an ip address and port on the host:
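For example (the image name here is only a placeholder for whatever web server you are running):

```
docker run -d --name web -p 10.12.0.117:80:80 nginx
```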
With this command, Docker will set up the standard network model:
- It will create a veth interface pair.
- Connect one end to the `docker0` bridge.
- Place the other inside the container namespace as `eth0`.
- Assign an ip address from the network used by the `docker0` bridge.
Because we added `-p 10.12.0.117:80:80` to our command line, Docker will also create the following rule in the `nat` table `DOCKER` chain (which is run from the `PREROUTING` chain):
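As reported by `iptables -t nat -S DOCKER`, the rule looks something like this (the container address will vary):

```
-A DOCKER -d 10.12.0.117/32 ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80
```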
This matches traffic TO our target address (`-d 10.12.0.117/32`) not originating on the `docker0` bridge (`! -i docker0`) destined for `tcp` port `80` (`-p tcp -m tcp --dport 80`). Matching traffic has its destination set to the address of our docker container (`-j DNAT --to-destination 172.17.0.4:80`).
From a host elsewhere on the network, we can now access the web server at our selected ip address:
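For example:

```
curl http://10.12.0.117/
```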
If our container were to initiate a network connection with another system, that connection would appear to originate with the ip address of our host. We can fix that by adding a `SNAT` rule to the `POSTROUTING` chain to modify the source address:
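Using the container address we saw earlier, something like:

```
iptables -t nat -I POSTROUTING -s 172.17.0.4 -j SNAT --to-source 10.12.0.117
```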
Note here the use of `-I POSTROUTING`, which places the rule at the top of the `POSTROUTING` chain. This is necessary because, by default, Docker has already added the following rule to the top of the `POSTROUTING` chain:
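On a stock install it looks something like this (the subnet will match whatever network your `docker0` bridge is using):

```
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```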
Because this `MASQUERADE` rule matches traffic from any container, we need to place our rule earlier in the `POSTROUTING` chain for it to have any effect.
With these rules in place, traffic to 10.12.0.117 (port 80) is directed to our `web` container, and traffic originating in the `web` container will appear to come from 10.12.0.117.
With Linux Bridge devices
The previous example was relatively easy to configure, but has a few shortcomings. If you need to configure an interface using DHCP, or if you have an application that needs to be on the same layer 2 broadcast domain as other devices on your network, NAT rules aren’t going to work out.
This solution uses a Linux bridge device, created using `brctl`, to connect your containers directly to a physical network.
Start by creating a new bridge device. In this example, we’ll create one called `br-em1`:
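Something like:

```
brctl addbr br-em1
ip link set br-em1 up
```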
We’re going to add `em1` to this bridge, and move the ip address from `em1` onto the bridge.
WARNING: This is not something you should do remotely, especially for the first time, and making this persistent varies from distribution to distribution, so this will not be a persistent configuration.
Look at the configuration of interface `em1` and note the existing ip address:
Look at your current routes and note the default route:
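For example (the gateway address 10.12.0.1 here is only an example):

```
$ ip route show
default via 10.12.0.1 dev em1
10.12.0.0/21 dev em1  proto kernel  scope link  src 10.12.0.76
```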
Now, add this device to your bridge:
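That is:

```
brctl addif br-em1 em1
```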
Configure the bridge with the address that used to belong to `em1`:
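Something like:

```
ip addr del 10.12.0.76/21 dev em1
ip addr add 10.12.0.76/21 dev br-em1
```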
And move the default route to the bridge:
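Using the same (example) gateway as above:

```
ip route del default
ip route add default via 10.12.0.1 dev br-em1
```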
If you were doing this remotely, you would do this all in one line like this:
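A sketch of that one-liner, with the same example addresses and gateway:

```
ip addr del 10.12.0.76/21 dev em1; \
  ip addr add 10.12.0.76/21 dev br-em1; \
  ip route del default; \
  ip route add default via 10.12.0.1 dev br-em1
```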
At this point, verify that you still have network connectivity:
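For example, ping any host you know to be reachable:

```
ping -c1 8.8.8.8
```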
Start up the web container:
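As before, the image name here is only a placeholder:

```
docker run -d --name web nginx
```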
This will give us the normal `eth0` interface inside the container, but we’re going to ignore that and add a new one.
Create a veth interface pair:
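Something like:

```
ip link add web-int type veth peer name web-ext
```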
Add the `web-ext` link to the `br-em1` bridge:
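That is:

```
brctl addif br-em1 web-ext
ip link set web-ext up
```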
And add the `web-int` interface to the namespace of the container:
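Using the `docker-pid` helper from earlier:

```
ip link set web-int netns $(docker-pid web)
```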
Next, we’ll use the `nsenter` command (part of the `util-linux` package) to run some commands inside the `web` container. Start by bringing up the link inside the container:
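Something like:

```
nsenter -t $(docker-pid web) -n ip link set web-int up
```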
Assign our target ip address to the interface:
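That is:

```
nsenter -t $(docker-pid web) -n ip addr add 10.12.0.117/21 dev web-int
```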
And set a new default route inside the container:
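Again, 10.12.0.1 here is only an example gateway:

```
nsenter -t $(docker-pid web) -n ip route del default
nsenter -t $(docker-pid web) -n ip route add default via 10.12.0.1 dev web-int
```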
Again, we can verify from another host that the web server is available at 10.12.0.117:
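For example:

```
curl http://10.12.0.117/
```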
Note that in this example we have assigned a static ip address, but we could just as easily have acquired an address using DHCP. After running:
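That is, after the same veth and namespace setup sketched above:

```
ip link add web-int type veth peer name web-ext
brctl addif br-em1 web-ext
ip link set web-ext up
ip link set web-int netns $(docker-pid web)
```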
We can run:
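Assuming a DHCP client such as `dhclient` is installed on the host:

```
nsenter -t $(docker-pid web) -n dhclient -v web-int
```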
With Open vSwitch Bridge devices
This process is largely the same as in the previous example, but we use Open vSwitch instead of the legacy Linux bridge devices. These instructions assume that you have already installed and started Open vSwitch on your system.
Create an OVS bridge using the `ovs-vsctl` command:
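Keeping the same bridge name as before:

```
ovs-vsctl add-br br-em1
```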
And add your external interface:
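That is:

```
ovs-vsctl add-port br-em1 em1
```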
And then proceed as in the previous set of instructions.
The equivalent all-in-one command is:
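A sketch, using the same example addresses and gateway as before:

```
ip addr del 10.12.0.76/21 dev em1; \
  ip addr add 10.12.0.76/21 dev br-em1; \
  ip route del default; \
  ip route add default via 10.12.0.1 dev br-em1
```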
Once that completes, your openvswitch configuration should look like this:
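For example (output abbreviated):

```
$ ovs-vsctl show
    Bridge "br-em1"
        Port "br-em1"
            Interface "br-em1"
                type: internal
        Port "em1"
            Interface "em1"
```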
To add the `web-ext` interface to the bridge, run:
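That is:

```
ovs-vsctl add-port br-em1 web-ext
```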
Instead of:
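That is, instead of the command used in the Linux bridge example:

```
brctl addif br-em1 web-ext
```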
WARNING: The Open vSwitch configuration persists between reboots. This means that when your system comes back up, `em1` will still be a member of `br-em1`, which will probably result in no network connectivity for your host.
Before rebooting your system, make sure to `ovs-vsctl del-port br-em1 em1`.
With macvlan devices
This process is similar to the previous two, but instead of using a bridge device we will create a macvlan, which is a virtual network interface associated with a physical interface. Unlike the previous two solutions, this does not require any interruption to your primary network interface.
Start by creating a docker container as in the previous examples:
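As before (the image name is only a placeholder):

```
docker run -d --name web nginx
```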
Create a `macvlan` interface associated with your physical interface:
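Something like:

```
ip link add em1p0 link em1 type macvlan mode bridge
```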
This creates a new `macvlan` interface named `em1p0` (but you can name it anything you want) associated with interface `em1`. We are setting it up in `bridge` mode, which permits all `macvlan` interfaces to communicate with each other.
Add this interface to the container’s network namespace:
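That is:

```
ip link set em1p0 netns $(docker-pid web)
```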
Bring up the link:
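Something like:

```
nsenter -t $(docker-pid web) -n ip link set em1p0 up
```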
And configure the ip address and routing:
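Again, the gateway here is only an example:

```
nsenter -t $(docker-pid web) -n ip addr add 10.12.0.117/21 dev em1p0
nsenter -t $(docker-pid web) -n ip route del default
nsenter -t $(docker-pid web) -n ip route add default via 10.12.0.1 dev em1p0
```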
And demonstrate that from another host the web server is available at 10.12.0.117:
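For example:

```
curl http://10.12.0.117/
```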
But note that if you were to try the same thing on the host, you would get:
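Something like the following (the exact error depends on your setup; the connection may also simply time out):

```
$ curl http://10.12.0.117/
curl: (7) Failed to connect to 10.12.0.117 port 80: No route to host
```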
The host is unable to communicate with `macvlan` devices via the primary interface. You can create another `macvlan` interface on the host, give it an address on the appropriate network, and then set up routes to your containers via that interface:
The information in this section explains configuring container networks within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
Note: The Docker networks feature allows you to create user-defined networks in addition to the default bridge network.
While Docker is under active development and continues to tweak and improve its network configuration logic, the shell commands in this section are rough equivalents to the steps that Docker takes when configuring networking for each new container.
Review some basics
To communicate using the Internet Protocol (IP), a machine needs access to at least one network interface at which packets can be sent and received, and a routing table that defines the range of IP addresses reachable through that interface. Network interfaces do not have to be physical devices. In fact, the `lo` loopback interface available on every Linux machine (and inside each Docker container) is entirely virtual – the Linux kernel simply copies loopback packets directly from the sender’s memory into the receiver’s memory.
Docker uses special virtual interfaces to let containers communicate with the host machine – pairs of virtual interfaces called “peers” that are linked inside of the host machine’s kernel so that packets can travel between them. They are simple to create, as we will see in a moment.
The steps with which Docker configures a container are:
- Create a pair of peer virtual interfaces.
- Give one of them a unique name like `veth65f9`, keep it inside of the main Docker host, and bind it to `docker0` or whatever bridge Docker is supposed to be using.
- Toss the other interface over the wall into the new container (which will already have been provided with an `lo` interface) and rename it to the much prettier name `eth0` since, inside of the container’s separate and unique network interface namespace, there are no physical interfaces with which this name could collide.
- Set the interface’s MAC address according to the `--mac-address` parameter or generate a random one.
- Give the container’s `eth0` a new IP address from within the bridge’s range of network addresses. The default route is set to the IP address passed to the Docker daemon using the `--default-gateway` option if specified, otherwise to the IP address that the Docker host owns on the bridge. The MAC address is generated from the IP address unless otherwise specified. This prevents ARP cache invalidation problems when a new container comes up with an IP used in the past by another container with another MAC.
With these steps complete, the container now possesses an `eth0` (virtual) network card and will find itself able to communicate with other containers and the rest of the Internet.
You can opt out of the above process for a particular container by giving the `--net=` option to `docker run`, which takes these possible values:
- `--net=bridge` – The default action, that connects the container to the Docker bridge as described above.
- `--net=host` – Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container’s networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick `ip addr` command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack – that would require `--privileged=true` – but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.
- `--net=container:NAME_or_ID` – Tells Docker to put this container’s processes inside of the network stack that has already been created inside of another container. The new container’s processes will be confined to their own filesystem and process list and resource limits, but will share the same IP address and port numbers as the first container, and processes on the two containers will be able to connect to each other over the loopback interface.
- `--net=none` – Tells Docker to put the container inside of its own network stack but not to take any steps to configure its network, leaving you free to build any of the custom configurations explored in the last few sections of this document.
- `--net=<network-name>|<network-id>` – Tells Docker to connect the container to a user-defined network.
Manually networking a container
To get an idea of the steps that are necessary if you use `--net=none` as described in that last bullet point, here are the commands that you would run to reach roughly the same configuration as if you had let Docker do all of the configuration:
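A sketch of those steps. The image name, container name, and addresses below are placeholders; in particular, the `docker0` bridge address on your host may differ from the 172.17.42.1/16 used here:

```
# start a container with no networking; the image name is just an example
docker run -i -t --rm --net=none --name netdemo ubuntu /bin/bash

# in another shell: find the container's PID and expose its network
# namespace to the "ip netns" commands used below
pid=$(docker inspect -f '{{.State.Pid}}' netdemo)
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/$pid

# check the docker0 bridge's address and netmask
ip addr show docker0

# create a pair of "peer" interfaces A and B, bind the A end to the
# bridge, and bring it up
ip link add A type veth peer name B
brctl addif docker0 A
ip link set A up

# place B inside the container's namespace, rename it to eth0, and give
# it a free address and a default route via the bridge
ip link set B netns $pid
ip netns exec $pid ip link set dev B name eth0
ip netns exec $pid ip link set eth0 up
ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
ip netns exec $pid ip route add default via 172.17.42.1
```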
At this point your container should be able to perform networking operations as usual.
When you finally exit the shell and Docker cleans up the container, the network namespace is destroyed along with our virtual `eth0` – whose destruction in turn destroys interface `A` out in the Docker host and automatically un-registers it from the `docker0` bridge. So everything gets cleaned up without our having to run any extra commands! Well, almost everything:
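The namespace entry we created by hand under `/var/run/netns` is not removed automatically:

```
# remove the now-dangling namespace entry created earlier
rm /var/run/netns/$pid
```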
Also note that while the script above used the modern `ip` command instead of old deprecated wrappers like `ifconfig` and `route`, these older commands would also have worked inside of our container. The `ip addr` command can be typed as `ip a` if you are in a hurry.
Finally, note the importance of the `ip netns exec` command, which let us reach inside and configure a network namespace as root. The same commands would not have worked if run inside of the container, because part of safe containerization is that Docker strips container processes of the right to configure their own networks. Using `ip netns exec` is what let us finish up the configuration without having to take the dangerous step of running the container itself with `--privileged=true`.