Docker Networking


I'm trying to understand the relationship between:

  • eth0 on the host machine; and
  • the docker0 bridge; and
  • the eth0 interface on each container

It's my understanding that Docker:

  1. creates the docker0 bridge and assigns it an available subnet that doesn't conflict with anything running on the host; then
  2. binds docker0 to the eth0 interface running on the host; then
  3. binds each new container it spins up to docker0, such that the container's eth0 interface connects to docker0 on the host, which is in turn connected to eth0 on the host (see the commands just after this list for how to inspect this).
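
For reference, this setup can be inspected on any host running the default bridge driver (a sketch using standard iproute2 and Docker CLI commands):

  # show the docker0 bridge and the subnet Docker assigned to it
  ip addr show docker0

  # show the bridge network's subnet and the containers attached to it
  docker network inspect bridge

  # list the host-side vethXXX interfaces enslaved to docker0
  ip link show master docker0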

This way, when an external host tries to communicate with a container, it must send the message to a port on the host's IP, which gets forwarded to the docker0 bridge, which then gets broadcasted to all the containers running on the host, yes?
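
For concreteness, the "send the message to a port on the host's IP" step corresponds to publishing a port (the container name, image, and port numbers below are just examples):

  # publish container port 80 on host port 8080
  docker run -d --name web -p 8080:80 nginx

  # an external host now reaches the container via the host's IP
  curl http://<host-ip>:8080/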

Also, this way, when a container needs to communicate outside the host, it has its own IP (leased from the docker0 subnet), and the remote caller will see the message as having come from the container's IP.

So if anything I have stated above is incorrect, please begin by clarifying that for me!

Assuming I'm more or less correct, my main concerns are:

  • when remote services "call in" to a container, all the containers get broadcasted the same message, which creates a lot of traffic/noise and is a security risk (where container 1 should be the sole recipient of a message, the other containers running on the host get the message as well); and
  • what happens when Docker chooses identical subnets on different hosts? In that case, container 1 living on host 1 might have the same IP address as container 2 living on host 2. If container 1 needs to "call out" to an external/remote system (not living on the host), how does the remote system differentiate between container 1 and container 2 (both show the same egress IP)?

Let me clear up the concept of networking in Docker, and clarify that part first.

So here's how it goes:

  1. Docker uses a feature of the Linux kernel called namespaces to classify/divide resources.
  2. When a container starts, Docker creates a set of namespaces for that container.
  3. This provides a layer of isolation.
  4. One of these is the "net" namespace, used for managing network interfaces (you can list a container's namespaces as shown just below).
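
You can actually see these namespaces for a running container from the host (a quick sketch; "web" is a hypothetical container name):

  # find the PID of the container's main process
  pid=$(docker inspect -f '{{.State.Pid}}' web)

  # each entry here is one namespace the container lives in
  # (net, pid, mnt, uts, ipc, ...)
  sudo ls -l /proc/$pid/ns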

Now, talking a bit about network namespaces:

The net namespace

  • lets each container have its own network resources, its own network stack:
    • its own network interfaces.
    • its own routing tables.
    • its own iptables rules.
    • its own sockets (ss, netstat).
  • We can move a network interface across net namespaces.
  • So we can create a netns somewhere and use it in another container.
  • Typically, 2 virtual interfaces are used, which act as a cross-over cable.
  • eth0 in the container netns is paired with a virtual interface vethXXX in the host network ns. ➔ All the vethXXX virtual interfaces are bridged together (using the bridge docker0). A hands-on sketch of this plumbing follows this list.
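
To see that plumbing without Docker, you can build the same thing by hand with iproute2 (a minimal sketch; the names "demo", "veth-host", and "veth-demo" are made up for the example):

  # create a network namespace and a veth pair (the "cross-over cable")
  sudo ip netns add demo
  sudo ip link add veth-host type veth peer name veth-demo

  # move one end into the namespace and rename it eth0 there
  sudo ip link set veth-demo netns demo
  sudo ip netns exec demo ip link set veth-demo name eth0

  # attach the host end to the docker0 bridge and bring both ends up
  sudo ip link set veth-host master docker0
  sudo ip link set veth-host up
  sudo ip netns exec demo ip link set eth0 up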

Now, apart from namespaces, there's a second feature in the Linux kernel that makes the creation of containers possible: cgroups (or control groups).

  • Control groups let us implement metering and limiting (see the docker run example after this list) of:
    • memory
    • CPU
    • block I/O
    • network
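
With Docker, this metering and limiting surfaces as flags on docker run, for example (the image and the limit values are arbitrary):

  # cap the container at 256 MB of RAM and half a CPU
  docker run -d --memory 256m --cpus 0.5 nginx

  # block I/O can be throttled too (the device path is an example)
  docker run -d --device-write-bps /dev/sda:1mb nginx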

TL;DR

In short: containers are made possible by 2 main features of the kernel: namespaces and cgroups.

cgroups ---> limit how much you can use.

namespaces ---> limit what you can see.

And you can't affect what you can't see.
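
You can see the "limits what you can see" part directly: inside a container, only the interfaces of its own net namespace are visible, not the host's or the other containers':

  # shows only lo and the container's own eth0
  docker run --rm alpine ip addr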


Coming to your question: when a packet is received by the host and is intended for a container, it is encapsulated in layers, such that each layer helps the network controller, which peels the packet layer after layer and sends it to its destination. (And on the way out, packets get encapsulated layer by layer.)
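
Concretely, on the default bridge network this inbound delivery is done with NAT rules rather than any broadcast; each published port becomes a DNAT rule that targets exactly one container IP (a way to verify this on a stock Docker host):

  # the DOCKER chain holds one DNAT rule per published port
  sudo iptables -t nat -L DOCKER -n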

So, I think that answers both of your questions as well:

  1. It's not a broadcast. The other containers can't see a packet that's not related to them (namespaces).
  2. Since layers are added as the packet goes out to the external network, the outermost layer (different for different hosts) identifies the packet's recipient uniquely.
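
And for the outgoing direction, Docker adds a masquerading rule for the docker0 subnet, so the remote system actually sees the host's IP rather than the container's private IP; that is what keeps two containers with identical private IPs on different hosts distinguishable (again, verifiable on a stock setup):

  # traffic leaving the docker0 subnet gets its source address
  # rewritten (MASQUERADE) to the host's IP
  sudo iptables -t nat -L POSTROUTING -n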

P.S.:

If you find this information erroneous, please let me know in the comments. I have written it in a hurry and will update it with a better-reviewed text soon.

Thank you.

