This article gives a detailed, example-based analysis of Docker container network virtualization. The 丸趣 TV editors found it quite practical and are sharing it here as a reference; hopefully you will take something away from it after reading.
I. Introduction to Docker networking
Networking is one of the six namespaces that Docker containerization relies on, and it is indispensable. Network namespaces have been supported in the Linux kernel since 2.6. A network namespace isolates network devices and the protocol stack; for example, if a docker host has 4 NICs and one of them is assigned to a namespace when a container is created, the other namespaces cannot see that NIC, and a device can only belong to one namespace. Since a namespace that talks to the outside world this way needs its own physical NIC, and a physical NIC cannot be assigned to more than one namespace, we could only create 4 such namespaces. What do we do if we need more namespaces than we have physical NICs?
1. Three ways virtual networks can communicate
1.1 Bridged network: in KVM virtual networking we use virtual NIC devices (a set of devices emulated purely in software), and Docker is no different. At the kernel level, Linux supports emulating layer-2 devices (components that work at the link layer, encapsulate frames and forward them between network devices), and that support can be used to create virtual NIC interfaces on Linux. These virtual interfaces are special in that they always come in pairs, like the two ends of a network cable: one end can be plugged into a host and the other into a switch, which is equivalent to connecting that host to the switch. The Linux kernel also natively supports a layer-2 virtual bridge device (a switch built in software). For example, with two namespaces we can create a veth pair for each, plug one end into the namespace and the other into the virtual bridge, and put both namespaces on the same subnet; the two containers can then talk to each other. However, if this kind of bridging is used in a network with many containers, all of them end up bridged onto the same virtual bridge device, broadcast storms become a real risk and isolation is very hard to achieve, so in large-scale container scenarios plain bridging is asking for trouble and should generally be avoided.
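A minimal sketch of this idea using iproute2 (the bridge, namespace and interface names below are made up for illustration; they are not part of Docker's own setup):
# create a software bridge, the counterpart of docker0
ip link add name br-demo type bridge
ip link set br-demo up
# create a namespace and a veth pair: one end for the namespace, the other for the bridge
ip netns add demo1
ip link add name veth-ns1 type veth peer name veth-br1
ip link set veth-ns1 netns demo1        # plug one end into the namespace
ip link set veth-br1 master br-demo     # plug the other end into the bridge
ip link set veth-br1 up
ip netns exec demo1 ip addr add 10.0.0.1/24 dev veth-ns1
ip netns exec demo1 ip link set veth-ns1 up
Repeating the namespace and veth steps for a second namespace with 10.0.0.2/24 gives two namespaces that can ping each other through the software bridge.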
1.2 NAT network: if we do not bridge directly but still need to reach the outside, NAT is used. NAT (network address translation) rewrites the address information in the IP header, replacing internal addresses with the address of the egress interface so that different networks can communicate. For example: two containers are each given a private address and attached to a virtual bridge (virtual switch), container 1's gateway points at the bridge's IP, and IP forwarding is enabled on the docker host. When container 1 talks to container 2 on another host, the packet first goes to its own bridge and into the kernel; the kernel sees that the destination IP is not its own, looks up the routing table and sends the packet out through the physical NIC, rewriting the source address to the host's own IP on the way out (this operation is called SNAT). When the packet reaches container 2's host, that host rewrites the destination IP to container 2's address (this operation is called DNAT), hands it to the virtual switch and finally delivers it to container 2. The reply travels the same way, with the addresses rewritten again (SNAT and DNAT) before it gets back to container 1. In this model, containers on different physical hosts can only communicate through two rounds of NAT (SNAT and DNAT), which makes communication inefficient, and it does not suit many-container scenarios either.
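A rough sketch of the moving parts with sysctl and iptables (the subnet, port and interface names are illustrative assumptions, not taken from the environment in this article):
# let the kernel forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1
# SNAT: rewrite the source of container traffic leaving through the physical network
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# DNAT: expose a container service by rewriting the destination of inbound traffic
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80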
1.3 Overlay network
An overlay network. In this model, containers on different hosts still attach to a virtual bridge on their own host, but when they communicate, the physical network is used to tunnel the frames, so a container can see containers on other hosts directly and talk to them. For example, when container 1 wants to reach container 2 on another host, it sends the packet to its virtual bridge; the bridge sees that the destination IP is not on the local physical server, so the packet goes out through the physical NIC, but instead of doing SNAT first, an extra IP header is added whose source is the physical NIC address of container 1's host and whose destination is the physical NIC address of the host where container 2 lives. When the packet arrives there, the outer header is stripped; the host finds another header inside whose IP belongs to one of its local containers, hands the packet to the virtual bridge and finally delivers it to container 2. Carrying one IP packet inside another like this is called tunneling.
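VXLAN is one common way to build such a tunnel. A minimal sketch with iproute2 (the VNI, peer address, uplink device and bridge name are assumptions for illustration, reusing the demonstration bridge from the sketch above):
# create a VXLAN interface that tunnels layer-2 frames over UDP to the peer host
ip link add vxlan100 type vxlan id 100 remote 192.168.31.187 dstport 4789 dev ens33
ip link set vxlan100 up
# attach it to the local container bridge so bridged containers can reach the remote side
ip link set vxlan100 master br-demo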
2. The four network models Docker supports
2.1 Closed container: only a loopback interface; this is the null type
2.2 Bridged container: bridged type; the container's network attaches to the docker0 network
2.3 Joined container: a joined network in which two containers keep part of their namespaces isolated (User, Mount, PID) but share the same network interface and protocol stack
2.4 Open container: open network; the container shares three namespaces (UTS, IPC, Net) directly with the physical host, communicates through the host's NICs, and is granted the privilege of managing the host's network
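These four models map onto the --network option of docker run roughly as follows (a quick reference only, assuming a busybox image is available locally and a container named nginx1 exists; each model is demonstrated in detail in Part II):
docker run --rm --network none   busybox ip addr              # closed container: loopback only
docker run --rm --network bridge busybox ip addr              # bridged container: attached to docker0 (the default)
docker run --rm --network container:nginx1 busybox ip addr    # joined container: share nginx1's network namespace
docker run --rm --network host   busybox ip addr              # open container: the host's own network namespace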
II. Specifying the Docker network
1. The bridge network (NAT)
After installation Docker automatically provides three networks and uses the bridge (NAT) network by default; if a container is started without specifying --network=STRING, the bridge network is used. docker network ls shows these three network types:
[root@bogon ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ea9de27d788c bridge bridge local
126249d6b177 host host local
4ad67e37d383 none null local
After installation, Docker automatically creates a soft switch (docker0) on the host, which can play the role of a layer-2 switch or of a layer-2 NIC.
[root@bogon ~]# ifconfig
docker0: flags=4099 UP,BROADCAST,MULTICAST mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
When we create a container, Docker automatically creates a pair of virtual NICs in software, plugs one end into the container and the other into the docker0 switch, so the container behaves as if it were connected to that switch.
This is the local host's network information before I start any container:
[root@bogon ~]# ifconfig
docker0: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:2fff:fe51:412d prefixlen 64 scopeid 0x20 link
ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14 bytes 1758 (1.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 192.168.31.186 netmask 255.255.255.0 broadcast 192.168.31.255
inet6 fe80::a3fa:7451:4298:fe76 prefixlen 64 scopeid 0x20 link
ether 00:0c:29:fb:f6:a1 txqueuelen 1000 (Ethernet)
RX packets 2951 bytes 188252 (183.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 295 bytes 36370 (35.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73 UP,LOOPBACK,RUNNING mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10 host
loop txqueuelen 1000 (Local Loopback)
RX packets 96 bytes 10896 (10.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 96 bytes 10896 (10.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099 UP,BROADCAST,MULTICAST mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:1a:be:ae txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@bogon ~]#
[root@bogon ~]#
[root@bogon ~]#
Now I start two containers and look at how the network information changes; two extra veth* virtual NICs appear.
These are the host-side halves of the virtual NIC pairs that Docker created when the containers started.
[root@bogon ~]# docker container run --name=nginx1 -d nginx:stable
11b031f93d019640b1cd636a48fb9448ed0a7fc6103aa509cd053cbbf8605e6e
[root@bogon ~]# docker container run --name=redis1 -d redis:4-alpine
fca571d7225f6ce94ccf6aa0d832bad9b8264624e41cdf9b18a4a8f72c9a0d33
[root@bogon ~]# ifconfig
docker0: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:2fff:fe51:412d prefixlen 64 scopeid 0x20 link
ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14 bytes 1758 (1.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 192.168.31.186 netmask 255.255.255.0 broadcast 192.168.31.255
inet6 fe80::a3fa:7451:4298:fe76 prefixlen 64 scopeid 0x20 link
ether 00:0c:29:fb:f6:a1 txqueuelen 1000 (Ethernet)
RX packets 2951 bytes 188252 (183.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 295 bytes 36370 (35.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73 UP,LOOPBACK,RUNNING mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10 host
loop txqueuelen 1000 (Local Loopback)
RX packets 96 bytes 10896 (10.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 96 bytes 10896 (10.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth0a95d3a: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet6 fe80::cc12:e7ff:fe27:2c7f prefixlen 64 scopeid 0x20 link
ether ce:12:e7:27:2c:7f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethf618ec3: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet6 fe80::882a:aeff:fe73:f6df prefixlen 64 scopeid 0x20 link
ether 8a:2a:ae:73:f6:df txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 22 bytes 2406 (2.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099 UP,BROADCAST,MULTICAST mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:1a:be:ae txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@bogon ~]#
[root@bogon ~]#
The other halves are inside the containers:
[root@bogon ~]# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fca571d7225f redis:4-alpine docker-entrypoint.s… About a minute ago Up About a minute 6379/tcp redis1
11b031f93d01 nginx:stable nginx -g daemon of… 10 minutes ago Up 10 minutes 80/tcp nginx1
Both of them are attached to the docker0 virtual switch, which can be seen with brctl and ip link show:
[root@bogon ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02422f51412d no veth0a95d3a
vethf618ec3
[root@bogon ~]# ip link show
1: lo: LOOPBACK,UP,LOWER_UP mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:fb:f6:a1 brd ff:ff:ff:ff:ff:ff
3: virbr0: NO-CARRIER,BROADCAST,MULTICAST,UP mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: BROADCAST,MULTICAST mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
5: docker0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:2f:51:41:2d brd ff:ff:ff:ff:ff:ff
7: vethf618ec3@if6: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 8a:2a:ae:73:f6:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth0a95d3a@if8: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether ce:12:e7:27:2c:7f brd ff:ff:ff:ff:ff:ff link-netnsid 1
As you can see, each veth interface carries an "@if6" or "@if8" suffix; interfaces 6 and 8 are the other halves of the pairs, the virtual NICs inside the containers.
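One way to confirm the pairing (a small check, assuming the nginx1 container started above is still running) is to compare the peer index reported inside the container with the interface numbers shown on the host:
# inside the container: eth0's iflink is the ifindex of its host-side peer
docker container exec nginx1 cat /sys/class/net/eth0/iflink
# on the host: match that number against the vethXXXXXXX@ifN entries
ip link show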
docker0 is a NAT bridge, so after starting a container Docker also automatically generates iptables rules for it:
[root@bogon ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 43 packets, 3185 bytes)
pkts bytes target prot opt in out source destination
53 4066 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3 packets, 474 bytes)
pkts bytes target prot opt in out source destination
24 2277 OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 3 packets, 474 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
2 267 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24
0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
22 2010 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT_direct (1 references)
pkts bytes target prot opt in out source destination
Chain POSTROUTING_ZONES (1 references)
pkts bytes target prot opt in out source destination
12 953 POST_public all -- * ens33 0.0.0.0/0 0.0.0.0/0 [goto]
10 1057 POST_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto]
Chain POSTROUTING_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain POSTROUTING_direct (1 references)
pkts bytes target prot opt in out source destination
Chain POST_public (2 references)
pkts bytes target prot opt in out source destination
22 2010 POST_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POST_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POST_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
Chain POST_public_allow (1 references)
pkts bytes target prot opt in out source destination
Chain POST_public_deny (1 references)
pkts bytes target prot opt in out source destination
Chain POST_public_log (1 references)
pkts bytes target prot opt in out source destination
Chain PREROUTING_ZONES (1 references)
pkts bytes target prot opt in out source destination
53 4066 PRE_public all -- ens33 * 0.0.0.0/0 0.0.0.0/0 [goto]
0 0 PRE_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain PREROUTING_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain PREROUTING_direct (1 references)
pkts bytes target prot opt in out source destination
Chain PRE_public (2 references)
pkts bytes target prot opt in out source destination
53 4066 PRE_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PRE_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PRE_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
Chain PRE_public_allow (1 references)
pkts bytes target prot opt in out source destination
Chain PRE_public_deny (1 references)
pkts bytes target prot opt in out source destination
Chain PRE_public_log (1 references)
pkts bytes target prot opt in out source destination
In the POSTROUTING chain there is a MASQUERADE rule: traffic entering from any interface and leaving through anything other than docker0, with a source address in 172.17.0.0/16 and any destination, has its source address translated (SNAT).
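Written out as a command, that rule is roughly equivalent to the following (shown only for clarity; Docker adds and maintains it for you):
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE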
As mentioned above, when Docker uses the NAT network, only the current docker host and the containers on it can reach each other directly. For containers on different hosts to communicate, DNAT (port mapping) has to be used, and a given host port can only be mapped to one service. So if a docker host runs several web services, only one of them can be mapped to port 80 and the others have to move to different ports, which is a significant limitation.
1.1 Manipulating network namespaces with the ip command
Because a container's Net, UTS and IPC namespaces can be shared, we can build special network models beyond the isolated, bridged, NAT and physical-bridge networks we know from KVM virtualization. Network namespaces can be manipulated by hand with the ip command; netns is one of the many objects that ip can operate on.
Check whether the ip command is installed:
[root@bogon ~]# rpm -q iproute
iproute-4.11.0-14.el7.x86_64
Create network namespaces:
[root@bogon ~]# ip netns help
Usage: ip netns list
ip netns add NAME
ip netns set NAME NETNSID
ip [-all] netns delete [NAME]
ip netns identify [PID]
ip netns pids NAME
ip [-all] netns exec [NAME] cmd ...
ip netns monitor
ip netns list-id
[root@bogon ~]# ip netns add ns1
[root@bogon ~]# ip netns add ns2
If no dedicated interface has been created for a netns, it only has a loopback device by default:
[root@bogon ~]# ip netns exec ns1 ifconfig -a
lo: flags=8 LOOPBACK mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@bogon ~]# ip netns exec ns2 ifconfig -a
lo: flags=8 LOOPBACK mtu 65536
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Create a veth interface pair and move the two ends into the namespaces:
[root@bogon ~]# ip link add name veth2.1 type veth peer name veth2.2
[root@bogon ~]# ip link show
7: veth2.2@veth2.1: BROADCAST,MULTICAST,M-DOWN mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 06:9d:b4:1f:96:88 brd ff:ff:ff:ff:ff:ff
8: veth2.1@veth2.2: BROADCAST,MULTICAST,M-DOWN mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 22:ac:45:de:61:5d brd ff:ff:ff:ff:ff:ff
[root@bogon ~]# ip link set dev veth2.1 netns ns1
[root@bogon ~]# ip netns exec ns1 ip link set dev veth2.1 name eth0
[root@bogon ~]# ip link set dev veth2.2 netns ns2
[root@bogon ~]# ip netns exec ns2 ip link set dev veth2.2 name eth0
[root@bogon ~]# ip netns exec ns1 ifconfig eth0 10.10.1.1/24 up
[root@bogon ~]# ip netns exec ns2 ifconfig eth0 10.10.1.2/24 up
[root@bogon ~]# ip netns exec ns1 ifconfig
eth0: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 10.10.1.1 netmask 255.255.255.0 broadcast 10.10.1.255
inet6 fe80::20ac:45ff:fede:615d prefixlen 64 scopeid 0x20 link
ether 22:ac:45:de:61:5d txqueuelen 1000 (Ethernet)
RX packets 8 bytes 648 (648.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@bogon ~]# ip netns exec ns2 ifconfig
eth0: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 10.10.1.2 netmask 255.255.255.0 broadcast 10.10.1.255
inet6 fe80::49d:b4ff:fe1f:9688 prefixlen 64 scopeid 0x20 link
ether 06:9d:b4:1f:96:88 txqueuelen 1000 (Ethernet)
RX packets 8 bytes 648 (648.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@bogon ~]# ip netns exec ns1 ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=0.261 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.076 ms
That completes creating network namespaces with the ip command and configuring their interfaces.
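When you are done experimenting, the namespaces (together with the veth pair that was moved into them) can be cleaned up again, for example:
ip netns delete ns1    # removing a netns also removes the interfaces living in it
ip netns delete ns2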
2. The host network
Start a new container, specifying --network host:
[root@bogon ~]# docker container run --name=myhttpd --network=host -d httpd:1.1
17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769
[root@bogon ~]#
[root@bogon ~]# ip netns list
ns1
Attach to the container interactively and look at its network information.
You can see that this container uses exactly the same network as the physical host. Note: changing the network settings inside this container is the same as changing them on the physical host.
[root@bogon ~]# docker container exec -it myhttpd /bin/sh
sh-4.1#
sh-4.1# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:2F:51:41:2D
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:2fff:fe51:412d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:1758 (1.7 KiB)
ens33 Link encap:Ethernet HWaddr 00:0C:29:FB:F6:A1
inet addr:192.168.31.186 Bcast:192.168.31.255 Mask:255.255.255.0
inet6 addr: fe80::a3fa:7451:4298:fe76/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:30112 errors:0 dropped:0 overruns:0 frame:0
TX packets:2431 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1927060 (1.8 MiB) TX bytes:299534 (292.5 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:96 errors:0 dropped:0 overruns:0 frame:0
TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10896 (10.6 KiB) TX bytes:10896 (10.6 KiB)
veth0a95d3a Link encap:Ethernet HWaddr CE:12:E7:27:2C:7F
inet6 addr: fe80::cc12:e7ff:fe27:2c7f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:648 (648.0 b)
virbr0 Link encap:Ethernet HWaddr 52:54:00:1A:BE:AE
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
sh-4.1# ping www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=46 time=6.19 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=46 time=6.17 ms
64 bytes from 61.135.169.125: icmp_seq=3 ttl=46 time=6.11 ms
inspect also shows that this container's network is host.
sh-4.1# exit
[root@bogon ~]# docker container inspect myhttpd
{
Id : 17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769 ,
Created : 2018-11-03T13:29:08.34016135Z ,
Path : /usr/sbin/apachectl ,
Args : [
-D ,
FOREGROUND
],
State : {
Status : running ,
Running : true,
Paused : false,
Restarting : false,
OOMKilled : false,
Dead : false,
Pid : 4015,
ExitCode : 0,
Error : ,
StartedAt : 2018-11-03T13:29:08.528631643Z ,
FinishedAt : 0001-01-01T00:00:00Z
},
Image : sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32 ,
ResolvConfPath : /var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/resolv.conf ,
HostnamePath : /var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hostname ,
HostsPath : /var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hosts ,
LogPath : /var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769-json.log ,
Name : /myhttpd ,
RestartCount : 0,
Driver : overlay2 ,
Platform : linux ,
MountLabel : ,
ProcessLabel : ,
AppArmorProfile : ,
ExecIDs : null,
HostConfig : {
Binds : null,
ContainerIDFile : ,
LogConfig : {
Type : json-file ,
Config : {}
},
NetworkMode : host ,
PortBindings : {},
RestartPolicy : {
Name : no ,
MaximumRetryCount : 0
},
AutoRemove : false,
VolumeDriver : ,
VolumesFrom : null,
CapAdd : null,
CapDrop : null,
Dns : [],
DnsOptions : [],
DnsSearch : [],
ExtraHosts : null,
GroupAdd : null,
IpcMode : shareable ,
Cgroup : ,
Links : null,
OomScoreAdj : 0,
PidMode : ,
Privileged : false,
PublishAllPorts : false,
ReadonlyRootfs : false,
SecurityOpt : null,
UTSMode : ,
UsernsMode : ,
ShmSize : 67108864,
Runtime : runc ,
ConsoleSize : [
0,
0
],
Isolation : ,
CpuShares : 0,
Memory : 0,
NanoCpus : 0,
CgroupParent : ,
BlkioWeight : 0,
BlkioWeightDevice : [],
BlkioDeviceReadBps : null,
BlkioDeviceWriteBps : null,
BlkioDeviceReadIOps : null,
BlkioDeviceWriteIOps : null,
CpuPeriod : 0,
CpuQuota : 0,
CpuRealtimePeriod : 0,
CpuRealtimeRuntime : 0,
CpusetCpus : ,
CpusetMems : ,
Devices : [],
DeviceCgroupRules : null,
DiskQuota : 0,
KernelMemory : 0,
MemoryReservation : 0,
MemorySwap : 0,
MemorySwappiness : null,
OomKillDisable : false,
PidsLimit : 0,
Ulimits : null,
CpuCount : 0,
CpuPercent : 0,
IOMaximumIOps : 0,
IOMaximumBandwidth : 0,
MaskedPaths : [
/proc/acpi ,
/proc/kcore ,
/proc/keys ,
/proc/latency_stats ,
/proc/timer_list ,
/proc/timer_stats ,
/proc/sched_debug ,
/proc/scsi ,
/sys/firmware
],
ReadonlyPaths : [
/proc/asound ,
/proc/bus ,
/proc/fs ,
/proc/irq ,
/proc/sys ,
/proc/sysrq-trigger
]
},
GraphDriver : {
Data : {
LowerDir : /var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa-init/diff:/var/lib/docker/overlay2/619fd02d3390a6299f2bb3150762a765dd68bada7f432037769778a183d94817/diff:/var/lib/docker/overlay2/fd29d7fada3334bf5dd4dfa4f38db496b7fcbb3ec070e07fe21124a4f143b85a/diff ,
MergedDir : /var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/merged ,
UpperDir : /var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/diff ,
WorkDir : /var/lib/docker/overlay2/75fab11f1bf93cae37d9725ee9cbd167a0323379f383013241457220d62159fa/work
},
Name : overlay2
},
Mounts : [],
Config : {
Hostname : bogon ,
Domainname : ,
User : ,
AttachStdin : false,
AttachStdout : false,
AttachStderr : false,
ExposedPorts : { 5000/tcp : {}
},
Tty : false,
OpenStdin : false,
StdinOnce : false,
Env : [
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
],
Cmd : [
/usr/sbin/apachectl ,
-D ,
FOREGROUND
],
ArgsEscaped : true,
Image : httpd:1.1 ,
Volumes : null,
WorkingDir : ,
Entrypoint : null,
OnBuild : null,
Labels : {}
},
NetworkSettings : {
Bridge : ,
SandboxID : 91444230e357927973371cb315b9a247463320beffcde3b56248fa840bd24547 ,
HairpinMode : false,
LinkLocalIPv6Address : ,
LinkLocalIPv6PrefixLen : 0,
Ports : {},
SandboxKey : /var/run/docker/netns/default ,
SecondaryIPAddresses : null,
SecondaryIPv6Addresses : null,
EndpointID : ,
Gateway : ,
GlobalIPv6Address : ,
GlobalIPv6PrefixLen : 0,
IPAddress : ,
IPPrefixLen : 0,
IPv6Gateway : ,
MacAddress : ,
Networks : {
host : {
IPAMConfig : null,
Links : null,
Aliases : null,
NetworkID : 126249d6b1771dc8aeab4aa3e75a2f3951cc765f6a43c4d0053d77c8e8f23685 ,
EndpointID : b87ae83df3424565b138c9d9490f503b9632d3369ed01036c05cd885e902f8ca ,
Gateway : ,
IPAddress : ,
IPPrefixLen : 0,
IPv6Gateway : ,
GlobalIPv6Address : ,
GlobalIPv6PrefixLen : 0,
MacAddress : ,
DriverOpts : null
}
}
}
}
]
3. The none network
[root@bogon ~]# docker container run --name=myhttpd2 --network=none -d httpd:1.1
3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3
[root@bogon ~]#
[root@bogon ~]# docker container exec -it myhttpd2 /bin/sh
sh-4.1# ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Looking at the details with inspect, the network information has become none.
sh-4.1# exit
[root@bogon ~]# docker container inspect myhttpd2
{
Id : 3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3 ,
Created : 2018-11-03T13:37:53.153680433Z ,
Path : /usr/sbin/apachectl ,
Args : [
-D ,
FOREGROUND
],
State : {
Status : running ,
Running : true,
Paused : false,
Restarting : false,
OOMKilled : false,
Dead : false,
Pid : 4350,
ExitCode : 0,
Error : ,
StartedAt : 2018-11-03T13:37:53.563817908Z ,
FinishedAt : 0001-01-01T00:00:00Z
},
Image : sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32 ,
ResolvConfPath : /var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/resolv.conf ,
HostnamePath : /var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/hostname ,
HostsPath : /var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/hosts ,
LogPath : /var/lib/docker/containers/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3/3e7148946653a10f66158d2b1410ffb290657de81c3833efe971cad85174abc3-json.log ,
Name : /myhttpd2 ,
RestartCount : 0,
Driver : overlay2 ,
Platform : linux ,
MountLabel : ,
ProcessLabel : ,
AppArmorProfile : ,
ExecIDs : null,
HostConfig : {
Binds : null,
ContainerIDFile : ,
LogConfig : {
Type : json-file ,
Config : {}
},
NetworkMode : none ,
PortBindings : {},
RestartPolicy : {
Name : no ,
MaximumRetryCount : 0
},
AutoRemove : false,
VolumeDriver : ,
VolumesFrom : null,
CapAdd : null,
CapDrop : null,
Dns : [],
DnsOptions : [],
DnsSearch : [],
ExtraHosts : null,
GroupAdd : null,
IpcMode : shareable ,
Cgroup : ,
Links : null,
OomScoreAdj : 0,
PidMode : ,
Privileged : false,
PublishAllPorts : false,
ReadonlyRootfs : false,
SecurityOpt : null,
UTSMode : ,
UsernsMode : ,
ShmSize : 67108864,
Runtime : runc ,
ConsoleSize : [
0,
0
],
Isolation : ,
CpuShares : 0,
Memory : 0,
NanoCpus : 0,
CgroupParent : ,
BlkioWeight : 0,
BlkioWeightDevice : [],
BlkioDeviceReadBps : null,
BlkioDeviceWriteBps : null,
BlkioDeviceReadIOps : null,
BlkioDeviceWriteIOps : null,
CpuPeriod : 0,
CpuQuota : 0,
CpuRealtimePeriod : 0,
CpuRealtimeRuntime : 0,
CpusetCpus : ,
CpusetMems : ,
Devices : [],
DeviceCgroupRules : null,
DiskQuota : 0,
KernelMemory : 0,
MemoryReservation : 0,
MemorySwap : 0,
MemorySwappiness : null,
OomKillDisable : false,
PidsLimit : 0,
Ulimits : null,
CpuCount : 0,
CpuPercent : 0,
IOMaximumIOps : 0,
IOMaximumBandwidth : 0,
MaskedPaths : [
/proc/acpi ,
/proc/kcore ,
/proc/keys ,
/proc/latency_stats ,
/proc/timer_list ,
/proc/timer_stats ,
/proc/sched_debug ,
/proc/scsi ,
/sys/firmware
],
ReadonlyPaths : [
/proc/asound ,
/proc/bus ,
/proc/fs ,
/proc/irq ,
/proc/sys ,
/proc/sysrq-trigger
]
},
GraphDriver : {
Data : {
LowerDir : /var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0-init/diff:/var/lib/docker/overlay2/619fd02d3390a6299f2bb3150762a765dd68bada7f432037769778a183d94817/diff:/var/lib/docker/overlay2/fd29d7fada3334bf5dd4dfa4f38db496b7fcbb3ec070e07fe21124a4f143b85a/diff ,
MergedDir : /var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/merged ,
UpperDir : /var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/diff ,
WorkDir : /var/lib/docker/overlay2/ce58a73cd982d40a80723180c4e09ca16d74471a1eb3888c05da9eec6a2f4ac0/work
},
Name : overlay2
},
Mounts : [],
Config : {
Hostname : 3e7148946653 ,
Domainname : ,
User : ,
AttachStdin : false,
AttachStdout : false,
AttachStderr : false,
ExposedPorts : { 5000/tcp : {}
},
Tty : false,
OpenStdin : false,
StdinOnce : false,
Env : [
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
],
Cmd : [
/usr/sbin/apachectl ,
-D ,
FOREGROUND
],
ArgsEscaped : true,
Image : httpd:1.1 ,
Volumes : null,
WorkingDir : ,
Entrypoint : null,
OnBuild : null,
Labels : {}
},
NetworkSettings : {
Bridge : ,
SandboxID : f9402b5b2dbb95c2736f25626704dec79f75800c33c0905c362e79af3810234d ,
HairpinMode : false,
LinkLocalIPv6Address : ,
LinkLocalIPv6PrefixLen : 0,
Ports : {},
SandboxKey : /var/run/docker/netns/f9402b5b2dbb ,
SecondaryIPAddresses : null,
SecondaryIPv6Addresses : null,
EndpointID : ,
Gateway : ,
GlobalIPv6Address : ,
GlobalIPv6PrefixLen : 0,
IPAddress : ,
IPPrefixLen : 0,
IPv6Gateway : ,
MacAddress : ,
Networks : {
none : {
IPAMConfig : null,
Links : null,
Aliases : null,
NetworkID : 4ad67e37d38389253ca55c39ad8d615cef40c6bb9b535051679b2d1ed6cb01e8 ,
EndpointID : 83913b6eaeed3775fbbcbb9375491dd45e527d81837048cffa63b3064ad6e7e3 ,
Gateway : ,
IPAddress : ,
IPPrefixLen : 0,
IPv6Gateway : ,
GlobalIPv6Address : ,
GlobalIPv6PrefixLen : 0,
MacAddress : ,
DriverOpts : null
}
}
}
}
]
4. The joined network
A joined network starts a container jointly with another container and shares the Net, UTS and IPC namespaces with it; the remaining namespaces are not shared.
Start two containers, with the second one using the first one's network namespace:
[root@bogon ~]# docker container run --name myhttpd -d httpd:1.1
7053b88aacb35d859e00d47133c084ebb9288ce3fb47b6c588153a5e6c6dd5f0
[root@bogon ~]# docker container run --name myhttpd1 -d --network container:myhttpd redis:4-alpine
99191b8fc853f546f3b381d36cc2f86bc7f31af31daf0e19747411d2f1a10686
[root@bogon ~]# docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99191b8fc853 redis:4-alpine docker-entrypoint.s… 5 seconds ago Up 3 seconds myhttpd1
7053b88aacb3 httpd:1.1 /usr/sbin/apachectl… 3 minutes ago Up 3 minutes 5000/tcp myhttpd
[root@bogon ~]#
Log into the first container and start verifying:
[root@bogon ~]# docker container exec -it myhttpd /bin/sh
sh-4.1#
sh-4.1#
sh-4.1# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 b) TX bytes:0 (0.0 b)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
sh-4.1# ps -ef |grep httpd
root 7 1 0 08:15 ? 00:00:00 /usr/sbin/httpd -D FOREGROUND
apache 8 7 0 08:15 ? 00:00:00 /usr/sbin/httpd -D FOREGROUND
apache 9 7 0 08:15 ? 00:00:00 /usr/sbin/httpd -D FOREGROUND
sh-4.1# mkdir /tmp/testdir
sh-4.1#
sh-4.1# ls /tmp/
testdir
Log into the second container to verify:
[root@bogon ~]# docker container exec -it myhttpd1 /bin/sh
/data # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/data # ps -ef |grep redis
1 redis 0:00 redis-server
/data # ls /tmp/
/data #
As you can see, a directory created in the httpd container does not exist in the second container, while both containers use the same IP address. So a joined network isolates the Mount, User and PID namespaces but shares one set of Net, IPC and UTS namespaces.
III. Starting containers with network-related settings
3.1 Specifying the container's HOSTNAME
By default a container uses its container ID as its hostname; to set a specific hostname, a parameter has to be passed when the container is started.
The --hostname parameter sets the given hostname for the container and automatically adds a matching entry to its hosts file.
[root@bogon ~]# docker container run --name mycentos -it centos:6.6 /bin/sh
sh-4.1# hostname
02f68247b097
[root@bogon ~]# docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02f68247b097 centos:6.6 /bin/sh 15 seconds ago Exited (0) 7 seconds ago mycentos
[root@bogon ~]# docker container run --name mycentos --hostname centos1.local -it centos:6.6 /bin/sh
sh-4.1#
sh-4.1# hostname
centos1.local
sh-4.1# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 centos1.local centos1
3.2 Specifying the container's DNS
If no DNS is specified, a container by default uses the DNS addresses configured on the host:
[root@bogon ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.196.115
[root@bogon ~]# docker container run --name mycentos -it centos:6.6 /bin/sh
sh-4.1# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.106.196.115
[root@bogon ~]# docker container run --name mycentos --dns 114.114.114.114 -it --rm centos:6.6 /bin/sh
sh-4.1#
sh-4.1# cat /etc/resolv.conf
nameserver 114.114.114.114
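docker run also accepts --dns-search to set the search domain written into the container's resolv.conf; a quick illustration (the domain here is only an example):
docker container run --rm --dns 114.114.114.114 --dns-search example.com centos:6.6 cat /etc/resolv.conf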
3.3 Manually adding hosts entries
[root@bogon ~]# docker container run --name mycentos --rm --add-host bogon:192.168.31.186 --add-host www.baidu.com:1.1.1.1 -it centos:6.6 /bin/sh
sh-4.1#
sh-4.1# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.31.186 bogon
1.1.1.1 www.baidu.com
172.17.0.2 ea40852f5871
3.4 Exposing container ports
If a container uses the bridge network and a service inside it needs to be reachable by outside clients, the container's ports have to be exposed to the outside.
1. Dynamic exposure (map a given container port to a random port on all of the physical host's addresses)
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 80 httpd:1.1
54c1b69f4a8b28abc8d65d836d3ed1ae916d982947800da5bace2fa41d2a0ce5
[root@bogon ~]#
[root@bogon ~]# curl 172.17.0.2
<h2>Welcom To My Httpd</h2>
[root@bogon ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 17 packets, 1582 bytes)
pkts bytes target prot opt in out source destination
34 3134 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
34 3134 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
34 3134 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
1 52 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3 packets, 310 bytes)
pkts bytes target prot opt in out source destination
28 2424 OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 3 packets, 310 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
2 267 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24
0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
26 2157 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
26 2157 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
26 2157 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.2:80
[root@bogon ~]# docker port myhttpd
80/tcp -> 0.0.0.0:32768
As you can see, starting a container with a published port automatically creates an iptables NAT rule on the docker host, mapping port 32768 on all host addresses to port 80 in the container.
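The DNAT rule in the DOCKER chain above corresponds roughly to the following command (again, Docker creates and maintains it for you):
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 32768 -j DNAT --to-destination 172.17.0.2:80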
Access the container's httpd service from another host:
[root@centos7-node2 ~]# curl 192.168.31.186:32768
<h2>Welcom To My Httpd</h2>
[root@centos7-node2 ~]#
2. Static exposure (mapping to a specific host address)
2.1 Expose to a random port on a specified host address
If no host port is given, separate the fields with two colons: hostAddress::containerPort
[root@bogon ~]# docker container run --name myhttpd -d -p 192.168.31.186::80 httpd:1.1
50f3788eefe1016b9df2a3f2fcc1bfa19a2110675396daed075d1d4d0e69798b
[root@bogon ~]# docker port myhttpd
80/tcp -> 192.168.31.186:32768
[root@bogon ~]#
[root@centos7-node2 ~]# curl 192.168.31.186:32768
<h2>Welcom To My Httpd</h2>
2.2 Expose to a specified port on all of the host's addresses
If no address is given, it can simply be omitted: hostPort:containerPort
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 80:80 httpd:1.1
2fde0e49c3545fb28624b01b737b22650ba98dfa09674e8ccb3b6722c7dcd257
[root@bogon ~]# docker port myhttpd
80/tcp -> 0.0.0.0:80
2.3 Expose to a specified port on a specified host address
[root@bogon ~]# docker container run --name myhttpd --rm -d -p 192.168.31.186:8080:80 httpd:1.1
a9152173fafc650c47c6a35040e0c50876f841756529334cb509fdff53ce60c7
[root@bogon ~]#
[root@bogon ~]# docker port myhttpd
80/tcp -> 192.168.31.186:8080
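Besides -p, docker run also supports -P (publish all), which maps every port the image EXPOSEs to a random host port; a short sketch:
docker container run --name myhttpd --rm -d -P httpd:1.1
docker port myhttpd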
IV. Customizing the Docker configuration file
1. The default docker0 bridge address can be changed by editing Docker's configuration file:
[root@bogon ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "bip": "10.10.1.2/16"
}
[root@bogon ~]# systemctl restart docker.service
[root@bogon ~]# ifconfig
docker0: flags=4099 UP,BROADCAST,MULTICAST mtu 1500
inet 10.10.1.2 netmask 255.255.0.0 broadcast 10.10.255.255
inet6 fe80::42:cdff:fef5:e3ba prefixlen 64 scopeid 0x20 link
ether 02:42:cd:f5:e3:ba txqueuelen 0 (Ethernet)
RX packets 38 bytes 3672 (3.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53 bytes 5152 (5.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
The configuration file only needs the bip entry; Docker works out the gateway and the rest automatically. As you can see, docker0's IP address has already changed to the subnet we just configured.
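daemon.json accepts several other network-related keys besides bip; a sketch of a slightly fuller configuration (the values are illustrative, not taken from the environment above):
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "bip": "10.10.1.2/16",
  "fixed-cidr": "10.10.1.0/24",
  "mtu": 1500,
  "dns": ["114.114.114.114"]
}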
2. Changing how Docker listens
By default Docker only listens on a socket file; to make it listen on TCP as well, the configuration file has to be changed:
[root@bogon ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "bip": "10.10.1.2/16",
  "hosts": ["tcp://0.0.0.0:33333", "unix:///var/run/docker.sock"]
}
[root@bogon ~]# systemctl restart docker.service
[root@bogon ~]# netstat -tlunp |grep 33333
tcp6 0 0 :::33333 :::* LISTEN 6621/dockerd
[root@bogon ~]#
After this change, the Docker server can be accessed remotely from any other host that has Docker installed:
[root@centos7-node2 ~]# docker -H 192.168.31.186:33333 image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd 1.1 bbffcf779dd4 4 days ago 264MB
nginx stable ecc98fc2f376 3 weeks ago 109MB
centos 6.6 4e1ad2ce7f78 3 weeks ago 203MB
redis 4-alpine 05097a3a0549 4 weeks ago 30MB
[root@centos7-node2 ~]# docker -H 192.168.31.186:33333 container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a8409019e310 redis:4-alpine docker-entrypoint.s… 2 hours ago Exited (0) 29 minutes ago redis2
99191b8fc853 redis:4-alpine docker-entrypoint.s… 3 hours ago Exited (0) 29 minutes ago myhttpd1
7053b88aacb3 httpd:1.1 /usr/sbin/apachectl… 3 hours ago Exited (137) 28 minutes ago myhttpd
[root@centos7-node2 ~]#
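Instead of passing -H on every invocation, the remote endpoint can also be set once through the DOCKER_HOST environment variable (same effect, just more convenient):
export DOCKER_HOST="tcp://192.168.31.186:33333"
docker image ls    # now talks to the remote daemon without -H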
3. Creating networks for Docker
Docker supports the bridge, none, host, macvlan and overlay network models; if no driver is specified at creation time, a bridge network is created by default.
[root@bogon ~]# docker info
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 4
Server Version: 18.06.1-ce
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Create a custom network:
[root@bogon ~]# docker network create --driver bridge --subnet 192.168.30.0/24 --gateway 192.168.30.1 mybridge0
859e5a2975979740575d6365de326e18991db7b70188b7a50f6f842ca21e1d3d
[root@bogon ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
f23c1f889968 bridge bridge local
126249d6b177 host host local
859e5a297597 mybridge0 bridge local
4ad67e37d383 none null local
[root@bogon ~]# ifconfig
br-859e5a297597: flags=4163 UP,BROADCAST,RUNNING,MULTICAST mtu 1500
inet 192.168.30.1 netmask 255.255.255.0 broadcast 192.168.30.255
inet6 fe80::42:f4ff:feeb:6a16 prefixlen 64 scopeid 0x20 link
ether 02:42:f4:eb:6a:16 txqueuelen 0 (Ethernet)
RX packets 5 bytes 365 (365.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 29 bytes 3002 (2.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099 UP,BROADCAST,MULTICAST mtu 1500
inet 10.10.1.2 netmask 255.255.0.0 broadcast 10.10.255.255
inet6 fe80::42:cdff:fef5:e3ba prefixlen 64 scopeid 0x20 link
ether 02:42:cd:f5:e3:ba txqueuelen 0 (Ethernet)
RX packets 38 bytes 3672 (3.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53 bytes 5152 (5.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Start a container attached to the mybridge0 network:
[root@bogon ~]# docker container run --name redis1 --network mybridge0 -d redis:4-alpine
6d6d11266e3208e45896c40e71c6e3cecd9f7710f2f3c39b401d9f285f28c2f7
[root@bogon ~]# docker container exec -it redis1 /bin/sh
/data #
/data # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:1E:02
inet addr:192.168.30.2 Bcast:192.168.30.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:24 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2618 (2.5 KiB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
4. Deleting a custom Docker network
Before deleting it, first stop the containers still running on that network.
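For example, the redis1 container attached to mybridge0 above would be stopped and removed first (a sketch of the cleanup, assuming that container is still around):
docker container stop redis1
docker container rm redis1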
[root@bogon ~]# docker network rm mybridge0
[root@bogon ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
f23c1f889968 bridge bridge local
126249d6b177 host host local
4ad67e37d383 none null local
That is all for this example-based analysis of Docker container network virtualization. Hopefully the content above is of some help and lets you learn a bit more; if you found the article worthwhile, please share it so more people can see it.