
How to Set Up a Docker Overlay Network


An overlay network is a software-defined logical network layered on top of an existing physical network. It leaves the underlying network largely untouched and implements the required connectivity in the logical network defined above it, solving networking problems in existing data centers.

Quick Start

Docker cross-host networking options

Docker native

  1. overlay
  2. macvlan

Third-party options

  1. flannel
  2. weave
  3. calico

A Consul cluster was set up in an earlier post, so this article uses Docker's built-in overlay driver, which works hand in hand with Consul.

Environment Preparation

Prepare three VMs as described in the earlier post on building a Consul cluster (《Consul 搭建集群》).

Node  IP
n1    172.20.20.10
n2    172.20.20.11
n3    172.20.20.12
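
Before continuing, make sure the three hosts can reach each other on the ports this setup relies on: 8500/tcp for Consul's HTTP API, 2376/tcp for the daemon's --cluster-advertise address, 7946/tcp+udp for the overlay control-plane gossip, and 4789/udp for VXLAN traffic. A minimal sketch, assuming firewalld on CentOS/RHEL (skip it if the firewall is disabled):

# Run on each of n1/n2/n3
firewall-cmd --permanent --add-port=8500/tcp                       # Consul HTTP API
firewall-cmd --permanent --add-port=2376/tcp                       # Docker --cluster-advertise port
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp   # overlay control-plane gossip
firewall-cmd --permanent --add-port=4789/udp                       # VXLAN data plane
firewall-cmd --reload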

Start Consul on these three VMs and form a cluster.

n1

[root@n1 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node1 -bind=172.20.20.10 -ui -client 0.0.0.0

n2

[root@n2 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node2 -bind=172.20.20.11 -ui -client 0.0.0.0 -join 172.20.20.10

n3

[root@n3 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node3 -bind=172.20.20.12 -ui -client 0.0.0.0 -join 172.20.20.10

Check the cluster members from n1:

[root@n1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.10:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.11:8301  alive   server  1.1.0  2         dc1  <all>
node3  172.20.20.12:8301  alive   server  1.1.0  2         dc1  <all>
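
As an extra sanity check (assuming Consul's default HTTP port 8500), the status endpoints should report a leader and all three server peers:

curl http://172.20.20.10:8500/v1/status/leader   # the current leader's address
curl http://172.20.20.10:8500/v1/status/peers    # all three server addresses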

Configure Docker

Log in to n1

Edit /etc/sysconfig/docker-network

# /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="--cluster-store=consul://172.20.20.10:8500 --cluster-advertise=172.20.20.10:2376"

The --cluster-store address points at the Consul agent (which runs locally on each host), and --cluster-advertise is this host's own IP and daemon port.
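
The daemon must be restarted before these options take effect. A quick check on a systemd-based host; note that this legacy cluster-store mechanism only exists in older Docker releases, so the exact docker info fields may vary:

systemctl restart docker
docker info | grep -i cluster   # the consul:// store and the advertise address should show up here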

Create a network named myoverlay with docker network create -d overlay myoverlay, and use docker network ls to check the Docker network list before and after:

[root@n1 sysconfig]# docker network ls
NETWORK ID     NAME        DRIVER       SCOPE
5a8df7650e34    bridge       bridge       local
8e574df4fb90    docker_gwbridge   bridge       local
d69aab5b2621    host        host        local
7301c62bca4d    none        null        local
[root@n1 sysconfig]# docker network create -d overlay myoverlay
36feac75fb49edcf8920ed39109424b833501268942fb563708aa306fccfb15c
[root@n1 sysconfig]# docker network ls
NETWORK ID     NAME        DRIVER       SCOPE
5a8df7650e34    bridge       bridge       local
8e574df4fb90    docker_gwbridge   bridge       local
d69aab5b2621    host        host        local
36feac75fb49    myoverlay      overlay       global
7301c62bca4d    none        null        local
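
Docker picks the overlay subnet automatically. If a predictable address range is needed, it can be set explicitly at creation time; a sketch with an arbitrary subnet and a hypothetical network name:

docker network create -d overlay --subnet 10.0.1.0/24 myoverlay2   # myoverlay2 is just an example name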

Log in to n2

Edit /etc/sysconfig/docker-network

# /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="--cluster-store=consul://172.20.20.11:8500 --cluster-advertise=172.20.20.11:2376"

There is no need to create myoverlay again here, because both daemons read from the same cluster store. Restart the Docker daemon and check the network list directly:

[root@n2 vagrant]# docker network ls
NETWORK ID     NAME        DRIVER       SCOPE
9f2b7d40a69f    bridge       bridge       local
1d9ee9546c81    docker_gwbridge   bridge       local
e1f72fa7710c    host        host        local
36feac75fb49    myoverlay      overlay       global
372109bb13bc    none        null        local

myoverlay is already in the list.
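
Note that the network ID shown on n2 (36feac75fb49) is the same one returned when the network was created on n1, confirming that both daemons see the same definition in Consul. One way to compare the full IDs, as a sketch:

docker network inspect --format '{{.Id}}' myoverlay   # should print the same full ID on every host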

Do the same on n3:

[root@n3 vagrant]# docker network ls
NETWORK ID     NAME        DRIVER       SCOPE
14cf16d37c9b    bridge       bridge       local
ca426545fedb    docker_gwbridge   bridge       local
b57d2f555fa2    host        host        local
36feac75fb49    myoverlay      overlay       global
fcb5da0380e4    none        null        local

Start Containers to Verify the Overlay Network

Log in to n1 and start a busybox container in the background on the myoverlay network:

[root@n1 sysconfig]# docker run -itd --network myoverlay busybox

Inspect the busybox container with docker inspect; the network information is near the bottom of the output:

"Networks": {
        "myoverlay": {
          "IPAMConfig": null,
          "Links": null,
          "Aliases": [
            "e7d558b35607"
          ],
          "NetworkID": "36feac75fb49edcf8920ed39109424b833501268942fb563708aa306fccfb15c",
          "EndpointID": "6b1c975847b506a151940893e3ac189a7053cb34dda4ec2b5797c93f6eeb3534",
          "Gateway": "",
          "IPAddress": "10.0.0.2",
          "IPPrefixLen": 24,
          "IPv6Gateway": "",
          "GlobalIPv6Address": "",
          "GlobalIPv6PrefixLen": 0,
          "MacAddress": "02:42:0a:00:00:02"
        }
      }

The container is attached to myoverlay with IP 10.0.0.2.
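
Instead of reading through the whole JSON, the address can also be pulled out with a Go template, using the container ID from the output above:

docker inspect -f '{{.NetworkSettings.Networks.myoverlay.IPAddress}}' e7d558b35607   # prints 10.0.0.2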

Log in to n2 and start a busybox container on myoverlay there as well:

[root@n2 sysconfig]# docker run -itd --network myoverlay busybox

Inspect this busybox container in the same way; the network information is near the bottom of the output:

"Networks": {
        "myoverlay": {
          "IPAMConfig": null,
          "Links": null,
          "Aliases": [
            "f673ccb5ab32"
          ],
          "NetworkID": "36feac75fb49edcf8920ed39109424b833501268942fb563708aa306fccfb15c",
          "EndpointID": "39f8e9e098ce3faf039aa60e275ec90428f86c6378f5b4c54d8682741e71673f",
          "Gateway": "",
          "IPAddress": "10.0.0.3",
          "IPPrefixLen": 24,
          "IPv6Gateway": "",
          "GlobalIPv6Address": "",
          "GlobalIPv6PrefixLen": 0,
          "MacAddress": "02:42:0a:00:00:03"
        }
      }

This busybox container has IP 10.0.0.3.

Enter the container and ping 10.0.0.2:

[root@n2 vagrant]# docker ps
CONTAINER ID    IMAGE        COMMAND         CREATED       STATUS       PORTS                         NAMES
f673ccb5ab32    busybox       "sh"           2 minutes ago    Up 2 minutes                                objective_pare
[root@n2 vagrant]# docker exec -ti f673ccb5ab32 sh
/ # ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=1.309 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.535 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=1.061 ms
64 bytes from 10.0.0.2: seq=3 ttl=64 time=0.764 ms
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.535/0.917/1.309 ms

The ping succeeds, so the overlay network is up and working!
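
Beyond raw IP addresses, containers attached to a user-defined network can also reach each other by container name through Docker's embedded DNS. A quick sketch with hypothetical container names box1 and box2:

# On n1: give the container a name
docker run -itd --name box1 --network myoverlay busybox
# On n2: reach it by name across hosts
docker run -it --rm --name box2 --network myoverlay busybox ping -c 3 box1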

That's all for this article. I hope it helps with your learning, and thank you for your continued support.