MinIO Distributed Cluster Deployment with Swarm

I recently looked into distributed MinIO deployment and found that most guides online cover only single-server setups. On GitHub, MinIO officially provides only Kubernetes and docker-compose examples, and the articles about starting a MinIO cluster with Swarm are all incomplete. After close to a week of experimentation, I finally got a MinIO cluster running on Swarm, with nginx as the unified entry point.

Environment Preparation

Four virtual machines

  • 192.168.2.38 (manager node)
  • 192.168.2.81 (worker node)
  • 192.168.2.100 (worker node)
  • 192.168.2.102 (worker node)

Time synchronization

yum install -y ntp

cat <<EOF>>/var/spool/cron/root
00 12 * * * /usr/sbin/ntpdate -u ntp1.aliyun.com && /usr/sbin/hwclock -w
EOF

## list the scheduled job
crontab -l

## run the sync once manually
/usr/sbin/ntpdate -u ntp1.aliyun.com && /usr/sbin/hwclock -w

Docker

Install Docker

curl -sSL https://get.daocloud.io/docker | sh 

Start Docker

sudo systemctl start docker
sudo systemctl enable docker

Set Up the Swarm Cluster

Open firewall ports (required by Swarm)

  • Open port 2377 on the manager node

    # manager
    firewall-cmd --zone=public --add-port=2377/tcp --permanent
  • Open the following ports on all nodes

    # all nodes
    firewall-cmd --zone=public --add-port=7946/tcp --permanent
    firewall-cmd --zone=public --add-port=7946/udp --permanent
    firewall-cmd --zone=public --add-port=4789/tcp --permanent
    firewall-cmd --zone=public --add-port=4789/udp --permanent
  • Reload the firewall on all nodes, then restart Docker

    # all nodes
    firewall-cmd --reload
    systemctl restart docker
  • For convenience, you can simply disable the firewall instead, as shown below
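If you go that route, a minimal sketch of disabling firewalld outright (only sensible on a trusted internal network):

systemctl stop firewalld
systemctl disable firewalld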

Create the Swarm

docker swarm init --advertise-addr your_manager_ip 
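The init command prints a ready-to-paste docker swarm join command containing the cluster token. If you need it again later, re-print it on the manager:

docker swarm join-token worker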

Join the Swarm

docker swarm join --token SWMTKN-1-51b7t8whxn8j6mdjt5perjmec9u8qguxq8tern9nill737pra2-ejc5nw5f90oz6xldcbmrl2ztu 192.168.2.38:2377

# list nodes
docker node ls

Service Constraints

Add labels

sudo docker node update --label-add minio1=true <manager-node-name>
sudo docker node update --label-add minio2=true <worker-node-name-1>
sudo docker node update --label-add minio3=true <worker-node-name-2>
sudo docker node update --label-add minio4=true <worker-node-name-3>
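To double-check that each label landed on the intended node (node names are whatever docker node ls reports):

docker node inspect --format '{{ .Spec.Labels }}' <node-name>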


Create Docker Secrets for MinIO

echo "minioadmin" | docker secret create access_key - echo "12345678" | docker secret create secret_key - 

MinIO Cluster Deployment Files

Create a directory for the deployment files

Run on the manager node:

cd /root
mkdir minio-swarm
cd minio-swarm
vi docker-compose-nginx.yml

docker-compose-nginx.yml

version: '3.7'

services:
  nginx:
    image: nginx
    hostname: minionginx
    volumes:
      - /root/minio-swarm/conf/swarm-nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "9090:80"
      - "9000:9000"
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
      resources:
        limits:
          # cpus: '0.001'
          memory: 1024M
        reservations:
          # cpus: '0.001'
          memory: 64M
    networks:
      - minio_distributed
    depends_on:
      - minio1
      - minio2
      - minio3
      - minio4

  minio1:
    image: quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
    hostname: minio1
    volumes:
      - data1-1:/data1
      - data1-2:/data2
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
      resources:
        limits:
          memory: 2048M
        reservations:
          memory: 512M
    command: server --console-address ":9001" http://minio{1...4}/data{1...2}
    networks:
      - minio_distributed
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
    hostname: minio2
    volumes:
      - data2-1:/data1
      - data2-2:/data2
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio2==true
      resources:
        limits:
          memory: 2048M
        reservations:
          memory: 512M
    command: server --console-address ":9001" http://minio{1...4}/data{1...2}
    networks:
      - minio_distributed
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
    hostname: minio3
    volumes:
      - data3-1:/data1
      - data3-2:/data2
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio3==true
      resources:
        limits:
          memory: 2048M
        reservations:
          memory: 512M
    command: server --console-address ":9001" http://minio{1...4}/data{1...2}
    networks:
      - minio_distributed
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
    hostname: minio4
    volumes:
      - data4-1:/data1
      - data4-2:/data2
    deploy:
      replicas: 1
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio4==true
      resources:
        limits:
          memory: 2048M
        reservations:
          memory: 512M
    command: server --console-address ":9001" http://minio{1...4}/data{1...2}
    networks:
      - minio_distributed
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:

networks:
  minio_distributed:
    driver: overlay

secrets:
  secret_key:
    external: true
  access_key:
    external: true
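Note that http://minio{1...4}/data{1...2} in the command uses MinIO's own ellipsis expansion, not shell brace expansion; every minio service is passed the same 8-endpoint erasure set:

# http://minio{1...4}/data{1...2} expands to:
#   http://minio1/data1  http://minio1/data2
#   http://minio2/data1  http://minio2/data2
#   http://minio3/data1  http://minio3/data2
#   http://minio4/data1  http://minio4/data2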

Notes:

  • secret_key and access_key are the secrets created in the previous step with docker secret create xxx -
  • Only one MinIO service can be deployed per node; running several on the same node leads to "drive already in use" conflicts, so the right way to scale is to add machines.

nginx.conf

Create the directory

cd /root/minio-swarm
mkdir conf
cd conf
vi swarm-nginx.conf

If you later add nodes to the cluster, add each new node's service name to both upstream blocks (service-name:9000 in minio, service-name:9001 in console).

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  4096;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;

    upstream minio {
        server minio1:9000;
        server minio2:9000;
        server minio3:9000;
        server minio4:9000;
    }

    server {
        listen       9000;
        listen  [::]:9000;
        server_name  localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;
        proxy_request_buffering off;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            proxy_connect_timeout 300;
            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://minio;
        }
    }
    # include /etc/nginx/conf.d/*.conf;

    upstream console {
        server minio1:9001;
        server minio2:9001;
        server minio3:9001;
        server minio4:9001;
    }

    server {
        listen       80;
        listen  [::]:80;
        server_name  localhost;

        # To allow special characters in headers
        ignore_invalid_headers off;
        # Allow any size file to be uploaded.
        # Set to a value such as 1000m; to restrict file size to a specific value
        client_max_body_size 0;
        # To disable buffering
        proxy_buffering off;

        location / {
            proxy_connect_timeout 5;
            proxy_send_timeout 10;
            proxy_read_timeout 10;

            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding off;

            proxy_pass http://console;
        }
    }
}

Deploy

cd /root/minio-swarm
docker stack deploy -c docker-compose-nginx.yml minio-swarm
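If any service stays at 0/1 replicas, docker stack ps shows the task history and scheduling errors (an unsatisfiable placement constraint, for example):

docker stack ps minio-swarm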

Test

Open the console in a browser at http://<any-node-ip>:9090 (the S3 API is exposed on port 9000).

(screenshot: MinIO console in the browser)
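Beyond the browser, here is a quick sketch of exercising the S3 API through the nginx entry point with the MinIO client mc (this assumes mc is installed; the alias name myminio is arbitrary):

mc alias set myminio http://192.168.2.38:9000 minioadmin 12345678
mc mb myminio/test-bucket
echo "hello" > /tmp/hello.txt
mc cp /tmp/hello.txt myminio/test-bucket/
mc ls myminio/test-bucket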

One Node Down

Simulate one of the nodes going down and see whether data can still be read. (Writes in a MinIO cluster require at least 4 online drives; with a two-node cluster, if one node goes down the cluster can only read, not write.)

For a distributed MinIO with N drives, your data stays readable as long as N/2 of the drives are online; however, you need at least N/2+1 drives online to create new objects.
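Applied to this deployment (4 nodes x 2 drives = 8 drives), the arithmetic works out as in this small sketch:

# N = total drives in the erasure set (4 nodes x 2 drives here)
N=8
echo "read quorum:  $((N / 2))"      # 4 drives online -> reads still work
echo "write quorum: $((N / 2 + 1))"  # 5 drives online -> writes work
# One node down leaves 6 drives online: reads and writes both fine.
# Two nodes down leaves 4 drives online: read-only.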

[root@test redis-swarm2]# docker service ls
ID             NAME                 MODE         REPLICAS   IMAGE                                              PORTS
l317d9wc49tt   minio-swarm_minio1   replicated   1/1        quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
x2gj6ert03tj   minio-swarm_minio2   replicated   1/1        quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
z624sonlnk02   minio-swarm_minio3   replicated   1/1        quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
xu0gx8mbjocm   minio-swarm_minio4   replicated   1/1        quay.io/minio/minio:RELEASE.2022-02-12T00-51-25Z
53w8cpjpe7wd   minio-swarm_nginx    replicated   1/1        nginx:latest                                       *:9000->9000/tcp, *:9090->80/tcp

Now shut down one of the servers and refresh the browser.
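One simple way to take a node "down" (run on the victim node; powering the VM off works just as well):

systemctl stop docker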

(screenshot: console still reachable with one node down)

Data can still be written and read normally.

(screenshot: reads and writes succeeding)

Two Nodes Down

The browser is forced back to the login page, and logging in is no longer possible.

(screenshot: forced back to the login page)

Note: to simulate a node failure you need at least 3 machines. With only two, taking one down leaves the other unable to accept writes.

