Deploying K8s 1.18.20 with Dual Master Nodes and keepalived

2023/9/30 17:42:20

There are plenty of ways to deploy a cluster. Here I use two master nodes plus a single worker node, with keepalived providing a VIP, to deploy version 1.18.20. A few minor issues came up along the way, but each was resolved; the process is recorded here for anyone who needs it. Feedback and discussion are welcome!

[Deployment Environment] -- based on the environment used previously

master01:192.168.66.200

master02:192.168.66.201

node01:192.168.66.250

vip:192.168.66.199

[Procedure]

Step 1. Initial VM configuration (details omitted)

See the earlier post "K8s 1.15.0 版本部署安装" on 好好学习之乘风破浪的博客 (CSDN).

Step 2. Configure keepalived on both master nodes

# yum install keepalived -y

# cd /etc/keepalived

Edit the keepalived.conf configuration file (shown first for master01, then for master02):

[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER         #### MASTER on master01, BACKUP on master02
    interface ens33      #### set to the VM's actual NIC name
    virtual_router_id 51
    priority 100         #### 100 on master01, 90 on master02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.66.199       #### the VIP
    }
}

virtual_server 192.168.66.199 6443 {      #### the VIP and port
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.66.200 6443 {    #### master01 and its port
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.66.201 6443 {   #### master02 and its port
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

}
[root@k8s-master02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.66.199
    }
}

virtual_server 192.168.66.199 6443 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP

    real_server 192.168.66.200 6443 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.66.201 6443 {
        weight 1
        SSL_GET {
            url {
              path /
              digest ff20ad2481f97b1754ef3e12ecd3a9cc
            }
            url {
              path /mrtg/
              digest 9b3a0c85a887a256d6939da88aabd8cd
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

}
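
Optionally (a hedged sketch, not part of the original configuration): keepalived can also track the health of the local kube-apiserver, so that the VIP fails over when the apiserver on the MASTER node dies rather than only when keepalived itself stops. Define a check script at the top level of keepalived.conf and reference it from the vrrp_instance block, for example:

vrrp_script check_apiserver {
    script "/usr/bin/curl -sfk https://127.0.0.1:6443/healthz"    # non-zero exit if the local apiserver is down
    interval 3
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    ...                      # existing settings as shown above
    track_script {
        check_apiserver
    }
}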

After the configuration is in place, enable and restart the keepalived service on both masters:

# systemctl enable keepalived  && systemctl start keepalived  && systemctl status keepalived
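
As a quick sanity check (not shown in the original steps; ens33 is the NIC name used in the config above), verify that the VIP is bound on the MASTER node and fails over when keepalived stops:

# ip addr show ens33 | grep 192.168.66.199     # on master01: the VIP should be listed
# systemctl stop keepalived                    # on master01: the VIP should move to master02
# systemctl start keepalived                   # bring master01 back afterwards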

Step 3. Install Docker

See the earlier post "k8s 1.18.20 版本部署" on 好好学习之乘风破浪的博客 (CSDN).
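
As a rough sketch of what the linked post covers (the Aliyun docker-ce repository URL below is an assumption; any docker-ce yum repository works), run on all three nodes:

# yum install -y yum-utils
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum install -y docker-ce
# systemctl enable docker && systemctl start docker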

Step 4. Install kubeadm, kubelet and kubectl

See the earlier post "k8s 1.18.20 版本部署" on 好好学习之乘风破浪的博客 (CSDN).
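
A minimal sketch, assuming the Kubernetes yum repository is already configured on all three nodes as in the linked post:

# yum install -y kubeadm-1.18.20 kubelet-1.18.20 kubectl-1.18.20
# systemctl enable kubelet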

Step 5. Deploy the cluster

The following operations are performed on the k8s-master01 node.

5.1 Generate the initialization configuration
kubeadm config print init-defaults > kubeadm.yaml

[root@k8s-master01 ~]# cat  kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.66.199    #### change to the VIP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:                    #### added: both master IPs and the VIP
  - 192.168.66.200
  - 192.168.66.201
  - 192.168.66.199
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io  
kind: ClusterConfiguration
kubernetesVersion: v1.18.20   #### change to v1.18.20
controlPlaneEndpoint: 192.168.66.199:6443   #### added: the VIP address and port
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12  #### keep the default
  podSubnet: 10.2.0.0/16     #### added: must match the Pod CIDR used by the CNI plugin
scheduler: {}
 
---                                             #### added
apiVersion: kubeproxy.config.k8s.io/v1alpha1    #### added
kind: KubeProxyConfiguration                    #### added
mode: ipvs                                      #### added: run kube-proxy in IPVS mode
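
Optionally (not part of the original steps), pre-pull the control-plane images before running init; if k8s.gcr.io is not reachable from your network, first point imageRepository above at a mirror such as registry.aliyuncs.com/google_containers:

# kubeadm config images pull --config kubeadm.yaml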

5.2 Run the initialization
kubeadm init --config kubeadm.yaml
On success the output looks like the following. (The warning about key "apiServer" already set at the top of this log appeared because the original kubeadm.yaml declared apiServer twice; with certSANs merged into the single apiServer block as shown above, the warning does not appear.)

[root@k8s-master01 ~]# kubeadm init --config kubeadm.yaml
W0401 23:48:19.240599   47054 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 17: key "apiServer" already set in map
W0401 23:48:19.241866   47054 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.199 192.168.66.199 192.168.66.200 192.168.66.201 192.168.66.199]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.66.199 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.66.199 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0401 23:48:24.472613   47054 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0401 23:48:24.474030   47054 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.004197 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 

5.3 Set up kubectl access as instructed

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Check the cluster status

# kubectl get cs

If scheduler and controller-manager show Unhealthy with a connection refused on 127.0.0.1:10251, see:

k8s 1.18.20版本部署_好好学习之乘风破浪的博客-CSDN博客
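
A commonly used fix on 1.18 (a hedged sketch; check the linked post for the exact steps, and note that it re-opens the insecure metrics ports) is to remove the --port=0 flag from the scheduler and controller-manager static Pod manifests on every master and let kubelet restart them:

# sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
# sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
# systemctl restart kubelet
# kubectl get cs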

5.5 Deploy the calico CNI plugin

The calico.yaml manifest can be downloaded here:

Link: https://pan.baidu.com/s/1-zvq1Ug4Gvny4ek5u0HWqw
Extraction code: 2sbq

Apply the calico manifest, for example:
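
(A minimal sketch, assuming the downloaded file is saved as calico.yaml in the current directory; make sure its CALICO_IPV4POOL_CIDR matches the podSubnet 10.2.0.0/16 configured above.)

# kubectl apply -f calico.yaml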

Check the Pod status of the cluster, for example:
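
(Standard checks; the original screenshots are not reproduced here.)

# kubectl get pods -n kube-system -o wide
# kubectl get nodes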

Step 6. Add the remaining nodes

(1) Add the master02 node to the cluster, using the control-plane join command printed by kubeadm init:

 kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 \
>     --control-plane 
[preflight] Running pre-flight checks

The join aborts with an error about missing certificate files.

Copy the certificate files from master01 to master02:

# cd /etc/kubernetes/pki

# scp ca.* 192.168.66.201:/etc/kubernetes/pki/

#  scp sa.* 192.168.66.201:/etc/kubernetes/pki/

# scp front* 192.168.66.201:/etc/kubernetes/pki/

Also copy the etcd CA files:

# cd /etc/kubernetes/pki/etcd

# scp ca.* 192.168.66.201:/etc/kubernetes/pki/etcd/
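
As an alternative to copying certificates by hand (not used in this post, a hedged sketch only), kubeadm can distribute them itself: run the upload-certs phase on master01 and pass the printed key to the join command.

# kubeadm init phase upload-certs --upload-certs        # prints a certificate key
# kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 \
    --control-plane --certificate-key <certificate key printed above>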

After the files are copied, run the join command again:

kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58     --control-plane 

[root@k8s-master02 kubernetes]#   kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58     --control-plane 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using the existing "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02 localhost] and IPs [192.168.66.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02 localhost] and IPs [192.168.66.201 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.201 192.168.66.199 192.168.66.200 192.168.66.201 192.168.66.199]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0402 00:27:01.015532    5719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0402 00:27:01.032698    5719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0402 00:27:01.034211    5719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2023-04-02T00:27:17.644+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.66.201:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master02 kubernetes]# 

Follow the printed instructions on master02 as well, so that kubectl works from that node too:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the node list: master02 now shows up in the cluster as expected.

(2) Add the node01 worker to the cluster, using the worker join command printed by kubeadm init:

[root@k8s-node01 ~]# kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58

Check that node01 has joined and that all Pods are running, for example:
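
(Standard checks; the original screenshots are omitted.)

# kubectl get nodes -o wide
# kubectl get pods -A -o wide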

With that, both master nodes and the worker node have been added to the cluster successfully!

Prometheus monitoring will be added to this cluster in a follow-up post. Stay tuned!

