
k8s in practice: deploying single-node Redis and Redis Cluster
Published: 2023-06-07 07:45:25  Source: 博客园 (Cnblogs)
1. Deploying single-node Redis on k8s
1.1 Redis overview

Redis is an open-source, BSD-licensed, non-relational (NoSQL) database, written in C and first released in 2009 by the Italian developer Salvatore Sanfilippo. Redis stores data in memory and is currently one of the most popular key-value databases; it provides a service that shares that memory remotely over the network. memcache offers similar functionality, but compared with memcache, redis adds easy scalability, high performance, and data persistence. Typical use cases include: session sharing, commonly used in web clusters to share sessions across multiple tomcat or PHP web servers; message queues, such as log buffering for ELK or the publish/subscribe parts of some applications; counters, for page-view rankings, product view counts and other count-related statistics; and caching, for query results, e-commerce product data, news content and so on. Unlike memcache, redis supports persistence: it can save its in-memory data to disk, and after the redis service or the server restarts it can restore the data from the dump file into memory and keep serving.



1.2 PV/PVC and single-node Redis

Because the redis data (mainly redis snapshots) lives on the storage system, the data survives even if the redis pod dies: when single-node redis runs on k8s and the pod fails, k8s recreates the pod and mounts the same PVC into it, the snapshot is loaded, and the redis data is not lost along with the pod.

1.3 Building the Redis image
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# ll
total 1784
drwxr-xr-x  2 root root    4096 Jun  5 15:22 ./
drwxr-xr-x 11 root root    4096 Aug  9  2022 ../
-rw-r--r--  1 root root     717 Jun  5 15:20 Dockerfile
-rwxr-xr-x  1 root root     235 Jun  5 15:21 build-command.sh*
-rw-r--r--  1 root root 1740967 Jun 22  2021 redis-4.0.14.tar.gz
-rw-r--r--  1 root root   58783 Jun 22  2021 redis.conf
-rwxr-xr-x  1 root root      84 Jun  5 15:21 run_redis.sh*
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat Dockerfile
#Redis Image
# custom CentOS base image
FROM harbor.ik8s.cc/baseimages/magedu-centos-base:7.9.2009

# add the redis source tarball to /usr/local/src
ADD redis-4.0.14.tar.gz /usr/local/src
# compile and install redis
RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server /usr/sbin/ && mkdir -pv /data/redis-data

# add the redis config file
ADD redis.conf /usr/local/redis/redis.conf

# expose the redis service port
EXPOSE 6379

#ADD run_redis.sh /usr/local/redis/run_redis.sh
#CMD ["/usr/local/redis/run_redis.sh"]
# add the startup script
ADD run_redis.sh /usr/local/redis/entrypoint.sh
# start redis
ENTRYPOINT ["/usr/local/redis/entrypoint.sh"]
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat build-command.sh
#!/bin/bash
TAG=$1
#docker build -t harbor.ik8s.cc/magedu/redis:${TAG} .
#sleep 3
#docker push  harbor.ik8s.cc/magedu/redis:${TAG}
nerdctl build -t  harbor.ik8s.cc/magedu/redis:${TAG} .
nerdctl push harbor.ik8s.cc/magedu/redis:${TAG}
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# cat run_redis.sh
#!/bin/bash
# start redis-server
/usr/sbin/redis-server /usr/local/redis/redis.conf
# keep a foreground process inside the pod with tail -f
tail -f  /etc/hosts
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis# grep -v "^#\|^$" redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 5 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error no
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redis-data
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
root@k8s-master01:~/k8s-data/dockerfile/web/magedu/redis#
1.3.1 Verify that the redis image was pushed to harbor
1.4 Testing the redis image
1.4.1 Run the redis image as a container and verify that it runs normally
1.4.2 Connect to redis remotely and verify connectivity

能夠?qū)edis鏡像運(yùn)行為容器,并且能夠通過遠(yuǎn)程主機(jī)連接至redis進(jìn)行數(shù)據(jù)讀寫,說明我們構(gòu)建的reids鏡像沒有問題;

1.5 Creating the PV and PVC
1.5.1 Prepare the redis data directory on the NFS server
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis-datadir-1
mkdir: created directory "/data/k8sdata/magedu/redis-datadir-1"
root@harbor:~# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/data/k8sdata/kuboard *(rw,no_root_squash)
/data/volumes *(rw,no_root_squash)
/pod-vol *(rw,no_root_squash)
/data/k8sdata/myserver *(rw,no_root_squash)
/data/k8sdata/mysite *(rw,no_root_squash)
/data/k8sdata/magedu/images *(rw,no_root_squash)
/data/k8sdata/magedu/static *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-1 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-2 *(rw,no_root_squash)
/data/k8sdata/magedu/zookeeper-datadir-3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis-datadir-1 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
[... the same warning is repeated for each of the remaining exports ...]
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
1.5.2 Create the PV
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-datadir-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/k8sdata/magedu/redis-datadir-1
    server: 192.168.0.42
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
1.5.3 Create the PVC
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv# cat redis-persistentvolumeclaim.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-datadir-pvc-1
  namespace: magedu
spec:
  volumeName: redis-datadir-pv-1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis/pv#
1.6 Deploying the redis service
root@k8s-master01:~/k8s-data/yaml/magedu/redis# cat redis.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: devops-redis
  name: deploy-devops-redis
  namespace: magedu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: devops-redis
  template:
    metadata:
      labels:
        app: devops-redis
    spec:
      containers:
        - name: redis-container
          image: harbor.ik8s.cc/magedu/redis:v4.0.14
          imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/data/redis-data/"
            name: redis-datadir
      volumes:
        - name: redis-datadir
          persistentVolumeClaim:
            claimName: redis-datadir-pvc-1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: devops-redis
  name: srv-devops-redis
  namespace: magedu
spec:
  type: NodePort
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 36379
  selector:
    app: devops-redis
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
root@k8s-master01:~/k8s-data/yaml/magedu/redis#

上述報(bào)錯(cuò)說我們的服務(wù)端口超出范圍,這是因?yàn)槲覀冊(cè)诔跏蓟痥8s集群時(shí)指定的服務(wù)端口范圍;

1.6.1 Change the NodePort port range

Edit /etc/systemd/system/kube-apiserver.service and change the value of its --service-node-port-range option; note that the other two master nodes must be changed as well.
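The relevant part of the unit file looks roughly like this (a sketch only: the surrounding flags are elided, and the new range shown here, 30000-65535, is an assumption; pick any range that covers the nodePorts you plan to use, such as 36379):

```
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  ... other flags unchanged ... \
  --service-node-port-range=30000-65535
```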

1.6.2 Reload the systemd unit and restart kube-apiserver
root@k8s-master01:~# systemctl daemon-reload
root@k8s-master01:~# systemctl restart kube-apiserver.service
root@k8s-master01:~#

Deploy redis again.

1.7 Verifying redis reads and writes
1.7.1 Connect to port 36379 on any k8s node and test reading and writing data
1.8 Verify whether data survives a redis pod rebuild
1.8.1 Check whether the redis snapshot file is saved on the backing store
root@harbor:~# ll /data/k8sdata/magedu/redis-datadir-1
total 12
drwxr-xr-x 2 root root 4096 Jun  5 16:29 ./
drwxr-xr-x 8 root root 4096 Jun  5 15:53 ../
-rw-r--r-- 1 root root  116 Jun  5 16:29 dump.rdb
root@harbor:~#

As you can see, after we wrote data into redis, redis detected the configured number of key changes within the configured interval and took a snapshot; because the redis data directory is an NFS share mounted through the PV/PVC, the snapshot file shows up in the corresponding directory on the NFS server.

1.8.2 Delete the redis pod and wait for k8s to rebuild it
1.8.3 Verify the data in the rebuilt redis pod

可以看到k8s重建后的redis pod 還保留著原有pod的數(shù)據(jù);這說明k8s重建時(shí)掛載了前一個(gè)pod的pvc;

2. Deploying a Redis cluster on k8s
2.1 PV/PVC and the Redis Cluster StatefulSet

A redis cluster is a bit more complex than single-node redis. As in the single-node case, we use PV/PVC to keep the redis cluster data on the storage system. Unlike single-node redis, a redis cluster runs a CRC16 over each incoming key and takes the result modulo 16384; the resulting number is the slot the key is stored in. The 16384 slots are divided evenly among all the master nodes of the cluster, so each master holds one part of the cluster's data. That creates a problem: if a master goes down, the data in its slots becomes unavailable. To avoid this single point of failure we make each master highly available by dedicating a slave node to back it up; if the master fails, its slave takes over and keeps serving the cluster, giving us highly available redis cluster masters. In the 3-master/3-slave cluster used here, redis-0, redis-1 and redis-2 are masters, and redis-3, redis-4 and redis-5 are their respective slaves, each replicating its own master's data. All six pods keep their data on the storage system through the k8s PV/PVCs.
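The key-to-slot mapping described above can be sketched in a few lines of Python (a minimal illustration of HASH_SLOT = CRC16(key) mod 16384, using the CRC16-CCITT/XMODEM variant that Redis Cluster uses; hash tags in `{...}` are deliberately ignored here for brevity):

```python
# Minimal sketch of Redis Cluster's key-to-slot mapping:
# HASH_SLOT = CRC16(key) mod 16384, with CRC16-CCITT/XMODEM
# (polynomial 0x1021, initial value 0). Hash tags are omitted.

def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    # Each of the 16384 slots is owned by exactly one master.
    return crc16(key) % 16384

if __name__ == "__main__":
    for key in (b"key1", b"session:42", b"foo"):
        print(key, "-> slot", key_slot(key))
```

The well-known check value for this CRC variant is CRC16(b"123456789") == 0x31C3, which makes the implementation easy to verify.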

2.2 Creating the PVs
2.2.1 Prepare the redis cluster data directories on NFS
root@harbor:~# mkdir -pv /data/k8sdata/magedu/redis{0,1,2,3,4,5}
mkdir: created directory "/data/k8sdata/magedu/redis0"
mkdir: created directory "/data/k8sdata/magedu/redis1"
mkdir: created directory "/data/k8sdata/magedu/redis2"
mkdir: created directory "/data/k8sdata/magedu/redis3"
mkdir: created directory "/data/k8sdata/magedu/redis4"
mkdir: created directory "/data/k8sdata/magedu/redis5"
root@harbor:~# tail -6 /etc/exports
/data/k8sdata/magedu/redis0 *(rw,no_root_squash)
/data/k8sdata/magedu/redis1 *(rw,no_root_squash)
/data/k8sdata/magedu/redis2 *(rw,no_root_squash)
/data/k8sdata/magedu/redis3 *(rw,no_root_squash)
/data/k8sdata/magedu/redis4 *(rw,no_root_squash)
/data/k8sdata/magedu/redis5 *(rw,no_root_squash)
root@harbor:~# exportfs -av
exportfs: /etc/exports [1]: Neither "subtree_check" or "no_subtree_check" specified for export "*:/data/k8sdata/kuboard".
  Assuming default behaviour ("no_subtree_check").
  NOTE: this default has changed since nfs-utils version 1.0.x
[... the same warning is repeated for each of the remaining exports, including the new redis0..redis5 entries ...]
exporting *:/data/k8sdata/magedu/redis5
exporting *:/data/k8sdata/magedu/redis4
exporting *:/data/k8sdata/magedu/redis3
exporting *:/data/k8sdata/magedu/redis2
exporting *:/data/k8sdata/magedu/redis1
exporting *:/data/k8sdata/magedu/redis0
exporting *:/data/k8sdata/magedu/redis-datadir-1
exporting *:/data/k8sdata/magedu/zookeeper-datadir-3
exporting *:/data/k8sdata/magedu/zookeeper-datadir-2
exporting *:/data/k8sdata/magedu/zookeeper-datadir-1
exporting *:/data/k8sdata/magedu/static
exporting *:/data/k8sdata/magedu/images
exporting *:/data/k8sdata/mysite
exporting *:/data/k8sdata/myserver
exporting *:/pod-vol
exporting *:/data/volumes
exporting *:/data/k8sdata/kuboard
root@harbor:~#
2.2.2 Create the PVs
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat pv/redis-cluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv0
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv4
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-cluster-pv5
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.0.42
    path: /data/k8sdata/magedu/redis5
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3 Deploying the redis cluster
2.3.1 The redis.conf file used to create the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.conf
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.2 Create the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl create cm redis-conf --from-file=./redis.conf -n magedu
configmap/redis-conf created
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl get cm -n magedu
NAME               DATA   AGE
kube-root-ca.crt   1      35h
redis-conf         1      6s
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.3 Verify the configmap
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl describe cm redis-conf -n magedu
Name:         redis-conf
Namespace:    magedu
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

BinaryData
====

Events:  <none>
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#
2.3.4 Deploy the redis cluster
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: magedu
  labels:
    app: redis
spec:
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis
    port: 6379
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: redis-access
  namespace: magedu
  labels:
    app: redis
spec:
  type: NodePort
  selector:
    app: redis
    appCluster: redis-cluster
  ports:
  - name: redis-access
    protocol: TCP
    port: 6379
    targetPort: 6379
    nodePort: 36379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: magedu
spec:
  serviceName: redis
  replicas: 6
  selector:
    matchLabels:
      app: redis
      appCluster: redis-cluster
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: redis:4.0.14
        command:
          - "redis-server"
        args:
          - "/etc/redis/redis.conf"
          - "--protected-mode"
          - "no"
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
        ports:
        - containerPort: 6379
          name: redis
          protocol: TCP
        - containerPort: 16379
          name: cluster
          protocol: TCP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
        - name: data
          mountPath: /var/lib/redis
      volumes:
      - name: conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: magedu
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

The manifest above uses a StatefulSet controller to create 6 pod replicas. Each replica uses the config file from the configmap as its redis configuration, and the volumeClaimTemplates section makes each pod bind a PV automatically and create its PVC in the magedu namespace: as long as the k8s cluster has free PVs, each pod creates a PVC from the template in that namespace. We could instead have a storage class create the PVCs automatically, or create the PVCs ahead of time; with a StatefulSet the usual approach is the PVC-template one, provided enough PVs are available.

Apply the manifest to deploy the redis cluster.

With the StatefulSet controller, pods are named <sts name>-<ordinal>; PVCs created from the template are named <template name>-<pod name>, i.e. <template name>-<sts name>-<ordinal>.
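The naming convention above can be sketched as a tiny helper (an illustration only; the names are derived from the convention, not queried from k8s):

```python
# Sketch of the StatefulSet naming convention:
# pods are <sts>-<ordinal>, PVCs are <template>-<pod>.

def pod_name(sts: str, ordinal: int) -> str:
    return f"{sts}-{ordinal}"

def pvc_name(template: str, sts: str, ordinal: int) -> str:
    # <volumeClaimTemplate name>-<pod name>
    return f"{template}-{pod_name(sts, ordinal)}"

if __name__ == "__main__":
    for i in range(6):
        print(pod_name("redis", i), "->", pvc_name("data", "redis", i))
```

For our StatefulSet `redis` with template `data`, this yields pods redis-0..redis-5 bound to PVCs data-redis-0..data-redis-5.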

2.4 Initializing the redis cluster
2.4.1 Create a temporary container on k8s and install the redis cluster init tool
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl run -it ubuntu1804 --image=ubuntu:18.04 --restart=Never -n magedu bash
If you don't see a command prompt, try pressing enter.
root@ubuntu1804:/#
root@ubuntu1804:/# apt update
# install the required tools
root@ubuntu1804:/# apt install python2.7 python-pip redis-tools dnsutils iputils-ping net-tools
# upgrade pip
root@ubuntu1804:/# pip install --upgrade pip
# install redis-trib, the redis cluster init tool, with pip
root@ubuntu1804:/# pip install redis-trib==0.5.1
root@ubuntu1804:/#
2.4.2 Initialize the redis cluster
root@ubuntu1804:/# redis-trib.py create \
  `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  `dig +short redis-2.redis.magedu.svc.cluster.local`:6379

Because pods created by a StatefulSet have fixed, stable names, we can initialize the redis cluster using the pod names directly, which resolve to the pods' IP addresses. On traditional virtual or physical machines we could use IP addresses directly, because there the addresses are fixed; on k8s, pod IP addresses are not.
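The stable DNS names used in the redis-trib commands follow the headless-service pattern `<pod>.<service>.<namespace>.svc.<cluster-domain>`; a quick sketch (the cluster domain `cluster.local` is the common default and is an assumption here):

```python
# Sketch of the per-pod DNS names a StatefulSet gets through its
# headless service: <pod>.<service>.<namespace>.svc.<cluster-domain>.

def pod_fqdn(pod: str, service: str, namespace: str,
             cluster_domain: str = "cluster.local") -> str:
    return f"{pod}.{service}.{namespace}.svc.{cluster_domain}"

if __name__ == "__main__":
    # the six redis pods from the StatefulSet above
    for i in range(6):
        print(pod_fqdn(f"redis-{i}", "redis", "magedu"))
```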

2.4.3 Assign slaves to the masters
Make redis-3 the slave of redis-0:
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-0.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-3.redis.magedu.svc.cluster.local`:6379
Make redis-4 the slave of redis-1:
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-1.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-4.redis.magedu.svc.cluster.local`:6379
Make redis-5 the slave of redis-2:
root@ubuntu1804:/# redis-trib.py replicate \
  --master-addr `dig +short redis-2.redis.magedu.svc.cluster.local`:6379 \
  --slave-addr `dig +short redis-5.redis.magedu.svc.cluster.local`:6379
2.5 Verifying the redis cluster state
2.5.1 Enter any redis cluster pod and check the cluster info
2.5.2 Check the cluster nodes

The cluster node list records the master node ids and the slave ids; each slave entry is followed by the id of its master, indicating which master's data that slave replicates.

2.5.3 Check the current node's info
127.0.0.1:6379> info
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:165c932261a105d7
redis_mode:cluster
os:Linux 5.15.0-73-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
process_id:1
run_id:aa8ef00d843b4f622374dbb643cf27cdbd4d5ba3
tcp_port:6379
uptime_in_seconds:4303
uptime_in_days:0
hz:10
lru_clock:8272053
executable:/data/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2642336
used_memory_human:2.52M
used_memory_rss:5353472
used_memory_rss_human:5.11M
used_memory_peak:2682248
used_memory_peak_human:2.56M
used_memory_peak_perc:98.51%
used_memory_overhead:2559936
used_memory_startup:1444856
used_memory_dataset:82400
used_memory_dataset_perc:6.88%
total_system_memory:16740012032
total_system_memory_human:15.59G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:2.03
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1685992849
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:245760
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:7
total_commands_processed:17223
instantaneous_ops_per_sec:1
total_net_input_bytes:1530962
total_net_output_bytes:108793
instantaneous_input_kbps:0.04
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:853
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:1
slave0:ip=10.200.155.175,port=6379,state=online,offset=1120,lag=1
master_replid:60381a28fee40b44c409e53eeef49215a9d3b0ff
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1120
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1120

# CPU
used_cpu_sys:12.50
used_cpu_user:7.51
used_cpu_sys_children:0.01
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1

# Keyspace
127.0.0.1:6379>
2.5.4 Verify that reads and writes against the redis cluster work
2.5.4.1 Connect to the redis cluster manually and read/write data

When connecting to a cluster master manually, there is one catch: a key we write may hash (CRC16 mod 16384) to a slot that lives on a different node, in which case redis replies with a redirection telling us where that key should be written. As the screenshots above show, the redis cluster is reading and writing data normally.
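The redirection mentioned above is a `MOVED` error of the form `MOVED <slot> <host>:<port>`; a cluster-aware client parses it and retries the command against the indicated node. A minimal parsing sketch (the reply string below is a hypothetical example):

```python
# Minimal sketch of parsing a Redis Cluster MOVED redirection,
# the mechanism described above. Reply text is a made-up example.

def parse_moved(err: str):
    # "MOVED <slot> <host>:<port>" -> (slot, host, port)
    kind, slot, addr = err.split()
    if kind != "MOVED":
        raise ValueError("not a MOVED redirection: " + err)
    host, port = addr.rsplit(":", 1)
    return int(slot), host, int(port)

if __name__ == "__main__":
    print(parse_moved("MOVED 3999 10.200.1.5:6379"))
```

Libraries such as redis-py-cluster (used in the next section) handle this redirection transparently.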

2.5.4.2 Read/write data against the redis cluster with a python script
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# cat redis-client-test.py
#!/usr/bin/env python
#coding:utf-8
#Author:Zhang ShiJie
#python 2.7/3.8
#pip install redis-py-cluster

import sys,time
from rediscluster import RedisCluster

def init_redis():
    startup_nodes = [
        {"host": "192.168.0.34", "port": 36379},
        {"host": "192.168.0.35", "port": 36379},
        {"host": "192.168.0.36", "port": 36379},
        {"host": "192.168.0.34", "port": 36379},
        {"host": "192.168.0.35", "port": 36379},
        {"host": "192.168.0.36", "port": 36379},
    ]
    try:
        conn = RedisCluster(startup_nodes=startup_nodes,
                            # add the password here if the cluster requires one
                            decode_responses=True, password="")
        print("connected successfully", conn)
        #conn.set("key-cluster","value-cluster")
        for i in range(100):
            conn.set("key%s" % i, "value%s" % i)
            time.sleep(0.1)
            data = conn.get("key%s" % i)
            print(data)
        #return conn
    except Exception as e:
        print("connect error ", str(e))
        sys.exit(1)

init_redis()
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

Run the script to write data into the redis cluster:

root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# python redis-client-test.py
Traceback (most recent call last):
  File "/root/k8s-data/yaml/magedu/redis-cluster/redis-client-test.py", line 8, in <module>
    from rediscluster import RedisCluster
ModuleNotFoundError: No module named 'rediscluster'
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

這里提示沒有找到rediscluster模塊,解決辦法就是通過pip安裝redis-py-cluster模塊即可;

Install the redis-py-cluster module.
Run the script again to read and write data against the redis cluster.
Connect to the redis pods and verify that the data was actually written.

The screenshots show that each of the three redis cluster master pods holds part of the keys, not all of them, which confirms that the python script wrote the data across the redis cluster.

Verify whether data can be read on the slave nodes.

The screenshot shows that data cannot be read on a slave node (by default a cluster replica redirects reads to its master unless the client first issues the READONLY command).

Read the data on the slave's corresponding master node.

This verifies that in a redis cluster only the masters serve reads and writes by default; a slave merely replicates its master's data and does not serve reads or writes unless the client explicitly enables it.

2.6 Verifying redis cluster high availability
2.6.1 Push the redis:4.0.14 image from a k8s node to the local harbor
Retag the image:
root@k8s-node01:~# nerdctl tag redis:4.0.14 harbor.ik8s.cc/redis-cluster/redis:4.0.14
Push the redis image to the local harbor:
root@k8s-node01:~# nerdctl push harbor.ik8s.cc/redis-cluster/redis:4.0.14
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.list.v2+json, sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625)
WARN[0000] skipping verifying HTTPS certs for "harbor.ik8s.cc"
index-sha256:1ae9e0f790001af4b9f83a2b3d79c593c6f3e9a881b754a99527536259fb6625:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:5bd4fe08813b057df2ae55003a75c39d80a4aea9f1a0fbc0fbd7024edf555786: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:191c4017dcdd3370f871a4c6e7e1d55c7d9abed2bebf3005fb3e7d12161262b8:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.4 s                                                                    total:  8.5 Ki (6.1 KiB/s)
root@k8s-node01:~#
2.6.2 Change the image and imagePullPolicy in the redis cluster manifest

Switching the manifest to the local harbor image and adjusting the pull policy makes it convenient to test the redis cluster's high availability.

2.6.3 Re-apply the redis cluster manifest
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster# kubectl apply -f redis.yaml
service/redis unchanged
service/redis-access unchanged
statefulset.apps/redis configured
root@k8s-master01:~/k8s-data/yaml/magedu/redis-cluster#

這里相當(dāng)于給redis cluster更新,他們之間的集群關(guān)系還存在,因?yàn)榧宏P(guān)系配置都保存在遠(yuǎn)端存儲(chǔ)之上;

Verify that all pods are Running, then verify the cluster state and the master/slave relationships.

不同于之前,這里rdis-0變成了slave ,redis-3變成了master;從上面的截圖我們也發(fā)現(xiàn),在k8s上部署redis cluster pod重建以后(IP地址發(fā)生變化),對(duì)應(yīng)集群關(guān)系不會(huì)發(fā)生變化;對(duì)應(yīng)master和salve一對(duì)關(guān)系始終只是再對(duì)應(yīng)的master和salve兩個(gè)pod中切換,這其實(shí)就是高可用;

2.6.4 Stop the local harbor, delete a redis master pod, and check whether its slave is promoted to master
Stop the harbor service:
root@harbor:~# systemctl stop harbor
Delete redis-3 and check whether redis-0 is promoted to master.

After we delete redis-3 (the equivalent of a master outage), its slave is promoted to master.

2.6.5 Restore the harbor service and check whether redis-3, after recovering, is still the slave of redis-0
Restore the harbor service and verify that the redis-3 pod recovers.

再次刪除redis-3以后,對(duì)應(yīng)pod正常被重建,并處于running狀態(tài);

驗(yàn)證redis-3的主從關(guān)系、

可以看到redis-3恢復(fù)以后,對(duì)應(yīng)自動(dòng)加入集群成為redis-0的slave;

標(biāo)簽:
