Configure and deploy a Valkey cluster on Azure Kubernetes Service (AKS)
In this article, we configure and deploy a Valkey cluster on Azure Kubernetes Service (AKS).
Note
This article contains references to the terms master and slave, which Microsoft no longer uses. When these terms are removed from the Valkey software, we'll remove them from this article.
Create a namespace
Create a namespace for the Valkey cluster using the kubectl create namespace command.

kubectl create namespace ${SERVICE_ACCOUNT_NAMESPACE} --dry-run=client --output yaml | kubectl apply -f -

Example output:
namespace/valkey created
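Optionally, you can confirm that the namespace exists before continuing. This quick check is only a suggestion and assumes ${SERVICE_ACCOUNT_NAMESPACE} is set to valkey, as in the rest of this article.

kubectl get namespace valkey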
Create secrets
Generate a random password for the Valkey cluster using openssl and store it in Azure Key Vault using the az keyvault secret set command. Set the policy to allow the user-assigned identity to get the secret using the az keyvault set-policy command.

SECRET=$(openssl rand -base64 32)
echo requirepass $SECRET > /tmp/valkey-password-file.conf
echo primaryauth $SECRET >> /tmp/valkey-password-file.conf
az keyvault secret set --vault-name $MY_KEYVAULT_NAME --name valkey-password-file --file /tmp/valkey-password-file.conf --output table
rm /tmp/valkey-password-file.conf
az keyvault set-policy --name $MY_KEYVAULT_NAME --object-id $userAssignedObjectID --secret-permissions get --output table
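If you want to confirm that the secret was stored before moving on, a quick check such as the following should work. It only relies on the same $MY_KEYVAULT_NAME variable used above.

az keyvault secret show --vault-name $MY_KEYVAULT_NAME --name valkey-password-file --query name --output tsv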
Create a SecretProviderClass resource to access the Valkey password stored in your key vault using the kubectl apply command.

kubectl apply -f - <<EOF
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: valkey-password
  namespace: valkey
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "${userAssignedIdentityID}"
    keyvaultName: ${MY_KEYVAULT_NAME}   # the name of the AKV instance
    objects: |
      array:
        - |
          objectName: valkey-password-file
          objectAlias: valkey-password-file.conf
          objectType: secret
    tenantId: "${TENANT_ID}"            # the tenant ID of the AKV instance
EOF
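As an optional sanity check, you can list the SecretProviderClass object you just created. This assumes the Azure Key Vault provider for the Secrets Store CSI Driver is already enabled on the cluster, which is what makes this custom resource type available.

kubectl get secretproviderclass valkey-password -n valkey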
Deploy the Valkey cluster
Create a ConfigMap mounted as a volume in the Valkey StatefulSet to configure the Valkey cluster using the kubectl apply command.

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: valkey-cluster
  namespace: valkey
data:
  valkey.conf: |+
    cluster-enabled yes
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    appendonly yes
    protected-mode yes
    dir /data
    port 6379
    include /etc/valkey-password/valkey-password-file.conf
EOF

Example output:
configmap/valkey-cluster created
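If you want to double-check the rendered configuration, you can print the ConfigMap contents. This is an optional inspection step.

kubectl get configmap valkey-cluster -n valkey -o yaml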
Create a StatefulSet resource with a spec.affinity goal of keeping all the primaries in zone 1 and zone 2, preferably on different nodes, using the kubectl apply command.

kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: valkey-masters
  namespace: valkey
spec:
  serviceName: "valkey-masters"
  replicas: 3
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
        appCluster: valkey-masters
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - valkey
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - ${MY_LOCATION}-1
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - valkey
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - ${MY_LOCATION}-2
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 90
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - valkey
              topologyKey: topology.kubernetes.io/zone
          - weight: 90
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - valkey
              topologyKey: kubernetes.io/hostname
      containers:
      - name: valkey
        image: "${MY_ACR_REGISTRY}.azurecr.io/valkey:latest"
        env:
        - name: VALKEY_PASSWORD_FILE
          value: "/etc/valkey-password/valkey-password-file.conf"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command:
        - "valkey-server"
        args:
        - "/conf/valkey.conf"
        - "--cluster-announce-ip"
        - "\$(MY_POD_IP)"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: valkey
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
        - name: valkey-password
          mountPath: /etc/valkey-password
          readOnly: true
      volumes:
      - name: valkey-password
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: valkey-password
      - name: conf
        configMap:
          name: valkey-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-csi-premium
      resources:
        requests:
          storage: 20Gi
EOF

Example output:
statefulset.apps/valkey-masters created
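You can optionally wait for the primary pods to become ready before creating the replicas, for example with kubectl rollout status (shown here as a suggestion rather than a required step).

kubectl rollout status statefulset/valkey-masters -n valkey --timeout=5m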
Create a second StatefulSet resource for the Valkey secondaries with a spec.affinity goal of keeping all the replicas in zone 3, preferably on different nodes, using the kubectl apply command.

kubectl apply -f - <<EOF
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: valkey-replicas
  namespace: valkey
spec:
  serviceName: "valkey-replicas"
  replicas: 3
  selector:
    matchLabels:
      app: valkey
  template:
    metadata:
      labels:
        app: valkey
        appCluster: valkey-replicas
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - valkey
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - ${MY_LOCATION}-3
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 90
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - valkey
              topologyKey: kubernetes.io/hostname
      containers:
      - name: valkey
        image: "${MY_ACR_REGISTRY}.azurecr.io/valkey:latest"
        env:
        - name: VALKEY_PASSWORD_FILE
          value: "/etc/valkey-password/valkey-password-file.conf"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command:
        - "valkey-server"
        args:
        - "/conf/valkey.conf"
        - "--cluster-announce-ip"
        - "\$(MY_POD_IP)"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: valkey
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
        - name: valkey-password
          mountPath: /etc/valkey-password
          readOnly: true
      volumes:
      - name: valkey-password
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: valkey-password
      - name: conf
        configMap:
          name: valkey-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-csi-premium
      resources:
        requests:
          storage: 20Gi
EOF

Example output:
statefulset.apps/valkey-replicas created
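As with the primaries, an optional way to wait for the replica pods to become ready is:

kubectl rollout status statefulset/valkey-replicas -n valkey --timeout=5m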
Verify that the master-N and replica-N pods are running on different nodes and zones using the kubectl get pods and kubectl get nodes commands.

kubectl get pods -n valkey -o wide
kubectl get node -o custom-columns=Name:.metadata.name,Zone:".metadata.labels.topology\.kubernetes\.io/zone"

Example output:
NAME                READY   STATUS    RESTARTS   AGE     IP             NODE                             NOMINATED NODE   READINESS GATES
valkey-masters-0    1/1     Running   0          2m55s   10.224.0.4     aks-valkey-18693609-vmss000004   <none>           <none>
valkey-masters-1    1/1     Running   0          2m31s   10.224.0.137   aks-valkey-18693609-vmss000000   <none>           <none>
valkey-masters-2    1/1     Running   0          2m7s    10.224.0.222   aks-valkey-18693609-vmss000001   <none>           <none>
valkey-replicas-0   1/1     Running   0          88s     10.224.0.237   aks-valkey-18693609-vmss000005   <none>           <none>
valkey-replicas-1   1/1     Running   0          70s     10.224.0.18    aks-valkey-18693609-vmss000002   <none>           <none>
valkey-replicas-2   1/1     Running   0          48s     10.224.0.242   aks-valkey-18693609-vmss000005   <none>           <none>

Name                                Zone
aks-nodepool1-17621399-vmss000000   centralus-1
aks-nodepool1-17621399-vmss000001   centralus-2
aks-nodepool1-17621399-vmss000003   centralus-3
aks-valkey-18693609-vmss000000      centralus-1
aks-valkey-18693609-vmss000001      centralus-2
aks-valkey-18693609-vmss000002      centralus-3
aks-valkey-18693609-vmss000003      centralus-1
aks-valkey-18693609-vmss000004      centralus-2
aks-valkey-18693609-vmss000005      centralus-3
Wait for all pods to be running before proceeding to the next step.
Create three headless Service resources (the first for the entire cluster, the second for the primaries, and the third for the secondaries) to use to get the IP addresses of the Valkey pods using the kubectl apply command.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: valkey-cluster
  namespace: valkey
spec:
  clusterIP: None
  ports:
  - name: valkey-port
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: valkey
  sessionAffinity: None
  type: ClusterIP
EOF

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: valkey-masters
  namespace: valkey
spec:
  clusterIP: None
  ports:
  - name: valkey-port
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: valkey
    appCluster: valkey-masters
  sessionAffinity: None
  type: ClusterIP
EOF

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: valkey-replicas
  namespace: valkey
spec:
  clusterIP: None
  ports:
  - name: valkey-port
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: valkey
    appCluster: valkey-replicas
  sessionAffinity: None
  type: ClusterIP
EOF

Example output:
service/valkey-cluster created
service/valkey-masters created
service/valkey-replicas created
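If you want to confirm that the headless services resolve to the pod IP addresses, you can run a short-lived DNS lookup pod. This sketch assumes the busybox:1.36 image is reachable from your cluster; any image that includes nslookup works.

kubectl run -n valkey dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup valkey-masters.valkey.svc.cluster.local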
Run the Valkey cluster
Add the Valkey primaries in zone 1 and zone 2 to the cluster using the kubectl exec command.

kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --cluster create --cluster-yes --cluster-replicas 0 \
  valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379 \
  valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379 \
  valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379 \
  --pass ${SECRET}

Example output:
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
   slots:[0-5460] (5461 slots) master
M: fd1fb98db83976478e05edd3d2a02f9a13badd80 valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
   slots:[5461-10922] (5462 slots) master
M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
   slots:[10923-16383] (5461 slots) master
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379)
M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
   slots:[0-5460] (5461 slots) master
M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e 10.224.0.176:6379
   slots:[10923-16383] (5461 slots) master
M: fd1fb98db83976478e05edd3d2a02f9a13badd80 10.224.0.247:6379
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
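To confirm that the three primaries formed a healthy cluster before adding replicas, you can optionally query the cluster state; cluster_state:ok and cluster_known_nodes:3 are the values to look for at this point.

kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --pass ${SECRET} cluster info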
Add the Valkey replicas in zone 3 to the cluster using the kubectl exec command.

kubectl exec -ti -n valkey valkey-masters-0 -- valkey-cli --cluster add-node \
  valkey-replicas-0.valkey-replicas.valkey.svc.cluster.local:6379 \
  valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379 --cluster-slave \
  --pass ${SECRET}

kubectl exec -ti -n valkey valkey-masters-0 -- valkey-cli --cluster add-node \
  valkey-replicas-1.valkey-replicas.valkey.svc.cluster.local:6379 \
  valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379 --cluster-slave \
  --pass ${SECRET}

kubectl exec -ti -n valkey valkey-masters-0 -- valkey-cli --cluster add-node \
  valkey-replicas-2.valkey-replicas.valkey.svc.cluster.local:6379 \
  valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379 --cluster-slave \
  --pass ${SECRET}

Example output:
>>> Adding node valkey-replicas-0.valkey-replicas.valkey.svc.cluster.local:6379 to cluster valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
>>> Performing Cluster Check (using node valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379)
M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
   slots:[0-5460] (5461 slots) master
M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e 10.224.0.176:6379
   slots:[10923-16383] (5461 slots) master
M: fd1fb98db83976478e05edd3d2a02f9a13badd80 10.224.0.247:6379
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
>>> Send CLUSTER MEET to node valkey-replicas-0.valkey-replicas.valkey.svc.cluster.local:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379.
[OK] New node added correctly.

>>> Adding node valkey-replicas-1.valkey-replicas.valkey.svc.cluster.local:6379 to cluster valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
>>> Performing Cluster Check (using node valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379)
M: fd1fb98db83976478e05edd3d2a02f9a13badd80 valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
   slots:[5461-10922] (5462 slots) master
S: 0ebceb60cbcc31da9040159440a1f4856b992907 10.224.0.224:6379
   slots: (0 slots) slave
   replicates ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35
M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e 10.224.0.176:6379
   slots:[10923-16383] (5461 slots) master
M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 10.224.0.14:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
>>> Send CLUSTER MEET to node valkey-replicas-1.valkey-replicas.valkey.svc.cluster.local:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379.
[OK] New node added correctly.

>>> Adding node valkey-replicas-2.valkey-replicas.valkey.svc.cluster.local:6379 to cluster valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
>>> Performing Cluster Check (using node valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379)
M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
   slots:[10923-16383] (5461 slots) master
S: 0ebceb60cbcc31da9040159440a1f4856b992907 10.224.0.224:6379
   slots: (0 slots) slave
   replicates ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35
S: fa44edff683e2e01ee5c87233f9f3bc35c205dce 10.224.0.103:6379
   slots: (0 slots) slave
   replicates fd1fb98db83976478e05edd3d2a02f9a13badd80
M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 10.224.0.14:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: fd1fb98db83976478e05edd3d2a02f9a13badd80 10.224.0.247:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
>>> Send CLUSTER MEET to node valkey-replicas-2.valkey-replicas.valkey.svc.cluster.local:6379 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379.
[OK] New node added correctly.
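An optional end-to-end consistency check of the resulting six-node cluster can be run with the built-in cluster checker, mirroring the --pass usage of the commands above:

kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --cluster check valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379 --pass ${SECRET}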
Verify the roles of the pods using the following commands:

for x in $(seq 0 2); do echo "valkey-masters-$x"; kubectl exec -n valkey valkey-masters-$x -- valkey-cli --pass ${SECRET} role; echo; done
for x in $(seq 0 2); do echo "valkey-replicas-$x"; kubectl exec -n valkey valkey-replicas-$x -- valkey-cli --pass ${SECRET} role; echo; done

Example output:
valkey-masters-0
master
84
10.224.0.224
6379
84

valkey-masters-1
master
84
10.224.0.103
6379
84

valkey-masters-2
master
70
10.224.0.200
6379
70

valkey-replicas-0
slave
10.224.0.14
6379
connected
98

valkey-replicas-1
slave
10.224.0.247
6379
connected
98

valkey-replicas-2
slave
10.224.0.176
6379
connected
84
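Finally, a quick way to verify that the cluster serves traffic is to write and read a key in cluster mode (the -c flag lets the client follow redirections between shards). The test-key name here is arbitrary.

kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli -c --pass ${SECRET} set test-key test-value
kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli -c --pass ${SECRET} get test-key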
Next steps
To learn more about deploying open-source software on Azure Kubernetes Service (AKS), see the following articles:
- Deploy a highly available PostgreSQL database on AKS
- Build and deploy data and machine learning pipelines with Flyte on AKS
Contributors
Microsoft maintains this article. The following contributors originally wrote it:
- Nelly Kiboi | Service Engineer
- Saverio Proto | Principal Customer Experience Engineer
- Don High | Principal Customer Engineer
- LaBrina Loving | Principal Service Engineer
- Ken Kilty | Principal TPM
- Russell de Pina | Principal TPM
- Colin Mixon | Product Manager
- Ketan Chawda | Senior Customer Engineer
- Naveed Kharadi | Customer Experience Engineer
- Erin Schaffer | Content Developer 2