k8s: Migrating a PVC with pv-migrate

TL;DR

Install:

```shell
wget https://github.com/utkuozdemir/pv-migrate/releases/download/v1.7.1/pv-migrate_v1.7.1_linux_x86_64.tar.gz
tar -xvf pv-migrate_v1.7.1_linux_x86_64.tar.gz
mv pv-migrate /usr/local/bin
```

Usage:

```shell
pv-migrate migrate \
  --source-namespace default \
  --dest-namespace default \
  localpv-vol csi-lvmpv
```

Sample output:

```
🚀 Starting migration
💭 Will attempt 3 strategies: mnt2, svc, lbsvc
🚁 Attempting strategy: mnt2
📂 Copying data... 100% |██████████████████████████████| (3.4 GB/s)
🧹 Cleaning up
✨ Cleanup done
✅ Migration succeeded
```

References

- Migration from Legacy Storage to Latest Storage Solution
- https://github.com/utkuozdemir/pv-migrate
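Because pv-migrate is a plain CLI, the invocation is easy to script when many PVCs need migrating. A minimal sketch (the helper name and PVC names are examples, not from the post) that only composes the command so it can be reviewed before running:

```shell
#!/bin/sh
# Hypothetical helper: compose a pv-migrate invocation for one PVC pair.
# It only prints the command; nothing is executed against the cluster.
build_migrate_cmd() {
  printf 'pv-migrate migrate --source-namespace %s --dest-namespace %s %s %s\n' \
    "$1" "$2" "$3" "$4"
}

build_migrate_cmd default default localpv-vol csi-lvmpv
```

Piping the printed commands through `sh` (after inspection) gives a simple batch migration loop.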

March 14, 2025 | 1 min | 145 words | Tianlun Song

k8s: Using OpenEBS Storage

TL;DR

```shell
helm repo add openebs https://openebs.github.io/openebs
helm repo update

# Install with default values
helm install openebs --namespace openebs openebs/openebs --create-namespace

# Disable replicated storage, LVM local storage, and ZFS local storage,
# keeping only local hostpath storage
helm install openebs --namespace openebs openebs/openebs \
  --set engines.replicated.mayastor.enabled=false \
  --set engines.local.lvm.enabled=false \
  --set engines.local.zfs.enabled=false \
  --create-namespace
```

Sample output:

```
E0311 06:22:00.794754  111105 round_tripper.go:63] CancelRequest not implemented by *kube.RetryingRoundTripper
NAME: openebs
LAST DEPLOYED: Tue Mar 11 06:21:28 2025
NAMESPACE: openebs
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Successfully installed OpenEBS.

Check the status by running: kubectl get pods -n openebs

The default values will install both Local PV and Replicated PV. However,
the Replicated PV will require additional configuration to be fuctional.
The Local PV offers non-replicated local storage using 3 different storage
backends i.e Hostpath, LVM and ZFS, while the Replicated PV provides one
replicated highly-available storage backend i.e Mayastor.

For more information,
- view the online documentation at https://openebs.io/docs
- connect with an active community on our Kubernetes slack channel.
    - Sign up to Kubernetes slack: https://slack.k8s.io
    - #openebs channel: https://kubernetes.slack.com/messages/openebs
```

Read the official documentation thoroughly before using this in production. ...
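With only the Local PV Hostpath engine left enabled, a workload's PVC just needs to reference the `openebs-hostpath` StorageClass that a default install creates. A minimal sketch (the PVC name and size are my examples) that emits such a manifest for piping into `kubectl apply -f -`:

```shell
#!/bin/sh
# Emit a minimal PVC bound to OpenEBS Local PV Hostpath storage.
# Usage: emit_pvc <name> <size>
emit_pvc() {
  cat <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $1
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: $2
EOF
}

emit_pvc demo-pvc 5Gi
```

For example: `emit_pvc demo-pvc 5Gi | kubectl apply -f -`.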

March 14, 2025 | 2 min | 739 words | Tianlun Song

k3s: Deploying the kube-prometheus-stack Monitoring Stack

TL;DR

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm show values prometheus-community/kube-prometheus-stack
helm show values prometheus-community/kube-prometheus-stack > values.yaml
# Edit values.yaml
helm install prometheus-community prometheus-community/kube-prometheus-stack \
  --namespace monitoring -f values.yaml --create-namespace

# After updating values.yaml
helm upgrade --install prometheus-community prometheus-community/kube-prometheus-stack \
  --namespace monitoring -f values.yaml
```

References

- kube-prometheus-stack
- A hands-on guide: monitoring a K3s cluster with kube-prometheus-stack
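Instead of editing the full `helm show values` dump, you can keep only your overrides in values.yaml; Helm merges them over the chart defaults. A small sketch (the keys shown exist in the chart's default values; the concrete password and retention values are examples, not from the post):

```shell
#!/bin/sh
# Write a minimal override file instead of the full default values dump.
cat > values.yaml <<'EOF'
grafana:
  adminPassword: change-me
prometheus:
  prometheusSpec:
    retention: 15d
EOF
```

Keeping the file small makes upgrade diffs much easier to review.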

March 14, 2025 | 1 min | 64 words | Tianlun Song

k8s/k3s: Marking a Node Temporarily Unschedulable and Draining It

TL;DR

```shell
# Mark the node unschedulable
kubectl cordon NODE
# Gracefully evict running pods onto other nodes
kubectl drain NODE
# Make the node schedulable again
kubectl uncordon NODE
```

References

- cordon, uncordon and drain in K8s
- kubectl cordon
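For routine maintenance the three commands usually run as a sequence. A dry-run sketch that only prints the plan (the node name is an example; `--ignore-daemonsets` and `--delete-emptydir-data` are the flags drain commonly needs to proceed on real clusters):

```shell
#!/bin/sh
# Print the maintenance plan for a node instead of executing it.
node_maintenance_plan() {
  echo "kubectl cordon $1"
  echo "kubectl drain $1 --ignore-daemonsets --delete-emptydir-data"
  echo "kubectl uncordon $1"
}

node_maintenance_plan worker-1
```

Run the uncordon step only after the node's maintenance is actually finished.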

March 14, 2025 | 1 min | 73 words | Tianlun Song

Resolving a Stuck Helm Release After an Interrupted Operation

Two approaches.

Option 1: uninstall and reinstall.

```shell
helm uninstall <release name> -n <namespace>
```

Option 2: roll back.

This error can happen for a few reasons, but it most commonly occurs when there is an interruption during the upgrade/install process, as you already mentioned. ...
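Which option applies depends on the stuck status that `helm status` or `helm history` reports. A sketch of that decision (the helper name is mine; the `pending-*` values are Helm's actual release statuses):

```shell
#!/bin/sh
# Suggest a recovery command based on a stuck Helm release status.
suggest_fix() {
  case "$1" in
    pending-install)
      echo "helm uninstall <release name> -n <namespace>" ;;
    pending-upgrade|pending-rollback)
      echo "helm rollback <release name> <revision> -n <namespace>" ;;
    *)
      echo "no action needed" ;;
  esac
}

suggest_fix pending-upgrade
```

A `pending-install` release has no good revision to roll back to, which is why it usually ends in uninstall/reinstall.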

March 14, 2025 | 1 min | 421 words | Tianlun Song

Notes on a cert-manager CNAME Issue

While getting cert-manager to issue SSL certificates via DNS-01, calling dnspod through a webhook, I kept hitting this error:

```
I0306 03:48:38.870605  1 controller.go:144] "syncing item" logger="cert-manager.controller"
I0306 03:48:38.870714  1 dns.go:118] "checking DNS propagation" logger="cert-manager.controller.Check" resource_name="test1-tsh1-frytea-com-1-3300738485-2689263791" resource_namespace="default" resource_kind="Challenge" resource_version="v1" dnsName="test1.tsh1.frytea.com" type="DNS-01" domain="test1.tsh1.frytea.com" nameservers=["223.5.5.5:53","8.8.8.8:53"]
I0306 03:48:38.879628  1 wait.go:94] "Updating FQDN" logger="cert-manager.controller" resource_name="test1-tsh1-frytea-com-1-3300738485-2689263791" resource_namespace="default" resource_kind="Challenge" resource_version="v1" dnsName="test1.tsh1.frytea.com" type="DNS-01" fqdn="_acme-challenge.test1.tsh1.frytea.com." cname="tsh1.frytea.com."
I0306 03:48:38.897174  1 wait.go:145] "Looking up TXT records" logger="cert-manager.controller" resource_name="test1-tsh1-frytea-com-1-3300738485-2689263791" resource_namespace="default" resource_kind="Challenge" resource_version="v1" dnsName="test1.tsh1.frytea.com" type="DNS-01" fqdn="tsh1.frytea.com."
E0306 03:48:38.897227  1 sync.go:208] "propagation check failed" err="DNS record for \"test1.tsh1.frytea.com\" not yet propagated" logger="cert-manager.controller" resource_name="test1-tsh1-frytea-com-1-3300738485-2689263791" resource_namespace="default" resource_kind="Challenge" resource_version="v1" dnsName="test1.tsh1.frytea.com" type="DNS-01"
I0306 03:48:38.897688  1 controller.go:164] "finished processing work item" logger="cert-manager.controller"
```

I used the following resources: ...
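The "Updating FQDN" line is the interesting part: cert-manager follows the CNAME from `_acme-challenge.<domain>` to the CNAME target and then looks up TXT records there. When debugging by hand it helps to compute the same challenge FQDN; a sketch (the function name is mine, the domain is the one from the log):

```shell
#!/bin/sh
# Compute the DNS-01 challenge FQDN for a domain, as cert-manager does
# before any CNAME following.
acme_fqdn() {
  printf '_acme-challenge.%s.\n' "$1"
}

acme_fqdn test1.tsh1.frytea.com
# Then inspect propagation manually, e.g.:
#   dig +short TXT "$(acme_fqdn test1.tsh1.frytea.com)" @8.8.8.8
```

If I read the cert-manager ACME API correctly, whether the solver follows CNAMEs at all is governed by the DNS01 solver's `cnameStrategy` field (`None` vs `Follow`), which is worth checking when the TXT lookup lands on an unexpected zone as it does in the log above.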

March 6, 2025 | 1 min | 426 words | Tianlun Song

Running Containers in a Sandbox: Secure Containers with Kata Containers

1. System Environment

This article is based on Kubernetes 1.22.2 and Ubuntu 18.04.

| Server OS | Docker version | Kubernetes (k8s) version | Kata version | containerd version | CPU architecture |
| --- | --- | --- | --- | --- | --- |
| Ubuntu 18.04.5 LTS | Docker version 20.10.14 | v1.22.2 | 1.11.5 | 1.6.4 | x86_64 |

Cluster topology: k8scludes1 is the master node; k8scludes2 and k8scludes3 are worker nodes. ...

December 18, 2024 | 21 min | 10073 words | Tianlun Song

Quick Deployment of Kuboard v3

The single-node k8s quick-deployment that the Kuboard website provides doesn't seem to work well, so I translated the docker-based instructions directly into a manifest for quick deployment.

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kuboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuboard
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuboard
  template:
    metadata:
      labels:
        app: kuboard
    spec:
      containers:
        - name: kuboard
          image: eipwork/kuboard:v3
          ports:
            - containerPort: 80
              name: http
            - containerPort: 10081
              name: agent
          env:
            - name: KUBOARD_ENDPOINT
              value: "http://192.168.26.133:30080" # Replace with your actual internal IP
            - name: KUBOARD_AGENT_SERVER_TCP_PORT
              value: "10081"
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          hostPath:
            path: /etc/kuboard/data
            type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: kuboard-svc
  namespace: kuboard
spec:
  selector:
    app: kuboard
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: agent
      port: 10081
      targetPort: 10081
  type: NodePort
```

Kuboard official site: https://kuboard.cn/ ...

November 28, 2024 | 1 min | 181 words | Tianlun Song

Deploying a Production-Grade k8s Cluster with Kubespray

kubespray is a K8s lifecycle-management toolset built on Ansible and maintained by an official k8s SIG.

When you hit problems, read the documentation; understand what a command does before running it.

Deployment

Get the deployment program:

```shell
# Clone the official kubespray repository
git clone https://github.com/kubernetes-sigs/kubespray.git

# Enter the deployment program directory
cd kubespray

# Switch to v2.26.0; each release supports different k8s versions, pick as needed
# (checkout must run inside the repo; a --depth=1 clone would not have the tag)
git checkout v2.26.0

# Create and activate a Python virtual environment, then install dependencies
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt
```

Define the node inventory:

```shell
# Copy the sample cluster configuration
cp -rfp inventory/sample inventory/mycluster

# Define the node IP list; replace with your own nodes' IPs
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Adjust node roles in inventory/mycluster/hosts.yaml as needed
```

The commands above generate a node-role inventory at inventory/mycluster/hosts.yaml, which you can adjust as needed: for example, which nodes act as masters, which run etcd, and which serve as workers. ...
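Once the inventory is adjusted, the remaining step (truncated from this excerpt) is running the cluster playbook. Shown here as an echoed dry run so it can be reviewed first; the invocation follows kubespray's documented usage:

```shell
#!/bin/sh
# Print the playbook invocation that actually deploys the cluster.
deploy_cmd() {
  echo "ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml"
}

deploy_cmd
```

Run it from the kubespray checkout with the virtualenv activated; a full run typically takes tens of minutes.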

November 23, 2024 | 2 min | 572 words | Tianlun Song

skopeo, a Handy Tool for Image Operations: Usage Summary

skopeo is a command-line tool for operating on container images and image storage. In environments without dockerd, skopeo makes working with images very convenient.

Installation

Package managers:

```shell
# RHEL / CentOS Stream ≥ 8
sudo dnf install skopeo
# RHEL / CentOS ≤ 7.x
yum install skopeo
# openSUSE
sudo zypper install skopeo
# Alpine
sudo apk add skopeo
# macOS
brew install skopeo
# Arch Linux
sudo pacman -S skopeo
```

For other systems, see the installation docs. ...
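Once installed, a handful of subcommands cover most workflows. A cheat-sheet sketch (image and registry names are placeholders), printed rather than executed since each command would talk to a real registry:

```shell
#!/bin/sh
# Common skopeo invocations; printed here so nothing touches a registry.
cat <<'EOF'
skopeo inspect docker://docker.io/library/alpine:latest
skopeo list-tags docker://docker.io/library/alpine
skopeo copy docker://docker.io/library/alpine:latest docker-archive:alpine.tar
skopeo copy docker://docker.io/library/alpine:latest docker://registry.example.com/alpine:latest
skopeo delete docker://registry.example.com/alpine:latest
EOF
```

The registry-to-registry `skopeo copy` is the headline feature: it moves images between registries without pulling them into a local daemon first.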

November 23, 2024 | 4 min | 1572 words | Tianlun Song