Commit 901655d

feat: finish k8s known markdown

xliuqq committed Nov 23, 2024 (1 parent: b646bfc)
Showing 75 changed files with 6,360 additions and 64 deletions.
Binary file added docs/cloud/.pics/helm/helm_arch.jpg
260 changes: 260 additions & 0 deletions docs/cloud/helm.md
# Helm

> **The package manager for Kubernetes**.
>
> Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

## Concepts

<img src=".pics/helm/helm_arch.jpg" alt="img" style="zoom: 80%;" />

Helm simplifies the deployment and management of Kubernetes applications; it is to Kubernetes what yum is to CentOS.

- **Chart**: the package format managed by Helm; it bundles the resources to be deployed. A chart is to Helm what an rpm file is to yum on CentOS. Each chart contains two parts:
  - **the package description file `Chart.yaml`**
  - one or more **Kubernetes manifest templates** placed in the **`templates` directory**
- **Release**: a deployed instance of a chart. One chart can have multiple releases in a single Kubernetes cluster, i.e. the same chart can be installed multiple times.
- **Repository**: a repository of **charts**, used to publish and store charts.

Helm v3 ships only the client, ***helm***:

- helm is a command-line tool that runs locally, typically on a CI/CD server.

The ***templates*** directory:

- holds configuration templates for Kubernetes resources; Helm injects the parameter values from `values.yaml` into these templates to render standard YAML manifests.
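
As a minimal sketch of that rendering step (the `image` values and container name below are illustrative assumptions, not taken from this commit), a template references values by path and Helm substitutes them when rendering:

```yaml
# values.yaml (assumed example)
image:
  repository: nginx
  tag: "1.25"

# templates/deployment.yaml (fragment)
# Helm renders the {{ .Values.* }} expressions from values.yaml,
# producing image: "nginx:1.25" in the final manifest.
spec:
  containers:
    - name: web
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

`helm template <chart-dir>` renders these templates locally without installing anything, which is useful for checking the generated YAML.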



## Structure

### Chart template

See https://helm.sh/docs/chart_template_guide/

```shell
$ helm create nginx
$ tree .
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files
```
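
The generated `Chart.yaml` carries the package metadata. A minimal sketch is shown below; the field values mirror what `helm create` scaffolds by default, so treat the exact versions as assumptions:

```yaml
apiVersion: v2          # chart API version (v2 for Helm 3)
name: nginx             # chart name
description: A Helm chart for Kubernetes
type: application       # application chart (vs. library chart)
version: 0.1.0          # version of the chart itself
appVersion: "1.16.0"    # version of the packaged application (informational)
```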



### Resource installation order

Helm installs resources in the following order:

- Namespace
- NetworkPolicy
- ResourceQuota
- LimitRange
- PodSecurityPolicy
- PodDisruptionBudget
- ServiceAccount
- Secret
- SecretList
- ConfigMap
- StorageClass
- PersistentVolume
- PersistentVolumeClaim
- CustomResourceDefinition
- ClusterRole
- ClusterRoleList
- ClusterRoleBinding
- ClusterRoleBindingList
- Role
- RoleList
- RoleBinding
- RoleBindingList
- Service
- DaemonSet
- Pod
- ReplicationController
- ReplicaSet
- Deployment
- HorizontalPodAutoscaler
- StatefulSet
- Job
- CronJob
- Ingress
- APIService



## 原理

### Upgrade patches

**Helm 3 three-way merge strategy**

Helm 3 generates the patch by considering all three of the old manifest, the live state, and the new manifest.

- **If the manifest itself is unchanged between releases, Helm 3 overwrites the live object with the new manifest, so any out-of-band changes made to the live object are lost.**

The `lookup` function can read the current live configuration back and reuse it as the content, as shown below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: upgrade-test
  namespace: default
data:
  content: |
    # look up the ConfigMap's current content and reuse it
{{ (lookup "v1" "ConfigMap" "default" "upgrade-test").data.content | indent 4 }}

```



## Installation

Prebuilt packages for many platforms are available from the official [releases page](https://github.com/helm/helm/releases):

```shell
$ tar -zxvf helm-v3.2.1-linux-amd64.tar.gz
$ cp linux-amd64/helm /usr/local/bin/
$ helm version
# version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
```



## Configuration

Configure chart repositories (mirrors reachable from China):

- Microsoft mirror (http://mirror.azure.cn/kubernetes/charts/)
- Aliyun mirror (https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts)
- official hub (https://hub.kubeapps.com/charts/incubator)

```shell
$ helm repo add stable http://mirror.azure.cn/kubernetes/charts
$ helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
$ helm repo update
# remove a repository
$ helm repo remove aliyun
```



## Commands

### search

- `helm search repo mysql`: search for charts

### install

- `helm install aliyun aliyun/mysql`: install a chart

### Overriding default values

```bash
$ echo '{mariadb.auth.database: user0db, mariadb.auth.username: user0}' > values.yaml
$ helm install -f values.yaml bitnami/wordpress --generate-name
```
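
The flow-style file above writes literal dotted keys; the nested form is the more common way to structure overrides. A sketch of the equivalent nested `values.yaml` (chart and key names follow the Bitnami WordPress example above; how the chart consumes them depends on its values schema):

```yaml
# nested values.yaml form for the same overrides
mariadb:
  auth:
    database: user0db
    username: user0
```

The same overrides can also be passed inline with `--set`, e.g. `helm install --set mariadb.auth.database=user0db,mariadb.auth.username=user0 bitnami/wordpress --generate-name`.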

### status

- `helm status happy-panda`: show the release status

### show

- `helm show values bitnami/wordpress`: show the chart's default configuration values;

### create

- `helm create <chart-name>`: scaffold an application chart template (see the `helm create nginx` example above)

### lint

- `helm lint promocouponsvc/`
  - checks the chart's syntax and reports errors as well as recommendations;

### package

- `helm package promocouponsvc/`
  - packaging produces a single .tgz file



## Template syntax

### lookup Function


```yaml
data:
  content: |
    # look up the ConfigMap's current content and reuse it
{{ (lookup "v1" "ConfigMap" "default" "upgrade-test").data.content | indent 4 }}

```

### 'include' Function

> `template` is an action, so its output cannot be piped into other functions; `include` can be used everywhere `template` can, so prefer `include`.
>
> The result is a rendered string: even if `mytpl` is defined to produce a bool, the output is converted to a string.

```yaml
# includes a template called mytpl, then lowercases the result, then wraps that in double quotes.
value: {{ include "mytpl" . | lower | quote }}
```

### 'required' Function

```yaml
# required function declares an entry for .Values.who is required, and will print an error message when that entry is missing
value: {{ required "A valid .Values.who entry required!" .Values.who }}
```

### 'tpl' Function

**evaluates strings as templates inside a template**

```yaml
# values
template: "{{ .Values.name }}"
name: "Tom"

# template
{{ tpl .Values.template . }}

# output
Tom
```



### Control structures

#### if/else

`if` opens a conditional block, terminated with `end`:

```yaml
# "-" trims whitespace: a leading "{{-" removes the whitespace before the action
{{- if eq .Values.drink "coffee" }}
mug: "true"
{{- end }}
```

#### range
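
The commit truncates before the `range` example; a minimal sketch in the same style as the blocks above (the `pizzaToppings` list is an assumed illustration, borrowed from the Helm flow-control guide):

```yaml
# values.yaml (assumed)
pizzaToppings:
  - mushrooms
  - cheese

# template: range iterates over the list; "." is rebound to each element
toppings: |-
  {{- range .Values.pizzaToppings }}
  - {{ . | upper }}
  {{- end }}
```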

2 changes: 0 additions & 2 deletions docs/cloud/index.md

This file was deleted.

Binary file added docs/cloud/k8s/.pics/deploy/k8s_deploy.png
Binary file added docs/cloud/k8s/.pics/extend_csi/csi_lifecycle.png
Binary file added docs/cloud/k8s/.pics/go_client/framework.png
Binary file added docs/cloud/k8s/.pics/go_client/resync-process.png
Binary file added docs/cloud/k8s/.pics/k8s_security/zoo_arch.png
Binary file added docs/cloud/k8s/.pics/k8s_security/zoo_compare.png
Binary file added docs/cloud/k8s/.pics/metrics/k8s_aggregator.png
Binary file added docs/cloud/k8s/.pics/network/k8s_calico.webp
Binary file added docs/cloud/k8s/.pics/network/k8s_calico_ip.png
Binary file added docs/cloud/k8s/.pics/network/k8s_flannel_udp.png
Binary file added docs/cloud/k8s/.pics/scheduler/resource_quota.png
Binary file added docs/cloud/k8s/.pics/ui_manager/app-store.png
Binary file added docs/cloud/k8s/.pics/ui_manager/cicd.png
Binary file added docs/cloud/k8s/.pics/ui_manager/console.png
Binary file added docs/cloud/k8s/.pics/ui_manager/project.png
Binary file added docs/cloud/k8s/.pics/ui_manager/ui-dashboard.png
120 changes: 120 additions & 0 deletions docs/cloud/k8s/cmds.md
# Commands and Usage

## kubectl

### apply

Deploy or update: `kubectl apply -f app.yaml`
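
For reference, a minimal sketch of what such an `app.yaml` might contain (the image and labels are illustrative assumptions; the name matches the `nginx-deployment` used by the scale/rollout examples below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```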

### get

> JSON path matching:
>
> - `-o jsonpath='{.items[?(@.metadata.uid=="43504a29-8b72-4304-873c-3fee910374fa")].metadata.name}'`
>
> `--show-labels`: show labels

- find a service's pods: `kubectl get pod -n cbc-dev | grep router`
- list pods together with the nodes they run on: `kubectl -n cbc-dev get pod -o wide`
- list deployments: `kubectl get deployments`
- list replicasets: `kubectl get rs`
- list namespaces: `kubectl get ns`


### describe

Describe a pod: `kubectl -n cbc-dev describe pod podName`

### logs

Show a pod's startup logs: `kubectl -n cbc-dev logs podName`

### exec

Open a shell inside a pod: `kubectl -n cbc-dev exec -it podName -- bash`

### expose

Expose a resource as a service for external access. Supported resources: pod (po), service (svc), replication controller (rc), deployment (deploy), replica set (rs).

`kubectl expose pod hc-base-jupyter-pod --port=8888 --target-port=8888 --type=NodePort -n ai-education`

- `--port`: the in-cluster service port, i.e. the port used with the cluster IP from inside the cluster;
- `--target-port`: the service port inside the container;
- `--type`: how the service is exposed; NodePort is the usual choice and **automatically allocates an externally reachable port** (shown by `kubectl get service`);
- `--name`: service name (optional).
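
Declaratively, the command above corresponds roughly to the Service manifest below (a sketch: the selector label is an assumption, since `kubectl expose pod` actually copies the pod's own labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hc-base-jupyter-pod
  namespace: ai-education
spec:
  type: NodePort          # allocates a node port (30000-32767 by default)
  selector:
    app: hc-base-jupyter  # assumed pod label
  ports:
    - port: 8888          # --port: in-cluster service port
      targetPort: 8888    # --target-port: container port
```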

### port-forward

Map a local port to an application (pod) port via **port forwarding**: `kubectl port-forward svc/yunikorn-service 9889:9889 -n yunikorn`

### delete

Delete pods: `kubectl -n cbc-dev delete -f app.yaml`

### label

Label a node: `kubectl label nodes kube-node label_name=label_value`

Remove a label (append a minus sign to the label key): `kubectl label nodes 194.246.9.5 gpu-`

### scale

Scale pods: `kubectl scale deployment nginx-deployment --replicas=4`

### edit

Edit the YAML config in place: `kubectl edit deployment/nginx-deployment`

### rollout

Roll back to the previous version: `kubectl rollout undo deployment/nginx-deployment`

Show revision history: `kubectl rollout history deployment/nginx-deployment`

Show rolling-update status: `kubectl rollout status deployment/nginx-deployment`

Pause (to batch config changes), i.e. pause -> edit config -> resume -> rolling update: `kubectl rollout pause deployment/nginx-deployment`

Resume: `kubectl rollout resume deployment/nginx-deployment`



## Accessing the ApiServer from inside a container

### Official client libraries

The official clients, such as the [Go client library](https://github.com/kubernetes/client-go/), handle API server discovery and authentication automatically via `rest.InClusterConfig()`:

- authentication uses the mounted ServiceAccount files under `/var/run/secrets/kubernetes.io/serviceaccount/`
- namely `token`, `ca.crt`, and `namespace`



### Direct REST API access

Two ways to obtain the REST endpoint:

- build an HTTPS URL to the Kubernetes API server from the **`KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT_HTTPS` environment variables**;
- the in-cluster address of the API server is also **published as the Service named `kubernetes` in the `default` namespace** (**`kubernetes.default.svc`**).

```shell
# hostname pointing at the internal API server
APISERVER=https://kubernetes.default.svc
# path of the mounted service-account credentials
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# read the pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# read the service account's bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt

# call the API with the token
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
```




