Kubectl

Start

Install

  • Install the kubectl client directly via a package manager
# OSX
$ brew install kubernetes-cli
  • Download a binary release; this way you can install a specific (historical) version of kubectl
# OSX
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/darwin/amd64/kubectl

# Linux
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl

# Make the binary executable
$ sudo chmod +x ./kubectl

# Move it onto the PATH
$ sudo mv ./kubectl /usr/local/bin/

Config

By default, kubectl uses ~/.kube/config as the configuration file for connecting to a Kubernetes cluster. You can find this file at ~/.kube/config on the Kubernetes master; it is the config for the cluster-admin role. You can, of course, also create configs for other roles (with different permissions).

Place the appropriate config file under ~/.kube on the client node, and you can then use kubectl to access and manage the Kubernetes cluster, as sketched below.
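
For example, a minimal sketch (user and host names are placeholders for your environment):

# Copy the admin config from the master to the client machine
$ mkdir -p ~/.kube
$ scp <user>@<master_host>:~/.kube/config ~/.kube/config
# Verify connectivity
$ kubectl cluster-info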

Multi Cluster Config

Add the configurations for multiple clusters to a single config file:

apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: "bjidc-prod-diamond"
  cluster:
    server: "https://www.example.com/k8s/clusters/c-krfhf"
- name: "bjidc-test-diamond"
  cluster:
    server: "https://www.example.com/k8s/clusters/c-t5mbq"
- name: "ts-bj-test"
  cluster:
    server: "https://www.example.com/k8s/clusters/c-qn45l"
- name: "aliyun-hd1-diamond"
  cluster:
    server: "https://www.example.com/k8s/clusters/c-kzszr"

users:
- name: "bjidc-prod-diamond"
  user:
    token: "kubeconfig-u-2nxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
- name: "bjidc-test-diamond"
  user:
    token: "kubeconfig-u-2nxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
- name: "ts-bj-test"
  user:
    token: "kubeconfig-u-2nxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
- name: "aliyun-hd1-diamond"
  user:
    token: "kubeconfig-u-2nxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

contexts:
- name: "bjidc-prod-diamond"
  context:
    user: "bjidc-prod-diamond"
    cluster: "bjidc-prod-diamond"
    namespace: "dlink-prod"
- name: "bjidc-test-diamond"
  context:
    user: "bjidc-test-diamond"
    cluster: "bjidc-test-diamond"
    namespace: "dlink-test"
- name: "ts-bj-test"
  context:
    user: "ts-bj-test"
    cluster: "ts-bj-test"
    namespace: "dlink-test"
- name: "aliyun-hd1-diamond"
  context:
    user: "aliyun-hd1-diamond"
    cluster: "aliyun-hd1-diamond"
    namespace: "dlink-prod"

current-context: "bjidc-prod-diamond"

After the configurations for all clusters have been added:

# View the merged config
$ kubectl config view

# List the contexts (clusters)
$ kubectl config get-contexts
CURRENT   NAME                 CLUSTER              AUTHINFO             NAMESPACE
          aliyun-hd1-diamond   aliyun-hd1-diamond   aliyun-hd1-diamond   dlink-prod
*         bjidc-prod-diamond   bjidc-prod-diamond   bjidc-prod-diamond   dlink-prod
          bjidc-test-diamond   bjidc-test-diamond   bjidc-test-diamond   dlink-test
          ts-bj-test           ts-bj-test           ts-bj-test           dlink-test

# Switch to another cluster (context)
$ kubectl config use-context bjidc-prod-diamond
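# Confirm which context is now active
$ kubectl config current-context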

Usage

# command = action (get, create, ...), TYPE = resource type, NAME = resource name
$ kubectl [command] [TYPE] [NAME] [flags]

Supported Commands

$ kubectl
$ kubectl -h
$ kubectl --help 

Detailed Command Instructions and Examples

$ kubectl <command>
$ kubectl <command> -h
$ kubectl <command> --help

# For example
$ kubectl get -h
$ kubectl get nodes --help

A list of global command-line options (applies to all commands)

$ kubectl options
The following options can be passed to any command:

      --alsologtostderr=false: log to standard error as well as files
      --as='': Username to impersonate for the operation
      --as-group=[]: Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --cache-dir='C:\Users\zhangqiang\.kube\http-cache': Default HTTP cache directory
      --certificate-authority='': Path to a cert file for the certificate authority
      --client-certificate='': Path to a client certificate file for TLS
      --client-key='': Path to a client key file for TLS
      --cluster='': The name of the kubeconfig cluster to use
      --context='': The name of the kubeconfig context to use
      --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will
make your HTTPS connections insecure
      --kubeconfig='': Path to the kubeconfig file to use for CLI requests.
      --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace
      --log-dir='': If non-empty, write log files in this directory
      --log-file='': If non-empty, use this log file
      --log-file-max-size=1800: Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0,
the maximum file size is unlimited.
      --log-flush-frequency=5s: Maximum number of seconds between log flushes
      --logtostderr=true: log to standard error instead of files
      --match-server-version=false: Require server version to match client version
  -n, --namespace='': If present, the namespace scope for this CLI request
      --password='': Password for basic authentication to the API server
      --profile='none': Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)
      --profile-output='profile.pprof': Name of the file to write the profile to
      --request-timeout='0': The length of time to wait before giving up on a single server request. Non-zero values
should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.
  -s, --server='': The address and port of the Kubernetes API server
      --skip-headers=false: If true, avoid header prefixes in the log messages
      --skip-log-headers=false: If true, avoid headers when opening log files
      --stderrthreshold=2: logs at or above this threshold go to stderr
      --token='': Bearer token for authentication to the API server
      --user='': The name of the kubeconfig user to use
      --username='': Username for basic authentication to the API server
  -v, --v=0: number for the log level verbosity
      --vmodule=: comma-separated list of pattern=N settings for file-filtered logging

Common Options

$ kubectl get <type of resource>

# --kubeconfig selects a specific config file; if omitted, kubectl defaults to ~/.kube/config
$ kubectl get pods --kubeconfig ./myconfig

# Show more details for a pod, including node placement (IP, NODE, NOMINATED NODE, READINESS GATES)
$ kubectl get pod <pod_name> -o wide

# View the full resource definition in YAML format
$ kubectl get pod <pod_name> -o yaml

Use “kubectl api-resources” for a complete list of supported resources.

$ kubectl api-resources
* all  
* certificatesigningrequests (aka 'csr')  
* clusterrolebindings  
* clusterroles  
* componentstatuses (aka 'cs')  
* configmaps (aka 'cm')  
* controllerrevisions  
* cronjobs  
* customresourcedefinition (aka 'crd')  
* daemonsets (aka 'ds')  
* deployments (aka 'deploy')  
* endpoints (aka 'ep')  
* events (aka 'ev')  
* horizontalpodautoscalers (aka 'hpa')  
* ingresses (aka 'ing')  
* jobs  
* limitranges (aka 'limits')  
* namespaces (aka 'ns')  
* networkpolicies (aka 'netpol')  
* nodes (aka 'no')  
* persistentvolumeclaims (aka 'pvc')  
* persistentvolumes (aka 'pv')  
* poddisruptionbudgets (aka 'pdb')  
* podpreset  
* pods (aka 'po')  
* podsecuritypolicies (aka 'psp')  
* podtemplates  
* replicasets (aka 'rs')  
* replicationcontrollers (aka 'rc')  
* resourcequotas (aka 'quota')  
* rolebindings  
* roles  
* secrets  
* serviceaccounts (aka 'sa')  
* services (aka 'svc')  
* statefulsets (aka 'sts')  
* storageclasses (aka 'sc')

Use “kubectl explain <resource>” for a detailed description of that resource (e.g. kubectl explain pods).

$ kubectl explain pods
KIND:     Pod
VERSION:  v1

DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec <Object>
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status       <Object>
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

Basic Commands (Beginner)

create

# Create a resource from a file or from stdin.
$ kubectl create
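
For example (the manifest path and resource names are placeholders):

# Create resources from a manifest file
$ kubectl create -f ./my-manifest.yaml
# Or create a deployment / namespace directly
$ kubectl create deployment my-nginx --image=nginx
$ kubectl create namespace my-namespace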

expose

# Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
$ kubectl expose
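
For example (names and ports are placeholders):

# Expose a deployment as a service on port 80, targeting container port 8080
$ kubectl expose deployment my-nginx --port=80 --target-port=8080 --name=my-nginx-svc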

run

# Run a particular image on the cluster
$ kubectl run
# Start a Kafka client pod
$ kubectl run kafka-client \
    --restart='Never' \
    --requests='cpu=1000m,memory=2048Mi' \
    --image docker.io/bitnami/kafka:2.6.0-debian-10-r9 \
    --namespace dlink-prod \
    --command -- sleep infinity
# Get a shell inside the pod
$ kubectl exec --tty -i kafka-client --namespace dlink-prod -- bash

set

# Set specific features on objects
$ kubectl set
Configure application resources

 These commands help you make changes to existing application resources.

Available Commands:
  env            Update environment variables on a pod template
  image          Update image of a pod template
  resources      Update resource requests/limits on objects with pod templates
  selector       Set the selector on a resource
  serviceaccount Update ServiceAccount of a resource
  subject        Update User, Group or ServiceAccount in a RoleBinding/ClusterRoleBinding

# Change the image of a container in a Deployment's pod template
$ kubectl set image deployment/<deployment_name> <container_name>=IMAGE:TAG

Basic Commands (Intermediate)

explain

# Show documentation for a resource
$ kubectl explain

get

# Display one or many resources
$ kubectl get
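
Common variants (namespace and label values are placeholders):

# List pods in one namespace / in all namespaces
$ kubectl get pods -n kube-system
$ kubectl get pods --all-namespaces
# Filter by label selector
$ kubectl get pods -l app=nginx
# List several resource types at once
$ kubectl get deploy,svc,pods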

edit

# Edit a resource on the server
$ kubectl edit

Expand a PVC

Create a StorageClass, test-sc.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true # allow expansion (shrinking is not allowed)

$ kubectl apply -f test-sc.yaml
storageclass.storage.k8s.io/test-sc created

Create a PVC, test-pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-bjidc-bigdata-kafka-0
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: test-sc

$ kubectl apply -f test-pvc.yaml
persistentvolumeclaim/data-bjidc-bigdata-kafka-0 created

# Edit the PVC capacity (e.g. change storage: 100Gi to 500Gi)
$ kubectl edit pvc data-bjidc-bigdata-kafka-0
# This opens the default editor (Notepad on Windows); change the value, save and exit
persistentvolumeclaim/data-bjidc-bigdata-kafka-0 edited

# Perform the expansion
$ kubectl get pvc data-bjidc-bigdata-kafka-0 -o yaml | kubectl apply -f -
persistentvolumeclaim/data-bjidc-bigdata-kafka-0 configured

# Check whether the expansion took effect
$ kubectl describe pvc data-bjidc-bigdata-kafka-0
...
Capacity:      500Gi
...

delete

# Delete resources by filenames, stdin, resources and names, or by resources and label selector
$ kubectl delete

# Delete PV
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS    REASON   AGE
pvc-c783f36f-e079-11ea-99a6-fa163e5bd4e1   8Gi        RWO            Delete           Bound    dlink-prod/data-airflow-bitnami-postgresql-0   local-dynamic            8d

$ kubectl delete pv pvc-c783f36f-e079-11ea-99a6-fa163e5bd4e1
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                          STORAGECLASS    REASON   AGE
pvc-c783f36f-e079-11ea-99a6-fa163e5bd4e1   8Gi        RWO            Delete           Terminating   dlink-prod/data-airflow-bitnami-postgresql-0   local-dynamic            8d

# The PV is stuck in Terminating; remove its finalizers to delete it
$ kubectl patch pv pvc-c783f36f-e079-11ea-99a6-fa163e5bd4e1 -p '{"metadata":{"finalizers":null}}'

Deploy Commands

rollout

# Manage the rollout of a resource
$ kubectl rollout
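
For example, for a deployment (the name is a placeholder):

# Watch rollout progress
$ kubectl rollout status deployment/my-deploy
# Show revision history
$ kubectl rollout history deployment/my-deploy
# Roll back to the previous or a specific revision
$ kubectl rollout undo deployment/my-deploy
$ kubectl rollout undo deployment/my-deploy --to-revision=2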

scale

# Set a new number of replicas for a deployment, replica set, replication controller, or job
$ kubectl scale
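
For example (resource names are placeholders):

$ kubectl scale deployment/my-deploy --replicas=3
$ kubectl scale statefulset/web --replicas=5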

autoscale

# Auto-scale the number of replicas of a deployment, replica set, or replication controller
$ kubectl autoscale
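
For example, keep CPU utilization around 80% with 2 to 10 replicas (the deployment name is a placeholder):

$ kubectl autoscale deployment my-deploy --min=2 --max=10 --cpu-percent=80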

Cluster Management Commands

certificate

# Modify certificate resources
$ kubectl certificate

cluster-info

# Display cluster information
$ kubectl cluster-info
# To further debug and diagnose cluster problems
$ kubectl cluster-info dump

top

# Display resource (CPU/memory/storage) usage. Requires a metrics add-on in the cluster (heapster before 1.8, metrics-server from 1.8 on)
$ kubectl top

# Show the resource usage of node bj-idc1-10-10-41-108-10.53.6.23
$ kubectl top node bj-idc1-10-10-41-108-10.53.6.23

cordon

# Mark a node as unschedulable
$ kubectl cordon

uncordon

# Mark a node as schedulable
$ kubectl uncordon

drain

# Drain a node in preparation for maintenance
$ kubectl drain
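
A typical maintenance flow (the node name is a placeholder; --delete-local-data was renamed --delete-emptydir-data in newer kubectl releases):

$ kubectl drain <node_name> --ignore-daemonsets --delete-local-data
# ... perform maintenance, then make the node schedulable again ...
$ kubectl uncordon <node_name>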

taint

# Update the taints on one or more nodes
$ kubectl taint
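
For example (node name, key and value are placeholders):

# Add a taint
$ kubectl taint nodes node1 dedicated=special-user:NoSchedule
# Remove it (note the trailing '-')
$ kubectl taint nodes node1 dedicated:NoSchedule-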

Troubleshooting and Debugging Commands

describe

# Show details of a specific resource or group of resources
$ kubectl describe
# e.g. show the details of the pod named my-logstash-0, including its spec and events
# Note: events are very important; they surface errors during pod startup and are a great debugging aid
$ kubectl describe pod my-logstash-0
Name:           my-logstash-0
Namespace:      dlink-test
Priority:       0
Node:           bj-idc1-10-10-41-108-10.53.6.201/10.53.6.201
Start Time:     Thu, 09 Jul 2020 20:27:26 +0800
Labels:         app=logstash
                controller-revision-hash=my-logstash-76dfc8dc79
                release=my-logstash
                service=logstash
                statefulset.kubernetes.io/pod-name=my-logstash-0
Annotations:    checksum/patterns: 4e73423ba530faf76bfdcffbae5f51f68fb3c6dcccfad3d591d0dc39eef6dbaf
                checksum/pipeline: b5ce37c5633f923053ff11bf0b9eb8220d34f72d463d4ee62e723a5f161f3dc1
                checksum/templates: ce4d22daf7abd91101bc05692f0bc34f15696dae3d3f7c742a2a8a4373b889c8
                kubernetes.io/psp: default
                seccomp.security.alpha.kubernetes.io/pod: docker/default
Status:         Running
IP:             10.244.146.48
IPs:            <none>
Controlled By:  StatefulSet/my-logstash
Containers:
  logstash:
    Container ID:   docker://e6ae73f7b0fb66e5ee2760e8c0b73a1150cb5d143970f3c719797d4919a3ac09
    Image:          docker.elastic.co/logstash/logstash:7.7.1
    Image ID:       docker-pullable://docker.elastic.co/logstash/logstash@sha256:2930409e50a09aa1cd156226f9d233158e768e8e9c49e0c2c4cf63d083968c65
    Ports:          9600/TCP, 8080/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 09 Jul 2020 20:38:20 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Thu, 09 Jul 2020 20:34:35 +0800
      Finished:     Thu, 09 Jul 2020 20:35:23 +0800
    Ready:          False
    Restart Count:  7
    Limits:
      cpu:     200m
      memory:  1Gi
    Requests:
      cpu:      200m
      memory:   1Gi
    Liveness:   http-get http://:monitor/ delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:monitor/ delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HTTP_HOST:                0.0.0.0
      HTTP_PORT:                9600
      ELASTICSEARCH_HOST:       elasticsearch-client.default.svc.cluster.local
      ELASTICSEARCH_PORT:       9200
      LS_JAVA_OPTS:             -Xmx1g -Xms1g
      CONFIG_RELOAD_AUTOMATIC:  true
      PATH_CONFIG:              /usr/share/logstash/pipeline
      PATH_DATA:                /usr/share/logstash/data
      QUEUE_CHECKPOINT_WRITES:  1
      QUEUE_DRAIN:              true
      QUEUE_MAX_BYTES:          1gb
      QUEUE_TYPE:               persisted
    Mounts:
      /usr/share/logstash/data from data (rw)
      /usr/share/logstash/files from files (rw)
      /usr/share/logstash/patterns from patterns (rw)
      /usr/share/logstash/pipeline from pipeline (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from my-logstash-token-4jpb9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-my-logstash-0
    ReadOnly:   false
  patterns:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-logstash-patterns
    Optional:  false
  files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-logstash-files
    Optional:  false
  pipeline:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      my-logstash-pipeline
    Optional:  false
  my-logstash-token-4jpb9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-logstash-token-4jpb9
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
                 project=dlink:NoExecute
Events:
  Type     Reason     Age                   From                                       Message
  ----     ------     ----                  ----                                       -------
  Normal   Scheduled  11m                   default-scheduler                          Successfully assigned dlink-test/my-logstash-0 to bj-idc1-10-10-41-108-10.53.6.201
  Normal   Started    10m (x2 over 10m)     kubelet, bj-idc1-10-10-41-108-10.53.6.201  Started container logstash
  Warning  Unhealthy  9m22s (x6 over 10m)   kubelet, bj-idc1-10-10-41-108-10.53.6.201  Readiness probe failed: Get http://10.244.146.48:9600/: dial tcp 10.244.146.48:9600: connect: connection refused
  Normal   Killing    9m20s (x2 over 10m)   kubelet, bj-idc1-10-10-41-108-10.53.6.201  Container logstash failed liveness probe, will be restarted
  Normal   Pulling    9m17s (x3 over 11m)   kubelet, bj-idc1-10-10-41-108-10.53.6.201  Pulling image "docker.elastic.co/logstash/logstash:7.7.1"
  Normal   Pulled     9m14s (x3 over 10m)   kubelet, bj-idc1-10-10-41-108-10.53.6.201  Successfully pulled image "docker.elastic.co/logstash/logstash:7.7.1"
  Normal   Created    9m14s (x3 over 10m)   kubelet, bj-idc1-10-10-41-108-10.53.6.201  Created container logstash
  Warning  Unhealthy  6m17s (x16 over 10m)  kubelet, bj-idc1-10-10-41-108-10.53.6.201  Liveness probe failed: Get http://10.244.146.48:9600/: dial tcp 10.244.146.48:9600: connect: connection refused
  Warning  BackOff    89s (x20 over 5m55s)  kubelet, bj-idc1-10-10-41-108-10.53.6.201  Back-off restarting failed container

# Check node capacity and the amount of resources already allocated
$ kubectl describe nodes e2e-test-node-pool-4lw4
Name:            e2e-test-node-pool-4lw4
[ ... several lines removed for readability ... ]
Capacity:
 cpu:                               2
 memory:                            7679792Ki
 pods:                              110
Allocatable:
 cpu:                               1800m
 memory:                            7474992Ki
 pods:                              110
[ ... several lines removed for readability ... ]
Non-terminated Pods:        (5 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                  ------------  ----------  ---------------  -------------
  kube-system  fluentd-gcp-v1.38-28bv1               100m (5%)     0 (0%)      200Mi (2%)       200Mi (2%)
  kube-system  kube-dns-3297075139-61lj3             260m (13%)    0 (0%)      100Mi (1%)       170Mi (2%)
  kube-system  kube-proxy-e2e-test-...               100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  monitoring-influxdb-grafana-v4-z1m12  200m (10%)    200m (10%)  600Mi (8%)       600Mi (8%)
  kube-system  node-problem-detector-v0.1-fj7m3      20m (1%)      200m (10%)  20Mi (0%)        100Mi (1%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ------------    ----------    ---------------    -------------
  680m (34%)      400m (20%)    920Mi (12%)        1070Mi (14%)

logs

# Print the logs for a container in a pod
$ kubectl logs
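
Useful flags (pod and container names are placeholders):

# Follow the log stream of a specific container
$ kubectl logs -f <pod_name> -c <container_name>
# Show only the last 100 lines
$ kubectl logs --tail=100 <pod_name>
# Show logs of the previous (crashed) container instance
$ kubectl logs --previous <pod_name>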

attach

# Attach to a running container
$ kubectl attach
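
For example (pod and container names are placeholders):

$ kubectl attach <pod_name> -c <container_name> -i -t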

exec

# Execute a command in a container
$ kubectl exec
# Open an interactive shell inside the container
$ kubectl exec --tty -i <pod_name> --namespace dlink-prod -- bash

# When a pod contains multiple containers:
# list the container names in the pod
$ kubectl get pod <pod_name> -o=jsonpath="{.spec.containers[*].name}"
# enter a specific container (-c is short for --container)
$ kubectl exec --tty -i <pod_name> -c <container_name> --namespace dlink-prod -- bash

port-forward

# Forward one or more local ports to a pod
$ kubectl port-forward
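
For example (names and ports are placeholders):

# Forward local port 8080 to port 80 of the pod
$ kubectl port-forward <pod_name> 8080:80
# Services work too
$ kubectl port-forward svc/<service_name> 8080:80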

proxy

# Run a proxy to the Kubernetes API server
$ kubectl proxy
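
For example, the API server can then be reached via localhost (the port is a placeholder):

$ kubectl proxy --port=8001
# In another terminal
$ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods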

cp

# Copy files and directories to and from containers
$ kubectl cp --help
Copy files and directories to and from containers.

Examples:
  # !!!Important Note!!!
  # Requires that the 'tar' binary is present in your container
  # image.  If 'tar' is not present, 'kubectl cp' will fail.
  #
  # For advanced use cases, such as symlinks, wildcard expansion or
  # file mode preservation consider using 'kubectl exec'.

  # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
  tar cf - /tmp/foo | kubectl exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar

  # Copy /tmp/foo from a remote pod to /tmp/bar locally
  kubectl exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar

  # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
  kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir

  # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
  kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>

  # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace>
  kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar

  # Copy /tmp/foo from a remote pod to /tmp/bar locally
  kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar

Options:
  -c, --container='': Container name. If omitted, the first container in the pod will be chosen
      --no-preserve=false: The copied file/directory's ownership and permissions will not be preserved in the container

Usage:
  kubectl cp <file-spec-src> <file-spec-dest> [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

auth

# Inspect authorization
$ kubectl auth
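
For example, check permissions with auth can-i (namespace and service account are placeholders):

$ kubectl auth can-i create deployments --namespace dev
$ kubectl auth can-i list pods --as system:serviceaccount:dev:default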

Advanced Commands

apply

# Apply a configuration to resources by file name or stdin
$ kubectl apply
# Use Reloader - a Kubernetes controller that watches changes in ConfigMaps and Secrets and then restarts the pods of
# Deployments, StatefulSets, DaemonSets and DeploymentConfigs
# Reference: https://github.com/stakater/Reloader
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
spec:
  template:
    metadata:
...
$ kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

patch

# Update field(s) of a resource using a strategic merge patch
$ kubectl patch
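
For example, patch the replica count of a deployment (the name is a placeholder):

$ kubectl patch deployment my-deploy -p '{"spec":{"replicas":3}}'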

replace

# Replace a resource by file name or stdin
$ kubectl replace

wait

# Experimental: Wait for one condition on one or many resources
$ kubectl wait
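
For example, block until a pod becomes Ready (name and timeout are placeholders):

$ kubectl wait --for=condition=Ready pod/<pod_name> --timeout=120s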

convert

# Convert config files between different API versions
$ kubectl convert

Settings Commands

label

# Update the labels on a resource
$ kubectl label
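
For example (pod name and label values are placeholders):

# Add a label
$ kubectl label pods <pod_name> env=prod
# Overwrite an existing label
$ kubectl label pods <pod_name> env=staging --overwrite
# Remove a label (note the trailing '-')
$ kubectl label pods <pod_name> env-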

annotate

# Update the annotations on a resource
$ kubectl annotate
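
For example (pod name and value are placeholders):

$ kubectl annotate pods <pod_name> description='my frontend pod'
# Remove the annotation (note the trailing '-')
$ kubectl annotate pods <pod_name> description-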

completion

# Output shell completion code for the specified shell (bash or zsh)
$ kubectl completion
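
For example, enable completion in the current shell and persist it:

$ source <(kubectl completion bash)
$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
# zsh
$ source <(kubectl completion zsh)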

Other Commands

alpha

# Commands for features in alpha
$ kubectl alpha

api-resources

# Print the supported API resources on the server
$ kubectl api-resources

api-versions

# Print the supported API versions on the server, in the form of "group/version"
$ kubectl api-versions

config

# Modify kubeconfig files; running these commands generates the ~/.kube/config file
$ kubectl config

# Configure a cluster named default, specifying its server address and CA certificate
$ kubectl config set-cluster default --server=https://192.168.4.111:443 --certificate-authority=${PWD}/ssl/ca.pem
# Set up an admin user and configure its access certificates
$ kubectl config set-credentials admin --certificate-authority=${PWD}/ssl/ca.pem --client-key=${PWD}/ssl/admin-key.pem --client-certificate=${PWD}/ssl/admin.pem
# Create a context named default that uses the default cluster and the admin user
$ kubectl config set-context default --cluster=default --user=admin
# Make default the active context
$ kubectl config use-context default

# Set the namespace of the current context so you don't have to add --namespace to every command
$ kubectl config set-context --current --namespace=<insert-namespace-name-here>

plugin

# Runs a command-line plugin
$ kubectl plugin
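
For example, list the plugins kubectl finds (executables named kubectl-* on your PATH):

$ kubectl plugin list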

version

# Print the client and server version information
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8",
Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1",
Compiler:"gc", Platform:"linux/amd64"}
