AutoScaling in Kubernetes

2021-7-26|2022-12-28
Vaayne

Three Forms of AutoScaling

Cluster Autoscaling

Deals with node scaling operations
Adjusts the number of Nodes in the cluster

Vertical Scaling

Vertical scaling can essentially resize your server with no change to your code. It is the ability to increase the capacity of existing hardware or software by adding resources. Vertical scaling is limited by the fact that you can only get as big as the size of the server.
Adjusts the resources of existing Pods; suitable for stateful applications

Horizontal Pod Autoscaling

Horizontal scaling affords the ability to scale wider to deal with traffic. It is the ability to connect multiple hardware or software entities, such as servers, so that they work as a single logical unit. This kind of scale cannot be implemented at a moment’s notice.
Increases the number of Pods; suitable for stateless applications
 

Horizontal Pod Autoscaling

History

  • The Horizontal Pod Autoscaler feature was first introduced in Kubernetes v1.1; the HPA scaled Pods based on observed CPU utilization and, later, on memory usage as well
  • Kubernetes 1.6 introduced a new Custom Metrics API that gives the HPA access to arbitrary metrics.
  • Kubernetes 1.7 introduced the aggregation layer, which allows 3rd-party applications to extend the Kubernetes API by registering themselves as API add-ons.
The Custom Metrics API along with the aggregation layer made it possible for monitoring systems like Prometheus to expose application-specific metrics to the HPA controller.
 
 

Autoscaling Based on CPU and Memory

 
The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data. It collects CPU and memory usage for nodes and Pods by aggregating data from the kubernetes.summary_api.
Based on the CPU and memory usage data collected by the Metrics Server, you can define an HPA whose metric type is Resource.
Example:
Deployment — pod.yaml
HPA — pod-hpa.yaml
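The example manifests did not survive the export; below is a minimal sketch of what pod.yaml and pod-hpa.yaml could look like for a Resource-type HPA (the my-app name, the nginx image, and all resource values are illustrative assumptions):
```yaml
# pod.yaml -- a minimal Deployment; the HPA computes Utilization against these requests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx            # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi
---
# pod-hpa.yaml -- keep average CPU usage at ~80% of the requested CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```
Both files can be applied with kubectl apply -f; kubectl get hpa then shows the observed utilization next to the target.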
 

Autoscaling Based on Custom Metrics

 
This approach supports all kinds of metrics. For example, if we use Prometheus to collect metrics, we can autoscale on any of the metrics Prometheus gathers; the range of metrics Prometheus can collect is very wide (see the Prometheus documentation for details).
 
Custom-metrics-based autoscaling needs two supporting components:
  • One collects the various metrics and stores them in a time-series database such as Prometheus
    • Install Prometheus
  • The other exposes those metrics to the HPA through the Custom Metrics API, e.g. prometheus-adapter, which is installed in the example below
 
 
Example
Install Prometheus and prometheus-adapter
Deploy podinfo to Kubernetes
pod.yaml
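The pod.yaml contents are not included here; a minimal sketch, assuming the public stefanprodan/podinfo demo image and the conventional prometheus.io scrape annotations used by the Prometheus Helm chart:
```yaml
# pod.yaml -- podinfo Deployment sketch; port and annotations are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: "true"   # let Prometheus discover and scrape this Pod
        prometheus.io/port: "9898"     # podinfo exposes /metrics on its HTTP port
    spec:
      containers:
      - name: podinfo
        image: stefanprodan/podinfo
        ports:
        - containerPort: 9898
          name: http
```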
 
After podinfo is deployed, it exposes a custom metric named http_requests_total. Prometheus scrapes this counter, and prometheus-adapter exposes it through the Custom Metrics API under the name http_requests.
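How the counter is translated into the http_requests custom metric is determined by the prometheus-adapter rule configuration. A sketch of what such a rule typically looks like (the label names and the 2m rate window depend on your Prometheus setup and are assumptions):
```yaml
# prometheus-adapter rule sketch: expose the http_requests_total counter
# as a per-Pod rate named "http_requests" on the Custom Metrics API
rules:
- seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
  resources:
    overrides:
      kubernetes_namespace: {resource: "namespace"}
      kubernetes_pod_name: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"                # http_requests_total -> http_requests
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```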
Create an HPA based on this metric
pod-hpa.yaml
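A sketch of what pod-hpa.yaml could contain, assuming the metric is exposed per Pod as http_requests; the target of 10 requests per second per Pod is illustrative:
```yaml
# pod-hpa.yaml -- scale podinfo on the custom http_requests metric
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: "10"    # target ~10 requests/second per Pod on average
```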
Check the result (for example with kubectl get hpa, which shows the current metric value next to the target)
 

Metric Types

Metrics fall into two broad categories: Resource Metrics and Custom Metrics.
  1. Resource Metrics
      • Only support CPU and memory
      • Support the AverageValue and Utilization target types
  2. Custom Metrics
    1. Pods Metrics
        • Only support a target type of AverageValue
    2. Object Metrics
        • These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the object; they only describe it.
        • Support the AverageValue and Value target types
          • With Value, the target is compared directly to the returned metric from the API
          • With AverageValue, the value returned from the custom metrics API is divided by the number of Pods before being compared to the target.

Advanced Usage

HPA with Multiple Metrics

         
If multiple metrics are provided, the HorizontalPodAutoscaler evaluates each metric in turn and then scales to whichever metric requires the largest number of replicas.
With the configuration below, the HorizontalPodAutoscaler will try to add Pods to ensure that:
1. each Pod uses roughly 50% of its requested CPU,
2. each Pod can serve about 1000 RPS, and
3. all Pods together can serve about 10000 RPS.
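The configuration referred to above is missing from this export; the sketch below shows an equivalent autoscaling/v2 metrics block. The http_requests and requests-per-second metric names, the main-route Ingress, and the replica bounds are assumptions:
```yaml
# HPA sketch combining a resource, a Pods, and an Object metric;
# the controller scales to whichever metric demands the most replicas
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50      # ~50% of each Pod's requested CPU
  - type: Pods
    pods:
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: "1k"          # ~1000 RPS per Pod
  - type: Object
    object:
      metric:
        name: requests-per-second
      describedObject:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        name: main-route
      target:
        type: Value
        value: "10k"                # ~10000 RPS across all Pods behind the Ingress
```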
         

Autoscaling on Specific Metrics

For all non-resource metric types (pod, object, and external, described below), you can specify an additional label selector which is passed to your metric pipeline.
The selector uses Kubernetes labels. In the example below, the http_requests metric is collected but only GET requests are matched:
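A sketch of such a metrics entry inside an HPA spec, assuming the metrics pipeline attaches a verb label to http_requests:
```yaml
# Pods metric with a label selector: only GET requests are counted
- type: Pods
  pods:
    metric:
      name: http_requests
      selector:
        matchLabels:
          verb: GET           # forwarded to the metrics pipeline as a label filter
    target:
      type: AverageValue
      averageValue: "1k"
```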

Autoscaling on External Metrics

If you want to autoscale on the state of an external service, such as the length of a message queue, rather than on an object inside Kubernetes, use the External metric type.
• Supports the Value and AverageValue target types
• Otherwise behaves the same as the Object metric type
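A sketch of an External metrics entry, assuming an external metrics adapter exposes a queue_messages_ready metric labelled with the queue name (both names are assumptions):
```yaml
# External metric: scale workers on the backlog of a message queue outside the cluster
- type: External
  external:
    metric:
      name: queue_messages_ready
      selector:
        matchLabels:
          queue: worker_tasks   # which queue's backlog to watch
    target:
      type: AverageValue
      averageValue: "30"        # aim for ~30 ready messages per Pod
```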
         
When possible, it's preferable to use the custom metric target types instead of external metrics, since it's easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it.
         
