
iits-consulting/terraform-opentelekomcloud-cce


OTC Cloud Container Engine Terraform module

A module designed to support the full capabilities of OTC CCE while simplifying the configuration for ease of use.

Usage example

module "cce" {
  source             = "iits-consulting/cce/opentelekomcloud"
  name               = var.name

  // Cluster configuration
  cluster_vpc_id            = module.vpc.vpc.id
  cluster_subnet_id         = values(module.vpc.subnets)[0].id
  cluster_version           = "v1.31"
  cluster_high_availability = false
  cluster_enable_scaling    = true # set this flag to false to disable autoscaling
  // Node configuration
  node_availability_zones = ["eu-de-03", "eu-de-01"]
  node_count              = 3
  node_flavor             = local.node_spec_default
  node_storage_type       = "SSD"
  node_storage_size       = 100
  // Autoscaling configuration
  autoscaler_node_max = 8
}
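The module exposes the cluster credentials as outputs. A minimal sketch for writing the generated kubeconfig to disk, assuming the hashicorp/local provider is available (the filename is a hypothetical choice):

```hcl
# Write the kubeconfig emitted by the module to a local file
# so that kubectl can use it (hypothetical filename).
resource "local_file" "kubeconfig" {
  content         = module.cce.kubeconfig
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}
```

With the file in place, `export KUBECONFIG=$(pwd)/kubeconfig.yaml` makes the kubectl commands below target the new cluster.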

WARNING: The parameter node_storage_encryption_enabled should be set to false unless an agency for EVS has been created with:

  • Agency Name = EVSAccessKMS
  • Agency Type = Account
  • Delegated Account = op_svc_evs
  • Permissions = KMS Administrator within the project
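The agency above can also be managed with Terraform. A hedged sketch using the provider's opentelekomcloud_identity_agency_v3 resource; the project name variable is an assumption and must match the project where the cluster runs:

```hcl
# Grants the EVS service account access to KMS for volume encryption.
resource "opentelekomcloud_identity_agency_v3" "evs_access_kms" {
  name                  = "EVSAccessKMS"
  delegated_domain_name = "op_svc_evs"

  project_role {
    project = var.project_name # assumption: e.g. "eu-de_myproject"
    roles   = ["KMS Administrator"]
  }
}
```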

Testing scaling up and down

We first test scaling up by creating a test deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: autoscale-test-deployment
  labels:
    app: autoscale-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autoscale-test
  template:
    metadata:
      labels:
        app: autoscale-test
    spec:
      containers:
        - name: hello-world
          image: nginx
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"

We can scale the deployment and see how the cluster responds:

> kubectl scale deployment/autoscale-test-deployment --replicas=40

Since the 40 replicas together request 10 CPUs, they do not fit on the nodes in the default node pool. The autoscaler therefore kicks in and creates additional nodes.
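As a quick sanity check on the arithmetic (each replica requests 250m, i.e. 0.25 CPU):

```shell
# 40 replicas x 250 millicores each, converted to whole CPUs
echo "$(( 40 * 250 ))m total, i.e. $(( 40 * 250 / 1000 )) full CPUs"
```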

> kubectl get pods
NAME                                        READY   STATUS            RESTARTS   AGE
autoscale-test-deployment-6f9ff6448-4x248   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-5kdcn   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-6pcmv   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-8ftc8   1/2     Running           0          14s
autoscale-test-deployment-6f9ff6448-9kxvt   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-9scj5   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-d7btf   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-dsrvs   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-dxf58   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-gdjvx   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-grwsl   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-gxbr9   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-h27z2   0/2     Init:0/1          0          14s
autoscale-test-deployment-6f9ff6448-h89vw   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-hltfb   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-hs5q8   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-m5zn9   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-m6fxx   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-mmtz2   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-mrpjt   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-mzkrn   2/2     Running           0          26h
autoscale-test-deployment-6f9ff6448-n6hrq   1/2     Running           0          14s
autoscale-test-deployment-6f9ff6448-p2p9v   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-pt4vj   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-q2ksm   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-q7p7t   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-qfbqq   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-qs949   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-qszsx   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-rm6c9   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-rnfzn   0/2     PodInitializing   0          14s
autoscale-test-deployment-6f9ff6448-rsgh6   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-sgzhb   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-v8qvm   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-w57gp   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-wfp5p   0/2     Pending           0          14s
autoscale-test-deployment-6f9ff6448-xh5sm   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-xrnrz   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-z72sp   0/2     Pending           0          13s
autoscale-test-deployment-6f9ff6448-zdgkp   0/2     PodInitializing   0          14s

We can also see the started nodes: 2 in the default node pool and 4 in the autoscale node pool:

> kubectl get nodes -L cce.cloud.com/cce-nodepool
NAME              STATUS   ROLES    AGE     VERSION                             CCE-NODEPOOL
192.168.13.187    Ready    <none>   6m23s   v1.17.9-r0-CCE20.7.1.B003-17.36.3   otc-customer-success-dev-node-pool-autoscale
192.168.161.247   Ready    <none>   4h15m   v1.17.9-r0-CCE20.7.1.B003-17.36.3   otc-customer-success-dev-node-pool-autoscale
192.168.182.115   Ready    <none>   39d     v1.17.9-r0-CCE20.7.1.B003-17.36.3
192.168.186.181   Ready    <none>   6m23s   v1.17.9-r0-CCE20.7.1.B003-17.36.3   otc-customer-success-dev-node-pool-autoscale
192.168.42.133    Ready    <none>   39d     v1.17.9-r0-CCE20.7.1.B003-17.36.3
192.168.83.154    Ready    <none>   6m17s   v1.17.9-r0-CCE20.7.1.B003-17.36.3   otc-customer-success-dev-node-pool-autoscale

Scaling down again releases the requested CPU; once the extra nodes have been underutilized for the autoscaler's scale-down delay (typically 10 minutes by default), they are removed again:

> kubectl scale deployment/autoscale-test-deployment --replicas=1

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.5.7 |
| errorcheck | 3.0.3 |
| opentelekomcloud | ~> 1.36, >= 1.36.52 |
| random | ~> 3.0 |
| tls | ~> 4.0 |

Providers

| Name | Version |
|------|---------|
| errorcheck | 3.0.3 |
| opentelekomcloud | ~> 1.36, >= 1.36.52 |
| random | ~> 3.0 |
| tls | ~> 4.0 |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| errorcheck_is_valid.autoscaler_version_availability | resource |
| errorcheck_is_valid.cluster_container_network_type | resource |
| errorcheck_is_valid.cluster_storage_size_both_set | resource |
| errorcheck_is_valid.cluster_storage_size_combined | resource |
| errorcheck_is_valid.metrics_version_availability | resource |
| errorcheck_is_valid.node_storage_remainder_path | resource |
| opentelekomcloud_cce_addon_v3.autoscaler | resource |
| opentelekomcloud_cce_addon_v3.metrics | resource |
| opentelekomcloud_cce_cluster_v3.cluster | resource |
| opentelekomcloud_cce_node_pool_v3.cluster_node_pool | resource |
| opentelekomcloud_compute_keypair_v2.cluster_keypair | resource |
| opentelekomcloud_kms_key_v1.node_storage_encryption_key | resource |
| opentelekomcloud_vpc_eip_v1.cce_eip | resource |
| random_id.cluster_keypair_id | resource |
| random_id.id | resource |
| tls_private_key.cluster_keypair | resource |
| opentelekomcloud_cce_addon_templates_v3.autoscaler | data source |
| opentelekomcloud_cce_addon_templates_v3.metrics | data source |
| opentelekomcloud_identity_project_v3.current | data source |
| opentelekomcloud_kms_key_v1.node_storage_encryption_existing_key | data source |
| opentelekomcloud_vpc_subnet_v1.eni_subnet | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| cluster_subnet_id | The UUID of the subnet for the cluster nodes. | `string` | n/a | yes |
| cluster_vpc_id | The ID of the VPC for the cluster nodes. | `string` | n/a | yes |
| name | CCE cluster name. | `string` | n/a | yes |
| node_availability_zones | Availability zones for the node pools. Providing multiple availability zones creates one node pool in each zone. | `set(string)` | n/a | yes |
| node_count | Number of nodes to create. | `number` | n/a | yes |
| node_flavor | Node specifications in OTC flavor format. | `string` | n/a | yes |
| autoscaler_node_max | Maximum number of servers to create. | `number` | `10` | no |
| autoscaler_node_min | Lower bound of servers to always keep (default: `<node_count>`). | `number` | `null` | no |
| autoscaler_version | Version of the Autoscaler addon template. | `string` | `"latest"` | no |
| cluster_annotations | CCE cluster annotations in key/value pair format. This field is not stored in the database and is used only to specify the add-ons to be installed in the cluster. | `map(string)` | `{}` | no |
| cluster_api_access_trustlist | Trustlist of network CIDRs that are allowed to access cluster APIs. | `list(string)` | `null` | no |
| cluster_authenticating_proxy_ca | X509 CA certificate configured in `authenticating_proxy` mode. The maximum size of the certificate is 1 MB. | `string` | `null` | no |
| cluster_authenticating_proxy_cert | Client certificate issued by the X509 CA certificate configured in `authenticating_proxy` mode. | `string` | `null` | no |
| cluster_authenticating_proxy_private_key | Private key of the client certificate issued by the X509 CA certificate configured in `authenticating_proxy` mode. | `string` | `null` | no |
| cluster_authentication_mode | Authentication mode of the cluster: either `rbac` or `authenticating_proxy`. | `string` | `"rbac"` | no |
| cluster_component_configurations | Kubernetes component configurations. For details, see https://docs.otc.t-systems.com/cloud-container-engine/umn/clusters/managing_clusters/modifying_cluster_configurations.html#cce-10-0213 | `map(map(string))` | `{}` | no |
| cluster_container_cidr | Kubernetes pod network CIDR range. | `string` | `"172.16.0.0/16"` | no |
| cluster_container_network_type | Container network type: `vpc-router`, `overlay_l2` or `eni` for VirtualMachine clusters; `underlay_ipvlan` for BareMetal clusters. | `string` | `""` | no |
| cluster_delete_all_network | Whether to delete all associated network resources when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_all_storage | Whether to delete all associated storage resources when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_efs | Whether to unbind associated SFS Turbo file systems when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_eni | Whether to delete ENI ports when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_evs | Whether to delete associated EVS disks when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_net | Whether to delete cluster Service/Ingress-related resources, such as ELB, when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_obs | Whether to delete associated OBS buckets when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_delete_sfs | Whether to delete associated SFS file systems when deleting the CCE cluster. | `bool` | `null` | no |
| cluster_enable_scaling | Enable autoscaling of the cluster node pools. | `bool` | `false` | no |
| cluster_enable_volume_encryption | System and data disk encryption of master nodes. Changing this parameter will create a new cluster resource. | `bool` | `true` | no |
| cluster_eni_subnet_id | The UUID of the ENI subnet. Specified only when creating a CCE Turbo cluster (when `cluster_container_network_type = "eni"`). If unspecified, the module uses the same subnet as `cluster_subnet_id`. | `string` | `""` | no |
| cluster_extend_param | CCE cluster extended parameters in key/value pair format. For details, see https://docs.otc.t-systems.com/cloud-container-engine/api-ref/apis/cluster_management/creating_a_cluster.html#cce-02-0236-table17575013586 | `map(string)` | `null` | no |
| cluster_high_availability | Create the cluster in highly available mode. | `bool` | `false` | no |
| cluster_highway_subnet_id | The ID of the high-speed network for bare-metal nodes. | `string` | `null` | no |
| cluster_ignore_addons | Skip all cluster addon operations. | `bool` | `null` | no |
| cluster_ignore_certificate_clusters_data | Skip sensitive cluster data (disables some module outputs). | `bool` | `null` | no |
| cluster_ignore_certificate_users_data | Skip sensitive user data (disables some module outputs). | `bool` | `null` | no |
| cluster_install_icagent | Install icagent for logging and metrics via AOM. | `bool` | `false` | no |
| cluster_ipv6_enable | Whether the cluster supports IPv6 addresses. Supported in clusters of v1.25 and later. | `bool` | `null` | no |
| cluster_kube_proxy_mode | Service forwarding mode: `iptables` or `ipvs`. | `string` | `null` | no |
| cluster_no_addons | Remove addons installed by default after the cluster creation. | `bool` | `null` | no |
| cluster_public_access | Bind a public IP to the cluster to make it reachable over the internet. | `bool` | `true` | no |
| cluster_security_group_id | Default worker node security group ID of the cluster. If specified, the cluster will be bound to the target security group; otherwise, the system automatically creates a default worker node security group. | `string` | `null` | no |
| cluster_service_cidr | Kubernetes service network CIDR range. | `string` | `"172.17.0.0/16"` | no |
| cluster_size | Size of the cluster: `small`, `medium` or `large`. | `string` | `"small"` | no |
| cluster_timezone | CCE cluster timezone in string format. | `string` | `null` | no |
| cluster_type | Cluster type: `VirtualMachine` or `BareMetal`. | `string` | `"VirtualMachine"` | no |
| cluster_version | CCE cluster version. | `string` | `"v1.31"` | no |
| metrics_server_version | Version of the Metrics Server addon template. | `string` | `"latest"` | no |
| node_container_runtime | The container runtime to use: either `containerd` or `docker`. | `string` | `"containerd"` | no |
| node_k8s_tags | Tags of a Kubernetes node in key/value pair format. | `map(string)` | `{}` | no |
| node_os | Operating system of worker nodes: `EulerOS 2.9` or `HCE OS 2.0`. | `string` | `"HCE OS 2.0"` | no |
| node_postinstall | Post-install script for the cluster ECS node pool. | `string` | `""` | no |
| node_storage_encryption_enabled | Enable OTC KMS volume encryption for the node pool volumes. | `bool` | `true` | no |
| node_storage_encryption_kms_key_name | If KMS volume encryption is enabled, the name of an existing KMS key. Setting this disables the creation of a new KMS key. | `string` | `null` | no |
| node_storage_kubernetes_size | Percentage of the data disk reserved for Kubernetes runtime storage (i.e. ephemeral storage). OTC default is 10. | `number` | `null` | no |
| node_storage_remainder_path | If the runtime and Kubernetes sizes do not add up to 100%, OTC needs to know where to mount the remaining space. Note that some paths are forbidden; see the OTC documentation for details. | `string` | `null` | no |
| node_storage_runtime_size | Percentage of the data disk reserved for node runtime storage (i.e. container images). OTC default is 90. | `number` | `null` | no |
| node_storage_size | Size of the node system disk in GB. | `number` | `100` | no |
| node_storage_type | Type of node storage: `SATA`, `SAS` or `SSD`. | `string` | `"SATA"` | no |
| node_taints | Node taints for the node pool. | `list(object({ effect = string, key = string, value = string }))` | `[]` | no |
| tags | Common tag set for CCE resources. | `map(any)` | `{}` | no |
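As an illustration of the more complex node_taints type, a sketch of a single taint entry (key and value are hypothetical):

```hcl
node_taints = [
  {
    key    = "dedicated"  # hypothetical taint key
    value  = "monitoring" # hypothetical taint value
    effect = "NoSchedule" # one of NoSchedule, PreferNoSchedule, NoExecute
  }
]
```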

Outputs

| Name | Description |
|------|-------------|
| cluster | Complete configuration of the created CCE cluster. |
| cluster_credentials | Collection of access credentials for the API server. (Some or all values will be empty strings if `cluster_ignore_certificate_clusters_data` or `cluster_ignore_certificate_users_data` is true.) |
| cluster_id | UUID of the created CCE cluster. |
| cluster_lb_public_ip | Public EIP address of the cluster API server. (Will be an empty string if `cluster_public_access` is false or `cluster_ignore_certificate_clusters_data` is true.) |
| cluster_name | Name of the created CCE cluster. |
| cluster_private_ip | Private IP address of the cluster API server. (Will be an empty string if `cluster_ignore_certificate_clusters_data` is true.) |
| cluster_public_ip | Public EIP address of the cluster API server. (Will be an empty string if `cluster_public_access` is false or `cluster_ignore_certificate_clusters_data` is true.) |
| kubeconfig | Cluster credentials for the created CCE cluster in kubeconfig YAML format. (Some or all values will be empty strings if `cluster_ignore_certificate_clusters_data` or `cluster_ignore_certificate_users_data` is true.) |
| kubeconfig_json | Cluster credentials for the created CCE cluster in kubeconfig JSON format. (Same caveat as `kubeconfig`.) |
| kubeconfig_yaml | Cluster credentials for the created CCE cluster in kubeconfig YAML format. (Same caveat as `kubeconfig`.) |
| node_pool_ids | UUIDs of the cluster node pools. |
| node_pool_keypair_name | Name of the keypair resource created in OTC for worker node pools. |
| node_pool_keypair_private_key | Private key of the keypair resource created in OTC for worker node pools. |
| node_pool_keypair_public_key | Public key of the keypair resource created in OTC for worker node pools. |
| node_pools | Complete configurations of the created node pools. |
| node_pools_names | Names of the cluster node pools. |
| node_sg_id | UUID of the security group for worker nodes. |
