How does kubectl determine the server IP address?

I’m running a k8s cluster in AWS and I got the following error after I ran kubectl cluster-info:

E0902 13:22:08.516718     897 memcache.go:265] couldn't get current server API group list: Get "https://api.kubevpro.quickesh.com/api?timeout=32s": dial tcp 203.0.113.123:443: i/o timeout
Unable to connect to the server: dial tcp 203.0.113.123:443: i/o timeout

It seems that kubectl is trying to connect to 203.0.113.123:443, but the connection times out.
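To answer the title question first: kubectl doesn't determine the IP itself. It reads the `server:` URL from the active kubeconfig context and hands the hostname to the OS resolver, exactly like any other HTTPS client. A minimal sketch of that lookup (stdlib only; `resolve_api_server` is a hypothetical helper, not part of kubectl):

```python
import socket
from urllib.parse import urlsplit

def resolve_api_server(server_url):
    """Roughly what kubectl does: take the `server:` URL from the
    kubeconfig, split out host and port, and ask the OS resolver
    (which consults /etc/hosts and the configured DNS servers)."""
    parts = urlsplit(server_url)
    port = parts.port or (443 if parts.scheme == "https" else 80)
    addrs = {info[4][0] for info in
             socket.getaddrinfo(parts.hostname, port,
                                proto=socket.IPPROTO_TCP)}
    return parts.hostname, port, addrs

# Demo with a host that resolves everywhere; for this cluster kubectl
# would resolve api.kubevpro.quickesh.com and dial the result on 443.
host, port, addrs = resolve_api_server("https://localhost")
print(host, port, addrs)
```

So whatever DNS returns for api.kubevpro.quickesh.com is the address kubectl dials, which is why the error above shows 203.0.113.123.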

What I have tried

  • Ran nslookup api.kubevpro.quickesh.com and got the following output:
Server:     127.0.0.53
Address:    127.0.0.53#53

Non-authoritative answer:
Name:   api.kubevpro.quickesh.com
Address: 203.0.113.123
  • Ran kops validate cluster --state=s3://ky-bucket-kops --name=kubevpro.quickesh.com and got the following result:
Validating cluster kubevpro.quickesh.com

INSTANCE GROUPS
NAME                ROLE        MACHINETYPE MIN MAX SUBNETS
control-plane-us-east-1a    ControlPlane    t3.medium   1   1   us-east-1a
nodes-us-east-1a        Node        t3.small    1   1   us-east-1a
nodes-us-east-1b        Node        t3.small    1   1   us-east-1b

NODE STATUS
NAME    ROLE    READY

VALIDATION ERRORS
KIND    NAME        MESSAGE
dns apiserver   Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a control plane node to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
Error: validation failed: cluster not yet healthy
  • Checked the kubectl configuration with kubectl config view and got the result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.kubevpro.quickesh.com
    tls-server-name: api.internal.kubevpro.quickesh.com
  name: kubevpro.quickesh.com
contexts:
- context:
    cluster: kubevpro.quickesh.com
    user: kubevpro.quickesh.com
  name: kubevpro.quickesh.com
current-context: kubevpro.quickesh.com
kind: Config
preferences: {}
users:
- name: kubevpro.quickesh.com
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

So the context configuration looks correct.
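Note that the kops validation output above already points at the cause: 203.0.113.123 is the dummy record kops writes into DNS at cluster creation, which dns-controller replaces with the control-plane node's real IP once it is running. A small sketch (hypothetical helper, stdlib only) to flag that state after an nslookup:

```python
import ipaddress

# The placeholder address kops creates, as named in the
# "Validation Failed" message above.
KOPS_PLACEHOLDER = ipaddress.ip_address("203.0.113.123")

def still_placeholder(resolved_ip):
    """True while DNS still serves the kops placeholder record,
    i.e. dns-controller has not yet updated the API DNS entry."""
    return ipaddress.ip_address(resolved_ip) == KOPS_PLACEHOLDER

print(still_placeholder("203.0.113.123"))
```

As long as this returns True for the address nslookup reports, kubectl will keep dialing a non-routable documentation address and time out.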

Why can't kubectl connect to the server?
