Kubernetes - Service

Last Updated: 2024-03-03

A Service provides a stable, unchanging IP, so that, for example, a frontend Deployment can reach a backend Deployment without tracking individual Pod IPs.

A Service is responsible for enabling network access to a set of pods.

Each Service (other than a headless one) gets a ClusterIP allocated: a single virtual IP that routes traffic to all of the Service's endpoints.

Service types:

  • ClusterIP: the default type; a cluster-scoped IP, used internally; the Service is not exposed to resources outside the cluster.
  • NodePort: maps a port on every node to the Service; can be accessed from outside the cluster by requesting <NodeIP>:<NodePort>, e.g. http://192.168.126.8:32768 (see the manifest sketch after this list).
  • LoadBalancer: for Services to be exposed to the external world, using a cloud provider's load balancer.
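
As a minimal sketch of how the type is chosen (the http-echo name, labels, and port numbers are assumptions for illustration), the type goes in .spec.type:

apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  type: NodePort            # ClusterIP (the default) / NodePort / LoadBalancer
  selector:
    app: http-echo
  ports:
    - port: 80              # port the ClusterIP listens on
      targetPort: 5678      # port the Pods serve on
      nodePort: 30080       # optional; must fall in the node port range (30000-32767 by default)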

ClusterIP vs LoadBalancer: a LoadBalancer Service additionally gets an external IP from the cloud provider; a ClusterIP is reachable only inside the cluster.

When you create a Service, it creates a corresponding DNS entry.
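
For example, assuming a Service named http-echo in the default namespace and the default cluster.local cluster domain, Pods can reach it via:

http-echo.default.svc.cluster.local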

A Service selects Pods via a label selector:

selector:
  app: http-echo
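
The selector must match the labels set on the Pods, e.g. in a Deployment's Pod template (a sketch; the Deployment name, image, and replica count are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo             # the Service's selector matches these labels
    spec:
      containers:
        - name: http-echo
          image: hashicorp/http-echo   # example image that echoes a fixed response
          args: ["-text=hello"]
          ports:
            - containerPort: 5678      # http-echo's default listen port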

Headless Services

Headless Services: set the cluster IP address .spec.clusterIP to "None".

For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them.
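
A minimal headless variant (sketch; the name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: http-echo-headless
spec:
  clusterIP: None             # "None" makes the Service headless
  selector:
    app: http-echo
  ports:
    - port: 80
      targetPort: 5678

With a selector, DNS for this Service returns the individual Pod IPs instead of a single virtual IP.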

With or Without Selector

A Service with a selector: the control plane's EndpointSlice controller creates EndpointSlices automatically. DNS resolves the Service name to its ClusterIP; for a headless Service with a selector, DNS instead returns A or AAAA records (IPv4 or IPv6 addresses) that point directly to the Pods backing the Service.
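
To inspect the slices the controller created (http-echo is the example Service above; kubernetes.io/service-name is the standard label linking a slice to its Service):

kubectl get endpointslices -l kubernetes.io/service-name=http-echo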

A Service without a selector: you add the EndpointSlice objects manually (use case: the Service can abstract other kinds of backends, including ones that run outside the cluster). Read more: https://kubernetes.io/docs/concepts/services-networking/service/#without-selectors
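
A sketch of the manual pairing (the external-backend name, ports, and the 192.0.2.42 address are illustrative; the label is what ties the slice to the Service):

apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-backend-1
  labels:
    kubernetes.io/service-name: external-backend   # links the slice to the Service
addressType: IPv4
ports:
  - name: ""                  # empty, matching the unnamed Service port
    protocol: TCP
    port: 9376
endpoints:
  - addresses:
      - "192.0.2.42"          # backend running outside the cluster (example address)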

Troubleshooting: if the kubernetes Service exists and the kube-apiserver Pod is running, but clients still get errors connecting to the apiserver, check the EndpointSlices: when a Service has no selector, nothing creates its endpoints automatically, so the EndpointSlice must be created manually.
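
A quick way to check (the kubernetes Service lives in the default namespace; exact output varies by cluster):

kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
kubectl describe service kubernetes -n default    # the Endpoints field should list the apiserver address(es)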

Service IP

Kubernetes Service IPs are virtual: they are not assigned to any network interface and exist only as rules that kube-proxy programs into the packet-filtering layer. You can see them with iptables -t nat -L -n on any node that is running kube-proxy.
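
For example, on a node where kube-proxy runs in iptables mode (KUBE-SERVICES is a kube-proxy-managed chain; 10.96.0.1 is a common default ClusterIP for the kubernetes Service and only an example):

sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1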