Service Discovery And Configuration
- Provides service discovery and a service registry to enable inter-service communication
- System-wide configuration can be rolled out quickly and kept in sync
- Build your own: etcd, consul, zookeeper
- Built-in: Kubernetes, Marathon, AWS
Client-side discovery: the client queries the service registry, uses a load-balancing algorithm to select one of the available service instances, and makes a request.
- Netflix Eureka (service registry) + Netflix Ribbon (an IPC client that works with Eureka to load-balance requests across the available service instances)
- Amazon Cloud Map
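The client-side pattern above can be sketched in a few lines. This is a minimal illustration, not the Eureka/Ribbon API: the in-memory `REGISTRY` dict stands in for the service registry, and the service name and addresses are made up.

```python
# Hypothetical in-memory registry standing in for Eureka/Consul.
REGISTRY = {"orders": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]}

class ClientSideDiscovery:
    """The client queries the registry and load-balances across instances itself."""

    def __init__(self, registry):
        self.registry = registry
        self._rr = {}  # per-service round-robin counter

    def pick_instance(self, service):
        instances = self.registry[service]      # query the service registry
        i = self._rr.get(service, 0)
        self._rr[service] = i + 1
        return instances[i % len(instances)]    # round-robin selection

client = ClientSideDiscovery(REGISTRY)
print([client.pick_instance("orders") for _ in range(4)])  # wraps around after the third pick
```

The key point of the pattern is that the selection logic lives in the client, so every client must embed registry-aware load-balancing code (which is exactly what Ribbon packages up).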
Server-side discovery: the client makes a request to a service via a load balancer. The load balancer queries the service registry and routes each request to an available service instance.
- The AWS Elastic Load Balancer (ELB)
- Kubernetes and Marathon run a proxy on each host in the cluster. The proxy plays the role of a server‑side discovery load balancer.
- Details of discovery are abstracted away from the client; clients simply make requests to the load balancer.
- Some deployment environments provide this functionality for free.
Cons: if the load balancer is not provided by the deployment environment, it is another highly available system component to maintain.
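For contrast with the client-side sketch, a minimal server-side version might look like the following. Assumptions: the `LoadBalancer` class, registry contents, and the string returned by `route` are all illustrative stand-ins for a real LB forwarding traffic.

```python
# Server-side discovery sketch: the client only knows the load balancer;
# the LB consults the registry and forwards the request.
class LoadBalancer:
    def __init__(self, registry):
        self.registry = registry
        self._next = 0

    def route(self, service, request):
        instances = self.registry[service]              # LB queries the registry
        instance = instances[self._next % len(instances)]
        self._next += 1
        return f"{instance} handled {request}"          # stand-in for forwarding

registry = {"billing": ["10.0.1.1:9090", "10.0.1.2:9090"]}
lb = LoadBalancer(registry)
# The client addresses only the load balancer, never the registry:
print(lb.route("billing", "GET /invoices"))
```

The trade-off named in the cons above shows up here: the balancing logic is out of the clients, but the `LoadBalancer` itself becomes a component that must be kept highly available.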
Service registry: a database containing the network locations of service instances.
A service registry must be highly available and up to date.
- Apache Zookeeper
Kubernetes, Marathon, and AWS do not have an explicit service registry. It is a built‑in part of the infrastructure.
Consul has native support for multiple datacenters.
All of these systems have roughly the same semantics when providing key/value storage: reads are strongly consistent and availability is sacrificed for consistency in the face of a network partition.
ZooKeeper et al. provide only a primitive K/V store and require application developers to build their own service-discovery layer on top of it. Consul, by contrast, provides an opinionated framework for service discovery, eliminating that guesswork and development effort.
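To make the ZooKeeper-style point concrete, here is what "building your own discovery on a primitive K/V store" amounts to: register each instance under a per-service key prefix, and discover by listing that prefix. The `KVStore` class and the `/services/...` key layout are assumptions for illustration, not any real ZooKeeper or etcd API.

```python
class KVStore:
    """Stand-in for a consistent K/V store (ZooKeeper / etcd style)."""

    def __init__(self):
        self._kv = {}

    def put(self, key, value):
        self._kv[key] = value

    def list_prefix(self, prefix):
        return sorted(v for k, v in self._kv.items() if k.startswith(prefix))

def register(kv, service, instance_id, address):
    # Convention: one key per instance under a per-service prefix.
    kv.put(f"/services/{service}/{instance_id}", address)

def discover(kv, service):
    return kv.list_prefix(f"/services/{service}/")

kv = KVStore()
register(kv, "search", "i-1", "10.0.2.1:7000")
register(kv, "search", "i-2", "10.0.2.2:7000")
print(discover(kv, "search"))
```

Everything beyond these primitives (health checking, expiring dead instances, watching for changes) is left to the application developer, which is exactly the gap Consul's opinionated framework closes.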