Wrangle microservices for local development with Minikube

Amit Uttam
door2door Engineering
5 min read · Apr 6, 2017


Having adopted a microservice architecture for our core products a while ago, we have definitely benefited from building modular pieces of our solution in the languages and frameworks best suited to each business or product function. This has also let us draw on teammates’ specific expertise in particular functional areas, allowing them to focus on and solve difficult problems within the boundaries of a single functional service, unburdened by the baggage of a “principal” architecture or programming language. Very liberating.

( Image reference: https://martinfowler.com/articles/microservices.html )

While this is a well-known benefit of microservices, I found myself wrestling with a slew of small microservice repos on my development machine, each needing to be running and talking to the others in a particular way, just so I could use our public API as a whole, single entity. That’s how our customers and our web and mobile clients look at our API, with all the detailed machinations of the individual services remaining rightfully out of view. A “black box”, so to speak. I wanted a way to test all the functionality of our public API on my laptop, with all the constituent services operating normally and communicating with each other, with a single command.

I wanted a “Backend-in-a-Box” (pending ™)

which would give me:

  • Ability to test, validate, verify and simulate complete system operation on a laptop.
  • One-click startup, with optional seeding.
  • A local environment that runs as an orchestrated cluster of containerized services, and so functions much closer to how production is set up.

Enter Kubernetes & Minikube

Minikube (L) & Kubernetes (R)

I’d had my eye on Kubernetes for a while to help with clustering and deployment automation improvements in our production ecosystem… a subject for a future blog post. The current challenge, however, was to enable developers to mobilize and orchestrate apps locally. This is where Minikube fit in: it’s Kubernetes, running minimally on a local VM.

In other words, with Minikube I “deploy” the individual microservice codebases into a single-node “cluster” (a cluster of 1). One “box” (technically a Kubernetes Node), with all my microservices running within it, in Pods, each exposed to the others as Kubernetes Services.

I had initially gone down the path of Docker Compose, but what won me over was the built-in DNS add-on Service that ships with Kubernetes, coupled with the comparatively clearer documentation of Kubernetes overall.

Here’s what the inter-service architecture looks like now with Minikube (Backend-in-a-Box):

Six microservices running in one Minikube node

Docker containers with your app code go into a Pod, which gets wrapped by a Service that defines:

  • an internal DNS name entry for service lookup
  • TCP port(s) of your choosing, to expose for access

This port can optionally be publicly exposed, as is the case with our API and Events services in the diagram above, so that these particular services can be accessed from outside the Minikube node.

(The relationship between containers, Pods and Services is quite well explained in the Kubernetes documentation, including multi-container Pods, and multi-Pod Services, if that’s what your ecosystem requires.)
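As a sketch of what a multi-container Pod might look like (the names and image tags here are illustrative, not from our actual setup): containers in the same Pod share a network namespace, so the app container can reach its database sidecar at localhost.

apiVersion: v1
kind: Pod
metadata:
  name: events-with-db
  labels:
    name: events-with-db
spec:
  containers:
  - name: app                  # application container
    image: example/app:latest
    ports:
    - containerPort: 4000
  - name: db                   # database container, reachable from "app" at localhost:5432
    image: postgres:9.6
    ports:
    - containerPort: 5432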

Our services primarily interact with each other via HTTP, and so within Minikube, the Users service can talk to the Solver service merely by sending HTTP requests to http://solver. Perfect.
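You can see this resolution in action by opening a shell in one Pod and calling another Service by name. The Pod name and port below are placeholders for illustration:

kubectl exec -it <users-pod-name> -- curl http://solver:4000/

The kube-dns add-on registers every Service under its name (within the same namespace), so no IP addresses ever need to be hard-coded.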

To get started

You’ll need:

  • A virtual machine ecosystem. We use VirtualBox, which is the default VM driver for Minikube anyway.
  • Your microservices running in containers.
  • Kubernetes CLI installed. (brew install kubernetes-cli on a Mac)
  • Minikube installed. (brew cask install minikube on a Mac)

Start Minikube via:

minikube start

This creates a new virtual machine in VirtualBox, which is accessible in a very similar way to docker-machine.
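One handy consequence of that similarity: like docker-machine, Minikube can expose its Docker daemon to your local shell, so you can build images directly onto the node and have imagePullPolicy: IfNotPresent pick them up without pushing to a registry. For example, with our Events image:

eval $(minikube docker-env)
docker build -t door2door/drt-events:latest .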

With kubectl cluster-info, you’ll see the 3 canned Kubernetes services that come with Minikube, including the DNS service, ready for new registrations.
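The add-ons themselves, including the DNS Service, live in the kube-system namespace and can be listed with a standard kubectl query:

kubectl get services --namespace=kube-system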

Minikube is ready for your microservices

Configuration of each service can be done via Kubernetes CLI commands (kubectl), or via JSON/YAML config files. Here’s the relevant Kubernetes config for our Events service and Pod, which runs a container from our managed Docker image for the application.

apiVersion: v1
kind: Pod
metadata:
  labels:
    name: events
  name: events
spec:
  containers:
  - name: drt-events
    image: door2door/drt-events:latest
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 4000
    volumeMounts:
    - mountPath: /usr/src/app/.shared
      name: shared-env       # the matching "volumes:" definition is omitted here
    command: ["/bin/bash", "-c", "node event-server"]
---
apiVersion: v1
kind: Service
metadata:
  name: events
  labels:
    name: events
spec:
  type: NodePort
  ports:
  - port: 4000
    targetPort: 4000
    name: events
  selector:
    name: events
To create the pod & service in the running Minikube node:

kubectl create -f path/to/yaml/config
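You can then confirm that both objects came up:

kubectl get pods
kubectl get services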

Now the Events service is accessible to my clients at port 4000 on the IP address of the Minikube node, which you can retrieve with:

minikube service events --url
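That URL can be used directly; for example (the /health endpoint below is hypothetical, standing in for whatever route your service exposes):

curl $(minikube service events --url)/health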

Minikube also comes with a really handy dashboard (opened via minikube dashboard) that makes it easy to check on all your Services and Pods in the Minikube “cluster”.

Minikube’s Dashboard

Clearly there are several implementation details that I’ve left out of this introductory article. For example, how a database Pod can co-exist with an app Pod behind the same Service, making each Service a truly self-contained microservice. Also, in the next post, I’ll describe some of the details of how our various Minikube’d microservices share common configuration and environment variables between them via ConfigMaps.
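As a teaser, a ConfigMap is just a named bag of key/value pairs that Pods can consume as environment variables. A minimal sketch (the names and values here are made up for illustration) might look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-config
data:
  API_HOST: http://api:3000
# A container in a Pod spec can then reference a value:
#   env:
#   - name: API_HOST
#     valueFrom:
#       configMapKeyRef:
#         name: shared-config
#         key: API_HOST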

Google’s done a really good job with Kubernetes, and obviously they know the challenges of web-scale better than anyone. But it’s nice to see tools like Minikube come out to help everyone get started with their excellent ecosystem.
