Configuring local DNS for microservice routing
Cross-service connectivity failures in local Docker Compose environments frequently stem from non-deterministic name resolution. When microservices attempt to communicate using arbitrary FQDNs, Docker's embedded resolver has no records for them and forwards the queries upstream, where they return NXDOMAIN; the resulting cascading timeouts break integration test suites and stall developer onboarding and local-environment automation workflows. This guide provides a deterministic, production-parity approach to configuring local DNS for microservice routing, ensuring routing behavior matches Kubernetes service discovery without introducing external dependencies.
Symptom/Error: Service Discovery Failures in Local Compose
When a frontend or API consumer attempts to reach a downstream service, requests fail with DNS resolution errors rather than standard connection refusals. The exact error signature typically manifests as curl: (6) Could not resolve host or gRPC UNAVAILABLE: DNS resolution failed.
Diagnostic Command:
docker compose exec frontend curl -sSI http://api-gateway.internal:8080/health || echo 'DNS resolution failed'
Expected Terminal Output:
curl: (6) Could not resolve host: api-gateway.internal
DNS resolution failed
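curl's exit code distinguishes a resolution failure from a crashed service. A small helper (hypothetical, not part of any tooling named in this guide) can make pre-flight scripts report the right failure mode:

```shell
# Sketch: map curl exit codes to failure classes so DNS errors
# aren't misread as service crashes. Codes per curl's documented list:
# 6 = could not resolve host, 7 = connection refused, 28 = timeout.
classify_curl_exit() {
  case "$1" in
    0)  echo "ok" ;;
    6)  echo "dns" ;;       # name resolution failed -> fix the resolver
    7)  echo "refused" ;;   # name resolved, but the service is down
    28) echo "timeout" ;;   # name resolved, but packets are being dropped
    *)  echo "other" ;;
  esac
}

classify_curl_exit 6   # prints "dns"
```

In a wrapper script, run `curl -sI ...`, capture `$?`, and branch on the classification instead of treating every non-zero exit as the same failure.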
Resolution Steps:
- Verify container-to-container name resolution using the embedded resolver:
docker compose exec frontend nslookup api-gateway 127.0.0.11
- Inspect the response for EAI_AGAIN (temporary failure) or NXDOMAIN (non-existent domain). These indicate missing local resolver entries or TLD interception.
- If resolution fails, bypass the embedded resolver temporarily by injecting extra_hosts entries to confirm network connectivity before implementing a permanent DNS fix.
Prevention:
Implement automated pre-flight checks in your Makefile or package.json scripts that validate DNS resolution before starting integration test suites. Fail fast if nslookup returns NXDOMAIN against expected service FQDNs.
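The pre-flight check described above can be sketched as a POSIX shell helper. The service list and the bare use of getent are assumptions to adapt to your stack; inside a container you would wrap the lookup with docker compose exec:

```shell
# Sketch: fail fast when any expected FQDN does not resolve.
# Takes a space-separated list of names; returns non-zero if any lookup fails.
preflight_dns() {
  fail=0
  for fqdn in $1; do
    if getent hosts "$fqdn" > /dev/null 2>&1; then
      echo "OK   $fqdn"
    else
      echo "FAIL $fqdn"
      fail=1
    fi
  done
  return "$fail"
}

# "localhost" always resolves; names under the reserved .invalid TLD never do.
preflight_dns "localhost" || echo "pre-flight failed"
```

Wire this into a Makefile target or a package.json pretest script so the integration suite refuses to start against a broken resolver.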
Root Cause: Docker Embedded DNS Limitations & Custom TLD Conflicts
Docker's default 127.0.0.11 embedded DNS server only resolves container names and aliases defined within the active Compose network; anything else is forwarded to the host's upstream resolver, so arbitrary FQDNs, custom TLDs, and external domain routing behave unpredictably. Additionally, host /etc/hosts entries are explicitly isolated from container namespaces and will not propagate.
Conflicts frequently arise when teams use .local or .internal TLDs for local routing. macOS and Linux systems often intercept .local via mDNS/Bonjour, while corporate DNS forwarders may hijack .internal queries, causing unpredictable resolution behavior across developer machines.
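The interception problem can be caught before it bites. A hypothetical helper that flags hostnames under OS-reserved suffixes illustrates the check; the suffix list is an assumption to extend for your environment (e.g. TLDs your corporate forwarder hijacks):

```shell
# Sketch: return 0 (true) if a hostname ends in a suffix that host OS
# resolvers may intercept before Docker ever sees the query.
# .local is claimed by mDNS/Bonjour; .localhost is loopback-reserved (RFC 6761).
tld_is_intercepted() {
  case "$1" in
    *.local|*.localhost) return 0 ;;
    *) return 1 ;;
  esac
}

tld_is_intercepted "api-gateway.local" && echo "pick a different TLD"
```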
Diagnostic Command:
docker network inspect myapp_default --format '{{json .Options}}' | jq '.dns'
Expected Terminal Output:
null
(A null result confirms the network defines no custom driver options. Docker has no network-level DNS setting, so containers on this network fall back to the default 127.0.0.11 resolver unless a per-service dns: override is set.)
Resolution Steps:
- Standardize on .docker.internal or .svc.cluster.local suffixes across all docker-compose.yml files to avoid collision with host OS mDNS/Bonjour services.
- Avoid relying on host /etc/hosts for container routing. Docker explicitly isolates network namespaces; host-level modifications will not affect inter-container traffic.
- Deploy a lightweight, authoritative DNS sidecar to handle custom TLD resolution deterministically.
Prevention: Enforce a naming convention policy in your platform documentation. Require all local service definitions to use a reserved, non-routable TLD that bypasses OS-level resolvers and corporate forwarders.
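Such a naming policy can be enforced mechanically. A minimal lint sketch that CI could run over every compose file; the forbidden-suffix pattern and file argument are assumptions:

```shell
# Sketch: fail if a compose file references hostnames under a forbidden TLD.
# Flags .local (mDNS collision); extend the pattern for TLDs your DNS hijacks.
lint_compose_tlds() {
  if grep -nE '[a-zA-Z0-9-]+\.local\b' "$1"; then
    echo "forbidden TLD in $1" >&2
    return 1
  fi
  return 0
}

# Example run against a throwaway file with a bad hostname:
printf 'services:\n  api:\n    extra_hosts:\n      - "auth.local:172.20.0.3"\n' > /tmp/demo-compose.yml
lint_compose_tlds /tmp/demo-compose.yml || echo "lint failed as expected"
```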
Step-by-Step Fix: Implementing a Local DNS Resolver
To achieve deterministic routing, deploy a lightweight CoreDNS instance as a dedicated resolver within your Compose network. This approach mirrors production kube-dns/CoreDNS behavior and provides exact control over A/AAAA/CNAME records.
Diagnostic Command:
grep nameserver /etc/resolv.conf && docker compose run --rm dns-check dig @127.0.0.11 api-gateway.internal
Implementation Steps:
- Create a Corefile in your project root:
.:53 {
hosts {
172.20.0.3 auth-service.internal
172.20.0.4 payment-worker.internal
172.20.0.5 api-gateway.internal
fallthrough
}
forward . 127.0.0.11
log
}
- Deploy the resolver:
docker run -d --name local-dns \
  --network myapp_default \
  -p 53:53/udp \
  -v "$(pwd)/Corefile:/Corefile" \
  coredns/coredns:latest -conf /Corefile
- Pin the Compose network subnet so the resolver gets a predictable address. Note that Compose has no network-level dns key; the resolver IP is wired in per service in the next step:
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
- Override container DNS explicitly in docker-compose.yml. The embedded 127.0.0.11 resolver remains in the container's /etc/resolv.conf and forwards queries to the server listed here, so only the CoreDNS IP is needed:
services:
  api-gateway:
    dns:
      - 172.20.0.2 # CoreDNS container IP
    extra_hosts:
      - "auth-service:172.20.0.3"
- Validate propagation:
docker compose exec api-gateway getent hosts auth-service.internal
Expected Output:
172.20.0.3 auth-service.internal
For bridge network isolation, review your local network and port mapping rules to ensure UDP/53 traffic to the resolver isn't blocked by iptables DROP rules or host firewall policies.
Prevention:
Commit the Corefile and docker-compose.yml DNS overrides to version control. Enforce schema validation via docker compose config in CI pipelines to catch malformed Compose directives before merge.
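A complementary CI guard can verify that every dns: entry is a literal IPv4 address before the file merges. This is a sketch; the awk extraction assumes the block-style dns: lists shown in this guide:

```shell
# Sketch: extract values under `dns:` list keys and reject anything
# that is not a dotted-quad IPv4 literal.
validate_dns_entries() {
  bad=$(awk '
    /^[[:space:]]*dns:/      { grab = 1; next }   # enter a dns: block
    grab && /^[[:space:]]*-/ { print $2; next }   # collect list items
    { grab = 0 }                                  # any other line ends the block
  ' "$1" | grep -vE '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || true)
  if [ -n "$bad" ]; then
    echo "non-IPv4 dns entry: $bad" >&2
    return 1
  fi
  return 0
}

printf 'services:\n  api:\n    dns:\n      - 172.20.0.2\n' > /tmp/dns-ok.yml
validate_dns_entries /tmp/dns-ok.yml && echo "dns entries valid"
```

Run it against each compose file in the repository as a pre-merge step alongside docker compose config.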
Prevention/Parity Check: Ensuring Dev-to-Prod Routing Consistency
Local DNS configuration must mirror production service discovery to prevent environment-specific routing bugs. Aligning the local resolver's behavior with the cluster's service discovery ensures that developers interact with identical routing semantics, reducing drift between local, staging, and production deployments.
Diagnostic Command:
docker compose exec api-gateway sh -c 'cat /etc/resolv.conf && dig +short auth-service.internal'
Parity Validation Script:
Run this script immediately after docker compose up -d to verify deterministic resolution across all services:
#!/bin/bash
set -euo pipefail
for svc in api-gateway auth-service payment-worker; do
echo "Validating DNS for ${svc}.internal..."
if ! docker compose exec -T "${svc}" getent hosts "${svc}.internal" > /dev/null 2>&1; then
echo "FAIL: ${svc}.internal resolution failed."
exit 1
fi
done
echo "DNS parity verified across all microservices."
Rollback Command: If the custom resolver introduces latency or conflicts, revert to Docker's embedded DNS immediately:
docker compose down
docker rm -f local-dns
# Remove dns: and extra_hosts: blocks from docker-compose.yml
docker compose up -d
Prevention & Onboarding Integration:
Integrate DNS resolution tests into developer onboarding checklists. Use a devcontainer.json postCreateCommand to run the resolution checks automatically, preventing manual /etc/hosts drift. Stub Kubernetes kube-dns responses with a lightweight DNS container during local development to guarantee that service routing logic remains identical across environments.