DIP in Practice
I keep seeing the same mistake in different costumes. Someone wants to make life easier — usually by writing a bit of automation, or by centralising something that feels scattered — and the fix quietly points the dependency arrows the wrong way. Weeks later, the thing that was supposed to help is the thing everyone has to work around.
The classic version goes like this. Your team maintains an internal package. You cut a new version, and three services need to pick it up. The obvious move is to write a post-release action that opens PRs in every consumer repo:
# .github/workflows/post-release.yml (in the library repo)
name: Notify consumers

on:
  release:
    types: [published]

jobs:
  open-prs:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo: [orders-api, billing-service, frontend-app]
    steps:
      - uses: actions/checkout@v4
        with:
          repository: my-org/${{ matrix.repo }}
          token: ${{ secrets.CROSS_REPO_PAT }}
      - run: |
          sed -i "s/my-lib@.*/my-lib@${{ github.event.release.tag_name }}/" package.json
          git checkout -b bump-my-lib-${{ github.event.release.tag_name }}
          git commit -am "bump my-lib to ${{ github.event.release.tag_name }}"
          git push -u origin HEAD
          gh pr create --title "Bump my-lib" --body "Auto-opened by library release."
        env:
          GH_TOKEN: ${{ secrets.CROSS_REPO_PAT }}
It feels productive. But look at what just happened. The library repo now has to know who uses it, hold credentials to push into those repos, and somehow handle breakages it cannot reproduce locally. The library has started depending on its consumers.
graph TD
lib[my-lib] -->|"opens PRs in"| A[orders-api]
lib -->|"opens PRs in"| B[billing-service]
lib -->|"opens PRs in"| C[frontend-app]
style lib fill:#1e293b,stroke:#f87171,color:#f8fafc
style A fill:#1e293b,stroke:#64748b,color:#94a3b8
style B fill:#1e293b,stroke:#64748b,color:#94a3b8
style C fill:#1e293b,stroke:#64748b,color:#94a3b8
That is exactly what the Dependency Inversion Principle tells you not to do.
The fix is to flip the arrow. Each consumer watches the registry (or runs Dependabot / Renovate) and decides on its own schedule when to upgrade. The library publishes a version and forgets who is listening:
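On the consumer side, that watch can be as small as a few lines of config living in each consumer's own repo. A Dependabot sketch (repo name, ecosystem, and schedule are illustrative; each consumer picks its own):

```yaml
# orders-api/.github/dependabot.yml (owned by the consumer, not the library)
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
```

When my-lib publishes, the bump PR is opened from inside orders-api, on orders-api's schedule. The library never learns who consumed it.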
graph TD
lib[my-lib] -->|publishes to| R[Package Registry]
A[orders-api] -->|watches| R
B[billing-service] -->|watches| R
C[frontend-app] -->|watches| R
style lib fill:#1e293b,stroke:#4ade80,color:#f8fafc
style R fill:#1e293b,stroke:#4ade80,color:#f8fafc
style A fill:#1e293b,stroke:#64748b,color:#94a3b8
style B fill:#1e293b,stroke:#64748b,color:#94a3b8
style C fill:#1e293b,stroke:#64748b,color:#94a3b8
DIP, the D in SOLID, is usually explained with class diagrams and abstract interfaces — and it was coined in that context. But the underlying heuristic transfers cleanly to distributed systems, build pipelines, and infrastructure: high-level modules should not depend on low-level ones; both should depend on abstractions. Or, more bluntly —
If your upstream component is keeping a list of the downstream things that use it, the arrows are pointing the wrong way.
That “list” is the tell. It might be a list of consumer repos, a list of caller services, a list of webhook subscribers, a list of app names inside a build script, a list of environments inside a Terraform module. Whenever I spot one, I know there is probably an inversion hiding underneath.
The rest of this post is a tour of that same mistake, wearing seven different uniforms.
1. The shared database schema
A users table has a full_name column. Auth reads it, billing reads it, a nightly analytics job reads it:
-- The original schema
CREATE TABLE users (
  id         UUID PRIMARY KEY,
  full_name  TEXT NOT NULL,
  email      TEXT NOT NULL UNIQUE,
  created_at TIMESTAMPTZ DEFAULT now()
);
The team that owns users decides to split the column into first_name and last_name. They ship the migration on a Friday afternoon:
-- Friday afternoon migration
ALTER TABLE users ADD COLUMN first_name TEXT;
ALTER TABLE users ADD COLUMN last_name TEXT;

UPDATE users SET
  first_name = split_part(full_name, ' ', 1),
  last_name  = split_part(full_name, ' ', 2);

ALTER TABLE users DROP COLUMN full_name;
Over the weekend, billing’s invoice PDFs come out blank:
# billing-service/invoice.py
class InvoiceGenerator:
    def generate(self, user_id: str) -> Invoice:
        row = db.execute("SELECT full_name FROM users WHERE id = %s", user_id)
        #                        ^^^^^^^^^ column no longer exists
        return Invoice(
            customer_name=row["full_name"],  # KeyError or None
            ...
        )
Nobody did anything wrong. The schema was acting as a public API with no contract, and the team that owned it had no real way to know who depended on what shape. They had accidentally become the change-management board for the whole company.
graph TD
Auth -->|"direct column access"| T[users table]
Billing -->|"direct column access"| T
Analytics -->|"direct column access"| T
style T fill:#1e293b,stroke:#f87171,color:#f8fafc
style Auth fill:#1e293b,stroke:#64748b,color:#94a3b8
style Billing fill:#1e293b,stroke:#64748b,color:#94a3b8
style Analytics fill:#1e293b,stroke:#64748b,color:#94a3b8
That is the inversion: the schema, which should be the low-level detail, ended up being something every other service directly binds to.
The honest fix is boring: introduce a thin contract between the table and the readers. A database view that presents a stable shape while the underlying table evolves freely:
-- The stable contract: a view
CREATE VIEW users_v1 AS
SELECT
  id,
  first_name || ' ' || last_name AS full_name,
  first_name,
  last_name,
  email,
  created_at
FROM users;
The old column stays (as a computed field in the view) until everyone has migrated, and the schema is free to evolve behind it.
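A quick way to see the contract at work, sketched here with SQLite standing in for Postgres (SQLite also supports views and || concatenation; the columns are trimmed for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT PRIMARY KEY, first_name TEXT, last_name TEXT)")
conn.execute(
    """CREATE VIEW users_v1 AS
       SELECT id, first_name || ' ' || last_name AS full_name, first_name, last_name
       FROM users"""
)
conn.execute("INSERT INTO users VALUES ('u1', 'Ada', 'Lovelace')")

# Billing's query is unchanged: it still asks for full_name, now via the view.
row = conn.execute("SELECT full_name FROM users_v1 WHERE id = ?", ("u1",)).fetchone()
print(row[0])  # Ada Lovelace
```

The table has no full_name column at all, yet every reader of the view is none the wiser.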
graph TD
T[users table] --- V[users_v1 view]
Auth -->|reads| V
Billing -->|reads| V
Analytics -->|reads| V
style T fill:#1e293b,stroke:#4ade80,color:#f8fafc
style V fill:#1e293b,stroke:#4ade80,color:#f8fafc
style Auth fill:#1e293b,stroke:#64748b,color:#94a3b8
style Billing fill:#1e293b,stroke:#64748b,color:#94a3b8
style Analytics fill:#1e293b,stroke:#64748b,color:#94a3b8
2. Service-to-service shapes
The checkout service calls inventory via GET /items/:id and reads response.stock:
// GET /items/abc-123 (v1 response)
{
  "id": "abc-123",
  "name": "Widget",
  "stock": 42
}
Six months later, inventory wants to return per-warehouse stock as an object. They cannot, because checkout's client will silently misread the new shape:
// checkout-service/inventory_client.go
type Item struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Stock int    `json:"stock"` // expects a number; an object here fails to decode
}

func (c *Client) GetItem(id string) (*Item, error) {
	resp, _ := http.Get(c.baseURL + "/items/" + id) // error ignored for brevity
	defer resp.Body.Close()
	var item Item
	json.NewDecoder(resp.Body).Decode(&item) // decode error ignored: silent zero value if the shape changes
	return &item, nil
}
So inventory adds /v2/items/:id, and now they maintain both forever. Five more callers appear, each pinned to a slightly different shape, and inventory’s “simple” endpoint becomes a museum of other teams’ assumptions:
// What inventory *wants* to return
{
  "id": "abc-123",
  "name": "Widget",
  "stock": {
    "us-east-1": 30,
    "eu-west-1": 12
  }
}
The tell here is subtle: there is no explicit list of callers, but inventory behaves as if there is one. Every deprecation discussion turns into an archaeology project — “who still hits v1?”, “can we sunset the flat stock field?”, “let’s grep the other repos.”
A shared contract — a protobuf schema that both sides depend on — pulls the arrows back into line:
// contracts/inventory/v1/item.proto
syntax = "proto3";

package inventory.v1;

message Item {
  string id = 1;
  string name = 2;
  StockInfo stock = 3;
}

message StockInfo {
  int32 total = 1;                      // total across all warehouses
  map<string, int32> by_warehouse = 2;  // new granular data
}
This is a new contract — existing callers migrate to it rather than getting a silent drop-in replacement. The point is not magic backward compatibility; it is that evolution is coordinated through the schema, not through ad-hoc archaeology across every consumer repo.
Inventory evolves the contract, not the callers. Callers depend on the contract, not on inventory’s internal shape.
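To make that concrete, here is a plain-dict sketch of two callers reading the same contract (dicts stand in for the protobuf-generated classes; field names follow the .proto above):

```python
# Plain-dict stand-in for the generated inventory.v1 types; real callers
# would use the protobuf-generated classes instead of raw dicts.
item = {
    "id": "abc-123",
    "name": "Widget",
    "stock": {"total": 42, "by_warehouse": {"us-east-1": 30, "eu-west-1": 12}},
}

# A caller that only wants the flat number reads the aggregate field...
flat_stock = item["stock"]["total"]

# ...while a caller that needs granularity reads the map. Both depend on
# the contract's field names, neither on inventory's internal storage.
eu_stock = sum(
    count
    for warehouse, count in item["stock"]["by_warehouse"].items()
    if warehouse.startswith("eu-")
)
print(flat_stock, eu_stock)  # 42 12
```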
graph TD
I[Inventory Service] -->|implements| C[inventory.v1.proto]
CH[Checkout] -->|depends on| C
SH[Shipping] -->|depends on| C
AN[Analytics] -->|depends on| C
style I fill:#1e293b,stroke:#4ade80,color:#f8fafc
style C fill:#1e293b,stroke:#4ade80,color:#f8fafc
style CH fill:#1e293b,stroke:#64748b,color:#94a3b8
style SH fill:#1e293b,stroke:#64748b,color:#94a3b8
style AN fill:#1e293b,stroke:#64748b,color:#94a3b8
3. Webhooks with a Rolodex
Payments fires a charge.succeeded event, and in its config is a hardcoded list:
# payments-service/config/webhooks.yml
events:
  charge.succeeded:
    targets:
      - url: https://orders.internal/hooks/charge
        secret: ${ORDERS_WEBHOOK_SECRET}
      - url: https://analytics.internal/hooks/charge
        secret: ${ANALYTICS_WEBHOOK_SECRET}
      - url: https://fraud.internal/hooks/charge
        secret: ${FRAUD_WEBHOOK_SECRET}
      - url: https://emails.internal/hooks/charge
        secret: ${EMAILS_WEBHOOK_SECRET}
A new loyalty team spins up and needs the event. They file a ticket against payments. Payments is now a routing table; when fraud is down, retries pile up, queues grow, and charges stall behind a service payments should not even know about.
graph TD
P[Payments] -->|"POST /hooks/charge"| O[Orders]
P -->|"POST /hooks/charge"| A[Analytics]
P -->|"POST /hooks/charge"| F["Fraud (down)"]
P -->|"POST /hooks/charge"| E[Emails]
P -.->|"???"| L[Loyalty - needs a ticket]
style P fill:#1e293b,stroke:#f87171,color:#f8fafc
style F fill:#451a1a,stroke:#f87171,color:#f87171
style L fill:#1e293b,stroke:#eab308,color:#eab308
style O fill:#1e293b,stroke:#64748b,color:#94a3b8
style A fill:#1e293b,stroke:#64748b,color:#94a3b8
style E fill:#1e293b,stroke:#64748b,color:#94a3b8
Publishing to a topic flips this cleanly. Payments emits charge.succeeded and forgets who is listening:
# payments-service/events.py
def process_charge(charge: Charge) -> None:
    # ... process the charge ...

    # Fire and forget — no list of consumers
    topic.publish(
        event="charge.succeeded",
        data={"charge_id": charge.id, "amount": charge.amount},
    )
Consumers subscribe on their own schedule, and the list of subscribers lives in the one place that genuinely cares about it — the broker — rather than in the service that shouldn’t:
# loyalty-service/subscriptions.tf (loyalty team's own repo)
resource "google_pubsub_subscription" "charge_events" {
  name  = "loyalty-charge-succeeded"
  topic = "projects/payments/topics/charge.succeeded"

  push_config {
    push_endpoint = "https://loyalty.internal/hooks/charge"
  }
}
graph TD
P[Payments] -->|publishes| T["Topic: charge.succeeded"]
O[Orders] -->|subscribes| T
A[Analytics] -->|subscribes| T
F[Fraud] -->|subscribes| T
E[Emails] -->|subscribes| T
L[Loyalty] -->|subscribes| T
style P fill:#1e293b,stroke:#4ade80,color:#f8fafc
style T fill:#1e293b,stroke:#4ade80,color:#f8fafc
style O fill:#1e293b,stroke:#64748b,color:#94a3b8
style A fill:#1e293b,stroke:#64748b,color:#94a3b8
style F fill:#1e293b,stroke:#64748b,color:#94a3b8
style E fill:#1e293b,stroke:#64748b,color:#94a3b8
style L fill:#1e293b,stroke:#64748b,color:#94a3b8
No ticket. No merge into payments. Loyalty is live.
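The mechanics are easy to see in miniature. A toy in-process broker (an illustration only; Topic, subscribe, and publish are hypothetical stand-ins for a real broker like Pub/Sub):

```python
from collections import defaultdict
from typing import Any, Callable

class Topic:
    """Toy in-process broker: the subscriber list lives here, not in payments."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], Any]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], Any]) -> None:
        self._subs[event].append(handler)

    def publish(self, event: str, data: dict) -> None:
        for handler in self._subs[event]:
            handler(data)

topic = Topic()
received: list[dict] = []

# Loyalty registers itself; payments' code and config are untouched.
topic.subscribe("charge.succeeded", received.append)

# Payments publishes and forgets who is listening.
topic.publish("charge.succeeded", {"charge_id": "ch_1", "amount": 999})
```

The dependency arrows all point at the broker, which is exactly the shape the diagram above shows.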
4. The monorepo build that knows every app
The root build config has a giant switch:
# Makefile (repo root)
.PHONY: build
build:
ifeq ($(APP),web)
	cd apps/web && npm ci && npx next build
else ifeq ($(APP),mobile)
	cd apps/mobile && npm ci && npx expo export
else ifeq ($(APP),admin)
	cd apps/admin && npm ci && npx vite build
	npx sentry-cli sourcemaps upload ./apps/admin/dist
else ifeq ($(APP),docs)
	cd apps/docs && pip install -r requirements.txt && mkdocs build
endif
A new team wants to add a Rust service. They cannot ship until the platform team merges a PR into the root Makefile. The platform team becomes a bottleneck for every new project, and the build file becomes a bulletin board of everyone else’s quirks.
graph TD
M[Root Makefile] -->|"knows how to build"| W[web]
M -->|"knows how to build"| MO[mobile]
M -->|"knows how to build"| A[admin]
M -->|"knows how to build"| D[docs]
M -.->|"??? needs a PR"| R[new-rust-service]
style M fill:#1e293b,stroke:#f87171,color:#f8fafc
style R fill:#1e293b,stroke:#eab308,color:#eab308
style W fill:#1e293b,stroke:#64748b,color:#94a3b8
style MO fill:#1e293b,stroke:#64748b,color:#94a3b8
style A fill:#1e293b,stroke:#64748b,color:#94a3b8
style D fill:#1e293b,stroke:#64748b,color:#94a3b8
The inversion is that the build system holds the list of apps. The fix is for each app to declare how it builds — a standard target in its own directory — and the platform depends only on the abstraction "an app knows how to build itself":
# apps/web/Makefile
.PHONY: build
build:
	npm ci && npx next build

# apps/mobile/Makefile
.PHONY: build
build:
	npm ci && npx expo export

# apps/new-rust-service/Makefile <-- new team, no PR needed
.PHONY: build
build:
	cargo build --release
The root build just discovers and delegates:
# Makefile (repo root) — the platform's only job
APPS := $(wildcard apps/*/Makefile)

.PHONY: build
build:
	@for app in $(dir $(APPS)); do \
		echo "Building $$app..."; \
		$(MAKE) -C $$app build; \
	done
New team adds their directory, adds a Makefile with a build target, and they’re shipping. Zero coordination.
5. Shared CI with app-specific patches
The central ci.yml in the platform repo has if: matrix.repo == 'checkout' blocks scattered throughout:
# .github/workflows/ci.yml (platform repo)
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo: [checkout, inventory, admin-panel, docs]
    steps:
      - uses: actions/checkout@v4
        with:
          repository: my-org/${{ matrix.repo }}
      - name: Install
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Unit tests
        if: matrix.repo != 'admin-panel' # "legacy, skip tests"
        run: npm test
      - name: E2E tests
        if: matrix.repo == 'checkout' # only checkout needs Playwright
        run: npx playwright test
      - name: Upload sourcemaps
        if: matrix.repo == 'admin-panel' # one-off for admin
        run: npx sentry-cli sourcemaps upload ./dist
        env:
          SENTRY_AUTH_TOKEN: ${{ secrets.ADMIN_SENTRY_TOKEN }}
When the platform team upgrades the runner image, something breaks for one specific app and rolls back the whole pipeline. Every conditional branch is a coupling between the platform and a specific consumer.
The fix has the same shape as the build system: a reusable workflow that takes inputs, called from each app’s own repo:
# .github/workflows/reusable-ci.yml (platform repo — the skeleton)
name: Reusable CI

on:
  workflow_call:
    inputs:
      run-e2e:
        type: boolean
        default: false
      run-unit-tests:
        type: boolean
        default: true

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint
      - if: inputs.run-unit-tests
        run: npm test
      - if: inputs.run-e2e
        run: npx playwright test
# checkout/.github/workflows/ci.yml (checkout's own repo — the flesh)
name: CI

on: [push, pull_request]

jobs:
  ci:
    uses: my-org/platform/.github/workflows/reusable-ci.yml@main
    with:
      run-e2e: true
Platform owns the skeleton; apps own the flesh. No more if: matrix.repo == conditionals. No more cross-repo rollbacks. That said, the input list — run-e2e, run-unit-tests — is its own quiet roster: add a new step type and you are back to filing a platform PR to add an input. If there is a strong reason to keep CI logic centralised — enforcing org-wide security scans, managing a shared runner pool, or satisfying compliance requirements — the reusable workflow is the right tool. If the only reason is that it feels tidy, each app owning its workflow entirely is often the cleaner cut.
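For comparison, the fully app-owned cut of the admin-panel pipeline might look like this (a sketch; it reuses the step names and secret from the shared ci.yml shown earlier, and keeps admin-panel's existing behaviour of skipping unit tests):

```yaml
# admin-panel/.github/workflows/ci.yml (fully app-owned variant)
name: CI

on: [push, pull_request]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint
      - run: npx sentry-cli sourcemaps upload ./dist
        env:
          SENTRY_AUTH_TOKEN: ${{ secrets.ADMIN_SENTRY_TOKEN }}
```

The platform no longer appears anywhere in the file; the trade is that org-wide changes now require touching every repo.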
6. Terraform modules with baked-in environments
A modules/vpc/main.tf contains a hardcoded environment roster:
# modules/vpc/main.tf (the inverted version)
variable "project" {
  type = string
}

locals {
  environments = {
    dev = {
      instance_type = "t3.micro"
      az_count      = 2
    }
    staging = {
      instance_type = "t3.small"
      az_count      = 2
    }
    prod = {
      instance_type = "t3.large"
      az_count      = 3
    }
  }
}

resource "aws_vpc" "main" {
  for_each   = local.environments
  cidr_block = "10.${index(keys(local.environments), each.key)}.0.0/16"

  tags = {
    Name = "${var.project}-${each.key}"
  }
}

resource "aws_instance" "bastion" {
  for_each      = local.environments
  instance_type = each.value.instance_type
  # ...
}
A new region is funded — eu-west-1 prod goes in. Someone edits the module. The diff plan shows changes to every existing environment at once, and everyone panics, because a shared module change has just become a company-wide event.
graph TD
M["modules/vpc/main.tf"] -->|"hardcodes"| D[dev]
M -->|"hardcodes"| S[staging]
M -->|"hardcodes"| P[prod]
M -.->|"must edit module"| EU["prod-eu (new)"]
style M fill:#1e293b,stroke:#f87171,color:#f8fafc
style EU fill:#1e293b,stroke:#eab308,color:#eab308
style D fill:#1e293b,stroke:#64748b,color:#94a3b8
style S fill:#1e293b,stroke:#64748b,color:#94a3b8
style P fill:#1e293b,stroke:#64748b,color:#94a3b8
The module knows which environments exist and what each one is for. Both of those are facts that belong to the caller. The module should depend on the abstraction “an environment config” and let whoever instantiates it supply the name and the shape:
# modules/vpc/main.tf (corrected)
variable "project" {
  type = string
}

variable "environment" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "az_count" {
  type    = number
  default = 2
}

variable "cidr_block" {
  type = string
}

resource "aws_vpc" "main" {
  cidr_block = var.cidr_block

  tags = {
    Name        = "${var.project}-${var.environment}"
    Environment = var.environment
  }
}

resource "aws_instance" "bastion" {
  instance_type = var.instance_type
  # ...
}
Each environment instantiates the module on its own terms:
# environments/prod-us/main.tf
module "vpc" {
  source        = "../../modules/vpc"
  project       = "myapp"
  environment   = "prod-us"
  instance_type = "t3.large"
  az_count      = 3
  cidr_block    = "10.0.0.0/16"
}

# environments/prod-eu/main.tf — new region, no module changes needed
module "vpc" {
  source        = "../../modules/vpc"
  project       = "myapp"
  environment   = "prod-eu"
  instance_type = "t3.large"
  az_count      = 3
  cidr_block    = "10.3.0.0/16"
}
The plan for prod-eu touches nothing in prod-us. The module is stable; the callers supply the variance.
7. Tests that stub at the HTTP boundary
An order service notifies a shipping provider when an order is placed. The production code calls the provider directly:
# orders/notifications.py
import requests

def notify_shipping(order_id: str, address: dict) -> None:
    requests.post(
        "https://shipping.provider.io/api/v1/notify",
        json={"order_id": order_id, "address": address},
        headers={"Authorization": f"Bearer {settings.SHIPPING_API_KEY}"},
        timeout=5,
    )
There is no abstraction, so the tests are forced to reach down and stub at the HTTP layer:
# tests/test_notifications.py
import json

import responses

from orders.notifications import notify_shipping

@responses.activate
def test_notify_shipping_sends_correct_payload():
    responses.add(
        responses.POST,
        "https://shipping.provider.io/api/v1/notify",
        json={"status": "ok"},
        status=200,
    )

    notify_shipping("order-123", {"street": "1 Main St", "city": "NYC"})

    assert len(responses.calls) == 1
    body = json.loads(responses.calls[0].request.body)
    assert body["order_id"] == "order-123"
The test now knows the URL, the HTTP method, the auth header format, and the exact wire shape. Switch providers and every test needs new URLs. The test suite has become a roster of implementation details — the same tell, one layer down:
graph TD
T[Test] -->|"stubs at"| HTTP["requests.post + URL"]
Code[notify_shipping] -->|"calls"| HTTP
style HTTP fill:#1e293b,stroke:#f87171,color:#f8fafc
style T fill:#1e293b,stroke:#64748b,color:#94a3b8
style Code fill:#1e293b,stroke:#64748b,color:#94a3b8
The fix is to introduce an abstraction the service code depends on, and let the test inject a fake through that same seam:
# orders/ports.py
from typing import Protocol

class ShippingNotifier(Protocol):
    def notify(self, order_id: str, address: dict) -> None: ...

# orders/adapters.py
import requests

class HttpShippingNotifier:
    def notify(self, order_id: str, address: dict) -> None:
        requests.post(
            "https://shipping.provider.io/api/v1/notify",
            json={"order_id": order_id, "address": address},
            headers={"Authorization": f"Bearer {settings.SHIPPING_API_KEY}"},
            timeout=5,
        )

# orders/service.py
def process_order(order: Order, notifier: ShippingNotifier) -> None:
    # ... process the order ...
    notifier.notify(order.id, order.address)
The test provides its own implementation of the protocol — no HTTP involved:
# tests/test_order_service.py
class FakeShippingNotifier:
    def __init__(self):
        self.calls = []

    def notify(self, order_id: str, address: dict) -> None:
        self.calls.append({"order_id": order_id, "address": address})

def test_process_order_notifies_shipping():
    notifier = FakeShippingNotifier()
    process_order(Order(id="order-123", address={"street": "1 Main St"}), notifier)

    assert len(notifier.calls) == 1
    assert notifier.calls[0]["order_id"] == "order-123"
graph TD
S[process_order] -->|depends on| P[ShippingNotifier protocol]
H[HttpShippingNotifier] -->|implements| P
F[FakeShippingNotifier] -->|implements| P
T[Test] -->|injects| F
style S fill:#1e293b,stroke:#4ade80,color:#f8fafc
style P fill:#1e293b,stroke:#4ade80,color:#f8fafc
style H fill:#1e293b,stroke:#64748b,color:#94a3b8
style F fill:#1e293b,stroke:#64748b,color:#94a3b8
style T fill:#1e293b,stroke:#64748b,color:#94a3b8
The test knows nothing about HTTP. Swap providers — only the adapter changes, and the tests stay green.
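One detail that makes this pattern cheap in Python: typing.Protocol is structural, so the fake never inherits from anything. Marking the protocol @runtime_checkable (an addition to the code above, not part of it) even lets you assert conformance at runtime:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ShippingNotifier(Protocol):
    def notify(self, order_id: str, address: dict) -> None: ...

class FakeShippingNotifier:
    """Same fake as in the test above; note the absence of a base class."""

    def __init__(self) -> None:
        self.calls: list[dict] = []

    def notify(self, order_id: str, address: dict) -> None:
        self.calls.append({"order_id": order_id, "address": address})

fake = FakeShippingNotifier()
# Structural check: any object with a matching notify() satisfies the protocol.
print(isinstance(fake, ShippingNotifier))  # True
```

No registration step, no shared base class: the seam is purely the shape of the method.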
Common denominator
Looking at the seven side by side, the shape is almost comical:
| Layer | What keeps the roster | The roster |
|---|---|---|
| Database | users table schema | List of reader services |
| API | inventory endpoint | List of versioned endpoints maintained per caller’s assumed shape |
| Events | payments webhook config | List of subscriber URLs |
| Build | Root Makefile | List of app names and build commands |
| CI | Shared ci.yml | List of repo-specific conditionals |
| Infra | Terraform module | List of environment names and sizes |
| Tests | notify_shipping function | List of URLs, methods, and wire shapes |
Different layer, same mistake. The thing that should be stable and depended-upon is instead reaching outward and collecting dependencies on everything that uses it. That roster — whatever form it takes — is the clearest signal I know of that a design has been quietly inverted.
graph LR
subgraph "Inverted (wrong)"
U1[Upstream] -->|"keeps a list of"| D1[Downstream A]
U1 -->|"keeps a list of"| D2[Downstream B]
U1 -->|"keeps a list of"| D3[Downstream C]
end
subgraph "Correct"
U2[Upstream] -->|depends on| A[Abstraction]
D4[Downstream A] -->|depends on| A
D5[Downstream B] -->|depends on| A
D6[Downstream C] -->|depends on| A
end
style U1 fill:#1e293b,stroke:#f87171,color:#f8fafc
style U2 fill:#1e293b,stroke:#4ade80,color:#f8fafc
style A fill:#1e293b,stroke:#4ade80,color:#f8fafc
style D1 fill:#1e293b,stroke:#64748b,color:#94a3b8
style D2 fill:#1e293b,stroke:#64748b,color:#94a3b8
style D3 fill:#1e293b,stroke:#64748b,color:#94a3b8
style D4 fill:#1e293b,stroke:#64748b,color:#94a3b8
style D5 fill:#1e293b,stroke:#64748b,color:#94a3b8
style D6 fill:#1e293b,stroke:#64748b,color:#94a3b8
When you feel the urge to centralise or automate, pause and check which way the arrows point. If your fix requires an upstream component to keep a list of downstream things, you are not solving the problem — you are encoding it.