
Modular Django without the circus

Most teams don't need microservices. They need Django apps with hard boundaries, versioning, and upgrade paths that won't wreck production.

the wrong fork

A lot of teams treat architecture as a forced choice: keep the monolith forever, or break it into services and spend a year inventing deployment problems they didn't have before. I've watched this happen with perfectly normal SaaS products: a CRM with maybe 40 models, some Celery workers, a customer portal, a billing flow, nothing exotic. Six months later they're debugging gRPC timeouts between two services that still share the same PostgreSQL cluster and are deployed from the same GitHub Actions workflow.

That whole middle ground is badly underused. Django already gives you an app system, migration namespaces, pluggable settings, and enough import machinery to create real module boundaries if you stop treating apps/ as a junk drawer. The useful move is to build a modular monolith where core domains are shipped as versioned Python packages, installed as wheels, wired through explicit interfaces, and validated in CI as if they were third-party dependencies. Same process space, same transaction boundary, same debugger, vastly better discipline.

This matters because most product teams don't lose time on CPU saturation or network scaling; they lose time on change coupling. Sales touches billing, billing touches accounts, accounts touches notifications, and then one innocent field rename in models.py explodes across half the repo because nobody knows which imports are safe. At Steezr we've seen this pattern in internal ERP systems and document processing backends, especially in codebases that started clean and then grew sideways under delivery pressure. Microservices won't save that codebase. Stronger boundaries will.

The thesis is simple: package the domains that deserve longevity, keep them in-process, version them aggressively, and force upgrades through CI. You get most of the engineering upside people usually reach for microservices to get, without signing up for service discovery, distributed tracing, idempotency bugs, and all the other tax you will absolutely pay.

package the domain

A replaceable Django app has to be installable outside the main repo; otherwise you're just role-playing modularity. Put each serious domain in its own Python package with a pyproject.toml, build wheels with python -m build, publish them to a private index like GitHub Packages or an internal devpi, and install them in the host project like any other dependency.

A real package layout looks like this:

```
billing-app/
├── pyproject.toml
├── src/
│   └── billing_app/
│       ├── __init__.py
│       ├── apps.py
│       ├── models.py
│       ├── api.py
│       └── migrations/
└── tests/
```

Use the src/ layout. It prevents accidental imports from the working tree and catches packaging mistakes early. In pyproject.toml, keep dependencies explicit:

```toml
[build-system]
requires = ["setuptools>=69", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "billing-app"
version = "2.4.1"
requires-python = ">=3.12"
dependencies = [
  "Django>=5.0,<5.2",
  "psycopg[binary]>=3.1,<3.3"
]

[tool.setuptools.packages.find]
where = ["src"]
```

Then make the Django app config boring and stable:

```python
# src/billing_app/apps.py
from django.apps import AppConfig

class BillingAppConfig(AppConfig):
    name = "billing_app"
    verbose_name = "Billing"
    default_auto_field = "django.db.models.BigAutoField"
```

Host project:

```python
INSTALLED_APPS = [
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "billing_app.apps.BillingAppConfig",
]
```

Nothing fancy yet. The important part is psychological as much as technical: once the code is versioned and published independently, people stop reaching across boundaries casually. They feel the cost of changing a public surface, which is exactly what you want. If renaming a function means bumping a package version and updating a changelog, the team suddenly gets a lot less reckless with internal coupling.

Keep the public surface tiny. Give each package an api.py or services.py that the outside world is allowed to import, and treat everything else as private. If another app imports billing_app.models.Invoice directly, maybe that's acceptable. If it imports billing_app.tasks._rebuild_invoice_cache, your boundary is already dead.

imports need discipline

Most Django modularity efforts fail at import time, not at runtime. Somebody imports a model from another app inside models.py, then the other app imports back into a signal handler, Django app loading gets weird, and eventually you hit django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. or the even more annoying partial initialization bug where imports succeed locally and fail under Gunicorn with ImportError: cannot import name 'X' from partially initialized module.

The fix is strict import direction and explicit integration points. Domain packages shouldn't import each other freely. They should depend inward on shared contracts, or outward through a thin host-owned integration layer. In practice, that means using strings for foreign keys where you must cross app boundaries:

```python
customer = models.ForeignKey("accounts.Account", on_delete=models.PROTECT)
```

and resolving models lazily when needed:

```python
from django.apps import apps

Account = apps.get_model("accounts", "Account")
```

I don't love apps.get_model; it's less readable than direct imports, but it beats circular imports and startup fragility. Use it at the edges, not everywhere. A better pattern for non-ORM interaction is to define a narrow protocol in one package and implement it in the host project.

For example, in billing_app/api.py:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class ChargeRequest:
    account_id: int
    amount: Decimal
    currency: str

class PaymentGateway:
    def charge(self, req: ChargeRequest) -> str:
        raise NotImplementedError
```

Then the host wires Stripe, Adyen, or a fake gateway in settings:

```python
BILLING_PAYMENT_GATEWAY = "project.payments.StripeGateway"
```

Inside the package, load it with django.utils.module_loading.import_string. That keeps the package replaceable and avoids hidden imports into host internals. The same idea applies to signals: avoid broad global receivers when a direct service call or explicit hook will do. Django signals are convenient and sloppy at the same time, which is why they spread coupling while pretending to reduce it.
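What the lazy loading might look like inside the package. To keep this sketch runnable without a configured Django settings module, the dotted path is passed in as a parameter and import_string is reimplemented with stdlib importlib; the real package would read settings.BILLING_PAYMENT_GATEWAY and call django.utils.module_loading.import_string directly.

```python
import importlib

def import_string(dotted_path: str):
    """Stdlib stand-in for django.utils.module_loading.import_string:
    resolves 'pkg.module.Name' to the named attribute."""
    module_path, _, attr = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

def get_payment_gateway(dotted_path: str):
    """In billing_app this would read settings.BILLING_PAYMENT_GATEWAY;
    it's a parameter here so the sketch has no Django dependency."""
    gateway_cls = import_string(dotted_path)
    return gateway_cls()
```

The host passes something like "project.payments.StripeGateway" through settings, and the package never imports host internals by name.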

migrations are the real boundary

If you want independently versioned apps, migration ownership has to be non-negotiable. Each package owns its own migration directory, and no package writes migrations for another package's models. Seems obvious, gets violated constantly.

Django already namespaces migrations by app label, which helps, though dependency edges still matter. A billing app can depend on accounts.0003_add_status, sure, though keep those dependencies sparse because they turn upgrades into a graph problem. If support_app depends on a billing migration, and billing depends on accounts, and accounts decides to reference support for audit metadata, you've built a knot. Untangling that during a Friday deploy is miserable.

Aim for one-way schema dependencies. Shared identifiers beat deep relational entanglement. Sometimes that means storing account_id as an integer and validating through an application service instead of adding a hard foreign key across domains. Purists hate this. I don't care. If the domain boundary matters more than relational purity, the extra lookup is the cheaper trade.
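A sketch of that trade: billing stores a bare integer and validates through an injected lookup instead of a cross-domain foreign key. All names here are illustrative, and account_exists stands in for a thin callable owned by the accounts package:

```python
class UnknownAccount(Exception):
    """Raised when a stored account_id no longer resolves."""

def attach_account(invoice: dict, account_id: int, account_exists) -> dict:
    """account_exists is injected by the host, e.g. something like
    lambda pk: Account.objects.filter(pk=pk).exists(). Billing keeps
    only the integer, so there is no schema edge between the domains."""
    if not account_exists(account_id):
        raise UnknownAccount(account_id)
    invoice["account_id"] = account_id
    return invoice
```

You lose database-enforced referential integrity and accept the extra lookup; what you buy is a migration graph where billing and accounts can evolve independently.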

For host projects, pin migration modules only when you have a very good reason. Package defaults are fine:

```python
MIGRATION_MODULES = {
    # leave packaged apps alone unless you're doing something unusual
}
```

What you do need is upgrade rehearsal. Every package release should be tested against the previous released version plus real migration progression. In CI, create a matrix that installs billing-app==2.3.0, runs migrations on a temp PostgreSQL 16 instance, seeds fixture data, upgrades to the current wheel, and runs python manage.py migrate again. If a migration explodes with psycopg.errors.DependentObjectsStillExist: cannot drop column amount because view invoice_summary depends on it, good, you found it before prod did.

Data migrations deserve extra suspicion. Keep them idempotent where possible, chunk large updates, and never assume the host database is small enough for one giant transaction. Django's migration framework makes the happy path easy and the dangerous path deceptively easy too. Senior teams treat migrations like release engineering, not like generated boilerplate.
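Stripped of the ORM, the chunked-and-idempotent shape looks roughly like this sketch, where a list of dicts stands in for a queryset; a real RunPython migration would filter and iterate in batches the same way:

```python
def backfill_invoice_numbers(rows, batch_size=1000):
    """Idempotent backfill: only rows still missing a number are touched,
    so re-running after a partial failure does no extra work. Returns the
    number of rows updated, useful for logging progress per chunk."""
    pending = [r for r in rows if r.get("number") is None]
    touched = 0
    for start in range(0, len(pending), batch_size):
        chunk = pending[start:start + batch_size]
        # In a real migration, each chunk would get its own transaction
        # instead of one giant transaction over the whole table.
        for row in chunk:
            row["number"] = f"INV-{row['id']:08d}"
            touched += 1
    return touched
```

Running it twice updates nothing the second time, which is exactly the property you want when a deploy dies halfway through.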

version like you mean it

Replaceable apps live or die on versioning discipline. If every release is 0.1.x forever and nobody knows what a minor bump means, the package is just a folder with extra paperwork. Use semver, mostly. Major for breaking public API or migration behavior that requires operator action, minor for additive features, patch for bug fixes that don't force code changes in consumers.

You also need an upgrade contract, not vague tribal knowledge. Each package should ship a CHANGELOG.md with entries like:

```md
## 3.0.0
- Removed `billing_app.api.create_invoice`
- Added `InvoiceService.create()`
- Migration `0018_backfill_invoice_number` rewrites existing rows, expect longer deploys on large tables
- Requires host setting `BILLING_PAYMENT_GATEWAY`
```

Then enforce it in CI. We use GitHub Actions for this kind of thing because it's ubiquitous and boring, which is perfect. One workflow builds the wheel, installs it into a sample host project, runs contract tests, then runs an upgrade matrix across selected prior versions.

A stripped-down job looks like this:

```yaml
jobs:
  upgrade-test:
    runs-on: ubuntu-24.04
    strategy:
      matrix:
        from_version: ["2.2.0", "2.3.1", "2.4.0"]
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports: ["5432:5432"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install "billing-app==${{ matrix.from_version }}" -r host/requirements.txt
      - run: cd host && python manage.py migrate && python manage.py loaddata seed.json
      # build the current wheel into dist/ before upgrading to it
      - run: pip install build && python -m build
      - run: pip install --force-reinstall dist/billing_app-*.whl
      - run: cd host && python manage.py migrate && pytest tests/contracts -q
```

That last line matters. Contract tests should assert the public behavior the host depends on, not package internals. Create invoice, apply payment, emit webhook payload shape, preserve status transitions. If you only run the package's own unit tests, you'll miss the integration breakages that actually hurt during upgrades.
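A contract test in that spirit, written against the ChargeRequest/PaymentGateway surface from earlier. FakeGateway is a hypothetical host-side test double; the types are duplicated here so the sketch is self-contained, where the real test would import them from billing_app.api:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class ChargeRequest:
    account_id: int
    amount: Decimal
    currency: str

class PaymentGateway:
    def charge(self, req: ChargeRequest) -> str:
        raise NotImplementedError

class FakeGateway(PaymentGateway):
    """Records every charge and returns a deterministic reference."""
    def __init__(self):
        self.charges = []

    def charge(self, req: ChargeRequest) -> str:
        self.charges.append(req)
        return f"ch_{req.account_id}_{len(self.charges)}"

def test_charge_returns_reference_and_preserves_request():
    gw = FakeGateway()
    ref = gw.charge(ChargeRequest(account_id=42, amount=Decimal("9.99"), currency="EUR"))
    assert ref == "ch_42_1"
    assert gw.charges[0].amount == Decimal("9.99")
```

The test asserts the shape of the public behavior, not how the package computes it, so it survives internal refactors and fails loudly on real contract breaks.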

Once teams have this setup, upgrades stop feeling scary. They become routine maintenance instead of archaeology.

where this breaks

This pattern has limits, and pretending otherwise is how architecture posts turn into fiction. If different domains need independent deploy cadence, independent scaling characteristics, or different trust boundaries, packages inside one Django process won't cover that. A document OCR pipeline chewing CPU on separate worker pools has different operational needs than a customer-facing CRUD app. Split that by runtime concern if you must. Same for regulated data boundaries where one component genuinely can't share process memory with another.

Most teams aren't there. They have one product, one database, one deployment train, and a codebase that's becoming sticky because every part can reach every other part. Modular packages solve that exact problem well.

The hard part is cultural. Engineers love the freedom of direct imports until the bill arrives. You'll need code review rules, import-linting, and a willingness to reject convenient shortcuts. Tools help. import-linter is good for declaring contracts such as billing_app may not import customer_portal. ruff can catch a lot of nonsense quickly. A simple Architecture Decision Record saying "cross-package access goes through api.py only" prevents endless debate.
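An import-linter contract for that rule might look like the following .importlinter file. The package names mirror the examples above; check the exact option names against the import-linter documentation for the version you install:

```ini
[importlinter]
root_packages =
    billing_app
    customer_portal

[importlinter:contract:billing-isolation]
name = billing_app may not import customer_portal
type = forbidden
source_modules = billing_app
forbidden_modules = customer_portal
```

Run it as a CI step next to your tests, and the boundary stops being a code-review opinion and becomes a build failure.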

One more thing: don't package every app on day one. That's cargo cult modularity. Start with the domains that have churn, clear ownership, and a reasonable chance of reuse across products. Billing is a classic candidate. Accounts sometimes is. A thin marketing site app obviously isn't. We build a lot of SaaS systems at Steezr, mostly with Django and Next.js, and the teams that stay fast after year two are the ones that introduce boundaries where change is expensive, not where architecture diagrams look impressive.

Microservices still have their place. Just earn them. A wheel file and a strict import policy will carry you much further than most teams expect.

Written by Johnny Unar