the repo is not the problem
Monorepos get blamed for failures that usually come from weak ownership, soft dependency rules, and CI that only checks whether code compiles on one engineer’s laptop. The repository shape matters far less than the discipline around it. We’ve inherited repos where a Next.js app imported internal files from another app with a path like ../../admin/src/lib/auth, a Go worker reached across service boundaries by depending on another service’s internal package, and package managers quietly installed four React versions because nobody pinned peer ranges with any seriousness. That repo felt cursed. It wasn’t cursed, it was ungoverned.
Our default setup at steezr is simple, and I’d rather keep it simple than chase every trendy build tool on GitHub. We use pnpm 8 workspaces for JavaScript and TypeScript packages, Turborepo 1.x for task orchestration and remote cache hits in CI, and go.work to make local Go development tolerable without turning every service into a giant ball of shared packages. The whole point is to make the fastest path also the safest path, because developers will always find the path of least resistance, especially when they’re under deadline pressure and a customer portal needs to ship by Friday.
That means a monorepo has to answer a few questions very clearly. Which packages are public within the repo, and which ones are private implementation details. Which apps can consume which shared libraries. Which teams can change base tooling. Which checks run before merge, and which failures are hard blocks. If those answers live in a Notion page, you already lost. They need to exist in package.json, pnpm-workspace.yaml, turbo.json, go.work, CODEOWNERS, lint rules, and a CI pipeline that fails loudly.
The repo won’t stay sane by accident. It stays sane because the rules are encoded where developers can’t ignore them.
pnpm with explicit edges
pnpm is still the right choice for a big mixed frontend repo because its workspace model rewards discipline instead of hiding sloppy dependency graphs behind hoisting. npm got better, Yarn still exists, Bun is fast and still a moving target for large teams, pnpm remains the one I trust when twenty packages depend on each other and I want mistakes to surface early.
Our root pnpm-workspace.yaml stays boring:
```yaml
packages:
  - "apps/*"
  - "packages/*"
  - "tooling/*"
  - "services/web-*"
```
And the root package.json does most of the policy work:
```json
{
  "name": "acme-monorepo",
  "private": true,
  "packageManager": "pnpm@8.15.7",
  "engines": {
    "node": ">=22.11.0",
    "pnpm": ">=8.15.0"
  },
  "scripts": {
    "build": "turbo run build",
    "dev": "turbo run dev --parallel",
    "lint": "turbo run lint",
    "test": "turbo run test",
    "typecheck": "turbo run typecheck",
    "check:deps": "pnpm -r exec npm-package-json-lint .",
    "check:boundaries": "pnpm -r exec depcruise src --config ../../tooling/dependency-cruiser.cjs"
  },
  "pnpm": {
    "overrides": {
      "react": "19.1.0",
      "react-dom": "19.1.0"
    },
    "packageExtensions": {
      "some-bad-package@*": {
        "peerDependencies": {
          "react": ">=18 <20"
        }
      }
    }
  }
}
```
A few opinions, stated plainly. Pin the package manager version. Pin Node. Use pnpm.overrides for repo-wide consistency when a package family has to stay aligned, especially React and @types/react during upgrades. Don’t let apps depend on transitive luck. If apps/portal uses zod, declare zod in apps/portal/package.json, even if @acme/ui also depends on it. pnpm will punish hidden coupling, which is exactly what you want.
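Concretely: if apps/portal validates forms with zod, its manifest says so, even though @acme/ui already pulls it in. A trimmed sketch, with an illustrative app name and versions:

```json
{
  "name": "@acme/portal",
  "private": true,
  "dependencies": {
    "@acme/ui": "workspace:^",
    "next": "15.1.0",
    "zod": "^3.23.8"
  }
}
```

Note the internal package comes in through the workspace protocol while external packages get explicit ranges; nothing is inherited by accident.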
For internal packages, cross-package dependencies always use the workspace protocol (workspace:^), and the shared library's own manifest looks like this:

```json
{
  "name": "@acme/ui",
  "version": "0.12.0",
  "private": false,
  "peerDependencies": {
    "react": ">=18 <20",
    "react-dom": ">=18 <20"
  },
  "dependencies": {
    "clsx": "^2.1.1"
  },
  "devDependencies": {
    "react": "19.1.0",
    "react-dom": "19.1.0",
    "typescript": "^5.6.3"
  }
}
```

The devDependencies/peerDependencies split matters. The library builds and tests against a repo-pinned React, while peer dependencies keep app-level React ownership where it belongs. If you skip this and ship React as a regular dependency instead, you'll eventually hit Invalid hook call. Hooks can only be called inside of the body of a function component. and spend an afternoon discovering two React copies in the graph.
turbo should stay boring
Turborepo is at its best when it’s acting like a predictable task graph and cache coordinator, not a magical meta-build layer that half the team only vaguely understands. I’ve seen teams stuff deployment logic, secret fetching, codegen, migrations, and weird shell wrappers into Turbo tasks until turbo run build became a small religion. Then one cache key changed, CI got slower, and nobody could explain why.
Keep the pipeline tight. Ours usually looks like this:
```json
{
  "$schema": "https://turbo.build/schema.json",
  "globalDependencies": [
    "pnpm-lock.yaml",
    "tsconfig.base.json",
    ".nvmrc",
    ".env.example"
  ],
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "inputs": ["src/**", "package.json", "tsconfig.json", "next.config.*"],
      "outputs": ["dist/**", ".next/**", "!.next/cache/**", "build/**"]
    },
    "typecheck": {
      "dependsOn": ["^typecheck"],
      "inputs": ["src/**", "package.json", "tsconfig.json"]
    },
    "lint": {
      "dependsOn": ["^lint"],
      "inputs": ["src/**", "package.json", ".eslintrc.*", "eslint.config.*"]
    },
    "test": {
      "dependsOn": ["^build"],
      "inputs": ["src/**", "test/**", "vitest.config.*", "package.json"],
      "outputs": ["coverage/**"]
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  }
}
```
Remote caching is where the monorepo starts paying rent, especially once CI is running on every pull request and half the packages haven’t changed. Vercel Remote Cache works, self-hosted cache works, both are fine. What matters is that cache misses are understandable. If your build task reads generated files that aren’t declared in inputs, your cache is lying to you. If your task writes to random temp directories outside outputs, your cache is incomplete. People then stop trusting green builds, and once that trust is gone, they start rerunning everything locally, which defeats the point.
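When a miss looks mysterious, make Turbo explain itself before touching the config. A quick sketch, using the JSON dry run that Turborepo 1.x supports:

```shell
# Print the task graph, each package's hash, and the resolved inputs
# feeding that hash, without executing any tasks.
pnpm turbo run build --dry=json > plan.json

# Capture plan.json on two commits and diff them: a hash that changed
# with no intentional input change points at an undeclared input leaking in.
```

The point is to make cache behavior observable, so "the cache is flaky" stops being an acceptable diagnosis.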
We also keep Turbo scoped to JavaScript concerns. Go services don’t pretend to be Turbo packages. They can be triggered by top-level scripts or CI matrices, and that separation reduces weird cross-runtime coupling. The repo can be one unit without every tool pretending the whole repo is its natural habitat.
go.work is a boundary tool
A lot of teams treat go.work like a convenience file for local development, which it is, though stopping there misses the bigger value. It’s also a way to say, very clearly, these are separate modules, they evolve independently, and local composition must not erase those boundaries.
A stripped-down example:
```
go 1.22.6

use (
	./services/billing
	./services/docproc
	./services/auth
	./libs/go/httpx
	./libs/go/events
)
```
Each service still has its own go.mod:
```
module github.com/acme/monorepo/services/docproc

go 1.22.6

require (
	github.com/acme/monorepo/libs/go/events v0.0.0
	github.com/jackc/pgx/v5 v5.7.2
)

replace github.com/acme/monorepo/libs/go/events => ../../libs/go/events
```

The explicit replace for the unpublished shared library is what lets this module build with the workspace turned off, which matters below.
Two rules keep this sane. First, shared Go code lives in clearly named libraries under libs/go/*, never inside another service. If services/billing/internal/pdf contains useful code and services/docproc imports it through some hacky replace directive, that’s a process failure, not a clever shortcut. Go already gives you internal package boundaries for a reason. Use them. Second, every service must build and test with its own module context in CI, not just through the workspace. Local go.work can hide missing version declarations, accidental replaces, and imports that only resolve because your laptop sees the whole repo.
We’ve seen the classic error enough times:
```
package github.com/acme/monorepo/services/auth/internal/jwt is not in std
```
Sometimes it’s a bad import path, sometimes a workspace-only assumption leaking into code that’s supposed to compile in isolation. Either way, the fix is structural. Shared code moves to a shared module, the dependency gets declared properly, and the service regains its autonomy.
This matters more as the team grows. The second you have different engineers shipping a document processing pipeline in Go while another team is maintaining a Django admin and a Next.js customer portal, shared repos need stronger mechanical sympathy, not weaker. go.work gives you local ergonomics without sacrificing module edges, if you resist the urge to turn everything into one giant Go module.
shared ui needs version discipline
Shared UI libraries are where frontend monorepos usually rot first, because everyone wants reuse and almost nobody wants to own compatibility. A button package starts small, then someone adds auth-aware navigation, then app-specific feature flags, then a form wrapper that only works with one backend contract, and suddenly @acme/ui is a junk drawer with a semver number attached.
Keep shared UI aggressively dumb. Design tokens, primitives, composable patterns, maybe a thin set of opinionated components if your design system has actual weight behind it. Business workflows stay in app packages. If a component needs to know about a tenant billing plan, it doesn’t belong in the shared library.
We support both Next.js 15 apps still on React 18 and newer apps already on React 19 by using peer ranges and strict release rules:
```json
{
  "name": "@acme/ui",
  "version": "0.12.0",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js"
    },
    "./styles.css": "./dist/styles.css"
  },
  "peerDependencies": {
    "react": ">=18 <20",
    "react-dom": ">=18 <20"
  },
  "sideEffects": [
    "**/*.css"
  ]
}
```
Versioning strategy is blunt on purpose. Patch for bug fixes with no API or markup contract change. Minor for additive components or props. Major for any breaking API, changed DOM structure that affects tests, changed CSS contract, or React baseline change. If a package is shared by multiple revenue-bearing apps, major bumps require at least one upgrade PR proving migration cost is acceptable. No hypothetical compatibility claims.
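When a major does ship, the intent should travel with the code. A hypothetical Changesets entry for @acme/ui, with a made-up component change:

```markdown
---
"@acme/ui": major
---

Button: the icon slot markup changed, which breaks DOM-structure assertions
in snapshot tests. Migration: re-record snapshots and update any CSS that
targets the old wrapper element.
```

The entry doubles as the release note and as the artifact CI can check for when peer ranges or markup contracts change.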
We also ban app imports from shared packages at the lint layer. @acme/ui can’t import next/navigation, can’t import apps/*, can’t import environment-specific config. If a component needs a router adapter, we pass it in. That feels slightly annoying the first week and massively cleaner six months later. You either preserve the dependency direction, or the dependency direction preserves nothing.
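Those bans live in tooling/dependency-cruiser.cjs. A trimmed sketch; the rule names are illustrative and the paths assume depcruise is also run against the repo root:

```javascript
// tooling/dependency-cruiser.cjs — illustrative subset, not the full policy
module.exports = {
  forbidden: [
    {
      name: "ui-stays-portable",
      comment: "shared UI never imports Next.js; routers get passed in",
      severity: "error",
      from: { path: "^packages/ui" },
      to: { path: "^node_modules/next" },
    },
    {
      name: "no-cross-app-imports",
      comment: "apps talk through packages/*, never through each other",
      severity: "error",
      from: { path: "^apps/([^/]+)/" },
      // $1 refers back to the capture group in from.path, so an app
      // may import its own files but not a sibling app's
      to: { path: "^apps/", pathNot: "^apps/$1/" },
    },
  ],
  options: {
    doNotFollow: { path: "node_modules" },
  },
};
```

Because the rules are data, reviewing a boundary change means reviewing a diff to this file, not re-litigating architecture in PR comments.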
ci gates that actually matter
Most monorepo CI pipelines are noisy because they check too much of the wrong stuff and not enough of the right stuff. A useful pipeline should answer, quickly, whether a change violated dependency boundaries, broke isolated builds, or introduced drift in shared foundations.
The core gates we use are small. One, frozen installs, always.
```yaml
- name: Install JS deps
  run: pnpm install --frozen-lockfile
```
Two, affected checks for speed, full checks on protected branches for confidence.
```yaml
- name: Turbo build
  run: pnpm turbo run build test lint typecheck --filter=...[origin/main]
```
Three, Go services build in their own module directories, not through go.work magic.
```yaml
- name: Test Go services
  run: |
    for d in services/*; do
      if [ -f "$d/go.mod" ]; then
        echo "testing $d"
        (cd "$d" && GOWORK=off go test ./... && GOWORK=off go build ./...)
      fi
    done
```
That GOWORK=off environment variable catches a lot. Four, dependency boundary checks. We typically use dependency-cruiser for TS import rules and a few custom scripts for package policy, for example rejecting direct imports across app folders or disallowing undeclared dependencies discovered via pnpm list --depth -1 plus package manifest validation.
Five, ownership. CODEOWNERS is unfashionable right up until a shared package breaks three apps because nobody who understood it reviewed the PR. We require owners on packages/ui/**, tooling/**, and root config files. Changing turbo.json, the root TypeScript config, or lockfile strategy should never be a drive-by commit.
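Mechanically, that requirement is a few lines of CODEOWNERS; the team handles here are placeholders:

```
# Shared foundations require an owning team's review
/packages/ui/          @acme/design-systems
/tooling/              @acme/platform
/turbo.json            @acme/platform
/tsconfig.base.json    @acme/platform
/pnpm-workspace.yaml   @acme/platform
/pnpm-lock.yaml        @acme/platform
```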
A final one that pays off fast, lockfile drift checks. If a PR changes package.json anywhere and not pnpm-lock.yaml, fail it. If a shared package changes peer dependency ranges without a release note or changeset, fail it. If a Go library go.mod changes and downstream services still only pass with go.work enabled, fail it. These aren’t glamorous checks, they don’t make conference talks, they save teams from spending Wednesday afternoon untangling accidental coupling that should’ve died in a pull request.
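The package.json-versus-lockfile gate fits in a few lines of shell. A hypothetical helper; in CI you would feed it the PR's changed-file list, one path per line, for example from git diff --name-only origin/main...HEAD:

```shell
#!/usr/bin/env sh
# check_lockfile_drift: fail when any package.json changed in a PR
# but pnpm-lock.yaml did not. Takes the changed-file list as $1.
check_lockfile_drift() {
  changed="$1"
  if printf '%s\n' "$changed" | grep -q 'package\.json$' \
     && ! printf '%s\n' "$changed" | grep -q '^pnpm-lock\.yaml$'; then
    echo "lockfile drift: package.json changed without pnpm-lock.yaml" >&2
    return 1
  fi
  return 0
}
```

The same shape works for the other drift checks: peer range edits without a changeset, or go.mod edits whose downstream services only pass with the workspace on.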
