10 min read · Johnny Unar

Next.js Edge Functions Are a Bad Place for Real Backend Logic

Edge runtimes look fast in demos, then fall apart under real backend constraints. A hybrid edge plus regional model gives you speed without wrecking correctness.

the sales pitch breaks fast

Edge Functions get sold with a very specific kind of benchmark brain damage: somebody shows a request handled in 12ms from a POP near the user, everybody nods, and six weeks later the team is trying to cram actual backend work into an environment built for lightweight request shaping, not for the ugly, stateful, CPU-sensitive logic that keeps real products alive.

The problem starts with the runtime itself. In Next.js 14 on Vercel, the Edge Runtime is basically a constrained V8 isolate with Web APIs, not a full Node.js process. That means no native Node APIs in the usual sense, limited package compatibility, and a long tail of libraries that either fail at build time or, worse, pass CI and then explode at runtime with errors like The edge runtime does not support Node.js 'crypto' module or Dynamic Code Evaluation (e.g. 'eval', 'new Function') not allowed in Edge Runtime. If your app does document parsing, image manipulation, PDF generation, Excel exports, bcrypt-based password migration, S3 multipart uploads with mature SDK paths, or anything touching native modules, you are now negotiating with the runtime instead of shipping product.

Database access is where the fantasy really collapses. Traditional PostgreSQL drivers like pg assume long-lived TCP connections. Edge runtimes don't give you that shape of networking. You end up pushed toward HTTP-based database proxies, serverless drivers, or vendor-specific adapters, each introducing another layer of latency, another quota, another failure mode. That setup is fine for a simple key lookup. It gets shaky once transactions, advisory locks, session state, or complex ORM behavior enter the picture.

We've seen this firsthand at Steezr on SaaS backends where the frontend team wanted auth checks and tenant resolution at the edge, which was sensible. Then the scope drift started: invoice generation moved into the same path, then CRM sync logic, then audit writes. The deployment stayed fast. The system got worse.

where edge actually works

The edge is excellent at a narrow set of jobs, and that narrowness is a strength if you respect it. Request routing, geolocation-aware redirects, bot filtering, cookie inspection, cache key normalization, lightweight auth gating, A/B test assignment, and stale content delivery all fit the runtime well because they are mostly stateless, mostly cheap, and tolerant of some environmental constraints.
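Cache key normalization is a good example of the shape of work that belongs here: pure, cheap, and deterministic. A minimal sketch, assuming a hypothetical allowlist of tracking parameters you want stripped before the key is computed:

```typescript
// Normalize a request URL into a stable cache key: drop tracking
// params, sort the rest, lowercase the host. Pure and deterministic,
// which makes it safe to run inside a constrained edge isolate.
const TRACKING_PARAMS = new Set([
  'utm_source', 'utm_medium', 'utm_campaign', 'fbclid', 'gclid',
])

export function cacheKey(rawUrl: string): string {
  const url = new URL(rawUrl)
  const kept = [...url.searchParams.entries()]
    .filter(([name]) => !TRACKING_PARAMS.has(name))
    .sort(([a], [b]) => a.localeCompare(b))
  const qs = new URLSearchParams(kept).toString()
  return `${url.hostname.toLowerCase()}${url.pathname}${qs ? `?${qs}` : ''}`
}
```

Two requests that differ only by ad-click noise now hit the same cache entry, which is the whole point of doing this at the POP.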

Next.js middleware is the obvious example. You can inspect a session cookie, verify a compact JWT with jose, attach tenant metadata in headers, and rewrite /app to /eu/app or /us/app based on the user profile or POP region, all before the request touches your regional compute. That works well because the logic stays small and deterministic. A middleware file like this is boring, which is exactly what you want:

```ts
// middleware.ts
import { NextRequest, NextResponse } from 'next/server'
import { jwtVerify } from 'jose'

export async function middleware(req: NextRequest) {
  const token = req.cookies.get('session')?.value
  if (!token) return NextResponse.redirect(new URL('/login', req.url))

  const secret = new TextEncoder().encode(process.env.JWT_SECRET)

  try {
    const { payload } = await jwtVerify(token, secret)
    const region = payload.region === 'us' ? 'us' : 'eu'

    const headers = new Headers(req.headers)
    headers.set('x-tenant-id', String(payload.tenant_id))
    headers.set('x-home-region', region)

    return NextResponse.next({ request: { headers } })
  } catch {
    // Expired or tampered token: treat it like no session at all
    return NextResponse.redirect(new URL('/login', req.url))
  }
}

export const config = {
  matcher: ['/app/:path*']
}
```

That request can then hit a regional API in Frankfurt or Ashburn where your real logic lives, where PostgreSQL connections behave normally, where you can run Prisma, psycopg, ffmpeg, LibreOffice, or a headless browser without praying the bundle compiler doesn't strip something important.

Caching also belongs here. Public product pages, signed asset redirects, feature flag bootstrap payloads, and precomputed dashboard shells can all benefit from POP-local delivery. Keep the edge path side-effect free if possible. The minute a request needs transactional correctness, significant CPU, or broad library compatibility, send it home to a region that you control and can reason about.
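A side-effect-free edge route can look like this: everything derives from the request, and the caching policy lives in the response headers. The route, flag payload, and cache lifetimes here are made up for illustration:

```typescript
// app/api/flags/route.ts — hypothetical edge-native route handler.
// No DB, no writes: the payload derives entirely from the request,
// and CDN behavior is expressed through cache-control headers.
export const runtime = 'edge'

export function GET(req: Request): Response {
  const tenant = new URL(req.url).searchParams.get('tenant') ?? 'default'
  return new Response(
    JSON.stringify({ tenant, flags: { newDashboard: tenant !== 'default' } }),
    {
      status: 200,
      headers: {
        'content-type': 'application/json',
        // POP-local for 60s, then serve stale while revalidating
        'cache-control': 'public, s-maxage=60, stale-while-revalidate=300',
      },
    },
  )
}
```

The moment this handler needs a transaction or a heavy library, it stops qualifying for the edge tier and should move behind the regional boundary.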

db connections, cpu limits, and brittle code

Senior engineers usually don't get burned by the hello-world part. They get burned by the second-order effects, the weird operational costs that only show up once traffic gets uneven and product requirements stop being polite.

Database connectivity is the first one. Prisma on edge has improved through adapters and Data Proxy style patterns, and there are decent products in this space, yet the ergonomics still lag behind a plain regional Node service talking to PostgreSQL over a normal connection pool. Transactions across several queries get slower and harder to trace, connection semantics vary by vendor, and some ORMs quietly disable features in edge-compatible modes. If you've ever had to explain why one endpoint uses @prisma/client, another uses a fetch-based adapter, and a third bypasses the ORM entirely because of a driver limitation, you already know the cost isn't theoretical.

CPU limits come next. Edge runtimes are designed for short bursts. That sounds harmless until your supposedly simple endpoint picks up a little extra work over time, maybe HMAC verification on a large webhook body, maybe tenant config merges, maybe a permission graph expansion, maybe a zod schema validating a payload the size of a small novel. The median request stays fine, p95 starts climbing, and then you get timeout noise that nobody can reproduce locally because next dev on your MacBook isn't a good simulation of a constrained isolate running halfway across the planet.

Deployments get brittle too. Teams start auditing every dependency for edge compatibility, pinning versions to dodge regressions, maintaining separate code paths for runtime = 'edge' and runtime = 'nodejs', and reading package source to figure out why a transitive import pulled in fs. One common failure in Next.js builds looks like this:

```text
Module not found: Can't resolve 'net'
Import trace for requested module:
./node_modules/pg/lib/connection.js
./node_modules/pg/lib/index.js
./src/lib/db.ts
```

Now you are architecting around bundler behavior. That's bad engineering economics. The infrastructure choice should simplify the codebase, not force every feature through a compatibility sieve.
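One cheap defense is failing loudly instead of mysteriously. Vercel's Edge Runtime defines a global `EdgeRuntime` string that plain Node.js does not, so a Node-only module can guard itself and die with a readable message rather than a bundler trace. A sketch (wire it into your own db module):

```typescript
// Throw a clear error if Node-only code is evaluated under an edge
// runtime. Vercel's Edge Runtime sets a global `EdgeRuntime` string;
// a normal Node.js process leaves it undefined.
export function assertNodeRuntime(moduleName: string): void {
  if (typeof (globalThis as Record<string, unknown>).EdgeRuntime !== 'undefined') {
    throw new Error(
      `${moduleName} requires the Node.js runtime. ` +
      `Add \`export const runtime = 'nodejs'\` to the route that imports it.`,
    )
  }
}
```

Calling `assertNodeRuntime('src/lib/db.ts')` at the top of the module turns the bundler riddle into a one-line runtime error that names the fix.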

the hybrid model that holds up

The model I recommend is simple: edge for request admission and response acceleration, regional compute for business logic and data mutation. You can run this on Vercel, Fly.io, plain Kubernetes, or a boring VM setup. The principle stays the same.

On Vercel with Next.js, keep middleware.ts at the edge. Use Route Handlers selectively. Public cacheable endpoints can stay edge-native if they only assemble data from headers, cookies, feature flags, or a KV cache. Stateful API routes should declare Node explicitly:

```ts
// app/api/orders/route.ts
export const runtime = 'nodejs'
export const preferredRegion = 'fra1'

import { NextRequest, NextResponse } from 'next/server'
import { prisma } from '@/lib/prisma'

export async function POST(req: NextRequest) {
  const body = await req.json()

  const order = await prisma.order.create({
    data: {
      customerId: body.customerId,
      totalCents: body.totalCents
    }
  })

  return NextResponse.json(order, { status: 201 })
}
```

That one line, runtime = 'nodejs', saves a lot of pain. Add preferredRegion if your database lives in one place and correctness matters more than shaving 40ms for a mutation request.

On Fly.io, the split can be even cleaner. Put Next.js close to users for the shell and cacheable responses, then route writes and heavier reads to a regional Django or Node service pinned near Postgres. A fly.toml for the backend might look like this:

```toml
app = "crm-api"
primary_region = "fra"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 1

[[vm]]
  cpu_kind = "shared"
  cpus = 2
  memory_mb = 1024
```

That gives you a stable home for long-running DB connections, background jobs, and native dependencies. Meanwhile the edge tier handles fast-path concerns. This is the same shape we've used on customer portals and ERP-style systems, including stacks with Next.js in front and Django behind it, because invoice approval flows and document processing pipelines don't become simpler just because an edge runtime exists.

observability or guesswork

A hybrid setup only works if tracing crosses the boundary cleanly. Otherwise the edge tier becomes a latency rumor generator, everybody blames the wrong hop, and incident review turns into folklore.

Start with a request ID generated at the first entry point, usually middleware. Forward it through every internal call with headers like x-request-id, x-tenant-id, and traceparent if you're on OpenTelemetry. Log edge decisions explicitly, cache hit or miss, auth outcome, chosen region, rewrite target, and whether the request was short-circuited. Then log the same request ID in the regional service with DB timings attached.

In Next.js you can wire OTel through instrumentation.ts, export traces to an OTLP collector, then view them in Grafana Tempo, Datadog, or Honeycomb. I like Honeycomb for request path debugging because the high-cardinality fields are first-class, and hybrid systems generate a lot of high-cardinality context by design. A useful span set looks like this: middleware.auth, middleware.region_select, api.orders.parse_json, db.insert_order, queue.enqueue_invoice_job. Keep the names boring.

Metrics need a little discipline too. Track p50, p95, and timeout rate separately for edge and regional handlers. Track cache hit ratio per route. Track regional fan-out: one incoming request that spawns four backend fetches will erase any edge latency gains quickly. Track DB wait time. If p95 edge is 18ms and p95 regional is 140ms with 90ms in db.pool.wait, moving more code to the edge won't save you. Fix the database path.
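If you compute these yourself rather than in a vendor dashboard, use one quantile definition on both tiers so the comparisons are honest. A nearest-rank sketch:

```typescript
// Nearest-rank percentile: sort a copy of the samples and take the
// value at ceil(p/100 * n) - 1. Applying the same definition to edge
// and regional latency samples keeps p95 comparisons apples to apples.
export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples')
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(0, rank - 1)]
}
```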

The anti-pattern is staring at a global synthetic latency chart and telling yourself the edge deployment is working because the median got smaller. Medians lie. Tail behavior is what users report, what support teams suffer through, and what executives see during demos.

choose predictability over cleverness

There is a specific kind of architecture that looks great in conference talks and feels awful after six months of feature work, and edge-heavy Next.js backends land in that bucket surprisingly often. The appeal is obvious: one deployment surface, code near users, fewer moving parts on paper. The paper is lying.

Predictable systems put stateful logic near state, keep routing logic cheap, and treat platform constraints as design inputs instead of trivia to discover in production. If your request needs Postgres transactions, native modules, stable CPU, broad package support, or deep observability, run it in a regional environment where Node.js or Python behaves like Node.js or Python. Use the edge to reject junk early, serve cached responses quickly, and attach enough context that the regional service can make a good decision without repeating work.

This choice also helps teams ship faster. Engineers don't have to wonder if a library is edge-safe. Code review gets easier because route boundaries are obvious. Incident response gets easier because there are fewer magical execution environments involved in a failing request. Cost control gets easier because the expensive paths are isolated and measurable.

A lot of our work at Steezr ends up in exactly this shape: Next.js handling the customer-facing shell, auth checks, and some caching, Django or Node carrying the heavy backend load, PostgreSQL sitting close to the services that mutate it. It isn't trendy. It works. I'd take that trade every time.
