TL;DR: Migrate your Next.js App Router application from expensive PaaS providers to AWS ECS, Docker, and CloudFront to cut hosting costs by up to 80%. This guide demonstrates how to configure Next.js for standalone output and build a highly optimized, multi-stage Dockerfile that keeps your production image under 150MB. It also covers securely authenticating and pushing your compiled image to AWS ECR to prepare for Fargate deployment.
⚡ Key Takeaways
- Set output: 'standalone' in next.config.mjs to strip out the bloated 1GB+ node_modules folder and generate a minimal production build.
- Install the libc6-compat package in your Alpine Linux base image to ensure Next.js native image optimization functions correctly.
- Explicitly copy the public and .next/static folders into your Docker runner stage to prevent 404 errors on static assets and CSS.
- Secure your production container by executing the Node.js runtime as a non-root user (nextjs:nodejs) and setting poweredByHeader: false.
- Create an AWS ECR repository with --image-scanning-configuration scanOnPush=true via the AWS CLI to securely host and vulnerability-scan your container images.
You've just received your monthly hosting invoice. Your Next.js application—which started as a blazing-fast, inexpensive prototype—is now handling millions of requests. The developer experience on Vercel was unparalleled in the early days. But now, you are paying $40 for every 100GB of extra bandwidth, fighting opaque serverless execution timeouts, and watching your infrastructure bill skyrocket past $4,000 a month for traffic that would cost a fraction of that on raw AWS.
The convenience of a git push deployment has morphed into vendor lock-in. If your application relies heavily on Server-Side Rendering (SSR), edge middleware, and high-bandwidth image delivery, PaaS pricing models scale exponentially against you.
The solution is containerization. By migrating your Next.js App Router application to AWS Elastic Container Service (ECS) on AWS Fargate, fronted by an Application Load Balancer (ALB) and Amazon CloudFront, you retain infinite horizontal scalability while cutting your monthly hosting costs by up to 80%.
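The 80% figure falls straight out of the bandwidth rates. A back-of-envelope sketch — the rates are illustrative assumptions: the $0.40/GB overage quoted above versus roughly $0.085/GB, CloudFront's published first-tier us-east-1 price (your actual blended rates will differ):

```python
# Back-of-envelope egress cost comparison (illustrative rates, not quotes).
def monthly_bandwidth_cost(gb: float, rate_per_gb: float) -> float:
    """Simple linear egress cost model: total GB times per-GB rate."""
    return gb * rate_per_gb

traffic_gb = 10_000  # 10 TB of egress per month
paas_cost = monthly_bandwidth_cost(traffic_gb, 0.40)    # $40 per 100 GB
aws_cost = monthly_bandwidth_cost(traffic_gb, 0.085)    # ~CloudFront tier 1
savings = 1 - aws_cost / paas_cost

print(f"PaaS: ${paas_cost:,.0f}/mo  CDN: ${aws_cost:,.0f}/mo  savings: {savings:.1%}")
```

At 10 TB/month this works out to roughly $4,000 versus $850 on bandwidth alone — before compute is even considered.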
This guide provides the exact architectural blueprint and code configurations needed to deploy an enterprise-grade, self-hosted Next.js infrastructure.
Preparing Next.js for Docker: The Standalone Output
By default, running next build generates a .next folder meant to be run alongside your entire node_modules directory. For a Docker container, copying a 1GB+ node_modules folder results in bloated image sizes, slow CI/CD pipelines, and sluggish container cold starts.
Next.js provides an Output File Tracing feature that automatically creates a standalone folder containing only the files and dependencies strictly required for production.
Enable this by updating your next.config.mjs:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'standalone',
images: {
// Required if you use next/image with a custom loader or external URLs
remotePatterns: [
{
protocol: 'https',
hostname: 'assets.yourdomain.com',
},
],
},
// Disable the powered-by header for security
poweredByHeader: false,
};
export default nextConfig;
With standalone enabled, we can construct a highly optimized, multi-stage Dockerfile. This ensures our final production image contains only the compiled code and a minimal Node.js runtime, keeping the image size comfortably under 150MB.
# Stage 1: Base image
FROM node:20-alpine AS base
# Install libc6-compat required by Next.js image optimization
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Stage 2: Install dependencies
FROM base AS deps
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Stage 3: Builder
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Disable telemetry during build
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
# Stage 4: Production runner
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
ENV PORT=3000
# Ensure Next.js binds to all network interfaces in Docker
ENV HOSTNAME="0.0.0.0"
# Run as non-root user for enhanced security
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
USER nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]
Production Note: The standalone build does not automatically copy the public or .next/static folders into the standalone directory (the expectation is that a CDN serves them). Because we are running a self-contained Docker image, you must explicitly copy them in the Dockerfile, as shown above. Otherwise, your static assets and CSS will return 404s.
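One related detail: the COPY . . in the builder stage copies the entire build context, so a .dockerignore is worth adding to keep local artifacts out of the image and speed up builds. A minimal starting point — adjust the .env* line if your build genuinely reads .env files for NEXT_PUBLIC_ variables:

```
node_modules
.next
.git
.dockerignore
Dockerfile
*.md
.env*
```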
Building and Pushing to AWS Elastic Container Registry (ECR)
Before we can provision ECS, we need a secure, private registry to store our Docker images. AWS Elastic Container Registry (ECR) integrates natively with ECS and AWS IAM.
Use the AWS CLI to create your repository:
aws ecr create-repository \
--repository-name nextjs-enterprise-app \
--image-scanning-configuration scanOnPush=true \
--region us-east-1
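Since every CI build will push a new image, consider attaching a lifecycle policy so the repository doesn't silently accumulate storage cost. A sketch that expires untagged images after 14 days (repository name as above):

```bash
aws ecr put-lifecycle-policy \
  --repository-name nextjs-enterprise-app \
  --lifecycle-policy-text '{
    "rules": [{
      "rulePriority": 1,
      "description": "Expire untagged images after 14 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }]
  }'
```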
Once created, authenticate your local Docker daemon with ECR, build the image, and push it to AWS. (Replace 123456789012 with your AWS Account ID).
# Authenticate
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Build the multi-stage image
docker build -t nextjs-enterprise-app .
# Tag for ECR
docker tag nextjs-enterprise-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/nextjs-enterprise-app:latest
# Push to AWS
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/nextjs-enterprise-app:latest
Provisioning the ECS Fargate Cluster and Task Definition
Managing EC2 instances manually defeats the purpose of modern DevOps. By utilizing AWS Fargate, we run our containers on serverless compute. You define the memory and CPU requirements, and AWS handles the underlying server orchestration, patching, and scaling.
When we handle DevOps and Cloud Deployments for enterprise clients, we heavily rely on ECS Fargate to balance granular architectural control with operational simplicity.
The core of an ECS deployment is the Task Definition. This JSON document tells ECS which image to pull, what ports to open, and which environment variables and secrets to inject.
Create a file named task-definition.json:
{
"family": "nextjs-production-task",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "1024",
"memory": "2048",
"executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
"taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
"containerDefinitions": [
{
"name": "nextjs-app",
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nextjs-enterprise-app:latest",
"essential": true,
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000,
"protocol": "tcp"
}
],
"environment": [
{ "name": "NODE_ENV", "value": "production" }
],
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/production/DATABASE_URL"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/nextjs-production",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
}
}
]
}
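Note that Fargate only accepts specific CPU/memory pairings (1024 CPU units, for example, pairs with 2–8 GB); an invalid pair is rejected at registration time. A quick sanity-check sketch — the table is a partial copy of the AWS pairing rules:

```python
# Partial table of allowed Fargate cpu -> memory (MiB) pairings, per AWS docs.
FARGATE_MEMORY_MIB = {
    "256": {512, 1024, 2048},
    "512": {1024, 2048, 3072, 4096},
    "1024": {2048, 3072, 4096, 5120, 6144, 7168, 8192},
}

def fargate_compatible(task_def: dict) -> bool:
    """True if the task definition's cpu/memory pairing is allowed by Fargate."""
    return int(task_def["memory"]) in FARGATE_MEMORY_MIB.get(task_def["cpu"], set())

# The values used in task-definition.json above:
print(fargate_compatible({"cpu": "1024", "memory": "2048"}))  # True
```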
Register this task definition with AWS:
aws ecs register-task-definition --cli-input-json file://task-definition.json
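Registering only stores the definition; an ECS Service is what actually launches tasks and keeps the desired count alive. A sketch of creating one wired to an ALB target group — the subnets, security group, and target-group ARN are placeholders you must substitute with your own:

```bash
aws ecs create-service \
  --cluster production-cluster \
  --service-name nextjs-service \
  --task-definition nextjs-production-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa111,subnet-bbb222],securityGroups=[sg-xxx333],assignPublicIp=DISABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/nextjs-tg/abc123,containerName=nextjs-app,containerPort=3000"
```

Running two tasks across two subnets gives you multi-AZ redundancy out of the box.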
Application Load Balancer (ALB) and Health Checks
Fargate tasks are ephemeral; they receive new internal IP addresses every time they deploy or scale. An Application Load Balancer (ALB) provides a static entry point to your cluster and evenly distributes incoming traffic across your active Next.js containers.
To ensure the ALB only routes traffic to healthy containers, you must configure a Target Group with an HTTP health check. Next.js App Router makes this simple.
Create a lightweight health check route in your Next.js application at app/api/health/route.ts:
import { NextResponse } from 'next/server';
export async function GET() {
return NextResponse.json(
{ status: 'healthy', timestamp: new Date().toISOString() },
{ status: 200 }
);
}
Critical Warning: Do not connect to your database inside this health check endpoint. If your database experiences a brief spike in latency, the ALB will mark all your Next.js containers as unhealthy and terminate them, causing a cascading system outage. Keep health checks restricted strictly to container responsiveness.
When configuring your ALB Target Group (via the AWS Console, Terraform, or CLI), set the Health Check Path to /api/health, the protocol to HTTP, and the port to 3000.
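For teams scripting this with the CLI, the equivalent Target Group creation might look like the following — note --target-type ip, which Fargate's awsvpc networking requires (the VPC ID is a placeholder):

```bash
aws elbv2 create-target-group \
  --name nextjs-tg \
  --protocol HTTP \
  --port 3000 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip \
  --health-check-path /api/health \
  --health-check-interval-seconds 15 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3
```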
Replacing Vercel's Edge Network with AWS CloudFront
One of the biggest reasons teams hesitate to leave Vercel is its built-in Edge Network. Out of the box, Vercel globally caches static assets, images, and ISR (Incremental Static Regeneration) payloads.
To replicate this enterprise-grade caching, we place Amazon CloudFront in front of our Application Load Balancer. Without CloudFront, all traffic hits your ECS containers directly, which defeats the purpose of Next.js static optimizations and drives up your ALB data transfer costs. Assessing your bandwidth footprint and choosing the right CDN architecture is a key part of our infrastructure pricing strategy when transitioning from PaaS to AWS.
You must configure CloudFront to respect the Cache-Control headers Next.js emits. For the hashed build assets under /_next/static, Next.js sends Cache-Control: public, max-age=31536000, immutable.
Create a CloudFront Cache Policy that ensures proper handling of dynamic Next.js routes and query parameters:
{
"CachePolicyConfig": {
"Name": "Nextjs-Optimized-Policy",
"DefaultTTL": 86400,
"MaxTTL": 31536000,
"MinTTL": 0,
"ParametersInCacheKeyAndForwardedToOrigin": {
"EnableAcceptEncodingGzip": true,
"EnableAcceptEncodingBrotli": true,
"HeadersConfig": {
"HeaderBehavior": "whitelist",
"Headers": {
"Items": [
"Host",
"Authorization",
"x-forwarded-for"
]
}
},
"CookiesConfig": {
"CookieBehavior": "all"
},
"QueryStringsConfig": {
"QueryStringBehavior": "all"
}
}
}
}
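Assuming you save the JSON above as cache-policy.json, register it via the CLI and then attach the returned policy ID to your distribution's cache behaviors:

```bash
aws cloudfront create-cache-policy \
  --cache-policy-config file://cache-policy.json
```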
Key Routing Behaviors to configure in CloudFront:
- /_next/static/*: Set to aggressively cache at the edge. Route these to your ALB, but let CloudFront cache them heavily based on origin headers to offload traffic from your containers. (Files in your public folder are served from the site root, not under /public/, so match them with their own patterns, e.g. /images/*.)
- Default (*): Forward all cookies, authorization headers, and query strings so SSR, API routes, and Server Actions work correctly.
- Host header: Forwarding the Host header is critical for ALB origins. Without it, Next.js dynamic routing, redirects, and middleware may fail to resolve the correct canonical URL.
Automating the Deployment with GitHub Actions
A successful migration requires retaining the "git push to deploy" developer experience. AWS handles the infrastructure, but GitHub Actions will power the CI/CD pipeline.
We can automate the entire lifecycle—building the Docker image, pushing it to ECR, and forcing an ECS deployment—using official AWS actions.
Create a .github/workflows/deploy.yml file in your repository:
name: Deploy Next.js to ECS
on:
push:
branches:
- main
env:
AWS_REGION: us-east-1
ECR_REPOSITORY: nextjs-enterprise-app
ECS_CLUSTER: production-cluster
ECS_SERVICE: nextjs-service
ECS_TASK_DEFINITION: .aws/task-definition.json
CONTAINER_NAME: nextjs-app
jobs:
deploy:
name: Build and Deploy
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Build, tag, and push image to Amazon ECR
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
IMAGE_TAG: ${{ github.sha }}
run: |
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
- name: Fill in the new image ID in the Amazon ECS task definition
id: task-def
uses: aws-actions/amazon-ecs-render-task-definition@v1
with:
task-definition: ${{ env.ECS_TASK_DEFINITION }}
container-name: ${{ env.CONTAINER_NAME }}
image: ${{ steps.build-image.outputs.image }}
- name: Deploy Amazon ECS task definition
uses: aws-actions/amazon-ecs-deploy-task-definition@v2
with:
task-definition: ${{ steps.task-def.outputs.task-definition }}
service: ${{ env.ECS_SERVICE }}
cluster: ${{ env.ECS_CLUSTER }}
wait-for-service-stability: true
With this pipeline, when a developer merges a PR into main, GitHub Actions automatically builds the standalone Next.js Docker image, pushes it to ECR, updates the task definition with the new image SHA, and triggers a zero-downtime rolling deployment in ECS. The wait-for-service-stability: true flag ensures the workflow only passes once the new containers successfully pass the ALB health checks.
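One hardening step worth considering: the pipeline above authenticates with long-lived access keys stored as repository secrets. GitHub's OIDC federation removes stored credentials entirely. Assuming you have created an IAM role that trusts GitHub's OIDC provider (the role name below is a placeholder), the credentials step becomes:

```yaml
permissions:
  id-token: write   # required for the OIDC token exchange
  contents: read

# ...

- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
    aws-region: us-east-1
```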
The Financial Impact of Self-Hosting
The Developer Experience of Platform-as-a-Service providers is phenomenal for teams prioritizing speed to market over unit economics. However, as your enterprise scales, compute and bandwidth abstractions quickly become liabilities.
By taking ownership of your routing layer (CloudFront), your compute layer (ECS), and your containerization (Docker), you replace opaque platform markups with transparent, raw infrastructure costs. A $4,000 Vercel bill can routinely be condensed into a $600 AWS footprint—freeing up capital to hire engineers instead of paying for bandwidth overages.
Transitioning from a PaaS to AWS doesn't mean abandoning modern CI/CD. With the right configuration, you achieve the exact same automated deployment pipelines and global edge caching, entirely under your control. If your team is struggling with platform limits or exorbitant hosting bills, book a free architecture review with our infrastructure experts to map out your migration strategy.
Work With Us
Need help building this in production? SoftwareCrafting is a full-stack development agency — we ship enterprise-grade React, Next.js, Node.js, React Native, and Flutter applications for global clients.
Frequently Asked Questions
What is the difference between Server Components and Client Components in the App Router?
Server Components render exclusively on the server, reducing the JavaScript bundle size sent to the browser and allowing direct access to backend resources. Client Components, marked with the "use client" directive, render on the client side and are used when you need interactivity, browser APIs, or React state hooks.
How do SoftwareCrafting services help teams migrate from the Pages router to the App Router?
SoftwareCrafting services provide hands-on engineering support to incrementally migrate your legacy Next.js codebase without disrupting production traffic. We help map out your routing strategy, refactor data fetching methods, and resolve complex client-server boundary errors.
How does data fetching work differently in the App Router compared to getServerSideProps?
Instead of using specific Next.js APIs like getServerSideProps, the App Router uses the standard native fetch() API extended with caching and revalidation options. This allows developers to fetch data directly inside asynchronous Server Components, offering more granular control over caching at the component level.
What is the best way to handle global state in a Next.js Server Components architecture?
Global state should be pushed down the component tree as much as possible, utilizing React Context only within Client Components. For data shared across Server Components, you should rely on Next.js's built-in request memoization, which automatically caches multiple identical fetch requests in a single render pass.
Can SoftwareCrafting services optimize the Core Web Vitals of our Next.js application?
Absolutely. SoftwareCrafting services conduct deep performance audits to identify render-blocking resources, unoptimized images, and excessive client-side JavaScript. We then implement advanced Next.js features like streaming, partial prerendering, and dynamic imports to significantly improve your LCP and INP scores.
