TL;DR: Eliminate the dual-write problem in Node.js by replacing Redis-backed queues like BullMQ with Graphile Worker. Because jobs live in ordinary PostgreSQL tables, you can enqueue background jobs within the exact same ACID transaction as your business-data inserts, guaranteeing jobs are never lost to pod crashes or network failures; Postgres' native row-level locking then distributes the work safely across concurrent workers.
⚡ Key Takeaways
- Identify the "Dual-Write Problem" in standard Express.js controllers, where a crash between a PostgreSQL COMMIT and a Redis queue.add() causes permanent state corruption.
- Avoid the operational overhead of maintaining complex Transactional Outbox patterns, CDC pipelines (like Debezium), and Kafka queues for simple background tasks.
- Use @graphile/worker to build an ACID-compliant job queue directly inside PostgreSQL using standard relational tables and native row-level locking.
- Guarantee job execution by wrapping both your business-data INSERT and your job enqueue operation within a single SQL BEGIN/COMMIT transaction block.
- Programmatically initialize your queue infrastructure by running runMigrations({ pgPool: pool }) to safely generate the required graphile_worker schema.
Imagine you are building a user registration flow in Node.js. When a user signs up, you need to insert their record into PostgreSQL and enqueue a welcome email. The industry-standard reflex is to grab Redis, spin up BullMQ or Celery, and write a two-step process: save to the database, then push to the queue.
This approach introduces the Dual-Write Problem.
If the process crashes immediately after the database commit but before the Redis network call completes, you are left with a ghost record. The user exists in your database, but the welcome email job is lost forever in the void. Conversely, if you enqueue the job before the database commit, the worker might try to process the email before the user record is visible in Postgres, resulting in a foreign key failure or a "User Not Found" error.
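The failure mode is easy to reproduce with in-memory stand-ins. A minimal sketch (the db and queue arrays and the crashAfterCommit flag are illustrative stand-ins, not real APIs):

```typescript
// In-memory stand-ins for Postgres and Redis (illustrative only)
const db: string[] = [];
const queue: string[] = [];

// Simulates "save to the database, then push to the queue".
// A crash between the two writes leaves a ghost record.
function registerUser(email: string, crashAfterCommit: boolean): void {
  db.push(email); // Step 1: committed and durable
  if (crashAfterCommit) {
    throw new Error('pod killed'); // the danger zone
  }
  queue.push(`welcome:${email}`); // Step 2: never reached
}

try {
  registerUser('ada@example.com', true);
} catch {
  // The process "restarts": the user row survived, but the job did not.
}
// db now holds the user; queue is empty, so the welcome email is lost.
```

No amount of try/catch around the second write fixes this; the first write is already durable by the time the second one fails.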
To solve this, engineering teams often implement the Transactional Outbox Pattern, requiring separate outbox tables, CDC (Change Data Capture) pipelines using Debezium, or complex polling workers. This adds immense operational overhead, forcing your team to maintain Redis clusters, eviction policies, and Kafka queues just to reliably send a welcome email.
There is a better, transactionally sound way. By leveraging PostgreSQL's row-level locking and transactional semantics, we can achieve a true ACID-compliant background job queue without ever deploying Redis.
The Dual-Write Anomaly in Standard Node.js Architectures
To understand why we need a unified database-queue architecture, let's look at how the standard Redis-backed queue fails. The following code snippet demonstrates a typical Express.js controller using pg and bullmq.
import { Request, Response } from 'express';
import { Pool } from 'pg';
import { Queue } from 'bullmq';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const emailQueue = new Queue('email-queue', {
  connection: { host: 'redis', port: 6379 }
});

export async function registerUser(req: Request, res: Response) {
  const { email, name } = req.body;
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // Step 1: Insert the user into PostgreSQL
    const userRes = await client.query(
      'INSERT INTO users (email, name) VALUES ($1, $2) RETURNING id',
      [email, name]
    );
    const userId = userRes.rows[0].id;

    await client.query('COMMIT'); // Data is now durable

    // ⚠️ THE DANGER ZONE: What if the pod dies right here?
    // What if Redis is undergoing a master-replica failover?

    // Step 2: Enqueue job to Redis
    await emailQueue.add('welcome-email', { userId, email });

    res.status(201).json({ success: true, userId });
  } catch (error) {
    // ⚠️ LOGICAL FLAW: If Redis fails, this ROLLBACK does nothing
    // because the user insert was already committed above!
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'Internal Server Error' });
  } finally {
    client.release();
  }
}
Production Note: If a network blip occurs at THE DANGER ZONE, your application state is permanently corrupted. The user is registered, but the background job is skipped. You cannot easily retry this without building external reconciliation scripts.
When we architect systems for our custom backend development and API services, eliminating distributed state anomalies like this is our first priority. Relying on two separate storage mediums for a single logical transaction is an architectural anti-pattern.
Enter Graphile Worker: ACID-Compliant Background Jobs
Graphile Worker is a robust, high-performance job queue that runs directly on PostgreSQL. It uses standard relational tables to store job payloads and relies on Postgres' native locking mechanisms to distribute work to multiple Node.js workers concurrently.
Because the queue lives inside Postgres, adding a job becomes a standard SQL INSERT. This means the job enqueue operation can participate in the exact same database transaction as your business logic. If the transaction rolls back, the job is never enqueued. If it commits, the job is guaranteed to be scheduled.
To get started, install Graphile Worker and initialize the database schema:
npm install @graphile/worker pg
npm install -D @types/pg
Next, use Graphile Worker's migration API to generate the required graphile_worker schema and tables inside your Postgres database. This can be executed from a migration script:
import { runMigrations } from '@graphile/worker';
import { Pool } from 'pg';

async function setupWorkerSchema() {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });

  // Safely creates the graphile_worker schema and required tables
  await runMigrations({ pgPool: pool });

  console.log("Graphile Worker schema successfully initialized.");
  await pool.end();
}

setupWorkerSchema();
The migration creates a highly optimized jobs table that handles payload storage, retry counts, scheduling, and error logs—all without requiring a separate caching tier.
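Because the queue is just a table, plain SQL works for monitoring it. A sketch of a pending-job counter, assuming the jobs-table columns shown later in this post (locked_at, task_identifier); the Queryable interface is our own minimal stand-in for a pg client, not part of any library:

```typescript
// Minimal stand-in for pg's client interface (keeps the sketch dependency-free)
interface Queryable {
  query(text: string): Promise<{ rows: Array<{ task_identifier: string; pending: string }> }>;
}

// Count jobs that no worker has claimed yet, grouped by task.
async function pendingJobCounts(db: Queryable) {
  const res = await db.query(
    `SELECT task_identifier, count(*) AS pending
       FROM graphile_worker.jobs
      WHERE locked_at IS NULL
      GROUP BY task_identifier`
  );
  return res.rows;
}
```

Point your existing Grafana or psql tooling at this query and you have queue observability without a separate dashboard product.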
Implementing the Transactional Outbox Pattern Natively
With Graphile Worker installed, we can refactor our problematic Express controller. Instead of using a separate Redis client, we will use a raw SQL query to insert the job inside the same Postgres transaction block.
Graphile Worker provides a PL/pgSQL function called graphile_worker.add_job(). We can invoke this directly using our existing pg client.
import { Request, Response } from 'express';
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function registerUserRefactored(req: Request, res: Response) {
  const { email, name } = req.body;
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // Step 1: Insert user
    const userRes = await client.query(
      'INSERT INTO users (email, name) VALUES ($1, $2) RETURNING id',
      [email, name]
    );
    const userId = userRes.rows[0].id;

    // Step 2: Enqueue job natively inside the SAME transaction
    await client.query(
      `SELECT graphile_worker.add_job(
        'send_welcome_email',
        json_build_object('userId', $1::uuid, 'email', $2::text)
      )`,
      [userId, email]
    );

    // Step 3: Atomic commit.
    // Both the user and the job are persisted simultaneously.
    await client.query('COMMIT');

    res.status(201).json({ success: true, userId });
  } catch (error) {
    // If anything fails, BOTH the user insertion and the job are cleanly rolled back.
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'Internal Server Error' });
  } finally {
    client.release();
  }
}
Notice the architectural shift: there is no intermediate failure state. The database provides an absolute guarantee that either both the user and the job exist, or neither does. We have successfully implemented the Transactional Outbox Pattern without any extra infrastructure or polling microservices.
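If many controllers enqueue jobs, a small helper keeps the SQL in one place. graphile_worker.add_job() is the real SQL function; the wrapper and its Queryable interface are a sketch of our own, not part of Graphile Worker's API:

```typescript
// Minimal interface matching pg's Client/PoolClient query method
interface Queryable {
  query(text: string, values?: unknown[]): Promise<unknown>;
}

// Enqueue a job on whatever client you pass in. Call it with the
// transaction's client (between BEGIN and COMMIT) to keep it atomic.
async function addJob(
  tx: Queryable,
  taskIdentifier: string,
  payload: Record<string, unknown>
): Promise<void> {
  await tx.query(
    `SELECT graphile_worker.add_job($1, $2::json)`,
    [taskIdentifier, JSON.stringify(payload)]
  );
}
```

With this helper, Step 2 of the controller collapses to a single call such as addJob(client, 'send_welcome_email', { userId, email }).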
Setting Up Typesafe Task Executors
Enqueuing jobs is only half the equation; we must also process them. Graphile Worker runs as a Node.js process that listens to the PostgreSQL database for incoming jobs. We define our tasks as standard asynchronous TypeScript functions.
First, let's create our task file tasks/sendWelcomeEmail.ts. Graphile Worker automatically passes the job payload and a collection of helpers to the function.
// tasks/sendWelcomeEmail.ts
import { JobHelpers } from '@graphile/worker';

// Define expected payload for type safety
interface WelcomeEmailPayload {
  userId: string;
  email: string;
}

// Stand-in for a real provider client (e.g. SendGrid, AWS SES)
const mockEmailProvider = {
  async send(_msg: { to: string; subject: string; body: string }): Promise<void> {
    /* call your email provider here */
  },
};

export default async function sendWelcomeEmail(
  payload: WelcomeEmailPayload,
  helpers: JobHelpers
) {
  const { userId, email } = payload;
  helpers.logger.info(`Sending welcome email to ${email} (User: ${userId})`);

  try {
    // External API call (e.g. SendGrid, AWS SES)
    await mockEmailProvider.send({
      to: email,
      subject: 'Welcome to the platform!',
      body: 'We are glad you are here.'
    });
    helpers.logger.info(`Email successfully dispatched to ${email}`);
  } catch (error) {
    helpers.logger.error(`Failed to send email to ${email}: ${(error as Error).message}`);
    // Throwing an error tells Graphile to mark the job as failed
    // and automatically schedule a retry with exponential backoff.
    throw error;
  }
}
Next, we create the worker boot script (worker.ts) that will initialize the connection pool, register the tasks, and start polling and listening for work.
// worker.ts
import { run } from '@graphile/worker';
import sendWelcomeEmail from './tasks/sendWelcomeEmail';

async function main() {
  const runner = await run({
    connectionString: process.env.DATABASE_URL,
    concurrency: 10,
    // Provide our task registry
    taskList: {
      send_welcome_email: sendWelcomeEmail,
    },
    // Let the runner install its own signal handlers for graceful shutdown.
    // (It also uses Postgres LISTEN/NOTIFY for instant job wake-ups.)
    noHandleSignals: false,
  });

  console.log("Worker is running and listening for jobs...");
  await runner.promise;
}

main().catch(console.error);
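With noHandleSignals: false (the default), the runner handles process signals itself. If you instead set noHandleSignals: true to own shutdown in your orchestration code, you can wire the signals to runner.stop(), which is a real Runner method; the helper below is our own sketch:

```typescript
// Wire SIGTERM/SIGINT to a stop function (e.g. runner.stop from @graphile/worker).
// Guards against double invocation when both signals arrive.
function installShutdownHandlers(stop: () => Promise<void>): () => Promise<void> {
  let stopping = false;
  const shutdown = async (): Promise<void> => {
    if (stopping) return; // already shutting down
    stopping = true;
    await stop(); // stops claiming new jobs; in-flight jobs are allowed to finish
  };
  process.once('SIGTERM', () => void shutdown());
  process.once('SIGINT', () => void shutdown());
  return shutdown;
}
```

In Kubernetes this pairs naturally with terminationGracePeriodSeconds: the pod gets SIGTERM, drains its in-flight jobs, and exits cleanly.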
Performance Deep Dive: How SKIP LOCKED Beats Polling
A common critique of using relational databases for job queues is database contention. Historically, if multiple worker threads queried a database table looking for jobs, they would block each other, causing massive CPU spikes and deadlocks.
Graphile Worker bypasses this entirely using PostgreSQL's FOR UPDATE SKIP LOCKED clause.
When a Graphile worker looks for the next job, it executes a query functionally identical to this:
UPDATE graphile_worker.jobs
SET locked_at = NOW(),
    locked_by = 'worker_node_1'
WHERE id = (
  SELECT id
  FROM graphile_worker.jobs
  WHERE locked_at IS NULL
    AND run_at <= NOW()
    AND attempts < max_attempts
  ORDER BY priority ASC, run_at ASC
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING *;
Production Note: The SKIP LOCKED directive is the magic bullet here. If Worker A locks Job 1, Worker B's query will silently ignore Job 1 and immediately lock Job 2. There is no waiting, no blocking, and no deadlock.
This enables highly concurrent job processing. You can scale to dozens of Node.js worker containers, and Postgres will seamlessly distribute jobs among them. In our standardized engineering processes and architectural reviews, we consistently benchmark Postgres queues handling upwards of 10,000 jobs per second—more than enough for 99% of global SaaS applications.
Furthermore, Graphile Worker utilizes standard PostgreSQL LISTEN and NOTIFY. When you execute graphile_worker.add_job(), Postgres broadcasts a notification. The idle worker process receives this event and instantly picks up the job, meaning latency is practically identical to a Redis Pub/Sub implementation.
Advanced Job Control: Debouncing, Concurrency, and Cron
Replacing Redis doesn't mean sacrificing advanced queue features. Graphile Worker provides native mechanisms for cron scheduling, job debouncing, and rate limiting directly within the database schema.
Debouncing Duplicate Jobs
If a user rapidly clicks a "Generate Report" button, you don't want to enqueue ten intensive PDF generation jobs. Graphile handles this using a job_key. If a job is added with an existing key, the new job is either ignored or replaces the old one. We use PostgreSQL named arguments to pass the job_key securely:
// Enqueueing a debounced job
await client.query(
  `SELECT graphile_worker.add_job(
    'generate_monthly_report',
    json_build_object('userId', $1::uuid),
    job_key => $2::text
  )`,
  [userId, `report_generation_${userId}`]
);
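add_job also accepts a job_key_mode named argument controlling what happens on a key collision. To the best of our knowledge the supported modes are 'replace' (the default), 'preserve_run_at', and 'unsafe_dedupe'; check your Graphile Worker version before relying on them. A sketch (the Queryable interface is our own minimal stand-in for a pg client):

```typescript
// Minimal stand-in for pg's client interface
interface Queryable {
  query(text: string, values?: unknown[]): Promise<unknown>;
}

// Debounce the report job, but keep the run_at of the first pending job
// instead of pushing it back on every click.
async function enqueueReportOnce(client: Queryable, userId: string): Promise<void> {
  await client.query(
    `SELECT graphile_worker.add_job(
       'generate_monthly_report',
       json_build_object('userId', $1::uuid),
       job_key => $2::text,
       job_key_mode => 'preserve_run_at'
     )`,
    [userId, `report_generation_${userId}`]
  );
}
```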
Cron Jobs and Recurring Tasks
Graphile allows you to define recurring schedules using standard cron syntax. Instead of running a separate node-cron process that requires complex leader election to prevent duplicate runs, you define your schedule centrally.
import { run, parseCronItems } from '@graphile/worker';

async function startWorker() {
  await run({
    connectionString: process.env.DATABASE_URL,
    // Define recurring tasks (parseCronItems validates the patterns)
    parsedCronItems: parseCronItems([
      {
        task: "daily_database_cleanup",
        pattern: "0 2 * * *", // 2 AM every day
        options: {
          backfillPeriod: 0, // Do not backfill if worker was down
        },
      },
    ]),
    taskList: {
      daily_database_cleanup: async (payload, helpers) => {
        helpers.logger.info("Running daily cleanup...");
        // Execute cleanup logic
      },
    },
  });
}
Why Simpler Infrastructure Wins
Architecture is as much about subtracting complexity as it is about adding features. By moving background jobs to PostgreSQL, you eliminate an entire class of infrastructure.
Look at the difference in a standard deployment configuration. Here is the docker-compose.yml for an application using BullMQ:
version: '3.8'
services:
  api:
    build: ./api
    depends_on:
      - postgres
      - redis
  worker:
    build: ./worker
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes"]
volumes:
  pgdata:
With this setup, you are maintaining a Redis cluster, configuring its AOF (Append Only File) persistence, managing memory limits, and writing complex application code to handle split-brain scenarios between Postgres and Redis.
Now, look at the architecture utilizing Graphile Worker:
version: '3.8'
services:
  api:
    build: ./api
    depends_on:
      - postgres
  worker:
    build: ./worker
    depends_on:
      - postgres
  postgres:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
Redis is gone entirely.
The DevOps burden is drastically slashed. When you back up your PostgreSQL database via WAL archiving, your background job state is perfectly synchronized with your user data. If you restore a point-in-time backup from three days ago, the job queue reverts to its exact state from three days ago, ensuring complete transactional integrity without manual intervention.
Conclusion
Redis is an incredible piece of technology, but using it as a default job queue for relational data architectures is a legacy habit from an era before Postgres perfected SKIP LOCKED. By migrating to Graphile Worker, you unify your data layer, enforce strict ACID compliance on your background tasks, and drastically simplify your deployment pipelines.
Boring, unified infrastructure scales better than fragmented, trendy micro-components. If your Node.js application is struggling with ghost records, lost emails, or unreliable queues, it is time to drop the dual-write anti-pattern.
Ready to streamline your infrastructure and eliminate technical debt? Book a free architecture review to talk to our backend engineers. We help global technical founders simplify their stacks, reduce AWS costs, and build inherently reliable systems.
