Making Architectural Decisions Under Real Constraints: Building a Scalable Intelligence Platform

#architecture #nodejs #nestjs #ddd #decision-making

When you’re starting a new backend project, one of the first questions you face is: monolith or microservices? It’s tempting to reach for the latest patterns, but the right answer depends entirely on your constraints.

Recently, I had to make this decision for a competitive intelligence platform that monitors signals from competitors—pricing changes, product releases, job postings, API updates, and more. Here’s how we navigated the decision, what we chose, and why.

The Project Context

The platform needs to:

  - Monitor competitor signals across many types: pricing changes, product releases, job postings, API updates, and more
  - Turn raw signals into structured domain events that downstream modules can act on
  - Alert users when something relevant changes
  - Keep absorbing new signal types without destabilizing the ones already running

The challenge: We needed to ship features quickly while building something maintainable and scalable. The team would grow, and we couldn’t afford to accumulate tech debt that would slow us down in 6 months.

The Architectural Question

Starting fresh gives you options—too many options, sometimes. We had three paths:

  1. Microservices from day one
  2. Simple monolithic API (Express + PostgreSQL)
  3. Modular monolith with clear boundaries

Each had merit. None was obviously “correct.”

Option 1: Microservices from Day One

The case for it:

  - Services scale independently, so the CPU-bound scraping work never competes with the API
  - Teams can own services end to end as headcount grows
  - Boundaries are enforced by the network rather than by discipline

Why we didn’t choose it:

  - We’re a small team, and every extra service means more deploys, more monitoring, and more ways to fail
  - We’re still learning the domain, so any service boundaries we drew now would be guesses, and wrong guesses are expensive to fix once they sit behind a network
  - The operational overhead would slow the one thing we need most right now: shipping features

The lesson here: Microservices solve problems you don’t have yet. When you have them, you’ll know.

Option 2: Simple Express Monolith

The case for it:

  - Fastest possible start: minimal boilerplate, and everyone knows Express
  - Fewest moving parts to run and debug

Why we didn’t choose it:

  - Express enforces no structure, so module boundaries blur as the team and the codebase grow
  - Extracting a hot path later (like scraping) would mean untangling direct function calls first
  - It optimizes for the first month at the expense of month six

The lesson: Fast now, slow later. We’d be refactoring in 6 months when we should be shipping features.

What We Chose: Modular Monolith with DDD-Lite

We went with a modular monolith—a single deployable application, but organized into isolated modules (bounded contexts in DDD terms) that communicate through events.

Why this made sense:

1. Single Deployment, Clear Boundaries

We get the operational simplicity of a monolith (one deploy, one server, one log stream) with the organizational benefits of microservices (modules own their domain, communicate via contracts).

2. Event-Driven from the Start

Even though all modules are in the same process, they communicate through Kafka events. When the monitoring module detects a pricing change, it publishes a PricingChangedEvent. The notifications module subscribes and sends alerts.
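
To make this concrete, here’s roughly what the publishing side looks like. This is a simplified sketch using kafkajs; the event fields and topic name are illustrative rather than our exact contract.

```typescript
// A simplified sketch; event fields and topic name are illustrative.
import { Kafka } from 'kafkajs';

// The event contract: the only thing other modules (or future services) depend on.
export class PricingChangedEvent {
  constructor(
    public readonly competitorId: string,
    public readonly oldPrice: number,
    public readonly newPrice: number,
    public readonly detectedAt: string = new Date().toISOString(),
  ) {}
}

const kafka = new Kafka({ clientId: 'monitoring', brokers: ['localhost:9092'] });
const producer = kafka.producer();

// Called by the monitoring module when it detects a change.
// In real code you'd connect once at startup rather than per call.
export async function publishPricingChange(event: PricingChangedEvent) {
  await producer.connect();
  await producer.send({
    topic: 'signals.pricing-changed',
    messages: [{ key: event.competitorId, value: JSON.stringify(event) }],
  });
}
```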

Why Kafka in a monolith? Two reasons:

  1. Resilience by default. Consumers process events asynchronously, so a slow or crashed module never blocks signal detection; events queue up and get handled when it recovers.
  2. The event contract is the only coupling. Modules share topics and schemas, nothing else, which is exactly what makes extraction cheap later.

3. Extractable by Design

When we eventually need to scale the scraping module independently (it’s CPU-bound), we can extract it to a separate service with zero changes to the event contracts. The rest of the system won’t even notice.
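
To show what “zero changes” means: whether a consumer runs inside the monolith or as its own process, the consuming code is identical; it depends only on the broker address, the topic, and the event shape. A hypothetical standalone entry point, with names and topic kept illustrative:

```typescript
// A hypothetical extracted service; group id and topic are illustrative.
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'notifications', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'notifications' });

async function main() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'signals.pricing-changed', fromBeginning: false });
  await consumer.run({
    // Each message is the same event shape the monolith publishes.
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value!.toString());
      console.log(`alert: pricing changed for competitor ${event.competitorId}`);
    },
  });
}

main().catch(console.error);
```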

4. Team-Friendly

As the team grows, devs can own entire bounded contexts without stepping on each other’s toes. Clear module boundaries = fewer merge conflicts, clearer PRs.

The Structure

The codebase is organized around bounded contexts—each representing a distinct area of the domain:

  - Scraping: collects raw signals from competitor sources
  - Monitoring: decides when a signal represents a meaningful change (the pricing detector lives here)
  - Notifications: turns detected changes into alerts for users

Each context follows a layered architecture: domain logic, application use cases, infrastructure adapters. Modules communicate exclusively through domain events published to Kafka—no direct function calls across boundaries.
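
For illustration, the layout looks something like this (directory names here are representative, not an exact listing):

```
src/modules/
  scraping/
    domain/           # entities, value objects, domain events
    application/      # use cases
    infrastructure/   # HTTP clients, Kafka producers, repositories
  monitoring/
    domain/
    application/      # e.g., the pricing change detector
    infrastructure/
  notifications/
    domain/
    application/
    infrastructure/
```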

This structure means we can reason about each module independently. When you’re working on the pricing detector, you don’t need to understand how notifications work. You just publish a PricingChangedEvent and move on.

Implementation Decisions

Beyond the high-level architecture, a few specific choices shaped how we work:

Shared domain patterns. We built base classes for common patterns—DomainEvent for all events, Entity for domain objects with identity, Result for explicit error handling. This enforces consistency: every module speaks the same language.
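
Much simplified (the real versions carry more), the shared patterns look roughly like this:

```typescript
// Simplified sketches of the shared base classes; real versions carry more.

// Every event gets a name and a timestamp, so logging and serialization stay uniform.
export abstract class DomainEvent {
  readonly occurredAt = new Date().toISOString();
  abstract readonly eventName: string;
}

// Domain objects with identity compare by id, not by reference.
export abstract class Entity<TId> {
  constructor(public readonly id: TId) {}

  equals(other: Entity<TId>): boolean {
    return this.id === other.id;
  }
}

// Explicit success/failure instead of thrown exceptions.
export class Result<T> {
  private constructor(
    public readonly isSuccess: boolean,
    private readonly value?: T,
    public readonly error?: string,
  ) {}

  static ok<T>(value?: T): Result<T> {
    return new Result<T>(true, value);
  }

  static fail<T>(error: string): Result<T> {
    return new Result<T>(false, undefined, error);
  }

  getValue(): T {
    if (!this.isSuccess) throw new Error('Cannot read the value of a failed Result');
    return this.value as T;
  }
}
```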

Initially, this felt like over-engineering. But it’s paid off: code reviews are easier, new team members onboard faster, and refactoring is safer because contracts are explicit.

Event-first communication. Modules publish domain events to Kafka rather than calling each other’s functions. This felt heavyweight at first—“why not just call the notification service?”—but it’s made the codebase more resilient. If the notifications module is slow or crashes, it doesn’t block signal detection. Events queue up and get processed when the service recovers.

Local development with Docker. We use Docker Compose so new developers can run docker-compose up and have PostgreSQL and Kafka running locally. No “works on my machine” issues, no manual installation steps.
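
A minimal version of that compose file looks roughly like this; our real one has more services, and the image versions and credentials here are illustrative:

```yaml
# docker-compose.yml (minimal sketch; versions and credentials are illustrative)
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: intel
    ports:
      - "5432:5432"

  kafka:
    image: bitnami/kafka:3.7
    environment:
      # Single-node KRaft mode, no ZooKeeper needed
      KAFKA_CFG_NODE_ID: "0"
      KAFKA_CFG_PROCESS_ROLES: "controller,broker"
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "0@kafka:9093"
      KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092,CONTROLLER://:9093"
      KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9092"
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT"
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
    ports:
      - "9092:9092"
```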

Trade-offs We Accepted

Every decision has costs. Here’s what we’re living with:

1. More boilerplate than raw Express
NestJS is opinionated. You write more files (module, controller, service, repository) than you would with Express routes; there’s a sketch of what that looks like after this list. But this structure pays off as the codebase grows.

2. Kafka adds infrastructure complexity
Running Kafka locally is heavier than “just use function calls.” But the decoupling and async-first design are worth it.

3. Discipline required
Maintaining module boundaries takes effort. Without code reviews that enforce “no cross-module imports,” the architecture degrades; a lint rule can automate part of the check (sketched after this list). We need to stay vigilant.

4. Not optimized for heavy CPU workloads yet
If scraping becomes a bottleneck, we’ll need to extract it. But we’ll cross that bridge when we have the data to justify it.
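
On trade-off 1, here’s the kind of wiring NestJS expects for even a small feature. It’s more ceremony than an Express route, but every piece has a defined home. Names are illustrative, and in a real module each class would live in its own file:

```typescript
// Illustrative NestJS wiring; in practice each class lives in its own file.
import { Controller, Get, Injectable, Module } from '@nestjs/common';

@Injectable()
export class MonitoringService {
  listSignals(): string[] {
    return ['pricing', 'releases', 'job-postings'];
  }
}

@Controller('signals')
export class MonitoringController {
  constructor(private readonly service: MonitoringService) {}

  @Get()
  list(): string[] {
    return this.service.listSignals();
  }
}

@Module({
  controllers: [MonitoringController],
  providers: [MonitoringService],
})
export class MonitoringModule {}
```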
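
And on trade-off 3, code review doesn’t have to carry the boundary rule alone. If you use ESLint, eslint-plugin-import’s no-restricted-paths rule can enforce it automatically; the paths below are illustrative:

```js
// .eslintrc.js (excerpt); module paths are illustrative
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': [
      'error',
      {
        zones: [
          // notifications must not import monitoring internals, and vice versa
          { target: './src/modules/notifications', from: './src/modules/monitoring' },
          { target: './src/modules/monitoring', from: './src/modules/notifications' },
        ],
      },
    ],
  },
};
```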

Reflection: Would I Make the Same Decision Again?

Yes, given the same constraints.

If we had:

  - A larger team that could own services end to end
  - Hard, proven scaling requirements
  - A domain we already understood well

…then microservices from day one would have been a serious contender.

But with a small team, a need to ship features quickly, and a domain we’re still learning, a modular monolith was the pragmatic choice.

What Surprised Me

The value of shared domain patterns. I initially thought DomainEvent, Entity, and Result were over-engineering. But they’ve enforced consistency across all modules and made code reviews easier. Everyone speaks the same language.

How easy Docker Compose makes onboarding. Seeing new devs run npm run docker:up and have a working environment in 2 minutes is worth the upfront setup.

What I’d Change

Start with fewer signals. We planned for 9 signals from the start. In hindsight, implementing 2-3 end-to-end first would’ve validated the architecture faster and reduced scope.

More upfront schema design. We’re iterating on database schemas as we go. Some upfront design sessions would’ve saved refactor time.

Key Lesson

Architecture decisions should serve your constraints, not textbook ideals.

There’s no “best” architecture—only trade-offs. Microservices aren’t inherently better than monoliths. DDD isn’t always the answer. The right choice depends on:

  - Your team’s size and what it can realistically operate
  - How fast you need to ship
  - How well you understand the domain
  - What you actually know, rather than guess, about your scaling needs

Start with the simplest thing that could work, but build in seams so you can evolve. A well-structured monolith beats a poorly executed microservices architecture every time.


Building something similar? I’d love to hear how you approached the architecture decision. Connect with me on LinkedIn.