Why Your Microservices Might Just Be a Distributed Monolith

Microservices architecture has been the darling of the software development world for years. Companies large and small have rushed to break down their monolithic applications into smaller, independently deployable services. The promise? Better scalability, faster development cycles, and increased resilience.
But here’s a hard truth many development teams are reluctant to admit: what they’ve built isn’t truly a microservices architecture at all. Instead, they’ve created what’s often called a “distributed monolith” — a system that has all the drawbacks of both monoliths and microservices, with few of the benefits of either.
What is a Distributed Monolith?
A distributed monolith is an architectural anti-pattern in which a system is split into multiple, separately deployed services that are so tightly coupled they must be changed together, tested together, released together, and, all too often, fail together. It’s like taking a monolith, chopping it into pieces, distributing those pieces across different servers, and then connecting them with a complex web of synchronous dependencies.
This results in a system that’s more complex than a monolith but without the benefits that true microservices should provide.
Signs Your “Microservices” Are Actually a Distributed Monolith
Let’s explore the telltale signs that your microservices architecture might actually be a distributed monolith in disguise:
1. Synchronous Communication Dominance
In a true microservices architecture, services should be able to function independently. But in a distributed monolith, services are often tightly coupled through synchronous API calls.
If Service A needs to wait for a response from Service B before it can complete a request, and Service B needs to wait for Service C, you’ve created a chain of dependencies that undermines the independence principle of microservices.
// Example of a problematic synchronous dependency chain
async function processOrder(order) {
  // Service A blocks on Service B (the user service)...
  const userInfo = await userService.getUserInfo(order.userId);
  // ...then on Service C (the inventory service)...
  const inventory = await inventoryService.checkAvailability(order.productId);
  // ...then on Service D (the payment service)
  const paymentStatus = await paymentService.processPayment(order.id);
  // If any of these calls fails, the entire request fails
  return createOrderResponse(userInfo, inventory, paymentStatus);
}
2. Shared Database or Data Models
One of the fundamental principles of microservices is database independence. Each service should own and manage its data.
If multiple services are reading from or writing to the same database tables, or if changes to a data model in one service require changes in other services, you’re dealing with a distributed monolith.
// Service A and Service B both accessing the same database table
// Service A
const users = await database.query("SELECT * FROM users WHERE status = 'active'");
// Service B
await database.query("UPDATE users SET last_login = NOW() WHERE user_id = ?", [userId]);
3. Coordinated Deployments
In a true microservices architecture, you should be able to deploy services independently without affecting others. If you find yourself saying, “We need to deploy Service A, B, and C together because of their interdependencies,” you’ve built a distributed monolith.
This often manifests in elaborate release plans and deployment schedules that coordinate multiple services, defeating one of the main advantages of microservices: independent deployability.
4. Shared Libraries and Code
While code reuse is generally good practice, excessive sharing of libraries between services can create hidden dependencies. If a change to a shared library requires updates to multiple services, those services are coupled.
// Shared library used by multiple services.
// If this ValidationUtils library changes, all services using it may need to be updated and redeployed.
import { ValidationUtils } from 'shared-utils';

function validateUserInput(input) {
  if (!ValidationUtils.isValidEmail(input.email)) {
    throw new Error('Invalid email format');
  }
  // More validation logic
}
5. Cascading Failures
When one service goes down and it brings down a chain of other services, you’re experiencing a classic symptom of a distributed monolith. True microservices should implement resilience patterns (circuit breakers, fallbacks, etc.) to prevent cascading failures.
// Missing resilience patterns
async function getUserProfile(userId) {
  try {
    // If userService is down, this will throw an error
    const userBasicInfo = await userService.getBasicInfo(userId);

    // If the call above fails, these never execute
    const userPreferences = await preferencesService.getPreferences(userId);
    const userActivityStats = await activityService.getStats(userId);

    return {
      ...userBasicInfo,
      preferences: userPreferences,
      activity: userActivityStats
    };
  } catch (error) {
    // The entire function fails if any one service fails
    throw new Error('Failed to retrieve user profile');
  }
}
6. Transactional Boundaries Spanning Multiple Services
If you need to maintain transaction integrity across multiple services, you might be dealing with a distributed monolith. True microservices should be designed around business capabilities with clear transactional boundaries.
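When a business operation genuinely must span services, the usual alternative to a distributed transaction is a saga: each service commits its own local transaction, and failures trigger compensating actions. Here is a minimal sketch of that idea; the service methods (reserve, release, refund, schedule) are hypothetical placeholders, not an API shown elsewhere in this article.

// Saga sketch: each step is a local transaction in one service;
// on failure, previously completed steps are compensated in reverse order
async function placeOrderSaga(order) {
  const compensations = [];
  try {
    await inventoryService.reserve(order.items);
    compensations.push(() => inventoryService.release(order.items));

    await paymentService.charge(order.paymentMethod, order.total);
    compensations.push(() => paymentService.refund(order.paymentMethod, order.total));

    await fulfillmentService.schedule(order.id);
    return { orderId: order.id, status: 'confirmed' };
  } catch (error) {
    for (const compensate of compensations.reverse()) {
      await compensate();
    }
    return { orderId: order.id, status: 'failed', reason: error.message };
  }
}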
7. Tight Release Coupling
When a new feature requires coordinated changes across multiple services, it indicates tight coupling. In a proper microservices architecture, most features should be implementable by changing a single service or a small number of services.
Why Do Distributed Monoliths Happen?
Understanding how teams end up with distributed monoliths is crucial for avoiding this anti-pattern:
1. Premature Decomposition
Many teams jump into microservices without fully understanding their domain. Breaking down a system along the wrong boundaries leads to services that remain tightly coupled.
There’s wisdom in Martin Fowler’s “Monolith First” advice: start with a monolith and extract services only once you understand the domain well enough to draw the right boundaries.
2. Misunderstanding Microservices Principles
Some teams adopt microservices as a trendy architectural choice without internalizing the underlying principles and trade-offs. They focus on the “micro” part (small services) without considering the “service” part (independent, autonomous components).
3. Technical Conway’s Law
Conway’s Law observes that organizations design systems that mirror their own communication structures. If your organization isn’t structured around independent teams that own separate services, your architecture will likely reflect that in the form of tightly coupled services.
4. Legacy Migration Challenges
When migrating from a monolith to microservices, it’s tempting to simply split the codebase into separate deployables without addressing the underlying coupling. This often results in a distributed version of the original monolith.
5. Overlooking Data Coupling
Many teams focus on decoupling services at the API level but overlook the coupling that occurs through shared data models and databases. Data coupling can be even more problematic than API coupling.
The Real Costs of a Distributed Monolith
A distributed monolith combines the worst aspects of both architectural styles:
1. Increased Complexity Without Proportional Benefits
You take on all the distributed systems challenges (network latency, eventual consistency, distributed debugging) without gaining the flexibility and scalability benefits of true microservices.
2. Higher Operational Overhead
You now have multiple services to deploy, monitor, and maintain, but they still need to be coordinated as if they were a single application.
3. Slower Development and Deployment
Changes require coordination across services, slowing down development cycles and increasing the risk of integration issues.
4. Reduced Resilience
Tight coupling between services means failures propagate easily through the system, potentially causing widespread outages.
5. Debugging Nightmares
Tracing issues across service boundaries is significantly more complex than within a monolith, especially when services are interdependent.
// Imagine debugging this chain of calls across multiple services

// Service A
async function processUserRequest(requestId) {
  logger.info(`Starting process for request ${requestId}`);
  const result = await serviceB.performOperation(requestId);
  logger.info(`Completed process for request ${requestId}`);
  return result;
}

// Service B
async function performOperation(requestId) {
  logger.info(`Service B processing request ${requestId}`);
  const data = await serviceC.fetchData(requestId);
  const processed = await internalProcessing(data);
  logger.info(`Service B completed processing for ${requestId}`);
  return processed;
}

// Service C
async function fetchData(requestId) {
  logger.info(`Service C fetching data for ${requestId}`);
  // If this lookup fails, the error has to bubble all the way back up
  // through Service B and Service A (dataStore stands in for Service C's own storage)
  return dataStore.query('SELECT * FROM requests WHERE id = ?', [requestId]);
}
How to Evolve from a Distributed Monolith to True Microservices
If you’ve recognized your architecture as a distributed monolith, don’t despair. Here are strategies to evolve toward a true microservices architecture:
1. Embrace Asynchronous Communication
Replace synchronous API calls with asynchronous messaging patterns where appropriate. Event-driven architectures can help decouple services by allowing them to communicate without direct dependencies.
// Before: Synchronous communication
async function createOrder(orderData) {
  const inventory = await inventoryService.checkAndReserve(orderData.items);
  const payment = await paymentService.processPayment(orderData.payment);
  const shipping = await shippingService.scheduleDelivery(orderData.address);
  return createOrderRecord(orderData, inventory, payment, shipping);
}

// After: Asynchronous event-based communication
function createOrder(orderData) {
  const orderId = createPendingOrder(orderData);
  eventBus.publish('order.created', { orderId, orderData });
  return { orderId, status: 'processing' };
}

// Separate handlers in each service subscribe to relevant events
function handleOrderCreated(event) {
  const { orderId, orderData } = event;
  const inventoryResult = processInventory(orderData.items);
  eventBus.publish('inventory.processed', { orderId, inventoryResult });
}

function handleInventoryProcessed(event) {
  const { orderId, inventoryResult } = event;
  if (inventoryResult.success) {
    // Process payment
    const paymentResult = processPayment(orderId);
    eventBus.publish('payment.processed', { orderId, paymentResult });
  } else {
    eventBus.publish('order.failed', { orderId, reason: 'inventory' });
  }
}
2. Implement Domain-Driven Design (DDD)
Use DDD principles to identify bounded contexts and align service boundaries with business capabilities rather than technical concerns. This helps ensure that services are truly independent.
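As a rough illustration (the field names here are invented), the same real-world concept often looks different in each bounded context, which is a strong hint that the contexts deserve separate services and separate models:

// "Customer" as the ordering context sees it
const orderingCustomer = {
  customerId: 'c-123',
  defaultShippingAddress: '221B Baker Street',
  paymentMethods: ['visa-4242']
};

// "Customer" as the support context sees it: same person, different model
const supportCustomer = {
  customerId: 'c-123',
  openTickets: 2,
  preferredChannel: 'email'
};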
3. Adopt the Strangler Fig Pattern
Instead of attempting a big-bang transformation, gradually migrate functionality from the distributed monolith to properly designed microservices, one bounded context at a time.
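In practice, the strangler fig often starts as a thin routing layer in front of the old system: traffic for an already-extracted bounded context goes to the new service, and everything else still hits the legacy deployable. Below is a minimal sketch using Express and http-proxy-middleware; the target URLs are hypothetical.

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// The orders bounded context has been extracted into its own service
app.use('/orders', createProxyMiddleware({ target: process.env.NEW_ORDERS_SERVICE_URL, changeOrigin: true }));

// Everything else still goes to the legacy application until it is strangled away
app.use('/', createProxyMiddleware({ target: process.env.LEGACY_APP_URL, changeOrigin: true }));

app.listen(8080);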
4. Establish Clear Service Ownership
Assign dedicated teams to own specific services, giving them autonomy over their service’s lifecycle, from development to deployment and operations.
5. Implement Resilience Patterns
Add circuit breakers, timeouts, and fallback mechanisms to prevent cascading failures and improve system resilience.
// Implementing a circuit breaker pattern
const circuitBreaker = new CircuitBreaker({
  failureThreshold: 3,
  resetTimeout: 30000, // 30 seconds
  fallback: () => ({ status: 'degraded', data: getCachedData() })
});

async function getUserData(userId) {
  return circuitBreaker.execute(() => userService.getData(userId));
}
6. Decouple Data
Move away from shared databases toward a model where each service owns its data. Use data replication, CQRS (Command Query Responsibility Segregation), or event sourcing patterns to manage data that needs to be shared between services.
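One common way to remove a shared table is to let each service keep its own read-only copy of the data it needs, updated from events published by the owning service. Here is a small sketch of that idea, reusing the hypothetical eventBus from the earlier examples together with an invented product_read_model table:

// In the catalog service: change data it owns, then publish the fact
async function changePrice(productId, newPrice) {
  await catalogDb.query('UPDATE products SET price = ? WHERE id = ?', [newPrice, productId]);
  eventBus.publish('product.priceChanged', { productId, newPrice });
}

// In the order service: maintain a local read model instead of querying the catalog's tables
eventBus.subscribe('product.priceChanged', async ({ productId, newPrice }) => {
  await orderDb.query(
    'UPDATE product_read_model SET price = ? WHERE product_id = ?',
    [newPrice, productId]
  );
});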
7. Implement API Gateways and BFFs
Use API gateways or Backend-for-Frontend (BFF) patterns to simplify client interactions and reduce the need for clients to make multiple calls to different services.
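For example, a Backend-for-Frontend can fan out to the services a screen needs and return a single, client-shaped response, so the mobile app never has to orchestrate service calls itself. A sketch with hypothetical service URLs:

// BFF endpoint: one client call, aggregation happens server-side
app.get('/api/mobile/profile/:userId', async (req, res) => {
  const { userId } = req.params;
  const [user, orders] = await Promise.all([
    fetch(`${USER_SERVICE_URL}/users/${userId}`).then((r) => r.json()),
    fetch(`${ORDER_SERVICE_URL}/orders?userId=${userId}`).then((r) => r.json())
  ]);

  // Return exactly the shape the mobile client needs
  res.json({ name: user.name, recentOrders: orders.slice(0, 5) });
});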
8. Invest in Observability
Implement comprehensive logging, monitoring, and distributed tracing to gain visibility into your distributed system and make debugging easier.
// Implementing distributed tracing
app.use((req, res, next) => {
  const traceId = req.headers['x-trace-id'] || generateNewTraceId();
  const spanId = generateNewSpanId();

  // Attach trace context to the request so calls to downstream services can forward it
  req.traceContext = { traceId, spanId, parentSpanId: null };

  // Echo the trace id back to the caller for debugging
  res.setHeader('x-trace-id', traceId);

  logger.info('Request received', {
    traceId,
    spanId,
    path: req.path,
    method: req.method
  });
  next();
});
When a Monolith Might Actually Be Better
It’s important to acknowledge that microservices aren’t always the right choice. In some cases, a well-designed monolith might be preferable to both a distributed monolith and true microservices:
1. For Small to Medium-Sized Applications
If your application isn’t large enough to justify the overhead of microservices, a monolith can be simpler to develop, deploy, and maintain.
2. When Team Size is Limited
Microservices shine in organizations with multiple teams that can independently own different services. With a small team, the coordination overhead might outweigh the benefits.
3. For Applications with Simple Domains
If your domain doesn’t naturally decompose into distinct bounded contexts, forcing a microservices architecture can create unnecessary complexity.
4. When Time-to-Market is Critical
For startups and new products where validating ideas quickly is essential, starting with a monolith allows for faster iteration.
5. When Operational Resources are Constrained
Microservices require sophisticated operational capabilities (containerization, orchestration, service mesh, etc.). If these resources aren’t available, a monolith might be more practical.
Case Study: From Distributed Monolith to True Microservices
Let’s look at a hypothetical case study of a company that recognized their distributed monolith problem and took steps to address it:
The Initial State: E-Commerce Platform
An e-commerce company had broken its application into services that mirrored its core data entities rather than its business capabilities:
- User Service (authentication, profiles)
- Product Service (catalog, inventory)
- Order Service (cart, checkout)
- Payment Service (processing payments)
Despite having separate services, they faced several issues:
- All services used the same shared database
- Changes to the product schema required coordinated deployments across multiple services
- The order flow involved synchronous calls to all other services, creating a chain of dependencies
- Releases became increasingly complex, requiring all services to be deployed together
The Transformation
The company took the following steps to evolve their architecture:
1. Domain Analysis and Bounded Contexts
They conducted workshops to identify true bounded contexts in their business domain, resulting in a different service breakdown:
- Customer Context (profiles, preferences, support history)
- Catalog Context (products, categories, search)
- Inventory Context (stock management, reservations)
- Order Context (cart, checkout process)
- Fulfillment Context (shipping, delivery tracking)
- Payment Context (payment processing, refunds)
2. Database Decoupling
Each context got its own database, with careful consideration of data ownership. They implemented:
- Data replication for read-only needs across services
- Event sourcing for critical state changes (see the sketch after this list)
- CQRS pattern for separating read and write models
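To make the event-sourcing piece concrete, here is a small sketch of how the Inventory context might record state changes as an append-only stream and rebuild current state from it; the eventStore API and event names are hypothetical:

// Reserve stock by appending an event rather than updating a shared row
async function reserveStock(productId, quantity) {
  const events = await eventStore.readStream(`inventory-${productId}`);
  const state = events.reduce(applyInventoryEvent, { onHand: 0, reserved: 0 });

  if (state.onHand - state.reserved < quantity) {
    throw new Error('Insufficient stock');
  }
  await eventStore.append(`inventory-${productId}`, {
    type: 'StockReserved',
    quantity,
    reservedAt: new Date().toISOString()
  });
}

// Current state is a fold over the event stream
function applyInventoryEvent(state, event) {
  switch (event.type) {
    case 'StockReceived':
      return { ...state, onHand: state.onHand + event.quantity };
    case 'StockReserved':
      return { ...state, reserved: state.reserved + event.quantity };
    default:
      return state;
  }
}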
3. Event-Driven Communication
They replaced most synchronous calls with event-based communication:
// Before: Synchronous order processing
async function placeOrder(cart, user) {
  // Check inventory synchronously
  const inventoryCheck = await inventoryService.checkAvailability(cart.items);
  if (!inventoryCheck.available) {
    throw new Error('Items not available');
  }

  // Process payment synchronously
  const payment = await paymentService.charge(user.paymentMethod, cart.total);
  if (!payment.successful) {
    throw new Error('Payment failed');
  }

  // Create order synchronously
  const order = await orderService.create({
    user: user.id,
    items: cart.items,
    payment: payment.id
  });

  // Schedule fulfillment synchronously
  await fulfillmentService.scheduleDelivery(order.id);
  return order;
}

// After: Event-driven order processing
function placeOrder(cart, user) {
  // Create pending order
  const orderId = generateOrderId();

  // Publish event
  eventBus.publish('order.initiated', {
    orderId,
    userId: user.id,
    items: cart.items,
    paymentDetails: {
      method: user.paymentMethod,
      amount: cart.total
    }
  });

  return {
    orderId,
    status: 'processing'
  };
}
4. Resilience Patterns
They implemented circuit breakers, retries, and fallbacks for the remaining synchronous communications:
const productService = new CircuitBreaker(
  async (productId) => {
    const response = await fetch(`${PRODUCT_SERVICE_URL}/products/${productId}`);
    if (!response.ok) throw new Error('Product service error');
    return response.json();
  },
  {
    failureThreshold: 3,
    resetTimeout: 10000,
    fallback: (productId) => getCachedProduct(productId) || {
      id: productId,
      name: 'Product information temporarily unavailable',
      price: null,
      status: 'unavailable'
    }
  }
);
5. API Gateway Pattern
They introduced an API gateway to simplify client interactions and handle cross-cutting concerns:
- Authentication and authorization
- Request routing
- Response aggregation
- Rate limiting
6. Observability Improvements
They invested in comprehensive monitoring and tracing:
- Distributed tracing across all services
- Centralized logging with context preservation
- Business-level metrics dashboards
- Alerting based on service-level objectives (SLOs)
The Results
After implementing these changes, the company experienced:
- Independent deployability of services, with deployment frequency increasing from once every two weeks to multiple times per day
- Improved resilience, with partial system failures no longer affecting the entire platform
- Better scalability, allowing them to scale individual services based on demand
- Faster feature delivery through parallel development by autonomous teams
- Clearer ownership boundaries, reducing coordination overhead
Conclusion: Finding the Right Balance
The journey from a distributed monolith to true microservices isn’t easy, but it’s worth the effort for systems that genuinely benefit from this architectural style. The key is honesty about your current architecture and a pragmatic approach to improvement.
Remember these principles:
- Start with business domains, not technical boundaries. Services should align with business capabilities.
- Embrace asynchronous communication to reduce coupling between services.
- Respect service autonomy in both data and deployment.
- Design for failure with appropriate resilience patterns.
- Invest in operational excellence with robust monitoring, logging, and tracing.
- Be pragmatic about which parts of your system benefit from microservices and which might be better as a monolith.
Most importantly, don’t get caught up in architectural dogma. The goal isn’t to have microservices; it’s to have a system architecture that supports your business needs, enables your teams to work effectively, and provides value to your users.
Whether you choose microservices, a monolith, or something in between, make that choice deliberately based on your specific context, not just because it’s the latest trend in software architecture.
And if you find yourself with a distributed monolith, take heart. Recognizing the problem is the first step toward solving it, and with a thoughtful approach, you can evolve your architecture into something that truly delivers on the promises of microservices — or perhaps discover that a well-designed monolith was what you needed all along.