
Most teams build services first and extract shared code later. We did the opposite — investing in shared utility libraries before building a single service. The result: 34 services with zero duplicated infrastructure code, production-grade security from day one, and a development velocity that accelerated with every new service.
There is a counterintuitive decision at the heart of TCTF's architecture: we built our shared utility libraries before we built our first service. Most teams do the opposite — they build services, notice duplication, and extract shared code after the fact. We identified the patterns every service would need (error handling, logging, security, validation, response formatting) and built them as a shared library first. This meant the first service took longer to ship, but the second service took half the time, and by the 34th service, spinning up a new backend was measured in days, not weeks. This article is the story of that investment — what we built, why we built it first, and how it paid for itself 34 times over.
The conventional wisdom in software engineering is to avoid premature abstraction. Build the first service, then the second, and when you see duplication, extract it. This advice is sound for small projects where the scope is uncertain. But TCTF was never a small project. We knew from the architecture phase that we would build dozens of services. We knew every service would need error handling, logging, input validation, security sanitization, and response formatting. The patterns were predictable.
The question was not whether to build shared libraries — it was when. Building them after the first few services means those early services develop their own patterns. Each team solves the same problems differently. Error responses have different formats. Logging uses different structures. Security sanitization is inconsistent. Then you spend months retrofitting the shared library into services that already work, breaking things along the way.
Building the shared library first eliminates this problem entirely. The first service adopts the shared patterns from day one. The second service does the same. By the time you reach the 34th service, every service speaks the same language — same error formats, same log structures, same security guarantees. The consistency is not enforced by code reviews or documentation — it is enforced by the library itself.
The cost was real: the first service took longer to ship because we were building the library in parallel. But the investment curve flipped quickly. The second service shipped in half the time; the fifth in a quarter of the time. By the tenth service, the library was saving more time on each new service than it had cost to build in the first place. That is the compounding return of shared infrastructure.
📐 Building shared libraries before services is counterintuitive. The first service takes longer. But by the 34th service, the investment has paid for itself many times over. Consistency is enforced by the library, not by code reviews.

Security is the domain where shared libraries matter most. A single service with an injection vulnerability is a security incident. A platform with 34 services and inconsistent sanitization is a catastrophe waiting to happen. Our security module provides 12 sanitization functions for 12 distinct contexts: HTML content, DynamoDB parameters, NoSQL queries, file paths, URLs, regular expressions, JSON payloads, phone numbers, numeric inputs, object keys, control characters, and general user input.
Each sanitizer is purpose-built for its context. The HTML sanitizer strips script tags and event handlers while preserving safe formatting. The DynamoDB sanitizer validates and constrains expression attribute values and condition expressions to prevent injection through crafted filter or key condition strings. The NoSQL sanitizer handles DynamoDB-specific vectors — ensuring that expression attribute names and values cannot be manipulated to access unintended data, and that reserved words in expressions are properly escaped. The path sanitizer prevents directory traversal. The URL sanitizer validates schemes and prevents javascript: protocol injection. Every service imports the same sanitizers, and every input passes through the appropriate one before processing.
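To make the pattern concrete, here is a minimal sketch of what the path sanitizer might look like. The function name and error handling are illustrative, not the library's actual API:

```typescript
import path from "node:path";

/**
 * Illustrative path sanitizer: resolves a user-supplied relative path
 * against an allowed base directory and rejects anything that would
 * escape it (directory traversal via "..", absolute paths, etc.).
 */
export function sanitizePath(baseDir: string, userPath: string): string {
  // Decode URL-encoded traversal attempts before checking.
  const decoded = decodeURIComponent(userPath);

  // Resolve to an absolute path and verify it stays inside baseDir.
  const resolved = path.resolve(baseDir, decoded);
  const base = path.resolve(baseDir) + path.sep;

  if (!resolved.startsWith(base)) {
    throw new Error(`Path escapes base directory: ${userPath}`);
  }
  return resolved;
}

// sanitizePath("/srv/uploads", "avatars/u42.png")  -> "/srv/uploads/avatars/u42.png"
// sanitizePath("/srv/uploads", "../../etc/passwd") -> throws
```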
Beyond sanitization, the security module includes JWT utilities for token inspection and validation, data masking for logs (transforming email addresses to patterns like s***e@c***d.org so sensitive data never appears in plain text in log output), an API key service with built-in rotation support, and CSRF token generation and validation. These are not optional add-ons — they are the default security posture for every service.
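The masking transformation itself is simple. Here is a sketch of the email masker, assuming the first-and-last-character pattern shown above (the helper name is illustrative):

```typescript
/**
 * Illustrative log-masking helper: keeps the first and last character
 * of the local part and the domain name, masking everything between,
 * so "someone@cradled.org" becomes "s***e@c***d.org".
 */
export function maskEmail(email: string): string {
  const mask = (s: string) =>
    s.length <= 2 ? `${s[0] ?? ""}***` : `${s[0]}***${s[s.length - 1]}`;

  const [local, domain] = email.split("@");
  if (!domain) return "***"; // not an email; mask everything

  const lastDot = domain.lastIndexOf(".");
  if (lastDot === -1) return `${mask(local)}@${mask(domain)}`;

  const name = domain.slice(0, lastDot);
  const tld = domain.slice(lastDot); // ".org", kept as-is
  return `${mask(local)}@${mask(name)}${tld}`;
}
```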
The zero-trust principle means every service validates every input, even for internal service-to-service calls. A compromised service cannot inject malicious data into another service because the receiving service sanitizes all inputs regardless of the source. This defense-in-depth approach means a single vulnerability does not cascade into a platform-wide breach.
Getting any one of these 12 sanitizers wrong in any one of 34 services creates a platform-wide vulnerability. The shared library ensures that the security implementation is written once, reviewed thoroughly, tested extensively, and used everywhere. When a new attack vector is discovered, we patch the library once and every service is protected.
🔒 12 sanitization functions for 12 contexts. JWT utilities, data masking, API key rotation, CSRF tokens. Every service uses the same security module — getting any one wrong in any one service creates a platform-wide vulnerability.
Error handling is one of those things that seems simple until you have 34 services each doing it differently. Different error response formats, different HTTP status codes for the same conditions, different log structures — debugging becomes archaeology. Our shared error handling architecture eliminates this entirely with a BaseError class hierarchy that every service extends.
The BaseError class carries an error code, an HTTP status mapping, a user-facing message, and structured metadata for logging. Subclasses like ValidationError, AuthenticationError, NotFoundError, and RateLimitError map to specific HTTP status codes and error response formats. When a service throws a ValidationError, the response format is identical whether it comes from the user service, the billing service, or the messaging service.
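In TypeScript terms, the hierarchy looks roughly like this. A sketch: the field names follow the description above, while the specific codes and constructors are illustrative:

```typescript
/** Illustrative sketch of the shared error hierarchy. */
export abstract class BaseError extends Error {
  constructor(
    public readonly code: string,       // machine-readable error code
    public readonly statusCode: number, // HTTP status mapping
    message: string,                    // user-facing message
    public readonly details?: unknown,  // structured metadata for logging
  ) {
    super(message);
    this.name = new.target.name;
  }
}

export class ValidationError extends BaseError {
  constructor(message: string, details?: unknown) {
    super("VALIDATION_ERROR", 400, message, details);
  }
}

export class AuthenticationError extends BaseError {
  constructor(message = "Authentication required") {
    super("AUTHENTICATION_ERROR", 401, message);
  }
}

export class NotFoundError extends BaseError {
  constructor(resource: string) {
    super("NOT_FOUND", 404, `${resource} not found`);
  }
}

export class RateLimitError extends BaseError {
  constructor(retryAfterSeconds: number) {
    super("RATE_LIMITED", 429, "Too many requests", { retryAfterSeconds });
  }
}
```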
The error response format is standardized across the platform: a JSON object with a status code, an error code (machine-readable), a message (human-readable), and optional details (field-level validation errors, rate limit information, etc.). Frontend developers know exactly what to expect from every API call. Error handling code in the React app does not need service-specific logic — one error handler works for all 34 services.
Structured error logging follows the same pattern. Every error is logged as structured JSON with the error code, HTTP status, request ID, service name, timestamp, and stack trace. When you are debugging a production issue at 2 AM, you can search for an error code across all services and get consistent, parseable results. You do not need to know which service threw the error to understand the log entry.
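Putting the two together, a service's top-level error handler is a short piece of shared code. The sketch below uses Express-style middleware for illustration; the real services may wire this differently:

```typescript
import type { NextFunction, Request, Response } from "express";
import { BaseError } from "./errors"; // the hierarchy sketched above

const SERVICE_NAME = process.env.SERVICE_NAME ?? "unknown-service";

export function errorHandler(
  err: unknown,
  req: Request,
  res: Response,
  _next: NextFunction,
): void {
  const isKnown = err instanceof BaseError;
  const statusCode = isKnown ? err.statusCode : 500;
  const code = isKnown ? err.code : "INTERNAL_ERROR";
  const message = isKnown ? err.message : "An unexpected error occurred";
  const details = isKnown ? err.details : undefined;

  // Structured error log: one JSON object per line, same fields everywhere.
  console.error(
    JSON.stringify({
      level: "error",
      code,
      statusCode,
      requestId: req.headers["x-request-id"],
      service: SERVICE_NAME,
      timestamp: new Date().toISOString(),
      stack: err instanceof Error ? err.stack : undefined,
    }),
  );

  // Standardized response body: identical shape across all services.
  res.status(statusCode).json({ statusCode, code, message, details });
}
```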
The consistency extends to error documentation. Because every service uses the same error codes and response formats, the API documentation for error responses is generated once and applies everywhere. New developers learn one error handling pattern and can work on any service.
🏗️ One BaseError class hierarchy, one response format, one logging structure — across 34 services. When you debug a production issue, you know exactly what the error looks like regardless of which service threw it.

Notifications are a cross-cutting concern that touches every service. When a user submits a project, the activity feed needs an entry, the project reviewers need an email, and analytics needs a tracking event. When a message is sent, the recipient needs a push notification, the conversation feed needs an update, and the unread count needs to increment. Implementing this logic in every service would be a maintenance nightmare.
The activity publisher reduces all of this to a single function call: publisher.publish(event). The event object contains the event type, the actor, the target, and any relevant metadata. The publisher sends the event to SNS, which fans out to subscribers. The activity service handles the feed. The communication service handles email and push notifications. The analytics service handles tracking. The producing service does not know or care about the downstream consumers.
We support over 30 event types across 6 phases of the platform lifecycle: onboarding events, project events, community events, billing events, messaging events, and administrative events. Each event type has a defined schema, and the publisher validates the event against the schema before publishing. This catches malformed events at the source rather than in the consumer.
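Here is a sketch of that interface, using AWS SDK v3 for SNS. The event fields follow the description above, while the topic ARN, sample event types, and validation logic are illustrative:

```typescript
import { PublishCommand, SNSClient } from "@aws-sdk/client-sns";

// Illustrative event shape; the real schema registry covers 30+ types.
interface ActivityEvent {
  type: string;   // e.g. "project.submitted"
  actor: string;  // user or service that triggered the event
  target: string; // entity the event is about
  metadata?: Record<string, unknown>;
}

// Minimal per-type check; the real publisher validates each event
// type against its registered schema before publishing.
const KNOWN_TYPES = new Set(["project.submitted", "message.sent"]); // sample

function validate(event: ActivityEvent): void {
  if (!KNOWN_TYPES.has(event.type)) {
    throw new Error(`Unknown event type: ${event.type}`);
  }
  if (!event.actor || !event.target) {
    throw new Error("Events require an actor and a target");
  }
}

const sns = new SNSClient({});
const TOPIC_ARN = process.env.ACTIVITY_TOPIC_ARN!; // assumed env var

export const publisher = {
  async publish(event: ActivityEvent): Promise<void> {
    validate(event); // catch malformed events at the source

    await sns.send(
      new PublishCommand({
        TopicArn: TOPIC_ARN,
        Message: JSON.stringify(event),
        // Message attributes let SNS subscribers filter by event type.
        MessageAttributes: {
          eventType: { DataType: "String", StringValue: event.type },
        },
      }),
    );
  },
};

// The single integration point for a service:
// await publisher.publish({ type: "project.submitted", actor: "user-42", target: "project-7" });
```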
The pipeline handles the complexity that individual services should not need to think about. Deduplication ensures that retried events do not create duplicate notifications. Aggregation batches rapid-fire events (like 10 comments in a minute) into a single notification. User preference checking ensures that users only receive notifications through channels they have opted into. Rate limiting prevents notification storms during high-activity periods.
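As one illustration, consumer-side deduplication can be as simple as remembering recently seen event IDs. This in-memory sketch assumes events carry a unique id; the real pipeline would more likely use a durable store, such as a DynamoDB conditional write with a TTL:

```typescript
// Illustrative consumer-side deduplication: retried deliveries carry
// the same event ID, so we remember recently seen IDs and drop repeats.
const seen = new Map<string, number>(); // eventId -> expiry (epoch ms)
const DEDUP_WINDOW_MS = 5 * 60 * 1000;

export function isDuplicate(eventId: string): boolean {
  const now = Date.now();

  // Evict expired entries so the map does not grow without bound.
  for (const [id, expiry] of seen) {
    if (expiry <= now) seen.delete(id);
  }

  if (seen.has(eventId)) return true;
  seen.set(eventId, now + DEDUP_WINDOW_MS);
  return false;
}

// In the notification consumer:
// if (isDuplicate(event.id)) return; // retried delivery, already handled
```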
The result is that a service developer adding a new feature needs exactly one line of code to integrate with the entire notification pipeline. They define the event type, call publisher.publish(), and the pipeline handles everything else. This is the power of shared infrastructure — the complexity is centralized, and the interface is simple.
📡 publisher.publish(event) — one function call is all any service needs. SNS fans out to the activity feed, email, push notifications, and analytics. 30+ event types, deduplication, aggregation, and user preferences — all handled by the pipeline.

The payoff of building shared libraries first is measurable. Every new service starts with production-grade security (12 sanitizers, JWT validation, CSRF protection), standardized error handling (BaseError hierarchy, consistent response formats, structured logging), and full notification support (activity publisher, 30+ event types, deduplication and aggregation) — all from day one, with zero additional implementation effort.
The time to ship a new service has decreased dramatically over the course of the project. The first service, built in parallel with the shared library, took several weeks. By the tenth service, the pattern was so well-established that a new service could be scaffolded, implemented, and deployed in days. The shared library handles the infrastructure concerns, and the service developer focuses entirely on business logic.
There is zero duplicated infrastructure code across the 34 services. No copy-pasted error handlers. No reimplemented sanitizers. No hand-rolled notification logic. When we find a bug in the shared library, we fix it once and every service benefits. When we add a new capability (a new sanitizer, a new event type, a new error code), every service has access to it immediately.
The consistency benefit is harder to quantify but equally important. Any developer can move between services without learning new patterns. Code reviews are faster because reviewers know the shared patterns. Onboarding new team members is simpler because there is one way to do things, not 34 different approaches to the same problems.
The shared library is not just a convenience — it is the architectural foundation that makes 34 services manageable. Without it, we would have 34 services with 34 different error formats, 34 different security implementations, and 34 different notification integrations. The shared library is the reason TCTF's backend is a coherent platform rather than a collection of disconnected services.
🚀 Every new service starts with production-grade security, error handling, logging, and notification support from day one. No copy-paste, no reinvention. The shared library is the multiplier that makes 34 consistent services possible.
Building shared libraries before building services is counterintuitive. It feels like you are delaying the real work. But the real work is not building one service — it is building 34 services that are consistent, secure, and maintainable. The shared library is the multiplier that makes that possible.