AI Prompt: Part 1 — Shared Infrastructure & Libraries¶
Series: Forma3D.Connect Microservice Decomposition + GridFlock STL Pipeline (Part 1 of 6)
Purpose: Set up the foundational shared infrastructure (Redis, BullMQ event bus, shared libraries) that all subsequent microservices will depend on
Estimated Effort: 8–12 hours
Prerequisites: Current monolithic API working, multi-tenancy infrastructure in place
Output: Redis in Docker Compose, libs/service-common (event bus, internal auth, service clients, user context), libs/gridflock-core (JSCAD generation library), all with passing tests
Status: 🚧 TODO
Next Part: Part 2 — API Gateway + Order Service
🎯 Mission¶
Set up the shared infrastructure and libraries that all microservices will use. This is the foundation layer — nothing else can be built without it. The existing monolith (apps/api) continues running unchanged after this part.
What this part delivers:
- Redis container in Docker Compose — for BullMQ event queues, session storage, and Socket.IO adapter
- libs/service-common — Shared microservice utilities:
  - BullMQ event bus (replaces EventEmitter2 for cross-service events)
  - Internal auth guard (API key validation for service-to-service calls)
  - Typed HTTP service clients (base class + concrete clients for each service)
  - User context middleware (extracts user info from gateway-provided headers)
  - Health indicator
- libs/gridflock-core — JSCAD-based GridFlock generation library:
  - Parametric Gridfinity baseplate generation
  - Plate set calculator (splits large grids into printer-bed-sized plates)
  - Connector geometry (intersection puzzle, edge puzzle)
  - Magnet holes, numbering
  - STL serialization
  - Printer profiles
- Unit tests for all shared libraries
What this part does NOT do:
- Does not create any microservices (those are Parts 2–4)
- Does not modify the existing monolith
- Does not touch Docker Compose beyond adding Redis
- Does not modify the CI/CD pipeline
📌 Context (Current State)¶
Current Monolith Architecture¶
Everything runs in a single NestJS process (apps/api):
apps/api/src/
├── analytics/ ─┐
├── audit/ │
├── auth/ │
├── cancellation/ │
├── common/ │
├── config/ │
├── database/ │ ALL IN ONE PROCESS
├── event-log/ │
├── fulfillment/ │
├── gateway/ (Socket.IO)│
├── notifications/ │
├── observability/ │
├── orchestration/ │
├── orders/ │
├── print-jobs/ │
├── product-mappings/ │
├── push-notifications/ │
├── retry-queue/ │
├── sendcloud/ │
├── shipments/ │
├── shopify/ │
├── simplyprint/ │
├── tenancy/ │
├── throttler/ │
├── users/ ─┘
└── versioning/
Current Inter-Module Communication¶
The monolith uses two patterns for cross-module communication:
1. Event-driven (EventEmitter2 — in-process):
Order created → ORDER_EVENTS.CREATED
→ Orchestration: sets order PROCESSING, creates print jobs
Print job completed → PRINT_JOB_EVENTS.COMPLETED
→ Orchestration: recalculates part counts, checks order completion
Order ready → ORCHESTRATION_EVENTS.ORDER_READY_FOR_FULFILLMENT
→ Fulfillment: creates Shopify fulfillment
Shipment created → SHIPMENT_EVENTS.CREATED
→ Fulfillment: adds tracking info to fulfillment
2. Interface-based injection (domain contracts):
IOrdersService → used by Orchestration, Fulfillment, Sendcloud
IPrintJobsService → used by Orchestration, Cancellation
IShipmentsService → used by Sendcloud, Orders
IFulfillmentService → used by Gateway (status checks)
These patterns must be preserved in the microservice architecture, but using distributed mechanisms instead of in-process calls.
Existing Shared Libraries¶
libs/domain/ — Domain entities, enums, types, errors, Zod schemas
libs/domain-contracts/ — Service interfaces (IOrdersService, etc.), DTOs
libs/api-client/ — External API client types (SimplyPrint, Sendcloud)
libs/config/ — Shared configuration constants
libs/utils/ — Date/string utilities
libs/observability/ — Sentry + OpenTelemetry configuration
libs/testing/ — Centralized test fixtures
Current Infrastructure (Docker Compose)¶
traefik — Reverse proxy with Let's Encrypt
api — NestJS monolith (port 3000)
web — React frontend (port 80)
docs — Documentation site
pgadmin — PostgreSQL admin
PostgreSQL is external (managed database, not in docker-compose).
🛠️ Tech Stack Reference¶
Same as existing, plus:
- Redis: BullMQ event queues + BullMQ job processing + session store + Socket.IO adapter
- BullMQ: bullmq, @nestjs/bullmq for async job processing
- JSCAD: @jscad/modeling, @jscad/stl-serializer for STL generation
- HTTP Proxy: http-proxy-middleware for API Gateway routing (installed now, used in Part 2)
- ioredis: Redis client for Node.js
🏗️ Target Architecture (Overview)¶
This is the target architecture for the full 6-part series. This part builds the shared foundation layer.
┌─────────────┐
│ Clients │
└──────┬──────┘
│
┌──────▼──────┐
│ API Gateway │ ← Part 2
└──────┬──────┘
│
┌────────────────────┼────────────────────┐
│ │ │
┌─────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐
│ Order │ │ Print │ │ Shipping │
│ Service │ │ Service │ │ Service │ ← Parts 2–3
└─────┬──────┘ └──────┬─────┘ └──────┬─────┘
│ ┌──────▼──────┐ │
│ │ GridFlock │ │ ← Part 4
│ │ Service │ │
│ └──────┬──────┘ │
│ ┌──────▼──────┐ │
│ │ Slicer │ │ ← Part 4
│ └──────┬──────┘ │
┌─────▼────────────────────▼────────────────────▼─────┐
│ Redis (BullMQ Events + Queues + Sessions) ← THIS PART │
└─────────────────────────┬───────────────────────────┘
┌─────────────────────────▼───────────────────────────┐
│ PostgreSQL (Shared) │
└──────────────────────────────────────────────────────┘
Why BullMQ Over Redis Pub/Sub¶
Redis pub/sub delivers every message to every subscriber. With 3 replicas of Order Service, all 3 would process the same event (duplicate work). BullMQ atomically moves each job from the waiting list to a single worker's active list, so exactly one worker claims each event. BullMQ is already the planned job-processing layer, so reusing it for events adds no infrastructure beyond Redis itself.
| Feature | Redis pub/sub | BullMQ event queues |
|---|---|---|
| Multiple replicas | All replicas process every event (duplicates) | Only one worker claims each event (no duplicates) |
| Persistence | Fire-and-forget (lost on disconnect) | Persisted in Redis until processed |
| Retry | None | Built-in exponential backoff |
| Dead letter queue | None | Failed events retained for analysis |
| Backpressure | None | Workers process at their own pace |
📁 Files to Create¶
New: libs/service-common¶
libs/service-common/
├── src/
│ ├── index.ts
│ ├── lib/
│ │ ├── events/
│ │ │ ├── event-bus.interface.ts # IEventBus interface
│ │ │ ├── bullmq-event-bus.ts # BullMQ implementation
│ │ │ ├── event-types.ts # All cross-service event definitions
│ │ │ └── event-bus.module.ts
│ │ ├── internal-auth/
│ │ │ ├── internal-auth.guard.ts # Validates INTERNAL_API_KEY
│ │ │ └── internal-auth.module.ts
│ │ ├── service-client/
│ │ │ ├── base-service-client.ts # HTTP client base class
│ │ │ ├── order-service.client.ts # Typed client for Order Service
│ │ │ ├── print-service.client.ts # Typed client for Print Service
│ │ │ ├── shipping-service.client.ts # Typed client for Shipping Service
│ │ │ ├── gridflock-service.client.ts # Typed client for GridFlock Service
│ │ │ ├── slicer.client.ts # HTTP client for Slicer Container
│ │ │ └── service-client.module.ts
│ │ ├── user-context/
│ │ │ ├── user-context.middleware.ts # Extracts user from gateway headers
│ │ │ └── user-context.types.ts
│ │ └── health/
│ │ └── service-health.indicator.ts
│ └── __tests__/
├── project.json
└── jest.config.ts
New: libs/gridflock-core¶
libs/gridflock-core/
├── src/
│ ├── index.ts
│ ├── lib/
│ │ ├── generator.ts # Main plate generation
│ │ ├── types.ts # GridFlock parameter interfaces
│ │ ├── defaults.ts # Default values
│ │ ├── printer-profiles.ts # Pre-configured printers
│ │ ├── plate-set-calculator.ts # Multi-plate surface planning
│ │ ├── assembly-guide.ts # Assembly instructions
│ │ ├── geometry/
│ │ │ ├── base-plate.ts
│ │ │ ├── grid-cells.ts
│ │ │ ├── magnet-holes.ts
│ │ │ ├── intersection-puzzle.ts # GRIPS-like connectors
│ │ │ ├── edge-puzzle.ts # Edge puzzle connectors
│ │ │ └── numbering.ts
│ │ └── serializer.ts
│ └── __tests__/
├── project.json
└── jest.config.ts
🔧 Implementation¶
Step 1.1: Add Dependencies¶
# Redis, BullMQ event bus, session store, Socket.IO adapter
pnpm add ioredis @nestjs/microservices connect-redis @socket.io/redis-adapter
# BullMQ for event queues + GridFlock pipeline jobs
pnpm add bullmq @nestjs/bullmq
# HTTP proxy for gateway (used in Part 2)
pnpm add http-proxy-middleware @nestjs/axios
# JSCAD for GridFlock
pnpm add @jscad/modeling @jscad/stl-serializer
Step 1.2: Add Redis to Docker Compose¶
Add to the existing deployment/staging/docker-compose.yml (or local dev compose):
redis:
  image: redis:7-alpine
  restart: unless-stopped
  ports:
    - '6379:6379'
  volumes:
    - redis-data:/data
  healthcheck:
    test: ['CMD', 'redis-cli', 'ping']
    interval: 10s
    timeout: 5s
    retries: 5
  networks:
    - forma3d-network
Add redis-data to the volumes section.
Step 1.3: Create libs/service-common¶
pnpm nx generate @nx/js:library service-common --directory=libs/service-common --bundler=tsc --unitTestRunner=jest
Event Bus Interface¶
// libs/service-common/src/lib/events/event-bus.interface.ts
import type { ServiceEvent } from './event-types';

export type EventHandler = (event: ServiceEvent) => Promise<void>;

export interface IEventBus {
  publish(event: ServiceEvent): Promise<void>;
  subscribe(eventType: string, handler: EventHandler): Promise<void>;
  unsubscribe(eventType: string): Promise<void>;
}
BullMQ Event Bus Implementation¶
// libs/service-common/src/lib/events/bullmq-event-bus.ts
import { Inject, Injectable } from '@nestjs/common';
import { Queue, Worker, Job } from 'bullmq';
import IORedis from 'ioredis';
import type { EventHandler, IEventBus } from './event-bus.interface';
import type { ServiceEvent } from './event-types';

@Injectable()
export class BullMQEventBus implements IEventBus {
  private queues = new Map<string, Queue>();
  private workers = new Map<string, Worker>();

  constructor(
    @Inject('REDIS_CONNECTION') private readonly connection: IORedis,
  ) {}

  /**
   * Publish an event by adding a job to the corresponding BullMQ queue.
   * The queue name IS the event type (e.g., 'order.created', 'print-job.completed').
   */
  async publish(event: ServiceEvent): Promise<void> {
    const queue = this.getOrCreateQueue(event.eventType);
    await queue.add(event.eventType, event, {
      removeOnComplete: 1000,
      removeOnFail: 5000,
      attempts: 3,
      backoff: { type: 'exponential', delay: 1000 },
    });
  }

  /**
   * Subscribe a handler to an event type. BullMQ ensures only ONE worker
   * instance across all replicas processes each job (event).
   */
  async subscribe(eventType: string, handler: EventHandler): Promise<void> {
    if (this.workers.has(eventType)) {
      throw new Error(`Already subscribed to ${eventType}`);
    }
    const worker = new Worker(
      eventType,
      async (job: Job<ServiceEvent>) => {
        await handler(job.data);
      },
      {
        connection: this.connection,
        concurrency: 5,
      },
    );
    this.workers.set(eventType, worker);
  }

  async unsubscribe(eventType: string): Promise<void> {
    const worker = this.workers.get(eventType);
    if (worker) {
      await worker.close();
      this.workers.delete(eventType);
    }
  }

  private getOrCreateQueue(eventType: string): Queue {
    if (!this.queues.has(eventType)) {
      this.queues.set(eventType, new Queue(eventType, {
        connection: this.connection,
      }));
    }
    return this.queues.get(eventType)!;
  }
}
Event Type Definitions¶
// libs/service-common/src/lib/events/event-types.ts
export const SERVICE_EVENTS = {
  // Order Service publishes
  ORDER_CREATED: 'order.created',
  ORDER_READY_FOR_FULFILLMENT: 'order.ready-for-fulfillment',
  ORDER_CANCELLED: 'order.cancelled',

  // Print Service publishes
  PRINT_JOB_COMPLETED: 'print-job.completed',
  PRINT_JOB_FAILED: 'print-job.failed',
  PRINT_JOB_STATUS_CHANGED: 'print-job.status-changed',
  PRINT_JOB_CANCELLED: 'print-job.cancelled',

  // GridFlock Service publishes
  GRIDFLOCK_MAPPING_READY: 'gridflock.mapping-ready',
  GRIDFLOCK_PIPELINE_FAILED: 'gridflock.pipeline-failed',

  // Shipping Service publishes
  SHIPMENT_CREATED: 'shipment.created',
  SHIPMENT_STATUS_CHANGED: 'shipment.status-changed',
} as const;

// Base event interface — all events carry tenant context
export interface ServiceEvent {
  eventType: string;
  tenantId: string;
  timestamp: string;
  correlationId?: string;
}

// Concrete event interfaces
export interface OrderCreatedEvent extends ServiceEvent {
  eventType: typeof SERVICE_EVENTS.ORDER_CREATED;
  orderId: string;
  lineItems: Array<{
    lineItemId: string;
    productSku: string;
    quantity: number;
  }>;
}

export interface PrintJobCompletedEvent extends ServiceEvent {
  eventType: typeof SERVICE_EVENTS.PRINT_JOB_COMPLETED;
  printJobId: string;
  orderId: string;
  lineItemId: string;
}

export interface PrintJobFailedEvent extends ServiceEvent {
  eventType: typeof SERVICE_EVENTS.PRINT_JOB_FAILED;
  printJobId: string;
  orderId: string;
  lineItemId: string;
  errorMessage: string;
}

export interface ShipmentCreatedEvent extends ServiceEvent {
  eventType: typeof SERVICE_EVENTS.SHIPMENT_CREATED;
  shipmentId: string;
  orderId: string;
  trackingNumber: string | null;
  trackingUrl: string | null;
  carrier: string | null;
}

export interface GridflockMappingReadyEvent extends ServiceEvent {
  eventType: typeof SERVICE_EVENTS.GRIDFLOCK_MAPPING_READY;
  orderId: string;
  lineItemId: string;
  sku: string;
}

export interface GridflockPipelineFailedEvent extends ServiceEvent {
  eventType: typeof SERVICE_EVENTS.GRIDFLOCK_PIPELINE_FAILED;
  orderId: string;
  lineItemId: string;
  sku: string;
  errorMessage: string;
  failedStep: 'stl-generation' | 'slicing' | 'simplyprint-upload' | 'mapping-creation';
}
Internal Auth Guard¶
// libs/service-common/src/lib/internal-auth/internal-auth.guard.ts
import { CanActivate, ExecutionContext, ForbiddenException, Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';

@Injectable()
export class InternalAuthGuard implements CanActivate {
  constructor(private readonly configService: ConfigService) {}

  canActivate(context: ExecutionContext): boolean {
    const request = context.switchToHttp().getRequest();
    const apiKey = request.headers['x-internal-api-key'];
    const expectedKey = this.configService.get<string>('INTERNAL_API_KEY');
    if (!expectedKey || !apiKey || apiKey !== expectedKey) {
      throw new ForbiddenException('Invalid internal API key');
    }
    return true;
  }
}
Service Clients¶
// libs/service-common/src/lib/service-client/base-service-client.ts
import { HttpService } from '@nestjs/axios';
import { firstValueFrom } from 'rxjs';

export abstract class BaseServiceClient {
  constructor(
    protected readonly httpService: HttpService,
    protected readonly baseUrl: string,
    protected readonly apiKey: string,
  ) {}

  protected async get<T>(path: string, headers?: Record<string, string>): Promise<T> {
    const response = await firstValueFrom(
      this.httpService.get<T>(`${this.baseUrl}${path}`, {
        headers: {
          'x-internal-api-key': this.apiKey,
          ...headers,
        },
      }),
    );
    return response.data;
  }

  protected async post<T>(path: string, data: unknown, headers?: Record<string, string>): Promise<T> {
    const response = await firstValueFrom(
      this.httpService.post<T>(`${this.baseUrl}${path}`, data, {
        headers: {
          'x-internal-api-key': this.apiKey,
          ...headers,
        },
      }),
    );
    return response.data;
  }
}
// libs/service-common/src/lib/service-client/print-service.client.ts
import { Injectable } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { BaseServiceClient } from './base-service-client';
// LineItemDto and PrintJobDto come from the shared domain-contracts lib

@Injectable()
export class PrintServiceClient extends BaseServiceClient {
  constructor(httpService: HttpService, configService: ConfigService) {
    super(
      httpService,
      configService.getOrThrow<string>('PRINT_SERVICE_URL'),
      configService.getOrThrow<string>('INTERNAL_API_KEY'),
    );
  }

  async createPrintJobs(tenantId: string, orderId: string, lineItems: LineItemDto[]): Promise<PrintJobDto[]> {
    return this.post<PrintJobDto[]>('/internal/print-jobs', { tenantId, orderId, lineItems });
  }

  async getJobsByOrderId(tenantId: string, orderId: string): Promise<PrintJobDto[]> {
    return this.get<PrintJobDto[]>(`/internal/print-jobs/order/${orderId}?tenantId=${tenantId}`);
  }

  async cancelJobsForOrder(tenantId: string, orderId: string): Promise<void> {
    await this.post<void>(`/internal/print-jobs/order/${orderId}/cancel`, { tenantId });
  }

  /**
   * Upload a gcode file buffer to SimplyPrint via the Print Service.
   * The buffer is sent as base64 in the HTTP body — no file system involved.
   */
  async uploadFileToSimplyPrint(tenantId: string, file: { filename: string; buffer: Buffer }): Promise<{ simplyPrintFileId: string; filename: string }> {
    return this.post<{ simplyPrintFileId: string; filename: string }>('/internal/simplyprint/upload', {
      tenantId,
      filename: file.filename,
      fileBase64: file.buffer.toString('base64'),
    });
  }
}
// libs/service-common/src/lib/service-client/gridflock-service.client.ts
import { Injectable } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { BaseServiceClient } from './base-service-client';
// GenerateForOrderDto comes from the shared domain-contracts lib

@Injectable()
export class GridflockServiceClient extends BaseServiceClient {
  constructor(httpService: HttpService, configService: ConfigService) {
    super(
      httpService,
      configService.getOrThrow<string>('GRIDFLOCK_SERVICE_URL'),
      configService.getOrThrow<string>('INTERNAL_API_KEY'),
    );
  }

  async generateForOrder(request: GenerateForOrderDto): Promise<void> {
    await this.post<void>('/internal/gridflock/generate-for-order', request);
  }

  async getMappingStatus(tenantId: string, sku: string): Promise<{ exists: boolean }> {
    return this.get<{ exists: boolean }>(`/internal/gridflock/mapping-status/${encodeURIComponent(sku)}?tenantId=${tenantId}`);
  }
}
// libs/service-common/src/lib/service-client/slicer.client.ts
import { Injectable } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { ConfigService } from '@nestjs/config';
import { firstValueFrom } from 'rxjs';

@Injectable()
export class SlicerClient {
  constructor(
    private readonly httpService: HttpService,
    private readonly configService: ConfigService,
  ) {}

  async slice(request: {
    stlBuffer: Buffer;
    stlFilename: string;
    machineProfile: string;
    processProfile: string;
    filamentProfile: string;
  }): Promise<{ gcodeBuffer: Buffer; filename: string }> {
    const slicerUrl = this.configService.getOrThrow<string>('SLICER_URL');
    // FormData/Blob are globals in Node 18+ (undici)
    const formData = new FormData();
    formData.append('stl', new Blob([request.stlBuffer]), request.stlFilename);
    formData.append('machineProfile', request.machineProfile);
    formData.append('processProfile', request.processProfile);
    formData.append('filamentProfile', request.filamentProfile);
    const response = await firstValueFrom(
      this.httpService.post(`${slicerUrl}/slice`, formData, {
        responseType: 'arraybuffer',
        timeout: 120_000,
      }),
    );
    return {
      gcodeBuffer: Buffer.from(response.data),
      filename: request.stlFilename.replace('.stl', '.3mf'),
    };
  }

  async health(): Promise<{ status: string }> {
    const slicerUrl = this.configService.getOrThrow<string>('SLICER_URL');
    const response = await firstValueFrom(this.httpService.get(`${slicerUrl}/health`));
    return response.data;
  }
}
User Context Middleware¶
// libs/service-common/src/lib/user-context/user-context.types.ts
export interface UserContext {
  userId: string;
  tenantId: string;
  email: string;
  roles: string[];
  permissions: string[];
  isSuperAdmin: boolean;
}

export const USER_CONTEXT_HEADERS = {
  USER_ID: 'x-user-id',
  TENANT_ID: 'x-tenant-id',
  USER_EMAIL: 'x-user-email',
  USER_ROLES: 'x-user-roles',
  USER_PERMISSIONS: 'x-user-permissions',
  IS_SUPER_ADMIN: 'x-is-super-admin',
} as const;
// libs/service-common/src/lib/user-context/user-context.middleware.ts
import { Injectable, NestMiddleware } from '@nestjs/common';
import { NextFunction, Request, Response } from 'express';
import { USER_CONTEXT_HEADERS, UserContext } from './user-context.types';

@Injectable()
export class UserContextMiddleware implements NestMiddleware {
  use(req: Request, _res: Response, next: NextFunction): void {
    const headers = req.headers;
    if (headers[USER_CONTEXT_HEADERS.USER_ID]) {
      req['user'] = {
        userId: headers[USER_CONTEXT_HEADERS.USER_ID] as string,
        tenantId: headers[USER_CONTEXT_HEADERS.TENANT_ID] as string,
        email: headers[USER_CONTEXT_HEADERS.USER_EMAIL] as string,
        // filter(Boolean) so an empty header yields [] rather than ['']
        roles: ((headers[USER_CONTEXT_HEADERS.USER_ROLES] as string) || '').split(',').filter(Boolean),
        permissions: ((headers[USER_CONTEXT_HEADERS.USER_PERMISSIONS] as string) || '').split(',').filter(Boolean),
        isSuperAdmin: headers[USER_CONTEXT_HEADERS.IS_SUPER_ADMIN] === 'true',
      } as UserContext;
    }
    next();
  }
}
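The header-parsing logic can also be factored into a pure function, which makes the "handles missing headers gracefully" test requirement trivial to cover. The function name parseUserContext below is an assumption, not part of the spec:

```typescript
// Express header values as seen by the middleware
type HeaderMap = Record<string, string | undefined>;

interface UserContext {
  userId: string;
  tenantId: string;
  email: string;
  roles: string[];
  permissions: string[];
  isSuperAdmin: boolean;
}

/**
 * Build a UserContext from gateway-provided headers.
 * Returns null when no x-user-id header is present (no authenticated
 * user); empty role/permission headers yield empty arrays rather
 * than [''], thanks to filter(Boolean).
 */
function parseUserContext(headers: HeaderMap): UserContext | null {
  const userId = headers['x-user-id'];
  if (!userId) return null;
  const splitCsv = (v: string | undefined): string[] =>
    (v ?? '').split(',').filter(Boolean);
  return {
    userId,
    tenantId: headers['x-tenant-id'] ?? '',
    email: headers['x-user-email'] ?? '',
    roles: splitCsv(headers['x-user-roles']),
    permissions: splitCsv(headers['x-user-permissions']),
    isSuperAdmin: headers['x-is-super-admin'] === 'true',
  };
}
```

The middleware then reduces to `req['user'] = parseUserContext(req.headers) ?? undefined; next();`, and the parsing rules get tested without any NestJS scaffolding.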
Step 1.4: Create libs/gridflock-core¶
pnpm nx generate @nx/js:library gridflock-core --directory=libs/gridflock-core --bundler=tsc --unitTestRunner=jest
Implement the GridFlock generation library with JSCAD. This library contains:
- types.ts — GridFlockParams, PlateSetConfig, PlateSetResult, PrinterProfile, GenerationResult, AssemblyGuide
- defaults.ts — Default parameter values
- printer-profiles.ts — Pre-configured profiles for Bambu Lab, Prusa, Creality, Voron printers
- generator.ts — Main generateGridFlockPlate() function using JSCAD CSG operations
- plate-set-calculator.ts — Divides a target grid into printer-bed-sized plates
- assembly-guide.ts — Generates assembly instructions with neighbor references
- geometry/ — Individual geometry modules (base plate, grid cells, magnet holes, connectors, numbering)
- serializer.ts — JSCAD STL serialization wrapper
IMPORTANT: The connector geometry (intersection puzzle + edge puzzle) must be ported from the GridFlock source code. Fork and adapt — do not rewrite from scratch.
See the feasibility study (docs/03-architecture/research/gridflock-feasibility-study.md) sections "GridFlock Connector Systems" and "GridFlock Core Library" for the complete parameter interfaces and example implementation.
Plate Set Calculator¶
The system uses the Bambu Lab A1 printer bed (256×256mm) as the standard. Grids larger than the bed are automatically split into multiple plates:
Customer orders: 450×320mm grid
Printer bed: 256×256mm (Bambu Lab A1)
Grid unit: 42mm (Gridfinity standard)
Calculation:
Max grid per plate: floor(256/42) = 6 units per axis
Max plate size: 6×6 = 252×252mm
Plates needed X: ceil(450/252) = 2
Plates needed Y: ceil(320/252) = 2
Total plates: 2×2 = 4
Result: 4 STL files → 4 gcode files → 4 SimplyPrint files → 1 product mapping (4 parts)
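The calculation above can be sketched as a pure function (names are assumptions; the real calculatePlateSet in libs/gridflock-core may differ in shape):

```typescript
interface PlateSet {
  unitsPerPlateAxis: number; // grid units that fit on one plate per axis
  platesX: number;
  platesY: number;
  totalPlates: number;
}

/**
 * Split a target grid into printer-bed-sized plates.
 * bedMm: usable bed size per axis (assumed square, e.g. 256 for Bambu Lab A1)
 * gridUnitMm: Gridfinity unit size (42mm standard)
 */
function calculatePlateSet(
  widthMm: number,
  depthMm: number,
  bedMm = 256,
  gridUnitMm = 42,
): PlateSet {
  const unitsPerPlateAxis = Math.floor(bedMm / gridUnitMm); // floor(256/42) = 6
  const maxPlateMm = unitsPerPlateAxis * gridUnitMm;        // 6 × 42 = 252mm
  const platesX = Math.ceil(widthMm / maxPlateMm);
  const platesY = Math.ceil(depthMm / maxPlateMm);
  return { unitsPerPlateAxis, platesX, platesY, totalPlates: platesX * platesY };
}
```

For the worked example, calculatePlateSet(450, 320) yields platesX = 2, platesY = 2, totalPlates = 4.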
Buffer-Based Pipeline Design¶
The GridFlock pipeline is fully stateless — all file data flows as in-memory Buffer objects. No files are ever written to disk:
JSCAD generates STL Slicer converts to gcode SimplyPrint stores files
┌──────────────────┐ ┌──────────────────────┐ ┌────────────────────┐
│ generatePlate() │ │ POST /slice │ │ Upload gcode file │
│ → Buffer (STL) │──HTTP→│ Body: STL Buffer │──HTTP→│ Body: gcode Buffer │
│ │ │ Response: gcode Buffer│ │ → SimplyPrint ID │
└──────────────────┘ └──────────────────────┘ └────────────────────┘
In-memory In-memory Permanent storage
(GridFlock) (Slicer) (SimplyPrint cloud)
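The buffer flow in the diagram can be expressed as a small orchestration function. This is a sketch under assumed collaborator signatures — the real pipeline uses the GridFlock generator, SlicerClient, and PrintServiceClient described elsewhere in this part:

```typescript
// Collaborator signatures (assumptions for illustration)
type GenerateFn = (plateIndex: number) => Buffer;                 // JSCAD → STL buffer
type SliceFn = (stl: Buffer, name: string) => Promise<Buffer>;    // Slicer → gcode buffer
type UploadFn = (name: string, gcode: Buffer) => Promise<string>; // → SimplyPrint file id

/**
 * Run one plate through the buffer-based pipeline. Every intermediate
 * artifact stays in memory; the only durable copy is the file that
 * SimplyPrint stores at the end.
 */
async function runPlatePipeline(
  generate: GenerateFn,
  slice: SliceFn,
  upload: UploadFn,
  plateIndex: number,
  sku: string,
): Promise<string> {
  const stl = generate(plateIndex);                             // in-memory STL
  const gcode = await slice(stl, `${sku}-plate-${plateIndex}.stl`); // in-memory gcode
  return upload(`${sku}-plate-${plateIndex}.gcode`, gcode);     // SimplyPrint file id
}
```

Because the function only touches buffers, the GridFlock Service stays stateless and can be scaled horizontally without shared storage.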
🧪 Testing Requirements¶
libs/service-common Tests¶
- BullMQ Event Bus: publish and subscribe work correctly, events are persisted, retry logic functions
- Internal Auth Guard: rejects requests without API key, rejects invalid keys, allows valid keys
- Base Service Client: correctly constructs HTTP requests with internal API key header
- User Context Middleware: extracts all user fields from headers, handles missing headers gracefully
- Event type definitions: TypeScript type safety, all event interfaces extend ServiceEvent
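For these tests, publishers and subscribers can be exercised without a live Redis by using an in-memory IEventBus implementation. The test double below is a hypothetical sketch (it mimics BullMQ's at-most-one-consumer delivery, not its persistence or retries):

```typescript
interface ServiceEvent {
  eventType: string;
  tenantId: string;
  timestamp: string;
}
type EventHandler = (event: ServiceEvent) => Promise<void>;

interface IEventBus {
  publish(event: ServiceEvent): Promise<void>;
  subscribe(eventType: string, handler: EventHandler): Promise<void>;
  unsubscribe(eventType: string): Promise<void>;
}

/** In-memory bus: delivers each published event to at most one handler. */
class InMemoryEventBus implements IEventBus {
  private handlers = new Map<string, EventHandler>();
  published: ServiceEvent[] = []; // inspect in assertions

  async publish(event: ServiceEvent): Promise<void> {
    this.published.push(event);
    const handler = this.handlers.get(event.eventType);
    if (handler) await handler(event);
  }

  async subscribe(eventType: string, handler: EventHandler): Promise<void> {
    if (this.handlers.has(eventType)) {
      throw new Error(`Already subscribed to ${eventType}`);
    }
    this.handlers.set(eventType, handler);
  }

  async unsubscribe(eventType: string): Promise<void> {
    this.handlers.delete(eventType);
  }
}
```

Because services depend only on the IEventBus interface, swapping this double in for BullMQEventBus in unit tests requires no production-code changes.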
libs/gridflock-core Tests¶
- generateGridFlockPlate() produces valid binary STL buffers
- calculatePlateSet() correctly divides large grids into Bambu Lab A1 bed-sized plates (256×256mm)
- Connector geometry produces watertight meshes
- Magnet hole placement is correct
- Plate numbering is sequential
- SKU computation: computeSku(450, 320, 'intersection-puzzle', true) → "GF-450x320-IP-MAG"
- SKU normalization: computeSku(320, 450, 'intersection-puzzle', true) → "GF-450x320-IP-MAG" (same result; larger dimension first)
- Printer profiles contain correct bed sizes
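A computeSku sketch consistent with the two SKU assertions above (the connector-code mapping and the no-magnet variant are assumptions; the authoritative format is in the feasibility study):

```typescript
/**
 * Compute a deterministic SKU for a GridFlock plate set.
 * Dimensions are normalized larger-first so 450×320 and 320×450
 * map to the same SKU. Assumed connector codes:
 * 'intersection-puzzle' → IP, 'edge-puzzle' → EP.
 */
function computeSku(
  widthMm: number,
  depthMm: number,
  connector: 'intersection-puzzle' | 'edge-puzzle',
  magnets: boolean,
): string {
  const [a, b] = widthMm >= depthMm ? [widthMm, depthMm] : [depthMm, widthMm];
  const connectorCode = connector === 'intersection-puzzle' ? 'IP' : 'EP';
  return `GF-${a}x${b}-${connectorCode}${magnets ? '-MAG' : ''}`;
}
```

Normalization matters because the SKU doubles as the product-mapping key: two orders for the same physical grid, entered with swapped dimensions, must hit the same mapping.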
✅ Validation Checklist¶
- [ ] pnpm nx build service-common succeeds
- [ ] pnpm nx build gridflock-core succeeds
- [ ] pnpm nx test service-common passes
- [ ] pnpm nx test gridflock-core passes
- [ ] pnpm nx run-many -t lint --all passes
- [ ] No TypeScript errors
- [ ] Redis container starts via Docker Compose and responds to redis-cli ping → PONG
- [ ] Existing monolith (apps/api) still builds and runs unchanged
- [ ] No any, no ts-ignore, no eslint-disable
- [ ] Event bus interface is transport-agnostic (BullMQ is an implementation detail)
- [ ] All service clients are typed (no raw HTTP calls with untyped responses)
🚫 Constraints¶
- No any, ts-ignore, or eslint-disable
- Event bus MUST implement the IEventBus interface, backed by BullMQEventBus
- Do NOT modify the existing monolith — the monolith keeps running as-is
- Port GridFlock geometry from the GridFlock source — do not rewrite from scratch
- Buffer-based pipeline design — generateGridFlockPlate() must return Buffer, not write files
📚 Key References¶
- Feasibility study: docs/03-architecture/research/gridflock-feasibility-study.md
- GridFlock source code: https://github.com/yawkat/gridflock
- JSCAD documentation: https://openjscad.xyz/
- BullMQ documentation: https://docs.bullmq.io/
- Current domain contracts: libs/domain-contracts/src/lib/
- Current event patterns: apps/api/src/orchestration/
END OF PART 1
Next: Part 2 — API Gateway + Order Service