FlowTime Integration — System Flow Modeling
Can we record the reasoning behind a system model, not just the simulation results?
Research | FlowTime, flow simulation, bottleneck analysis, model provenance
The scenario
Norrland Logistik runs three warehouses along the E4 corridor between Sundsvall and Umeå, handling parcel sorting and last-mile dispatch for Nordic e-commerce. Six months ago they migrated from their legacy WMS to a cloud-based system. Since the migration, throughput at the Härnösand hub has dropped 30%. Parcels that used to clear sorting in 45 minutes now take over an hour. The backlog builds through the afternoon and doesn’t clear until the night shift.
They have telemetry — timestamps on every scan event, queue depths from the conveyor PLCs, processing times per sorting station. Thousands of data points per hour. But the data tells them what is slow, not why. The operations manager suspects the new system’s batch-processing interval is too long. The IT team thinks it’s a network latency issue. The shift supervisor says they just need a second sorting line.
Everyone has a theory. Nobody has a model. And nobody can test their theory without spending money.
Three integration models
FlowTime and Liminara relate in three ways simultaneously. They are not mutually exclusive — they compose.
Model 1: FlowTime as computation engine
The simplest view. FlowTime is an external simulation engine — a C#/.NET 9 program — that Liminara calls to run calculations. Flow data goes in, simulation results come out. Liminara doesn’t understand queue dynamics. It doesn’t need to.
Model 2: Model-building as a Liminara pipeline
More interesting. FlowTime’s simulation is deterministic — same model, same inputs, same outputs. But building the model from a real system is not deterministic. It involves genuine choices: how to decompose the system into services and queues, what retry parameters to assume, where to draw boundaries. These are decisions worth recording.
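What such a decision record might contain can be sketched as a small immutable structure. This is illustrative only; the field names are hypothetical, not Liminara's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable once written, matching the shared conviction
class DecisionRecord:
    step: str        # pipeline step that made the choice, e.g. "propose_model"
    choice: str      # what was decided
    rationale: str   # why, in the decider's own words
    evidence: dict   # pointer to the data slice the choice was based on
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example mirroring the propose_model step described below
record = DecisionRecord(
    step="propose_model",
    choice="primary sort modeled as 4 parallel servers with shared input queue",
    rationale="4 simultaneous parcels observed in sorting at the 94th percentile",
    evidence={"telemetry_slice": "scan_events, 7-day window"},
)
```

The point is not the schema but the habit: every non-deterministic choice becomes a first-class, queryable artifact rather than a comment in a consultant's spreadsheet.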
Model 3: Shared philosophical DNA
Both systems share core convictions — determinism, step-by-step evaluation, immutability, explainability, time as structure — but operate at different scales. FlowTime is a microscope (fine-grained flow dynamics over continuous time). Liminara is a workshop (discrete process of producing and deciding). They complement rather than compete.
FlowTime and Liminara share an author and are co-evolving. FlowTime is written in C#/.NET 9 with a Blazor WebAssembly UI.
The pipeline
PHASE 1: INGEST AND MODEL (decisions are here)
══════════════════════════════════════════════════════════════════

telemetry ──→ ingest ──→ detect_services ──→ propose_model ──→ validate ──→ calibrate ──→ model
(PLC logs,
 scan events,
 timestamps)
                              │                      │                             │
                         identified:            AI proposes:                 parameter fit:
                         5 services             "inbound dock modeled        sorting station
                         3 queues                as M/D/2 queue,             μ = 47s (±3s)
                         2 routers               sorting as 4 parallel       confidence: 94%
                                                 servers with shared
                                                 input queue"                DECISION RECORDED
                                                DECISION RECORDED            (seeds, iterations,
                                                                              convergence path)
PHASE 2: SIMULATE AND COMPARE (computation is here)
══════════════════════════════════════════════════════════════════

model ──→ scenario_baseline ──→ ┐
                                │
                                ├──→ compare ──→ recommend ──→ report
model ──→ scenario_second_line ─┤
                                │
model ──→ scenario_batch_fix ───┘
            │                            │                  │
   FlowTime simulates:             throughput          AI synthesizes:
   24h of warehouse ops            comparison:         "Second sorting line
   at 1-minute granularity                              improves throughput 22%
   1440 time bins                  baseline:  847       but batch interval fix
   deterministic                   +line:    1034       recovers 26% at 1/10th
                                   +batch:   1068       the cost"

                                                       DECISION RECORDED
Phase 1 — Ingest and Model:
- ingest: Pull 7 days of telemetry from Härnösand’s PLC historian and WMS API. 2.3M scan events, 168 hourly queue depth snapshots. (External data fetch — logged.)
- detect_services: Statistical decomposition of the scan event stream into logical services. Identifies: inbound dock, primary sort, secondary sort, dispatch buffer, outbound dock. Three queues between them. Two routing points (parcel type → sort line).
- propose_model: AI examines the detected services and proposes a FlowTime model structure — queueing disciplines, server counts, routing rules. This is the creative step. The AI’s choices are recorded as a decision: “modeled primary sort as 4 parallel servers with shared queue based on observed concurrent processing pattern.” A human could adjust this. The decision records exactly what was chosen and why.
- validate: Run the proposed model against historical telemetry using FlowTime. Compare simulated queue depths against observed queue depths. Model accuracy: R² = 0.91 for the primary sort queue, 0.87 for the dispatch buffer. Good enough to proceed.
- calibrate: Optimize model parameters (service times, batch intervals, routing probabilities) to minimize the gap between simulated and observed behavior. Uses iterative search — the seeds and convergence path are recorded as decisions. Output: a calibrated model with confidence intervals.
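The accuracy figures in the validate step are the ordinary coefficient of determination between simulated and observed queue-depth series. A minimal self-contained sketch with toy numbers, not the Härnösand telemetry:

```python
def r_squared(observed, simulated):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - s) ** 2 for y, s in zip(observed, simulated))
    return 1.0 - ss_res / ss_tot

# Toy queue-depth series: a perfect model scores 1.0, and a model
# that only ever predicts the mean scores 0.0.
obs = [3, 5, 8, 13, 9, 6, 4]
assert r_squared(obs, obs) == 1.0
mean = sum(obs) / len(obs)
assert abs(r_squared(obs, [mean] * len(obs))) < 1e-9
```

Because the comparison runs model output against sealed historical data, the R² it reports is reproducible on demand rather than a one-time claim.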
Phase 2 — Simulate and Compare:
- scenario_baseline: Simulate the current system for 24 hours using FlowTime. Same model, same arrival pattern. Throughput: 847 parcels/hour during peak. Backlog clears at 22:30.
- scenario_second_line: Clone the model, add a second sorting line (8 servers instead of 4). Simulate. Throughput: 1,034 parcels/hour (+22%). Backlog clears at 19:15.
- scenario_batch_fix: Clone the model, reduce the batch processing interval from 300s to 60s. Simulate. Throughput: 1,068 parcels/hour (+26%). Backlog clears at 18:45.
- compare: Tabulate results across all scenarios. Cost estimates come from reference data.
- recommend: AI synthesizes the comparison into a recommendation: “The batch interval fix recovers more throughput at roughly one-tenth the capital cost of a second sorting line. The second line becomes relevant only above 1,200 parcels/hour sustained.” Decision recorded.
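The mechanism behind scenario_batch_fix can be illustrated with a toy deterministic fluid model: parcels are held upstream until each batch release, so a longer interval inflates queueing delay. This sketch shows the mechanism only; the throughput and clearance figures above come from FlowTime’s full model, and every name and parameter here is illustrative:

```python
def avg_delay_s(batch_interval_s, arrival_rate=0.08, servers=4,
                service_s=47, horizon_s=4 * 3600):
    """Toy fluid model: arrivals accumulate upstream, are released to the
    sorters only at batch boundaries, and the sorters drain staged work at
    a fixed capacity rate. Returns average queueing delay via Little's law
    (parcel-seconds waiting, divided by completions)."""
    capacity = servers / service_s      # parcels/s the sorters can drain
    upstream = staged = done = wait_area = 0.0
    for t in range(horizon_s):
        upstream += arrival_rate        # continuous arrivals
        if t % batch_interval_s == 0:   # batch release
            staged += upstream
            upstream = 0.0
        work = min(staged, capacity)    # sorters idle when nothing is staged
        staged -= work
        done += work
        wait_area += upstream + staged  # parcel-seconds waiting this tick
    return wait_area / done

# Shorter release interval, less time spent held upstream:
assert avg_delay_s(60) < avg_delay_s(300)
```

The function is pure and deterministic, so the same interval always yields the same delay; that is the property that makes FlowTime scenarios cacheable and comparable.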
Every FlowTime simulation is deterministic and cacheable — same model, same arrival pattern, same result. The model-building and recommendation steps involve genuine AI judgments, which are recorded. The data ingestion touches external systems and is logged.
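Determinism is exactly what makes caching safe: a simulation is a pure function of (model, inputs), so results can be keyed by a content hash of everything the engine sees. A minimal sketch of the idea, using a stand-in engine rather than FlowTime’s actual API:

```python
import hashlib
import json

_cache = {}

def run_cached(model: dict, arrivals: list, engine) -> dict:
    """Key the cache on a canonical serialization of the engine's full
    input; identical (model, arrivals) pairs never re-simulate."""
    key = hashlib.sha256(
        json.dumps({"model": model, "arrivals": arrivals},
                   sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = engine(model, arrivals)
    return _cache[key]

calls = []
def fake_engine(model, arrivals):   # deterministic stand-in for the simulator
    calls.append(1)
    return {"mean_arrivals": sum(arrivals) / len(arrivals)}

m = {"servers": 4, "service_s": 47}
a = [10, 12, 9]
r1 = run_cached(m, a, fake_engine)
r2 = run_cached(m, a, fake_engine)  # cache hit: the engine ran only once
assert r1 == r2 and len(calls) == 1
```

Note that `sort_keys=True` matters: the cache key must be stable under dict ordering, or logically identical inputs would miss the cache.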
What you can ask afterward
| Question | How it’s answered |
|---|---|
| “Why was primary sort modeled as 4 parallel servers?” | Decision record for propose_model: AI examined concurrent processing patterns in scan events, found 4 simultaneous parcels in sorting at the 94th percentile. Model choice recorded with reasoning and the telemetry slice it was based on. |
| “How accurate is the model?” | Validation result: simulated vs. observed queue depths, R² per service. The validation ran the model against 7 days of real data — the comparison is a sealed record, not a claim. |
| “What if the AI had modeled sort as 3 servers instead of 4?” | Fork the run at propose_model. Override the decision: 3 servers. Everything downstream re-executes — validation (R² drops to 0.83), calibration (different parameters), scenarios (different throughput numbers). Compare both model variants side by side. The telemetry ingestion and service detection are reused. |
| “What happens if parcel volume grows 40% next year?” | Add a fourth scenario: same calibrated model, scale the arrival rate ×1.4. FlowTime simulates — deterministic. Neither the model-building decisions nor the existing scenario results change. Only the new scenario and the comparison/recommendation re-execute. |
| “Who decided the batch fix was better than the second line?” | Decision record for recommend: AI reasoning preserved. If a human overrode the recommendation, that override is also a recorded decision with its own rationale. |
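The selective re-execution in the fork and what-if rows falls out of treating the pipeline as a dependency graph: overriding one decision dirties only that step and its transitive dependents. A toy sketch, using step names from the pipeline above with illustrative wiring:

```python
# Hypothetical dependency graph: each step lists the steps it consumes.
GRAPH = {
    "ingest": [],
    "detect_services": ["ingest"],
    "propose_model": ["detect_services"],
    "validate": ["propose_model"],
    "calibrate": ["validate"],
    "scenario_baseline": ["calibrate"],
    "scenario_batch_fix": ["calibrate"],
    "compare": ["scenario_baseline", "scenario_batch_fix"],
    "recommend": ["compare"],
}

def downstream(graph, changed):
    """Everything that must re-execute when `changed` is overridden:
    the step itself plus all transitive dependents."""
    dirty = {changed}
    grew = True
    while grew:
        grew = False
        for step, deps in graph.items():
            if step not in dirty and any(d in dirty for d in deps):
                dirty.add(step)
                grew = True
    return dirty

# Overriding propose_model reuses ingest and detect_services untouched:
dirty = downstream(GRAPH, "propose_model")
assert "ingest" not in dirty and "detect_services" not in dirty
assert {"propose_model", "validate", "calibrate", "recommend"} <= dirty
```

Adding a new scenario is the converse case: it appears as a fresh node depending only on `calibrate`, so nothing upstream is touched and only `compare` and `recommend` re-run.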
Before and after
Today: Norrland Logistik’s operations manager argues with IT based on gut feeling. They bring in a consultant who spends three weeks building a simulation model in a commercial tool. The model lives in the consultant’s license. When parameters change, they call the consultant back. Six months later, nobody remembers what assumptions went into the model. The shift supervisor’s theory about the second sorting line was never tested because the consultant ran out of billable hours.
With provenance: The model is built from telemetry in a recorded pipeline. Every modeling choice — why 4 servers, why FIFO discipline, why 300s batch interval — is a decision with a trace to the data that motivated it. What-if scenarios are cheap: change one parameter, FlowTime re-simulates in seconds, everything else reuses previous results. When the operations manager asks “what if volume grows 40%?” six months from now, nobody needs to reconstruct the model. It’s a sealed record. Add a new scenario, get an answer.
The batch interval fix? It was the right call. The decision record shows exactly why — and when volume eventually does hit 1,200 parcels/hour, the model is already there, ready for the next scenario.
FlowTime source: github.com/23min/flowtime. Looking for operations teams and flow modeling practitioners. [Contact ->]