Operational Analytics Case Study
Space Request Demand and Allocation
An anonymised case study showing how a live space request register was used to understand demand, backlog pressure, and practical planning opportunities for allocation work.
909
Requests reviewed across the main register snapshot, covering January 2021 to 11 October 2024.
286
Active requests remained in the live pipeline at the extract date, 31.5% of the recorded register.
281
Peak open backlog was reached in August 2024, showing sustained accumulation rather than a short-lived spike.
69%
Of active requests sat in paused or approval-related states, indicating that workflow friction was a major planning issue.
This work started as a reporting and operational planning problem rather than a pure modelling exercise. A live space request register was already being used to track requests, closures, priorities, and supporting notes, but recurring questions from leadership needed more than ad hoc counts. The analysis therefore focused on whether the register could describe incoming demand, show where work was stalling, and support better sequencing of allocation activity.
The published case study combines two workbook snapshots of the same register, growing from 815 rows in the earlier file to 909 rows in the later extract, alongside Python scripts, notebook experiments, an RMarkdown reporting draft, and portfolio write-ups. The portfolio version concentrates on aggregate demand, backlog, and flow patterns. It does not attempt to publish detailed site-level or request-level allocation outcomes.
The source files show a pragmatic workflow built around tools already available in a secure operational setting, with exploratory modelling used to support planning rather than replace operational judgement.
Used for notebook-based exploration, monthly aggregation, regression tests, and chart generation logic.
Used in the source scripts to clean dates, subset fields, engineer calendar features, and reshape register data.
Used as the live operational register and for formula-led reporting on monthly intake, aged requests, and category splits.
Used to trial automated reporting outputs, including summaries of the oldest open requests for management reporting.
Used as an exploratory method to test whether backlog and request counts were following a sustained upward trend.
Trialled in the source Python work as a classification exercise, but kept exploratory rather than operational.
Used to review requests over time, open requests per month, and the value and limits of simple forecasting approaches.
Included date standardisation, filtering incomplete rows, and working around a register structure that changed over time.
Used to translate raw register data into dashboards, monthly meeting packs, and practical management summaries.
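A minimal sketch of the date standardisation and monthly aggregation steps described above, assuming hypothetical column names (`received_date`, `status`) rather than the register's real fields:

```python
import pandas as pd

# Hypothetical register extract; the real register's columns and values differ.
df = pd.DataFrame({
    "received_date": ["2021-01-04", "2021-01-19", "2021-02-02", "not recorded"],
    "status": ["Closed", "Active", "Active", "Active"],
})

# Standardise dates, coercing unusable entries to NaT, then drop them.
df["received_date"] = pd.to_datetime(df["received_date"], errors="coerce")
clean = df.dropna(subset=["received_date"])

# Engineer simple calendar features for intake analysis.
clean = clean.assign(
    month=clean["received_date"].dt.to_period("M"),
    weekday=clean["received_date"].dt.day_name(),
)

# Monthly intake counts feed the reporting described above.
monthly_intake = clean.groupby("month").size()
print(monthly_intake)
```

The `errors="coerce"` step is what makes this robust to placeholder rows with incomplete date information: they fall out of the aggregates instead of breaking them.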
Monthly intake stayed active across the whole period, with repeated surges rather than a single isolated spike. Peaks were visible in late 2021, spring 2022, and again during summer 2024.
Requests were more likely to arrive in the middle and later part of the working week, with Wednesday and Thursday clearly heavier than Monday.
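The weekday pattern can be checked with a simple day-of-week count. The dates below are illustrative, not taken from the register:

```python
import pandas as pd

# Illustrative submission dates; the real register spans Jan 2021 to Oct 2024.
dates = pd.to_datetime([
    "2024-06-03", "2024-06-05", "2024-06-05", "2024-06-06", "2024-06-06",
])

# Count submissions by weekday, ordered Monday..Friday for readability.
order = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
by_day = (
    pd.Series(dates).dt.day_name()
      .value_counts()
      .reindex(order, fill_value=0)
)
print(by_day)
```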
The register was not dominated by new inflow alone. It also contained a substantial active queue, and most of that queue sat in paused or approval-related states rather than delivery states.
Register split
Closed: 623 requests. Active: 286 requests.
Active queue mix
Paused and approval-related states made up 69% of the active queue, with delivery-stage work the minority.
The notebook and portfolio write-up used linear regression to test whether open requests were rising over time. The source write-up reports an R² of 0.89 and MSE of 404.13, which is strong enough to show a real upward signal, but still better suited to planning discussion than precise operational forecasting.
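A trend test of this kind can be reproduced with scikit-learn. The data below is synthetic with a deliberate upward drift; the R² of 0.89 and MSE of 404.13 quoted above come from the original notebook, not from this sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic monthly open-request counts with an upward drift plus noise.
months = np.arange(24).reshape(-1, 1)   # month index as the sole predictor
rng = np.random.default_rng(0)
open_counts = 80 + 8 * months.ravel() + rng.normal(0, 15, size=24)

model = LinearRegression().fit(months, open_counts)
pred = model.predict(months)

print("slope:", model.coef_[0])         # average requests added per month
print("R2:", r2_score(open_counts, pred))
print("MSE:", mean_squared_error(open_counts, pred))
```

A positive slope with a high R² supports the "sustained upward trend" reading, while the residual error is the reason the write-up keeps the model as a planning aid rather than an operational forecast.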
The register was live, operational, and changed structure over time, which made rigid formula-driven reporting fragile. Status values were not fully standardised, and the allocation field was not suitable for a clean published trend. In the latest extract there were 276 non-blank allocation entries, but they contained 143 different free-text values and many were descriptive rather than categorical. A small number of placeholder rows also carried incomplete date information, so the published story focuses on robust aggregate patterns rather than false precision.
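Whether a field is clean enough to publish can be checked by profiling its distinct values. The entries below are invented for illustration; the real extract had 276 non-blank allocation entries with 143 distinct free-text values:

```python
import pandas as pd

# Illustrative free-text allocation entries, mixing labels, case variants,
# blanks, and descriptive notes, as in the source register.
alloc = pd.Series([
    "Room 12 - temporary", "Allocated", "allocated", "See notes",
    "Allocated", None, "Pending survey of the wing",
])

non_blank = alloc.dropna()
print("non-blank entries:", len(non_blank))
print("distinct values:", non_blank.nunique())
print(non_blank.value_counts().head())
```

When the distinct-value count is close to the entry count, the field is effectively free text, which is why the published story leans on aggregate patterns instead.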
The value of the analysis was not only in describing volumes. It also pointed to where planning and allocation activity could be sequenced more effectively.
Summer 2024 combined strong intake with the highest open backlog, making it a practical reference point for resourcing and escalation planning.
Approval and pause states accounted for most active work, suggesting reporting should distinguish waiting cases from delivery-stage cases.
Mid-week submission pressure suggests triage, review meetings, and follow-up activity could be timed more deliberately around demand rhythm.
Tracking aged requests and approval-stage dwell time would make bottlenecks visible before they become embedded in the backlog.
Even without a publishable allocation-rate metric, the backlog trend gives credible evidence for discussing workload, prioritisation, and capacity.
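Aged-request tracking of the kind suggested above can be sketched as a simple age computation against the extract date. The request IDs, field names, and threshold here are assumptions for illustration:

```python
import pandas as pd

extract_date = pd.Timestamp("2024-10-11")   # register extract date

# Hypothetical active requests; real identifiers were removed for publication.
active = pd.DataFrame({
    "request_id": ["R1", "R2", "R3"],
    "received_date": pd.to_datetime(["2023-01-10", "2024-03-02", "2024-09-20"]),
    "status": ["Paused", "Awaiting approval", "In delivery"],
})

# Age in days at extract, then flag long-dwelling requests for review.
active["age_days"] = (extract_date - active["received_date"]).dt.days
aged = active[active["age_days"] > 180].sort_values("age_days", ascending=False)
print(aged[["request_id", "status", "age_days"]])
```

Run against each extract, a table like this surfaces approval-stage dwell before it embeds itself in the backlog, and it is the same logic behind the "oldest open requests" summaries trialled for management reporting.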
This portfolio version removes organisation names, site names, building names, requestor names, room references, and internal identifiers. Where the raw source included free-text allocation notes or descriptions that could identify services or locations, those details were generalised or omitted. The counts, patterns, and analytical approach remain representative of the original work, but the published narrative has been tightened to protect operational confidentiality.