The Problem
Operational reporting in multifamily runs on CSV exports. Asset managers pull rent rolls Monday morning. Leasing directors download traffic reports mid-week. Accounting runs P&L exports on the 15th. Each team works from its own spreadsheet with its own pull date.
The data is stale before anyone opens it. A unit goes on notice Tuesday. Monday's rent roll doesn't show it. A lease gets approved Wednesday. The leasing summary won't reflect it until next Monday's pull. By Thursday, decisions are being made on four-day-old data.
Most PMS platforms expose a REST API with the same data behind those exports. The gap is that no one has built the pipeline to pull it automatically, normalize it, and store it somewhere queryable. A 10,000-unit operator built that pipeline: a Next.js dashboard connected directly to the PMS API, with scheduled cron jobs storing snapshots to a Postgres database. Six automated jobs replaced the manual Monday export workflow entirely.
The Architecture
Three layers: ingestion, storage, and presentation.
Ingestion runs through Vercel cron jobs that call internal Next.js API routes. Each route calls the PMS API, parses the response, and writes to the database. The cron schedule matches data change frequency: availability runs daily, leasing metrics run weekly, financials run monthly on the 15th after the prior period closes.
Storage is Supabase (PostgreSQL). All tables include a snapshot_date column so every write is an append, not an overwrite. Historical trend queries run against the same tables as current-week queries. A property_lookup table holds static metadata (unit counts, market, internal property ID) so ingestion jobs don't need to re-derive it on every run.
Presentation is Next.js with React server components. Each page queries the database directly, computes derived metrics server-side, and renders the result. No intermediate API layer between the frontend and the database. Client-side JS handles filtering and sorting only.
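A minimal sketch of that pattern, with the Supabase query abstracted behind a `fetchRows` parameter so the shape is visible without a live database; the occupancy roll-up is an illustrative derived metric, not the dashboard's actual code:

```typescript
// Server-side derived-metric computation, sketched with the data access
// injected. In the real page, fetchRows would be the direct Supabase query
// inside a React server component.
type UnitRow = { status: string };

async function getOccupancySummary(fetchRows: () => Promise<UnitRow[]>) {
  const rows = await fetchRows();
  const occupied = rows.filter((r) => r.status === "Occupied").length;
  return {
    total: rows.length,
    occupied,
    // derived server-side; the client only filters and sorts what it receives
    occupancy_pct: rows.length > 0 ? occupied / rows.length : 0,
  };
}
```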
The API Client
Authentication and Request Structure
const PMS_BASE_URL = process.env.PMS_BASE_URL;
const PMS_API_KEY = process.env.PMS_API_KEY;

async function pmsRequest(endpoint: string, method: string, params: object) {
  const body = {
    auth: { type: "apikey" },
    requestId: 1,
    method: { name: method, params }
  };
  const res = await fetch(`${PMS_BASE_URL}/${endpoint}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": PMS_API_KEY
    },
    body: JSON.stringify(body)
  });
  if (!res.ok) throw new Error(`PMS ${method} failed: ${res.status}`);
  return res.json();
}
Every API call uses the same request envelope: auth block, a static request ID, and a method object with name and params. Individual ingestion functions only specify the method name and params. The client handles the envelope, headers, and error throwing. Credentials stay in environment variables.
Retry Logic
async function pmsRequestWithRetry(
  endpoint: string,
  method: string,
  params: object,
  retries = 3
) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await pmsRequest(endpoint, method, params);
    } catch (err) {
      if (attempt === retries) throw err;
      await new Promise(r => setTimeout(r, 1000 * attempt));
    }
  }
}
Financial endpoints are unreliable when batching more than two properties per request. The retry wrapper backs off linearly between attempts: one second after the first failure, two seconds after the second. If the third attempt also fails, the error propagates and the cron job exits without writing partial data to the database.
The Ingestion Jobs
Cron Schedule
// vercel.json
{
  "crons": [
    { "path": "/api/cron/daily/availability", "schedule": "0 12 * * *" },
    { "path": "/api/cron/weekly/box-score", "schedule": "0 15 * * 1" },
    { "path": "/api/cron/weekly/leasing-activity", "schedule": "0 14 * * 1" },
    { "path": "/api/cron/monthly/gl-trees", "schedule": "0 13 15 * *" },
    { "path": "/api/cron/monthly/gl-transactions", "schedule": "15 13 15 * *" },
    { "path": "/api/cron/monthly/income-statement", "schedule": "30 13 15 * *" }
  ]
}
Daily availability runs at 12:00 UTC, 7am CT during daylight saving time (Vercel cron schedules are pinned to UTC). Weekly box score and leasing metrics run Monday morning so they're loaded before the first asset management call. Monthly GL jobs are staggered 15 minutes apart on the 15th to avoid concurrent requests to financial endpoints that tend to time out under load.
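One operational detail worth guarding: when a `CRON_SECRET` environment variable is configured, Vercel attaches an `Authorization: Bearer <CRON_SECRET>` header to scheduled invocations, which keeps the ingestion routes from being triggered by arbitrary requests. A sketch of the check (the ingestion call itself is elided):

```typescript
// Guard for a cron route handler (app router style). Vercel sends
// "Authorization: Bearer <CRON_SECRET>" on scheduled invocations when the
// CRON_SECRET env var is set. Exported as GET from the route file.
async function GET(req: Request): Promise<Response> {
  const expected = `Bearer ${process.env.CRON_SECRET}`;
  if (req.headers.get("authorization") !== expected) {
    return new Response("Unauthorized", { status: 401 });
  }
  // ...run the ingestion job here, e.g. the availability pull...
  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
```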
Box Score Ingestion
The box score pulls last week's leasing funnel from the reporting API. Key fields: leads, tours, applications, approvals, move_ins, move_outs, renewals, physical_occupancy. The report returns one row per property. The job maps each row to a typed record and upserts to Supabase with property_id + snapshot_date as the conflict key.
const data = await pmsRequestWithRetry("reports", "getBoxScore", {
  reportVersion: "4.0",
  filters: {
    property_id: propertyIds,
    period_type: "lastwk",
    "week-start": "0",
    allow_future_periods: true
  }
});

const rows = data.response.result.reports[0].report.rows;

const records = rows.map(row => ({
  property_id: row.property_id,
  snapshot_date: today,
  leads: Number(row.leads ?? 0),
  tours: Number(row.tours ?? 0),
  applications: Number(row.applications ?? 0),
  approvals: Number(row.approvals ?? 0),
  move_ins: Number(row.move_ins ?? 0),
  move_outs: Number(row.move_outs ?? 0),
  lease_renewals: Number(row.renewals ?? 0),
  occupancy_pct: Number(row.physical_occupancy ?? 0)
}));

await supabase
  .from("box_score_weekly")
  .upsert(records, { onConflict: "property_id,snapshot_date" });
The upsert conflict key means reruns are safe. If the Monday cron fires twice, the second run overwrites the first with identical data. No duplicate rows.
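The table shape this assumes can be sketched as follows; the composite unique constraint is what makes `onConflict: "property_id,snapshot_date"` work. Column types here are assumptions modeled on the mapped record, not the actual schema:

```sql
-- Assumed shape of box_score_weekly; the unique constraint backs the upsert.
create table if not exists box_score_weekly (
  property_id    text not null,
  snapshot_date  date not null,
  leads          integer not null default 0,
  tours          integer not null default 0,
  applications   integer not null default 0,
  approvals      integer not null default 0,
  move_ins       integer not null default 0,
  move_outs      integer not null default 0,
  lease_renewals integer not null default 0,
  occupancy_pct  numeric not null default 0,
  unique (property_id, snapshot_date)
);
```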
Availability Ingestion
The availability report returns every unit in the portfolio with its current status. Key fields: unit_number, unit_status, move_out_date. Status values include Occupied, Vacant Not Ready, Vacant Ready, On Notice, Admin Hold, and Pre-Leased. The job filters out Occupied and Admin Hold units and stores a daily snapshot of the remaining pipeline.
const units = data.response.result.reports[0].report.rows;

const pipeline = units.filter(u =>
  u.unit_status !== "Occupied" &&
  u.unit_status !== "Admin Hold"
);

const records = pipeline.map(u => ({
  property_id: u.property_id,
  unit_number: u.unit_number,
  status: u.unit_status,
  move_out_date: u.move_out_date || null,
  snapshot_date: today
}));

await supabase
  .from("availability_daily")
  .insert(records); // append, not upsert
Availability uses insert rather than upsert because the goal is to accumulate a full daily history. Each day's snapshot is a separate set of rows. Querying across snapshots shows how long individual units have been in each status: days between move-out and Vacant Ready, days between Vacant Ready and Pre-Leased. That transition data isn't available in any single snapshot.
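A sketch of how the transition data falls out of the append-only history — counting snapshot days per status per unit. The row shape mirrors availability_daily, but the helper is illustrative, not the dashboard's actual query:

```typescript
// Count how many daily snapshots each unit spent in each status.
// With one snapshot per day, a count of snapshots is a count of days.
type AvailabilityRow = {
  unit_number: string;
  status: string;
  snapshot_date: string; // YYYY-MM-DD
};

function daysInStatus(rows: AvailabilityRow[]): Map<string, Map<string, number>> {
  const byUnit = new Map<string, Map<string, number>>();
  for (const row of rows) {
    const statuses = byUnit.get(row.unit_number) ?? new Map<string, number>();
    statuses.set(row.status, (statuses.get(row.status) ?? 0) + 1);
    byUnit.set(row.unit_number, statuses);
  }
  return byUnit;
}
```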
Leasing Activity via Event Log
The leasing event endpoint returns application and lease status transitions filtered by date range. Two event types are tracked: application progress and lease progress. The relevant event reasons are Application:Completed, Application:PartiallyCompleted, Lease:Approved, and Renewal:Approved. Source strings include a trailing agent suffix that gets stripped before storage.
const prospects = data.response.result.prospects?.prospect ?? [];

const events = prospects.flatMap(p =>
  (p.events?.event ?? []).map(e => ({
    property_id: propertyId,
    prospect_id: p.id,
    event_reason: e.eventReason.replace(/ by .+$/, "").trim(),
    event_date: e.eventDate,
    snapshot_date: today
  }))
);

await supabase
  .from("leasing_events_weekly")
  .upsert(events, { onConflict: "property_id,prospect_id,event_date,event_reason" });
The Dashboard
Box Score Page
One table showing last week's funnel for every property: leads, tours, applications, approvals, move-ins, move-outs, renewals, and physical occupancy. Sortable on every column. Server-side computed columns add lead-to-tour, tour-to-application, and application-to-approval conversion rates. Properties below a configurable approval rate threshold get a flag. Thresholds live in a config object, not database rows, so they can be updated without a schema change.
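A sketch of the server-side derivation with divide-by-zero guards; the input columns mirror box_score_weekly, but the helper name and threshold value are illustrative assumptions:

```typescript
// Derived funnel metrics computed server-side before render.
type BoxScoreRow = {
  property_id: string;
  leads: number;
  tours: number;
  applications: number;
  approvals: number;
};

// Assumed config value; lives in a config object, not a database row.
const APPROVAL_RATE_FLAG_THRESHOLD = 0.5;

function withConversionRates(row: BoxScoreRow) {
  // null rather than NaN when the denominator stage had no activity
  const rate = (num: number, den: number) => (den > 0 ? num / den : null);
  const approvalRate = rate(row.approvals, row.applications);
  return {
    ...row,
    lead_to_tour: rate(row.tours, row.leads),
    tour_to_application: rate(row.applications, row.tours),
    application_to_approval: approvalRate,
    flagged: approvalRate !== null && approvalRate < APPROVAL_RATE_FLAG_THRESHOLD,
  };
}
```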
WAR Page
The Weekly Activity Report replicates the structure of the manual spreadsheet the leasing team had been maintaining. Same columns, same row-per-property layout, same conditional formatting logic. The data comes from Supabase via server component instead of a manually refreshed Excel file.
Sticky column headers inside a full-height scrollable table container required a specific CSS pattern:
/* Root layout */
.war-root {
  height: 100vh;
  overflow: hidden; /* critical: do not set auto here */
}

/* Scroll container */
.war-table-container {
  height: calc(100vh - 120px);
  overflow: auto;
}

thead th {
  position: sticky;
  top: 0;
  z-index: 10;
  background: #fff;
}
Setting overflow: auto on the root element breaks sticky positioning because the browser treats the root as the scroll container. The root needs overflow: hidden so the browser identifies the inner div as the scroll parent.
Financial Pages
The income statement page pulls from monthly snapshots stored by the GL crons. Property dropdown and period selector at the top. Standard P&L layout: GPR, vacancy loss, concessions, bad debt, net rental income, utility reimbursements, other income, total revenue, then expense lines (payroll, repairs, utilities, taxes, insurance, management fee, G&A), then NOI. Values are pre-computed by the cron and stored as typed columns, not raw GL strings.
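The roll-up from typed columns to NOI can be sketched as follows; the line-item names are assumptions modeled on the layout above, not the actual schema:

```typescript
// P&L roll-up from pre-computed monthly line items (field names assumed).
type IncomeStatementRow = {
  gpr: number;
  vacancy_loss: number;
  concessions: number;
  bad_debt: number;
  utility_reimbursements: number;
  other_income: number;
  total_expenses: number;
};

function computePnl(r: IncomeStatementRow) {
  const netRentalIncome = r.gpr - r.vacancy_loss - r.concessions - r.bad_debt;
  const totalRevenue = netRentalIncome + r.utility_reimbursements + r.other_income;
  return { netRentalIncome, totalRevenue, noi: totalRevenue - r.total_expenses };
}
```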
The GL transactions page is a searchable table of individual transactions filtered by property, account, date range, or amount. Useful for tracing a variance on the income statement back to specific postings without exporting from the PMS.
Property Name Normalization
One production bug worth documenting: property names returned by the API don't always match what's stored in the lookup table. A casing difference caused a join to silently fail. The property produced two rows in the dashboard: one with data, one without.
const normalize = (name: string) =>
  name.trim().toLowerCase().replace(/\s+/g, " ");

// property_lookup has a generated column in Supabase that applies the same
// normalization (lowercase, trim, collapse internal whitespace) at insert time:
//   property_name_normalized TEXT GENERATED ALWAYS AS
//     (lower(regexp_replace(trim(property_name), '\s+', ' ', 'g'))) STORED

const { data } = await supabase
  .from("property_lookup")
  .select("*")
  .eq("property_name_normalized", normalize(apiPropertyName));
The fix adds a normalization pass before every join and a generated column in property_lookup that stores the lowercased, trimmed name at insert time. Casing differences and extra whitespace from either side no longer cause mismatches.
What This Enables
The Monday export workflow across leasing and asset management was consuming 3-4 hours per week. Each team member pulled their own CSV, updated their own spreadsheet, and passed the file up for consolidation. Leadership got a summary deck by early afternoon with data that was already a week old.
With the crons running, current data is in the dashboard before anyone opens their laptop Monday morning. Box score, availability pipeline, leasing funnel, and prior month financials are in one place with a shared data source. If a property manager and an asset manager are looking at different numbers, it's because they're filtering differently, not because they pulled at different times.
The append storage model makes trend queries straightforward. Availability history answers how long units sit in each status by property. Box score history shows leasing velocity week-over-week without anyone manually building a trailing table. Income statement history supports T3 vs T12 comparisons without re-exporting from the PMS.
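For example, week-over-week velocity falls out of the appended snapshots with a single pass over the history. Illustrative helper, assuming rows arrive sorted ascending by snapshot_date for one property:

```typescript
// Week-over-week deltas from box_score_weekly history (leads shown;
// the same pass works for any funnel column).
type WeeklySnapshot = { snapshot_date: string; leads: number };

function weekOverWeek(rows: WeeklySnapshot[]) {
  return rows.map((row, i) => ({
    ...row,
    // null for the first week on record, since there is nothing to compare
    leads_wow_delta: i === 0 ? null : row.leads - rows[i - 1].leads,
  }));
}
```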
Adding a new data source means writing a new cron route, a new Supabase table, and a new page. The API client, retry logic, auth, and deployment infrastructure already exist. The incremental cost of the sixth report is lower than the first.
The API already exposes the data. The work is building the pipeline that makes it current and queryable.