Three plausible approaches for the pipeline queue + VCC poller slice, read as packaged choices rather than individual knobs. Decisions shared by all three:
- **Endpoint shape:** per-deal rows with stage progress; rename `/admin/jobs` → `/admin/queue/deals`.
- **Slice scope:** endpoint + FE wiring + VCC poller, one PR.
- **No queue abstraction:** pipeline queue only. DOR / physical queues are separate features, later.
- **Poller scope:** minimal — HubSpot poll → create Deals → enqueue `process_deal`. No enrichment, no filtering.
- **FE updates:** 2s polling via TanStack Query.
**Approach A: Procrastinate periodic task**
- **Schema:** Alembic migration adds `deals.hubspot_deal_id TEXT UNIQUE NOT NULL` (idempotent re-poll) + index on `deals.updated_at DESC` for "most-recent first" queries.
- **Poller:** Procrastinate periodic task `poll_hubspot` via `procrastinate_app.periodic()`, every N minutes (settings-controlled, default 5 min). Calls a new `integrations/hubspot/client.py` ported from `checkin-pipeline/app/hubspot.py`. For each result: upsert deal by `hubspot_deal_id`; if newly created, `process_deal.defer_async(deal_id=...)`.
- **Endpoint:** `GET /admin/queue/deals?limit=50&before=<timestamp>` returns `{ items: [DealQueueRow], next_before: ... }`. Each row embeds `steps[]` — one per stage with `status`, `started_at`, `ended_at`. One list query plus the batched `selectinload(Deal.pipeline_runs)` SELECT, ordered by `updated_at DESC`.
- **FE:** Delete `queue-store.ts` and `queue-mock.ts`; use a generated TanStack Query hook. `isLive` toggle controls `refetchInterval: 2_000 | false`. Small adapter under `web/src/lib/api/adapters/` maps API row → existing `PipelineRun` shape.
- **Cost:** ~30 req/min per open admin tab; payload bounded by `limit=50`. Two DB queries per poll (list + `selectinload` batch), both cheap with the new index.
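The poll → upsert → enqueue flow is the part worth getting right: `process_deal` must be deferred only for newly created deals, so re-polls are no-ops. A minimal sketch with the HubSpot client, DB upsert, and Procrastinate defer injected as callables (all names are assumptions; the real version lives in the periodic task):

```python
import asyncio

async def sync_hubspot_deals(fetch_deals, upsert_deal, enqueue_process):
    """fetch_deals() -> list of {"hubspot_deal_id": ...} dicts;
    upsert_deal(hs_id) -> (deal_id, created: bool);
    enqueue_process(deal_id) defers process_deal."""
    enqueued = []
    for remote in await fetch_deals():
        deal_id, created = await upsert_deal(remote["hubspot_deal_id"])
        if created:  # existing deals (UNIQUE hit) are skipped
            await enqueue_process(deal_id)
            enqueued.append(deal_id)
    return enqueued

# In-memory demo: two polls over the same remote data.
async def _demo():
    seen: dict[str, int] = {}
    deferred: list[int] = []

    async def fetch():
        return [{"hubspot_deal_id": "hs-1"}, {"hubspot_deal_id": "hs-2"}]

    async def upsert(hs_id):
        if hs_id in seen:
            return seen[hs_id], False
        seen[hs_id] = len(seen) + 1
        return seen[hs_id], True

    async def enqueue(deal_id):
        deferred.append(deal_id)

    await sync_hubspot_deals(fetch, upsert, enqueue)
    await sync_hubspot_deals(fetch, upsert, enqueue)  # second poll: no new deferrals
    return deferred

deferred = asyncio.run(_demo())  # each deal enqueued exactly once
```

Keeping the idempotency check on the `created` flag (rather than on job state) is what makes the `UNIQUE NOT NULL` column the single source of truth for "have we seen this deal".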
**Approach B: dedicated scheduler container**
- Identical FE + endpoint to A.
- Instead of `procrastinate_app.periodic()`, add a `scheduler` service to `docker-compose.yml` running `python -m vcc_backend.workers.scheduler` — a thin async loop that sleeps + calls `sync_hubspot_deals()`.
- **Pros:** scheduler failures visible as service-down, easier to bounce independently, no entanglement with Procrastinate scheduling.
- **Cons:** fourth container, more wiring, two ways to "run something on a schedule" in the codebase. Procrastinate already supports periodic — duplicating muddies the model.
- Worth doing only if Procrastinate-periodic disappoints in practice. It hasn't yet.
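For reference, the "thin async loop" the scheduler service would run is small — a sketch under the assumption that `sync_hubspot_deals()` is the same coroutine the periodic task would call (module and function names are hypothetical):

```python
import asyncio

async def run_scheduler(sync_fn, interval_s: float, stop: asyncio.Event):
    """Call sync_fn every interval_s seconds until stop is set."""
    while not stop.is_set():
        try:
            await sync_fn()
        except Exception as exc:  # keep the loop alive on transient failures
            print(f"sync failed, retrying next tick: {exc!r}")
        try:
            # stop.wait() doubles as an interruptible sleep
            await asyncio.wait_for(stop.wait(), timeout=interval_s)
        except asyncio.TimeoutError:
            pass

# Demo with a fake sync that stops itself after three ticks.
async def _demo():
    ticks = []
    stop = asyncio.Event()

    async def fake_sync():
        ticks.append(1)
        if len(ticks) >= 3:
            stop.set()

    await run_scheduler(fake_sync, interval_s=0.01, stop=stop)
    return len(ticks)

tick_count = asyncio.run(_demo())
```

The simplicity cuts both ways: easy to reason about, but it reimplements exactly what `procrastinate_app.periodic()` already provides — which is the "two ways to run on a schedule" con above.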
**Approach C: split list/detail endpoints**
- `GET /admin/queue/deals` returns rows **without** `steps[]` — just `id`, `hubspot_deal_id`, `status`, `current_stage`, `updated_at`.
- Separate `GET /admin/deals/{id}/runs` returns full stage history.
- FE polls the list endpoint at 2s; `RunDetailSheet` lazy-loads on row click.
- **Pros:** smaller poll payload (5 fields × 50 rows vs. (5 + 6 stages × 4 step fields) × 50 rows).
- **Cons:** actually doesn't help much — the queue table renders stage-progress dots inline, so it needs steps for every visible row anyway. Adds an endpoint for a payload size that wasn't a problem.
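Either way the list endpoint's `before` cursor works the same. A minimal sketch of its semantics over an in-memory list (the SQL equivalent is `WHERE updated_at < :before ORDER BY updated_at DESC LIMIT :limit`); the function name is illustrative:

```python
from datetime import datetime, timedelta

def page_deals(rows, limit=50, before=None):
    """rows: dicts with an "updated_at" datetime.
    Returns {"items": [...], "next_before": timestamp-or-None}."""
    candidates = sorted(
        (r for r in rows if before is None or r["updated_at"] < before),
        key=lambda r: r["updated_at"],
        reverse=True,  # most-recent first
    )
    items = candidates[:limit]
    # next_before = oldest timestamp on this page; None once exhausted
    next_before = items[-1]["updated_at"] if len(candidates) > limit else None
    return {"items": items, "next_before": next_before}

t0 = datetime(2024, 1, 1)
rows = [{"id": i, "updated_at": t0 + timedelta(minutes=i)} for i in range(5)]
page1 = page_deals(rows, limit=2)
page2 = page_deals(rows, limit=2, before=page1["next_before"])
```

A timestamp cursor avoids the shifting-offset problem of `?page=N` when new deals arrive between polls, though ties on `updated_at` would need a secondary key in the real query.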
**Recommendation: Approach A.** It keeps surface area small (one endpoint, one task), aligns with Procrastinate's existing periodic-task pattern, and inline `steps[]` matches what the FE already renders. B is a fallback if Procrastinate-periodic causes pain; C is premature optimization.
Until real stages land, all `pipeline_runs` complete in <1s (stubs return immediately). The "live queue" page will mostly show rows in `done` state — visually less interesting than the mock. Not a design flaw, just current reality. The slice still verifiably works end-to-end.