LLM-Wiki roadmap reprioritized per gpt-5.5 architectural review
Source type: obs
Harvested: 2026-05-03 · Original date: 2026-05-03T12:14:08.720Z
Metadata: {"project":"lunhsiangyuan","type":"decision","obs_id":65010}
obs/65010 · decision · 2026-05-03T12:14:08.720Z
Following the cursor-agent gpt-5.5 architectural review, which identified a missing provenance foundation, the user chose option (a): full adoption of the recommendations.

The original roadmap jumped from Step 2 (harvest) directly to Step 3 (engagement signals) and Step 4 (synthesis). The review revealed that harvest produces raw files with no manifest and no normalized source units, making precise attribution impossible in the synthesis phase. The new roadmap inserts Step 2.5: add a manifest.json per harvest run and split B/C/H sources into stable units, each carrying a unit_id and content_hash, so that provenance.db can reference specific units rather than vague file paths.

Step 4 synthesis is now constrained to a single topic and a single day, with manual review before scaling. Three productization features (screenshot watcher, graphify incremental mode, web deploy with Vercel auth) are deferred until core distillation is proven reliable.

This prevents several three-month failure modes: the append-only log becoming a second daily-insights, taxonomy drift, a sanitization privacy breach, and graphify pollution from date nodes.
Concepts: ["why-it-exists", "trade-off", "problem-solution"]
Facts: ["Step 2.5 added to implement manifest.json + normalized source units + content hash + idempotency testing before synthesis", "Step 4 reduced from a multi-topic automated dispatcher to a single-topic, single-day manual review workflow", "Screenshot watcher (Step 3C), graphify (Step 5), and web deploy (Step 6) deferred as 'productization noise', not core distillation", "Bug-fix task created for the AAI git log timing boundary (23:59 issue) and the possible_conflict schema chicken-and-egg problem", "Decision rationale: a 30-60 minute foundation investment prevents 'building a house on sand' and shifts Step 4 from 'prompting and hoping the AI doesn't hallucinate' to 'structural attribution enforcement'"]
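The first fact lists idempotency testing as part of Step 2.5. One minimal way to express that test, assuming manifests shaped like the hypothetical sketch above (a "units" list of unit_id/content_hash pairs), is a deterministic fingerprint that two runs over the same sources must reproduce. This helper and its name are illustrative assumptions, not the project's actual test harness.

```python
import hashlib
import json


def manifest_fingerprint(manifest: dict) -> str:
    """Deterministic fingerprint of a harvest manifest: sha256 over the
    sorted (unit_id, content_hash) pairs. Sorting makes the fingerprint
    independent of unit ordering, so two harvest runs over identical
    sources must produce identical fingerprints."""
    pairs = sorted((u["unit_id"], u["content_hash"]) for u in manifest["units"])
    blob = json.dumps(pairs).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()
```

An idempotency test then reduces to asserting that the fingerprints of two consecutive harvest runs are equal, and that any edited source unit changes the fingerprint.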
[← Back to Alfred Brain Hub]