the writers' room
Don’t let your show’s memory fade out.
Script coordinators, supervisors, writers, producers, and key crew carry impossible amounts of canon in their heads. Then people leave, seasons pass, pages change, and the next team has to reconstruct the truth from old bibles, notes, emails, and panic.
Fabula turns every script into connected canon for production: a canonical record of characters, relationships, events, objects, and consequences, with each claim traced back to the line that proves it. So when the room asks “did we already do this?”, the answer does not depend on someone remembering.
notes from the room
Four failures every long-form show inherits from its tools.
failure 01
The pitch that dies on the whiteboard.
The idea is good. The room is silent. Not because it's bad, but because everyone is trying to remember if you already did this bit in Season 2. The bible is outdated. The coordinator is googling frantically. That dead air is expensive.
failure 02
The character who returns.
A Season 1 character comes back in Season 4. You need her title, her relationships, what she knew and when she knew it. The bible has a paragraph from eighteen months ago by someone who left the show. Eighty scripts to reread is not a continuity strategy.
failure 03
The contradiction caught in prep.
Tomorrow’s pages reference an event that conflicts with what was established two seasons ago. You catch it. The difference between catching it now and catching it in dailies is tens of thousands of dollars. The system that holds your canon has no mechanism to flag when new pages violate it — continuity depends on somebody becoming the hard drive.
failure 04
The bible that’s already out of date.
The bible. The document everyone consults and nobody trusts. It was last updated during the hiatus by an intern who's now working on a different show. Maintaining it is a part-time job nobody has time for, so it becomes a fossil record of what the show once was, not what it's become.
the name of the captain
Captain Jean-Luc Picard. Galen. Locutus of Borg. Same man. Different circumstance.
One character, many names: “Jean-Luc Picard,” “Captain Picard,” “Galen,” “Locutus of Borg,” “Dixon Hill,” “Kamin” — all the same person. A viewer knows that. A spreadsheet doesn’t. Automated systems that spawn a new entry for each identity just move the problem; they don’t solve it.
Fabula resolves every alias to a single canonical entity. Identity shifts, undercover missions, and holodeck personas aren’t separate records — they’re tracked as per-event state on the edge linking Picard to each appearance.
Seven seasons of TNG: one Picard, every name unified, every transformation tracked.
The same approach works everywhere: Cromwell in Wolf Hall, Catherine Cawood in Happy Valley, the Brigadier in Doctor Who. One entity per character, every alias resolved, every transformation pinned to the episode or moment it happened.
// Alias resolution · Captain Jean-Luc Picard
{
  "canonical_id": "char_tng_picard",
  "canonical_name": "Jean-Luc Picard",
  "aliases_resolved": [
    "Jean-Luc Picard",
    "Captain Picard",
    "Galen",
    "Locutus of Borg",
    "Dixon Hill",
    "Kamin"
  ],
  "episode_appearances": 178,
  "resolution_confidence": 1.0
}
// Command rank, undercover missions, forced personas, and holodeck avatars aren’t a separate index.
// They register as per-event state on the PARTICIPATED_AS edge linking Picard to each episode or event —
// the observed_status field reads "Posing as Galen," "Assimilated as Locutus of Borg," "Acts as Dixon Hill," or
// "Life as Kamin" in the episodes where each applies, just as the script records it.
// The graph carries the canonical identity; the journey lives on the edges.
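In code, that resolution reduces to a lookup against the canonical record. The sketch below is illustrative only: the field names follow the example record above, but the edge objects and the `resolve` helper are hypothetical, not Fabula's API.

```python
# Illustrative sketch: resolve any on-page name to its canonical entity,
# then read per-event state off a PARTICIPATED_AS edge.
# Field names follow the record above; the edge shape is a simplified assumption.

character = {
    "canonical_id": "char_tng_picard",
    "canonical_name": "Jean-Luc Picard",
    "aliases_resolved": ["Jean-Luc Picard", "Captain Picard", "Galen",
                         "Locutus of Borg", "Dixon Hill", "Kamin"],
}

# Hypothetical PARTICIPATED_AS edges: identity shifts live here, not as new entities.
edges = [
    {"character": "char_tng_picard", "event": "S6E10 · Chain of Command",
     "observed_status": "Posing as Galen"},
    {"character": "char_tng_picard", "event": "S3E26 · The Best of Both Worlds",
     "observed_status": "Assimilated as Locutus of Borg"},
]

alias_index = {alias.lower(): character["canonical_id"]
               for alias in character["aliases_resolved"]}

def resolve(mention: str):
    """Map any alias, case-insensitively, to the single canonical entity id."""
    return alias_index.get(mention.lower())

# Same man, different circumstance: every alias lands on one entity.
assert resolve("Locutus of Borg") == resolve("Captain Picard") == "char_tng_picard"

# The journey lives on the edges: same entity, different per-event state.
for edge in edges:
    if edge["character"] == resolve("Galen"):
        print(edge["event"], "→", edge["observed_status"])
```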
deep cuts
Character bios that write themselves.
Most “AI script analysis” summarises the script in one pass. That’s a single pour from a single prompt — useful for triage, useless once a show has more than one season under it.
Fabula composes from the graph. For any entity that appears across three or more seasons, a synthesis pass produces a unified arc paragraph from the per-season profile nodes underneath: the rank changes, the relationships, the trajectory across decades. Beneath the paragraph sit the season-level snapshots; beneath those, the events; beneath those, the lines of script.
What development gets is not a summary — it’s a layered profile, every clause traceable down to the events that justify it. If someone asks “what changes about Cromwell across the trilogy?”, the answer arrives with citations, in prose someone can actually circulate.
// Cross-series synthesis · Brigadier Lethbridge-Stewart
// Composed from 13 Classic Who season profiles
Elevated from Colonel commanding troops at Goodge Street Fortress to Brigadier overseeing UNIT's global operations, Lethbridge-Stewart evolved from disciplined pragmatist to wise institutional anchor; his arc traces from absolute faith in chain-of-command protocol to a measured trust in unconventional expertise, tempered by decades of impossible encounters and a drier, more resigned stance toward bureaucracy and time itself.
// Every clause traces back to the events that justify it,
// down to the scene anchor and the line of script.
// The per-episode and per-season layers underneath are queryable
// in the same way; this paragraph is just the top of the stack.
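The stack underneath that paragraph can be pictured as nested layers. The structure below is an assumed schema for illustration, not Fabula's actual export format; the placeholder strings stand in for real snapshots and verbatim script lines.

```python
# A sketch of the layered profile, under an assumed schema: each layer points
# down to the layer that justifies it, ending at the lines of script.

profile = {
    "arc": "Elevated from Colonel ... to wise institutional anchor.",
    "season_profiles": [
        {
            "season": "Classic Who S7",
            "summary": "<season-level snapshot>",
            "events": [
                {
                    "scene_anchor": "<episode · act>",
                    "script_lines": ["<verbatim line of script>"],
                },
            ],
        },
    ],
}

def provenance(profile: dict) -> list:
    """Walk from the arc paragraph down to the script lines beneath it."""
    lines = []
    for season in profile["season_profiles"]:
        for event in season["events"]:
            lines.extend(event["script_lines"])
    return lines
```

Because every layer keeps a pointer to the layer below, any clause in the top paragraph can be walked down to its citations rather than taken on trust.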
second unit
Give your AI agents something better than a pile of PDFs.
The same graph behind Fabula ships as usable data: Parquet for analytics, Cypher for Neo4j, and structured JSON for everything else. Beneath it: 25 node types, 44 relationship types, every claim cited to its script line. Events link via narrative types—CAUSAL, FORESHADOWING, THEMATIC_PARALLEL, CALLBACK, ESCALATION, and more—each with a score and a reason.
Character state—beliefs, goals, emotions, status—lives on the edge between character and event. Nothing is flattened or vague. Retrieval returns the exact moment: who was there, what they believed, what changed, what earlier scene it echoes, and which line proves it.
That’s the difference. Long-context prompting asks one model call to hold the whole show in its head. Fabula flips the script: precise retrieval from a structured graph, so the model gets only what it needs and every answer cites its source.
56 datasets are public on the brandburner org on Hugging Face — the full Doctor Who megagraph (703 episodes), the TNG megagraph, the West Wing megagraph, and per-series exports for Wolf Hall, Happy Valley, Indiana Jones, Dracula, Knives Out, and I, Claudius. Query the hosted Parquet directly via the Datasets Server, DuckDB, Polars, or SQL — or download it and load it into your own stack.
// GraphRAG retrieval · The West Wing · typed connections + provenance
MATCH (anchor:Event {scene_anchor: 'S2E14 · Act III'})
-[c:THEMATIC_PARALLEL|CALLBACK|EMOTIONAL_ECHO]-(echo:Event)
-[:OCCURS_IN]->(ep:Episode)
RETURN ep.code AS episode,
echo.scene_anchor AS scene,
type(c) AS connection,
c.strength AS strength,
c.description AS why,
echo.script_lines AS provenance
ORDER BY c.strength DESC;
// One typed traversal returns the scenes an LLM should read to
// answer "where else has the show set this up, and how did it
// pay off?" — not via vector similarity, via the show's own
// narrative architecture. Every row cites the script verbatim.
delivery
Three things a connected canon delivers that a binder can’t.
A breakdown that follows the drama, not the formatting
Fabula reads past the page furniture and records what the scene actually does: the events, participants, locations, objects, emotional turns, and continuity consequences. The result is not a prettier summary. It is the show’s working memory, structured scene by scene.
A character record that survives the hiatus
Every alias, appearance, relationship shift, goal, belief, and observed state rolls up to one canonical character. When someone returns after three seasons, production does not start from a stale paragraph in an old bible. The arc is already there, built from the scenes.
An audit trail for the argument after the argument
Every claim carries the script line behind it. When the room asks where something came from, Fabula can show the source. When the choice gets challenged later, nobody has to reconstruct the logic from memory. The receipts travelled with the decision.
below the line
Underneath: a context graph.
Everything above runs on the same context graph that data teams query directly. Production gets the canonical record. Data teams get the same artefact in Parquet. One graph, two consumers.