Theatre Faculty Publications and Presentations
Document Type
Article
Publication Date
Fall 11-10-2025
Abstract
Current AI video models generate short clips with inconsistent visual details across shots, preventing traditional film editing. A character may change clothing, lighting may shift, and spatial relationships may drift between angles—all of which break cinematic continuity (Bordwell, Thompson, & Smith, 2020). This paper formalizes Chain Continuity, a reproducible four-step method that eliminates these discontinuities by separating camera construction from performance generation. The method requires locking a single Master Key Frame taken from the first frame of the Master Shot, building all other camera angles as static Setup Frames derived solely from that first-frame key, and then generating performances only from verified Setup Frames. By dividing the process into a two-stage pipeline—Frame Construction → Performance Generation—Chain Continuity produces matching coverage, enabling conventional editing. General filmmaking concepts (continuity, coverage, shot geography, and camera angles) are defined using standard cinematography sources (Mascelli, 1965; Brown, 2016; Salt, 2009). The paper then introduces a second, AI-native method—Multi-Master Keyframe Coverage—in which multiple Master Key Frames are captured at specific dialogue beats in the Master Shot, enabling precise micro-continuity for each line. Together, these methods present a classical (Part I) and an AI-native (Part II) pathway to edit-ready AI filmmaking.
Recommended Citation
Trevino, John, "Chain Continuity & Multi-Master Keyframe Coverage: Two Methods for Edit-Ready AI Filmmaking" (2025). Theatre Faculty Publications and Presentations. 20.
https://scholarworks.utrgv.edu/the_fac/20

Comments
Copyright held by the author.