Theatre Faculty Publications and Presentations

Document Type

Article

Publication Date

Fall 2025 (November 10, 2025)

Abstract

Current AI video models generate short clips with inconsistent visual details across shots, preventing traditional film editing. A character may change clothing, lighting may shift, and spatial relationships may drift between angles—all of which break cinematic continuity (Bordwell, Thompson, & Smith, 2020). This paper formalizes Chain Continuity, a reproducible four-step method that eliminates these discontinuities by separating camera construction from performance generation. The method requires locking a single Master Key Frame taken from the first frame of the Master Shot, building all other camera angles as static Setup Frames derived only from that first-frame key, and then generating performances solely from verified Setup Frames. By dividing the process into a two-stage pipeline—Frame Construction → Performance Generation—Chain Continuity produces matching coverage, enabling real editing. General filmmaking concepts (continuity, coverage, shot geography, and camera angles) are defined using standard cinematography sources (Mascelli, 1965; Brown, 2016; Salt, 2009). The paper then introduces a second, AI-native method—Multi-Master Keyframe Coverage—in which multiple Master Key Frames are captured at specific dialogue beats in the Master Shot, enabling perfect micro-continuity for each line. Together, these methods present a classical (Part I) and an AI-native (Part II) pathway to edit-ready AI filmmaking.
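To make the pipeline shape concrete, the sketch below restates the two methods from the abstract as toy Python. Every name in it (extract_master_key_frame, build_setup_frame, generate_performance, multi_master_coverage) is a hypothetical stand-in invented for illustration, and strings stand in for actual frames and clips; the paper defines a production workflow, not a software API.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Strings stand in for images and video; no real generation tooling is implied.

def extract_master_key_frame(master_shot: str) -> str:
    # Stage 1a: lock the single Master Key Frame from the FIRST frame
    # of the Master Shot.
    return f"key_frame(first_frame_of={master_shot})"

def build_setup_frame(master_key: str, angle: str) -> str:
    # Stage 1b: derive a static Setup Frame for one camera angle,
    # conditioned only on the locked key so wardrobe, lighting, and
    # shot geography cannot drift between angles.
    return f"setup({angle}, from={master_key})"

def generate_performance(setup_frame: str, line: str) -> str:
    # Stage 2: generate the moving performance solely from a verified
    # Setup Frame.
    return f"clip({line!r}, start={setup_frame})"

def chain_continuity(master_shot: str, angles: list[str], line: str) -> dict[str, str]:
    # Part I: Frame Construction -> Performance Generation, from one locked key.
    master_key = extract_master_key_frame(master_shot)
    setups = {a: build_setup_frame(master_key, a) for a in angles}
    return {a: generate_performance(s, line) for a, s in setups.items()}

def multi_master_coverage(master_shot: str, beats: list[str], angles: list[str]) -> dict[str, dict[str, str]]:
    # Part II variant: capture one Master Key Frame per dialogue beat,
    # so each line gets its own exactly matching set of setups.
    out: dict[str, dict[str, str]] = {}
    for beat in beats:
        key = f"key_frame(at_beat={beat!r}, of={master_shot})"
        setups = {a: build_setup_frame(key, a) for a in angles}
        out[beat] = {a: generate_performance(s, beat) for a, s in setups.items()}
    return out

coverage = chain_continuity("master_wide", ["CU_A", "CU_B", "OTS_A"], "Where were you?")
# Every clip in `coverage` descends from the same locked key frame, which is
# what lets the shots cut together like conventional matching coverage.
```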

Comments

Copyright the Author.
