
Cornell University researchers have developed a method for watermarking video with light, addressing the urgent problem of video manipulation in an era dominated by generative artificial intelligence and rapidly spreading misinformation. Their technique, described in "Noise-Coded Illumination for Forensic and Photometric Video Analysis," embeds a secret, unique code directly into the light emitted by sources such as computer screens, photography lamps, or building-integrated lighting at specific locations or events like interviews or press conferences. The watermark consists of finely tuned fluctuations in light intensity, designed to remain imperceptible to human observers and so preserve the visual quality and realism of the footage.
The research, led by graduate student Peter Michael and assistant professor Abe Davis, equips light sources to emit subtle luminous codes during filming. Any video recorded under this precisely programmed illumination naturally captures the concealed watermark, which acts as a forensic fingerprint for the footage. The method’s brilliance lies in the physical nature of the watermark: unlike digital overlays or software stamps, which can be removed or faked, the light-based code is inherent in the environmental lighting, making it extraordinarily difficult to replicate or forge, especially by AI-driven deepfake technologies.
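The idea of driving a light source with a secret, noise-like intensity code can be sketched in a few lines. This is a deliberately simplified illustration, not the authors' actual scheme: the paper's codes are perceptually tuned noise signals, whereas here we assume a toy binary code at about 1% modulation depth, with the random seed standing in for a secret key.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # the seed plays the role of a secret key

def make_code(n_frames: int, depth: float = 0.01) -> np.ndarray:
    """Pseudorandom per-frame intensity offsets, kept small (~1% of
    nominal brightness) so the flicker stays imperceptible."""
    return depth * rng.choice([-1.0, 1.0], size=n_frames)

def modulated_brightness(base_level: float, code: np.ndarray) -> np.ndarray:
    """Brightness the lamp would emit on each frame: the nominal level
    plus the coded fluctuation."""
    return base_level * (1.0 + code)

code = make_code(240)                   # code for 240 frames (~8 s at 30 fps)
levels = modulated_brightness(1.0, code)  # per-frame drive levels for the lamp
```

Any camera filming the scene records these fluctuations along with the image, so the footage itself carries the code.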
For verification, the watermark is extracted from recorded footage and compared against a database of authentic codes. If the video has been maliciously edited, whether by cropping, splicing, or generative AI, the light-based code is disrupted in the altered regions, alerting forensic tools to potential tampering or fabrication. Studies have demonstrated that the watermark survives benign processing such as compression and that manipulations remain detectable even after AI-generated alterations, making the approach robust enough for practical forensic use. This capability is especially critical because, as video editing tools become more advanced and widely accessible, the reliability of videos as evidence or news sources has been increasingly undermined.
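The verification step above can be sketched as a correlation test. The sketch below is a toy one-dimensional model under stated assumptions: the real system recovers the code from video pixels under changing scenes, whereas here we assume a flat scene, a known per-frame brightness trace, and a hypothetical Gaussian code, and simply correlate the trace against the known code. An authentic trace correlates strongly; a spliced-in segment, lit without the code, does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_code(brightness: np.ndarray, code: np.ndarray) -> float:
    """Normalized correlation between an observed per-frame brightness
    trace and the known secret code: near 1 for footage recorded under
    the coded light, near 0 for footage that lacks the code."""
    b = brightness - brightness.mean()
    c = code - code.mean()
    return float(np.dot(b, c) / (np.linalg.norm(b) * np.linalg.norm(c) + 1e-12))

n = 600
code = 0.01 * rng.standard_normal(n)        # hypothetical secret code (~1% depth)
noise = 0.002 * rng.standard_normal(n)      # small sensor noise

authentic = 1.0 + code + noise              # trace recorded under coded light
tampered = authentic.copy()
tampered[200:400] = 1.0 + noise[200:400]    # spliced segment lit without the code

score_ok = detect_code(authentic, code)             # high correlation
score_bad = detect_code(tampered[200:400], code[200:400])  # near zero
```

Because the tampered segment carries no trace of the code, its correlation score collapses, which is the signal a forensic tool would flag.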
The system is making its debut at SIGGRAPH 2025 in Vancouver, marking a significant milestone for visual media authentication technologies. Its applications extend well beyond academic research: broadcasters, journalists, and organizations can use specialized lighting during live events or interviews to discreetly watermark content for future verification. Legal, political, and financial sectors, often plagued by video misinformation, stand to benefit from an additional layer of trust in video evidence. Furthermore, the technology is being explored for integration with smartphones and other consumer devices, allowing widespread access to authenticity verification tools.
However, this promising approach is not without challenges. The technology's effectiveness hinges on controlled lighting environments, limiting its applicability in outdoor or unregulated settings. There are also technical hurdles: signal interference from multiple light sources in complex environments could compromise accuracy, and adversaries may attempt to reverse-engineer or simulate the watermark. Ethical concerns have also been raised about the governance and privacy of watermark databases, which must be managed so they cannot be misused to falsely validate manipulated content.
Despite these limitations, the light-based watermark system developed at Cornell presents a robust solution that bridges the physical and digital worlds. Unlike previous deepfake countermeasures, which relied on post-production software or facial artifact analysis, this technique transforms the environment itself into a guardian of video integrity. By adapting principles from astronomy and leveraging cutting-edge optics, the researchers have created a forensic tool that can withstand the evolving sophistication of AI-generated fakes, setting a new benchmark for trust in visual media.
Image: Sreang Hok/Cornell University. Abe Davis, left, assistant professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science, and graduate student Peter Michael with a watermark light.