What are Glaze and Nightshade?
Glaze is a tool developed by researchers at the University of Chicago's SAND Lab to protect artists from having their work scraped without their knowledge or consent and used to train generative AI models. It works by applying subtle, algorithmically generated perturbations to an image, changes that are barely perceptible to the human eye but disrupt the way AI models interpret and learn from it. When Glazed images are fed into a training pipeline, the model is misled into learning an incorrect representation of the artist's style, undermining the ability of tools like Stable Diffusion or Midjourney to reproduce that style when prompted with phrases such as "in the style of [artist name]."
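To make the mechanism concrete, here is a minimal, illustrative sketch of a style-cloaking perturbation in the same spirit: it optimizes a small pixel change so that a feature extractor's "style" representation of the artwork drifts toward a different target style, while the change itself stays within a tight budget. This is not the actual Glaze algorithm; the VGG-based extractor, the embedding definition, the loss, and the budget are all stand-in assumptions for demonstration.

```python
# Illustrative sketch only, NOT the Glaze implementation.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# A pretrained CNN stands in for whatever style-feature extractor a generative
# model might rely on; this choice is an assumption made for illustration.
extractor = vgg16(weights=VGG16_Weights.DEFAULT).features.to(device).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def style_embedding(x: torch.Tensor) -> torch.Tensor:
    # Crude stand-in for a "style" representation: spatially averaged activations.
    return extractor(x).mean(dim=(2, 3))

def cloak(artwork: torch.Tensor, target_style: torch.Tensor,
          budget: float = 0.03, steps: int = 200, lr: float = 0.01) -> torch.Tensor:
    # Optimize a small perturbation that pulls the artwork's style embedding
    # toward a different target style while keeping each pixel change <= budget.
    delta = torch.zeros_like(artwork, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = style_embedding(target_style).detach()
    for _ in range(steps):
        cloaked = (artwork + delta).clamp(0, 1)
        loss = F.mse_loss(style_embedding(cloaked), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change visually subtle
    return (artwork + delta).clamp(0, 1).detach()

# artwork and target_style: 1x3xHxW tensors in [0, 1] on the same device.
```

The key design point is that the optimization happens in the extractor's feature space, not in pixel space, which is why a change too small to notice by eye can still move the image a long way from the artist's style as a model perceives it.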
Nightshade is an extension of this effort by the same University of Chicago team, developed in response to the accelerating arms race between protective techniques and AI model training. Where Glaze is defensive, shielding an individual artist's style, Nightshade goes on the offensive as a data-poisoning tool: it applies perturbations that remain subtle to a human viewer but cause a model trained on the poisoned images to learn distorted associations between prompts and concepts. With enough poisoned samples in a training set, a prompt for one concept (for example, "dog") can begin producing images of a different one (for example, "cat"), degrading the model's output. The intent is to raise the cost of scraping images without consent and push model trainers toward licensing artists' work instead.
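The poisoning side can be sketched in a similar way, again purely as an illustration rather than the published Nightshade method: a perturbation nudges a source image's embedding, under CLIP used here as a stand-in encoder, toward an anchor image of a different concept, so the image keeps its original caption but "looks like" the wrong concept to a model that trains on it.

```python
# Illustrative sketch only, NOT the Nightshade algorithm. CLIP as the encoder,
# the loss, and the budget are assumptions made for demonstration purposes.
import torch
import torch.nn.functional as F
from transformers import CLIPModel

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
for p in clip.parameters():
    p.requires_grad_(False)

def poison(source: torch.Tensor, anchor: torch.Tensor,
           budget: float = 0.05, steps: int = 300, lr: float = 0.01) -> torch.Tensor:
    # Nudge the source image (e.g. a "dog" photo) so its image embedding matches
    # an anchor image of a different concept (e.g. a "cat"), within a pixel budget.
    # Paired with its original "dog" caption, the result carries a mismatched
    # image-text association into any model that trains on it.
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = clip.get_image_features(pixel_values=anchor).detach()
    for _ in range(steps):
        emb = clip.get_image_features(pixel_values=source + delta)
        loss = F.mse_loss(emb, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the poison hard to spot by eye
    return (source + delta).detach()

# source and anchor: preprocessed 1x3x224x224 CLIP pixel tensors on the device.
```

The contrast with the Glaze-style sketch above is what gets shifted: a style cloak moves how a model perceives the artist's style, while a poison sample moves which concept the model thinks the image depicts.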