Glaze and Nightshade

Glaze is a tool developed by the University of Chicago to protect digital artwork from being scraped and used to train generative AI models.

License model

  • Free
  • Proprietary

Country of Origin

  • United States

Platforms

  • Mac
  • Windows

Properties

  • Privacy focused

Glaze and Nightshade information

  • Developed by

    SAND Lab, University of Chicago
  • Licensing

    Free and Proprietary product.
  • Alternatives

    3 alternatives listed
  • Supported Languages

    • English

AlternativeTo Category

Security & Privacy

Glaze and Nightshade was added to AlternativeTo by Sam Lander on May 16, 2025 and this page was last updated May 16, 2025.

What is Glaze and Nightshade?

Glaze is a tool developed by researchers at the University of Chicago to protect artists from having their work unknowingly scraped and used to train generative AI models. It works by applying subtle, algorithmically generated perturbations to an image that are invisible to the human eye but disrupt the way AI models interpret and learn from it. When Glazed images are fed into AI training pipelines, the model is misled into learning incorrect representations of the artist’s style. This undermines the ability of tools like Stable Diffusion or Midjourney to reproduce the original style when prompted with phrases like “in the style of [artist name].”
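
This page does not specify Glaze's actual loss function or feature extractor, but the general idea — an adversarial perturbation that nudges an image's deep-feature "style" toward a decoy while keeping every pixel within a tiny budget — can be sketched. The PyTorch snippet below is purely illustrative: the VGG16 surrogate extractor, the MSE style loss, and all parameter values are assumptions, not Glaze's implementation.

    import torch
    import torchvision.models as models

    # Frozen surrogate feature extractor standing in for whatever model a
    # scraper might train. Glaze's real extractor is not public here;
    # VGG16 is an illustrative stand-in.
    extractor = models.vgg16(weights="IMAGENET1K_V1").features.eval()
    for p in extractor.parameters():
        p.requires_grad_(False)

    def cloak(image, decoy_style, budget=4/255, steps=50, lr=1/255):
        """Shift `image`'s deep features toward `decoy_style` while keeping
        every pixel within `budget` of the original (L-infinity norm)."""
        delta = torch.zeros_like(image, requires_grad=True)
        target_feat = extractor(decoy_style).detach()
        for _ in range(steps):
            feat = extractor(torch.clamp(image + delta, 0, 1))
            loss = torch.nn.functional.mse_loss(feat, target_feat)
            loss.backward()
            with torch.no_grad():
                delta -= lr * delta.grad.sign()   # step toward the decoy style
                delta.clamp_(-budget, budget)     # keep the change imperceptible
                delta.grad.zero_()
        return torch.clamp(image + delta, 0, 1).detach()

    # Usage: cloak a (here random) 256x256 image toward a decoy style image.
    art = torch.rand(1, 3, 256, 256)
    decoy = torch.rand(1, 3, 256, 256)
    protected = cloak(art, decoy)

The key design point is the L-infinity clamp: the optimization can move freely in feature space, but no pixel may drift more than `budget` from the original, which is what keeps the cloak nearly imperceptible to a human viewer.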

Nightshade is an extension of this concept, developed in response to the accelerating arms race between protective techniques and AI model training. As models increasingly train on datasets that already include Glazed images, there is a risk that they will adapt to, or "see through," the original cloaking methods. Nightshade counters this by generating more complex and resilient perturbations designed to fool newer, more robust generative models. It refines the adversarial noise with a deeper understanding of how current AI architectures process style and visual patterns, making the cloaking more adaptive and harder to reverse-engineer.
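
One generic way to make a perturbation "more resilient" — not Nightshade's published algorithm, just a standard adversarial-ML technique — is expectation over transformations (EOT): average the gradient across random crops, resizes, and blurs so the cloak survives the preprocessing a training pipeline typically applies. The transform set and sample count below are arbitrary assumptions for illustration.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    extractor = models.vgg16(weights="IMAGENET1K_V1").features.eval()
    for p in extractor.parameters():
        p.requires_grad_(False)

    # Random distortions of the kind training pipelines apply anyway.
    augment = T.Compose([
        T.RandomResizedCrop(224, scale=(0.8, 1.0)),
        T.GaussianBlur(kernel_size=3),
    ])

    def robust_style_grad(image, delta, decoy_style, samples=8):
        """Style-loss gradient w.r.t. `delta`, averaged over random views,
        so the optimized cloak survives resizing, cropping, and blur."""
        resize = T.Resize((224, 224), antialias=True)
        target = extractor(resize(decoy_style)).detach()
        total = torch.zeros_like(delta)
        for _ in range(samples):
            d = delta.detach().clone().requires_grad_(True)
            view = augment(torch.clamp(image + d, 0, 1))
            loss = torch.nn.functional.mse_loss(extractor(view), target)
            total += torch.autograd.grad(loss, d)[0]
        return total / samples

Swapped into the update step of the previous sketch (delta -= lr * robust_style_grad(art, delta, decoy).sign() under torch.no_grad()), this yields perturbations that simple re-encoding or filtering is less likely to strip out.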