DragGAN

DragGAN is an interactive point-based image manipulation tool for generative adversarial networks (GANs): users "drag" any points of an image to precisely reach target positions, controlling the pose, shape, expression, and layout of the generated objects.

DragGAN screenshot 1

Cost / License

  • Free
  • Open Source

Platforms

  • Windows




DragGAN information

  • Developed by: XingangPan
  • Licensing: Open Source and Free product
  • Supported Languages: English

AlternativeTo Category

Photos & Graphics

GitHub repository

  •  36,006 Stars
  •  3,447 Forks
  •  153 Open Issues
View on GitHub
DragGAN was added to AlternativeTo by tomyan112.

What is DragGAN?

Synthesizing visual content that meets users' needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects. Existing approaches gain controllability of generative adversarial networks (GANs) via manually annotated training data or a prior 3D model, which often lack flexibility, precision, and generality. In this work, we study a powerful yet much less explored way of controlling GANs: "dragging" any points of the image to precisely reach target points in a user-interactive manner. To achieve this, we propose DragGAN, which consists of two main components:

  1. A feature-based motion supervision that drives the handle point to move towards the target position, and
  2. A new point tracking approach that leverages the discriminative GAN features to keep localizing the position of the handle points.
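As a rough illustration of how these two components interact, here is a minimal PyTorch-style sketch, assuming a StyleGAN-like `generator(w)` that returns both an image and an intermediate feature map. The function and parameter names (`drag_step`, `bilinear_sample`, the patch `radius`, the learning rate) are illustrative assumptions, not the official DragGAN API.

```python
import torch
import torch.nn.functional as F_nn

def bilinear_sample(feat, points):
    """Sample feature vectors at float (x, y) points via bilinear interpolation.

    feat: (1, C, H, W) feature map; points: (N, 2) in pixel coordinates.
    """
    _, _, H, W = feat.shape
    grid = points.clone()
    grid[:, 0] = 2 * grid[:, 0] / (W - 1) - 1  # normalize x to [-1, 1]
    grid[:, 1] = 2 * grid[:, 1] / (H - 1) - 1  # normalize y to [-1, 1]
    out = F_nn.grid_sample(feat, grid.view(1, -1, 1, 2), align_corners=True)
    return out[0, :, :, 0].t()  # (N, C)

def drag_step(generator, w, handles, targets, f0, radius=3, lr=2e-3):
    """One drag iteration: motion supervision, then point tracking (a sketch).

    handles/targets: lists of (2,) float tensors; f0: the feature vectors
    sampled at the original handle positions before editing began.
    """
    w = w.detach().requires_grad_(True)
    _, feat = generator(w)

    # Dense grid of offsets defining the patch Omega(p) around each point.
    r = torch.arange(-radius, radius + 1.0)
    offsets = torch.stack(torch.meshgrid(r, r, indexing="xy"), -1).view(-1, 2)

    # --- Motion supervision: push features around each handle toward the target.
    loss = 0.0
    for p, t in zip(handles, targets):
        d = (t - p) / (t - p).norm().clamp(min=1e-8)  # unit step direction
        q = p + offsets
        # Detach the features at q so gradients flow only through the shifted
        # positions, pulling the content at q toward q + d.
        loss = loss + (bilinear_sample(feat, q + d)
                       - bilinear_sample(feat, q).detach()).abs().mean()
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad  # gradient step on the latent code

    # --- Point tracking: relocate each handle by nearest-neighbor search in
    # feature space within a small patch around its previous position.
    with torch.no_grad():
        _, feat = generator(w)
        new_handles = []
        for p, f in zip(handles, f0):
            cand = p + offsets
            dists = (bilinear_sample(feat, cand) - f).abs().sum(dim=1)
            new_handles.append(cand[dists.argmin()])
    return w.detach(), new_handles
```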

Through DragGAN, anyone can deform an image with precise control over where pixels go, thus manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, landscapes, etc. As these manipulations are performed on the learned generative image manifold of a GAN, they tend to produce realistic outputs even for challenging scenarios such as hallucinating occluded content and deforming shapes that consistently follow the object's rigidity. Both qualitative and quantitative comparisons demonstrate the advantage of DragGAN over prior approaches in the tasks of image manipulation and point tracking. We also showcase the manipulation of real images through GAN inversion.
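That last point, editing real images via GAN inversion, first projects the photo into the generator's latent space and then applies the same drag optimization. Below is a naive sketch of the projection step under stated assumptions: real inversion pipelines typically add a perceptual loss such as LPIPS and optimize an extended w+ code, and the latent size of 512 is an assumption here, not a DragGAN specification.

```python
import torch

def invert(generator, target_img, steps=500, lr=0.01):
    """Naive GAN inversion: fit a latent code that reconstructs target_img."""
    w = torch.randn(1, 512, requires_grad=True)  # assumed latent size
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img, _ = generator(w)                    # generator returns (image, features)
        loss = (img - target_img).abs().mean()   # pixel L1 reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()  # the recovered code can then be edited with drag_step
```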