Meta unveils SAM 2, a new AI model with real-time object tracking in images and videos

Meta has unveiled Segment Anything Model 2 (SAM 2), the successor to its acclaimed Segment Anything Model (SAM). SAM 2 is a unified model capable of real-time promptable object segmentation in both images and videos. It stands out as the first model to identify which pixels belong to a target object across frames in a video, even for previously unseen objects and visual domains.

The model's versatility opens up numerous applications, from creating video effects to segmenting moving cells in scientific research. Meta has released SAM 2's code and model weights under the Apache 2.0 license, along with the SA-V dataset, which includes around 51,000 real-world videos and over 600,000 spatio-temporal masks. SAM 2's outputs can also enhance generative video models, offering new possibilities in video effects and computer vision systems.

by Paul

Meta SAM 2 is the pioneering unified model for segmenting objects across both images and videos. It allows users to select an object using a click, box, or mask as input on any image or video frame. This versatile tool streamlines object segmentation tasks, catering to diverse applications in image and video processing.
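
As a rough illustration of the click-style prompting described above, the sketch below uses the image predictor from Meta's released sam2 code. The checkpoint path, config name, image file, and click coordinates are assumptions for illustration, not values from the article; the call pattern follows the usage example in the public repository.

```python
# Minimal sketch of promptable image segmentation with SAM 2.
# Paths, file names, and coordinates below are illustrative assumptions.
import numpy as np
import torch
from PIL import Image

from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed local paths to a downloaded checkpoint and its config file.
CHECKPOINT = "./checkpoints/sam2_hiera_large.pt"
MODEL_CFG = "sam2_hiera_l.yaml"

predictor = SAM2ImagePredictor(build_sam2(MODEL_CFG, CHECKPOINT))

# Load an example image as an RGB array.
image = np.array(Image.open("example.jpg").convert("RGB"))

with torch.inference_mode():
    predictor.set_image(image)
    # A single positive click (x, y) on the target object.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),   # 1 = foreground, 0 = background
        multimask_output=True,        # return several candidate masks
    )

# Keep the highest-scoring candidate mask.
best_mask = masks[int(np.argmax(scores))]
print(best_mask.shape)  # binary mask with the same H x W as the input image
```

For video, the released code also exposes a separate video predictor that propagates the same kind of click, box, or mask prompt across frames; the image predictor shown here is simply the shortest entry point for trying the model.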
