Hi from the Segment Anything team! Today we’re releasing Segment Anything Model 2! It's the first unified model for real-time promptable object segmentation in images and videos! We're releasing the code, models, dataset, research paper and a demo! We're excited to see what everyone builds! https://ai.meta.com/blog/segment-anything-2/
The web demo is actually pretty neat: https://sam2.metademolab.com/demo
I selected each shoe as an individual object, and the model was able to segment them even as they overlapped.
I guess the demo simply doesn't work unless you accept cookies?
Are there people who don’t accept cookies?
Don’t most websites require you to accept cookies?
In many jurisdictions requiring blanket acceptance of cookies to access the whole site is illegal, eg https://ico.org.uk/for-organisations/direct-marketing-and-pr... . Sites have to offer informed consent for nonessential cookies - but equally don't have to ask if the only cookies used are essential. So a popup saying 'Accept cookies?' with no other information doesn't cut it.
lol
You don't need consent for functional cookies that are necessary for the website to work. Anything you are accepting or declining in a cookie popup shouldn't affect the user experience in any major way.
I know a lot of people who reflexively reject all cookies, and the internet indeed does keep working for them.
If someone gives me the choice I don’t.
I reject cookies on the regular. Generally do not see any downsides for the things I browse.
Always refuse them, close to zero problems.
I can’t think of a technical reason a website without auth needs cookies to function.
It is giving me "Access Denied".
Might have issues if you're from Texas or Illinois due to their local laws.
What is the Illinois law?
Edit: Found lower in thread: biometric privacy laws
"The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."
Same :(
Just a guess, maybe it's the VideoFrame API? It was the only video-related feature I could find that Chrome and Safari have and FF doesn't.
It's super fun! I used it on a video of my new cactus tweezers: https://simonwillison.net/2024/Jul/29/sam-2/
I tried it on the default video (white soccer ball), and it seems to really struggle with the trees in the background; maybe you could benefit from more such examples.
Try tracking the table tennis bat
This research demo is not open to residents of, or those accessing the demo from, the States of Illinois or Texas.
Are there laws stricter than in California or EU in those places?
Hi from Germany. In case you were wondering, we regulated ourselves to the point where I can't even see the demo of SAM2 until some other service than Meta deploys it.
Does anyone know if this already happened?
It’s more like “Meta is restricting European access to models even though they don’t have to, because they believe it’s an effective lobbying technique as they try to get EU regulations written to their preference.”
The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.
These free models and apps are bargaining chips for Meta against the EU. Once the regulatory situation settles, they’ll do what they always do and adapt to reach the largest possible global audience.
> The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.
No technical reason, but legal reasons. IIRC it was about cross-account data sharing from Instagram to Threads, which is a lot more dicey legally in the EU than in NA.
It’s not like Meta doesn’t know how it works. They ship many apps that share accounts, most prominently FB + Messenger.
They’ve also had separate apps in the past that shared an Instagram account, like IGTV (2018 - 2022).
The Threads delay was primarily a lobbying ploy.
No, it really was a legal privacy thing. I worked in privacy at Meta at that time. Everybody was eager to ship it everywhere, but it wasn't worth the wrath of the EU to launch without a clear data separation between IG and threads.
> Meta is restricting European access to models even though they don’t have to
This video segmentation model could be used by self-driving cars to detect pedestrians, or in road traffic management systems to detect vehicles, either of which would make it a Chapter III High-Risk AI System.
And if we instead say it's not specific to those high-risk applications, it is instead a general purpose model - wouldn't that make it a Chapter V General Purpose AI Model?
Obviously you and I know the "general purpose AI models" chapter was drafted with LLMs (and their successors) in mind, rather than image segmentation models - but it's the letter of the law, not the intent, that counts.
Not saying you're wrong, but in this instance it might be a regulation specific to Germany since the site works just fine from the Netherlands.
Sounds like big tech's strategy to make you protest against regulating them is working brilliantly.
Which German regulation prevents this? Is it biometric related?
It seems that https://mullvad.net is a necessary part of my Internet toolkit these days, for many reasons.
Looking at it right now from Denmark. You must have some other problem.
Has anyone built anything cool with the original SAM? What did you build?
We are using it to segment different pieces of an industrial facility (pipes, valves, etc.) before classification.
Are you working with image data or do you have laser scans? If laser scans, how are you extending SAM to work with that format?
One thing it's enabled is automated annotation for segmentation, even on out-of-distribution examples. E.g., in the first 7 months of SAM, users on Roboflow used SAM-powered labeling to label over 13 million images, saving ~21 years[0] of labeling time. That doesn't include labeling from self-hosting autodistill[1] for automated annotation either.
[0] based on comparing avg labeling session time on individual polygon creation vs SAM-powered polygon examples
[1] https://github.com/autodistill/autodistill
Grounded SAM[1] is extremely useful for segmenting novel classes. The model is larger and not as accurate as specialized models (e.g. any YOLO segmenter), but it's extremely useful for prototyping ideas in ComfyUI. Very excited to try SAM2.
[1] - https://github.com/IDEA-Research/Grounded-Segment-Anything
I used it for segmentation for this home climbing/spray wall project: https://freeclimbs.org/wall/demo/edit-set
It does detection on the backend and then feeds those bounding boxes into SAM running in the browser (roughly as sketched below). This is a little slow on the first pass but allows the user to adjust the bboxes and get new segmentations in nearly real time, without putting a ton of load on the server. Saved me having to label a bunch of holds with precise masks/polygons (I labeled 10k for the detection model and that was quite enough). I might try using SAM's output to train a smaller model in the future, haven't gotten around to it.
(Site is early in development and not ready for actual users, but feel free to mess around.)
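For the curious, the box-prompt step is roughly this with the original SAM's Python API (a sketch; in the browser you'd typically use the exported ONNX mask decoder instead, and the filename and coordinates here are just placeholders):

    import numpy as np
    from PIL import Image
    from segment_anything import sam_model_registry, SamPredictor

    image = np.array(Image.open("hold_photo.jpg").convert("RGB"))  # HxWx3 uint8 RGB

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # embed the image once; each box prompt is then cheap

    # one detector bbox in (x0, y0, x1, y1) pixel coordinates
    masks, scores, _ = predictor.predict(
        box=np.array([120, 80, 260, 200]),
        multimask_output=False,
    )

Adjusting a bbox just means calling predict again with the new box; the expensive set_image step doesn't need to be repeated.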
As mentioned in another comment I use it all the time for zero-shot segmentation to do quick image collage type work (former FB-folks take their memes very seriously). It’s crazy good at doing plausible separations on parts of an image with no difference at the pixel level.
Someone who knows Creative Suite can comment on what Photoshop can do on this these days (one imagines it’s something), but the SAM stuff is so fast it can run in low-spec settings.
We use SAM to segment GUI elements in https://github.com/OpenAdaptAI/OpenAdapt
Really cool. Doesn't really work for juggling unfortunately, https://sam2.metademolab.com/shared/fa993f12-b9ce-4f19-bb75-...
It looks like it’s working to me. Segmentation isn’t supposed to be used for tracking alone. If you add tracking on top, the uncertainty in the estimated mask for the white ball (which is sometimes getting confused with the wall) would be accounted for and you’d be able to track it well.
The blog post (https://ai.meta.com/blog/segment-anything-2/) mentions tracking as a use case. Similar objects are a known challenge, and they mention it in the Limitations section. In that video I only used one frame, but in some other tests, even when I prompted in several frames as recommended, it still didn't really work.
Yeah, it's a reasonable expectation since the blog highlights it. Just figured it's worth calling out that SOTA trackers are able to deal with object disappearance well enough that, when used with this, they would handle such cases. I'd venture to say that most people doing any kind of tracking aren't relying on their segmentation process alone.
Reference?
I’m not sure what exactly you are looking for a reference to, but segmentation as a preprocessing step for tracking has been one of the most typical workflows, if not the primary one, for decades.
I bet it would do a lot better if it had more frames per second (or slow-mo).
Cool! Seems this is CUDA only?
Can run on CPU (slower) or AMD GPUs.
What about Mac/Metal?
This is what I was getting at; I tried on my MBP and no luck. Might be just an installer issue, but I wanted confirmation from someone with more know-how before diving in.
I got SAM 1 to work with the MPS device on my MacBook Pro M1; don’t know if it works with this one too.
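Roughly, all it took was picking the device before loading the checkpoint (a sketch; the checkpoint filename is an example, and I haven't tried SAM 2 this way):

    import torch
    from segment_anything import sam_model_registry, SamPredictor

    # prefer CUDA, then Apple's MPS backend, then plain CPU
    if torch.cuda.is_available():
        device = "cuda"
    elif torch.backends.mps.is_available():
        device = "mps"
    else:
        device = "cpu"

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").to(device)
    predictor = SamPredictor(sam)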
I covered SAM 1 a year ago (https://news.ycombinator.com/item?id=35558522). Notes from a quick read of the SAM 2 paper (https://ai.meta.com/research/publications/sam-2-segment-anyt...):
1. SAM 2 was trained on 256 A100 GPUs for 108 hours (SAM 1 was 68 hrs on the same cluster). Taking the upper-end $2/hr A100 cost off gpulist, that's 256 × 108 × $2 ≈ $55k to train - surprisingly cheap for adding video understanding?
2. new dataset: the new SA-V dataset is "only" 50k videos, with careful attention given to scene/object/geographical diversity incl that of annotators. I wonder if LAION or Datacomp (AFAICT the only other real players in the open image data space) can reach this standard..
3. bootstrapped annotation: similar to SAM 1, a 3-phase approach where 16k initial annotations across 1.4k videos were then expanded to 63k + 197k more with SAM 1 + 2 assistance, with annotation time accelerating dramatically (89% faster than with SAM 1 only) by the end
4. memory attention: SAM 2 is a transformer with memory across frames! Special "object pointer" tokens are stored in a "memory bank" FIFO queue of recent and prompted frames (toy sketch below). Has this been explored in language models? whoa?
(written up in https://x.com/swyx/status/1818074658299855262)
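To make point 4 concrete, here is the memory-bank idea in toy form (this is not SAM 2's actual code, just an illustration of the FIFO behaviour the paper describes; the numbers are made up):

    from collections import deque

    N_RECENT = 6                      # how many recent frames to remember (made-up value)
    recent = deque(maxlen=N_RECENT)   # FIFO: the oldest frame memory drops out automatically
    prompted = []                     # frames the user actually clicked on are kept around

    def remember(frame_memory, was_prompted=False):
        (prompted if was_prompted else recent).append(frame_memory)

    def memory_bank():
        # the current frame cross-attends to all of these stored memories
        return prompted + list(recent)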
I might be in the minority, but I am not that surprised by the results or the not-so-significant GPU hours. I've been doing video segment tracking for a while now, using SAM for mask generation and some of the robust academic video-object segmentation models (see CUTIE: https://hkchengrex.com/Cutie/, presented at CVPR this year) for tracking the mask.
I need to read the SAM 2 paper, but point 4 seems a lot like what Rex has in CUTIE. CUTIE can consistently track segments across video frames even if they get occluded or go out of frame for a while.
Of course Facebook has had a video tracking ML model for a year or so - Co-tracker [1] - just tracking pixels rather than segments.
Seems like there's functional overlap between segmentation models and the autofocus algorithms developed by Canon and Sony for their high-end cameras.
The Canon R1, for example, will not only continually track a particular object even if partially occluded but will also pre-focus on where it predicts the object will be when it emerges from being totally hidden. It can also be programmed by the user to focus on a particular face to the exclusion of all else.
A colleague of mine has written up a quick explainer on the key features (https://encord.com/blog/segment-anything-model-2-sam-2/). The memory attention module for keeping track of objects throughout a video is very clever - one of the trickiest problems to solve, alongside occlusion. We've spent so much time trying to fix these issues in our CV projects, now it looks like Meta has done the work for us :-)
Would love to use it for my startup, but I believe it has to be self-hosted on a server with a GPU? Or is there an easy-to-use API?
You can use the previous SAM v1, e.g., in here:
You'll probably just have to wait a few weeks for SAM v2 to be available. Hugging Face might also have some offering.
I ran it with 3040x3040px images on my MacBook M1 Pro in about 9 seconds + 200ms or so for the masking.
It's OSS, so there isn't an "official" hosted version, but someone probably is gonna offer it soon.
This research demo is not open to residents of, or those accessing the demo from, the States of Illinois or Texas.
Alright, I'll bite, why not?
It's because their biometric privacy laws are written in such a general way that detecting the presence of a face is considered illegal.
I'm kinda on board with this.
I know Illinois and Texas have biometric privacy laws; I would guess it's related to that. (I am in Illinois and cannot access the demo, so I don't know what if anything it's doing which would be in violation.)
> We extend SAM to video by considering images as a video with a single frame.
I can't make sense of this sentence. Is there some mistake?
Everything is a video. An image is the special case of length 1 frame
Here's a sentence I would understand: > We extend SAM to video and retrofit support for images by considering images as a video with a single frame.
As it is written, I don't see the link between "We extend SAM to video" and "by considering images as a video with a single frame".
I read it like this:
- "We extend SAM to video", because is was previously only for images and it's capabilities are being extended to videos
- "by considering images as a video with a single frame", explaining how they support and build upon the previous image functionality
The main assumptions here are that images -> videos is a level up as opposed to being a different thing entirely, and the previous level is always supported.
"retrofit" implies that the ability to handle images was bolted on afterwards. "extend to video" implies this is a natural continuation of the image functionality, so the next part of the sentence is explaining why there is a natural continuation.
I think the first SAM is the open source model I've gotten the most mileage out of. Very excited to play around with SAM2!
What have you found it useful for?
Annotating datasets so I can train a smaller more specialized production model.
> ...the first SAM is the open source model I've gotten the most mileage out of
How's OpenMMLab's MMSegmentation, if you've tried it? https://github.com/open-mmlab/mmsegmentation
It seems like Amazon is putting its weight behind it (from the papers they've published): https://github.com/amazon-science/bigdetection
Wonder if I can use this to count my winter wood stock. Before resuscitating my mutilated Python environment, could someone please run this on a photo of stacked uneven bluegum logs to see if it can segment the pieces? OpenCV edge detection does not cut it:
Heads up, that link reveals your real name. Maybe edit it out if you care.
thx for the heads up :) full name is in my HN profile. Good to know iCloud reveals that.
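If you do resurrect the environment, the zero-prompt route with the original SAM is the automatic mask generator; a rough sketch (the checkpoint filename is an example, and a pile of logs may well over- or under-segment):

    import numpy as np
    from PIL import Image
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    image = np.array(Image.open("woodpile.jpg").convert("RGB"))

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    mask_generator = SamAutomaticMaskGenerator(sam)

    masks = mask_generator.generate(image)  # list of dicts with "segmentation", "area", ...
    # crude log count: keep masks above an area threshold to drop specks
    print(sum(1 for m in masks if m["area"] > 500))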
Impressive, wondering if this is now fast enough out of the box to run on an iPhone. The previous SAM had community projects such as FastSAM, MobileSAM, and EfficientSAM that tried to speed it up. Wish the README, when reporting FPS, said what hardware it was tested on.
I’d guess the testing hardware is the same as the training hardware, so A100. If it was on a mobile device they would have definitely said that.
Anyone have any home project ideas (or past work) to apply this to / inspire others?
I was initially thinking the obvious case would be some sort of system for monitoring your plant health. It could check for shrinkage / growth, colour change etc and build some sort of monitoring tool / automated watering system off that.
I used the original SAM (alongside Grounding DINO) to create an ever growing database of all the individual objects I see as I go about my daily life. It automatically parses all the photos I take on my Meta Raybans and my phone along with all my laptop screenshots. I made it for an artwork that's exhibiting in Australia, and it will likely form the basis of many artworks to come.
I haven't put it up on my website yet (and proper documentation is still coming) so unfortunately the best I can do is show you an Instagram link:
https://www.instagram.com/p/C98t1hlzDLx/?igsh=MWxuOHlsY2lvdT...
Not exactly functional, but fun. Artwork aside, it's quite interesting to see your life broken into all its little bits. Provides a new perspective (apparently, there are a lot more teacups in my life than I notice).
Will it handle tracking out of frame?
i.e., if I stand in the center of my room and take a video while slowly spinning around over 5 seconds, then reverse the spin for another 5 seconds.
Will it see the same couch? Or will it see two couches?
I think it depends on how long it is out of frame for; there is a cache whose size you might be able to tweak.
How do these techniques handle transparent, translucent, or mesh/gauze/hair-like objects that interact with the background?
Splashing water or orange juice, spraying snow from skis, rain and snowfall, foliage, fences and meshes, veils, etc.
State of the art still looks pretty bad at this IMO.
Roughly how many fps could you get running this on a raspberry pi?
It's amazing!
Trying to run https://sam2.metademolab.com/demo and...
Quote: "Sorry Firefox users! The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."
Wtf is this shit? Seriously!
Very excited to give it a try, SAM has had great performance in Biology applications.
Any use of this category of tools in OCR?
This is a super-useful model. Thanks, guys.
Anyone managed to get this to work on Google Colab? I am having trouble with the imports and not sure what is going on.
How many days will it take to see this in military use killing people…
I would like to train a model to classify frames in a video (and identify "best" frame for something I want to locate, according to my training data).
Is SAM-2 useful to use as a base model to finetune a classifier layer on? Or are there better options today?
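One simple baseline with the original SAM would be to freeze its image encoder and train a small head on pooled embeddings (a sketch; get_image_embedding comes from the segment-anything repo, the pooling and linear head are my own additions, and SAM 2's encoder should be usable similarly):

    import numpy as np
    import torch
    from PIL import Image
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)

    def frame_embedding(path):
        image = np.array(Image.open(path).convert("RGB"))
        predictor.set_image(image)
        emb = predictor.get_image_embedding()   # (1, 256, 64, 64) encoder features
        return emb.mean(dim=(2, 3)).squeeze(0)  # global-average-pool to a 256-d vector

    head = torch.nn.Linear(256, 2)  # e.g. "best frame" vs. "not best"
    logits = head(frame_embedding("frame_0001.jpg"))

That said, a general-purpose image backbone may do just as well here; SAM's embeddings are tuned for segmentation rather than frame ranking.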
This is great! Can someone point me to examples of how to bundle something like this to run offline in a browser, if that's possible at all?
Interesting how you can bully the model into accepting multiple people as one object, but it keeps trying to down-select to just one person (which you can then fix by adding another annotated frame in).
Thank you for this amazing work you are sharing.
I do have 2 questions: 1. Isn't addressing the video frame by frame expensive? 2. In the web demo, when the leg moves fast it loses track of the shoe. Doesn't the memory part throw in some heuristics to overcome this edge case?
Nice! Of particular interest to me is the slightly improved mIoU and 6x speedup on images [1] (though they say the speedup is mainly from the more efficient encoder, so multiple segmentations of the same image presumably would see less benefit?). It would also be nice to get a comparison to original SAM with bounding box inputs - I didn't see that in the paper though I may have missed it.
[1] - page 11 of https://ai.meta.com/research/publications/sam-2-segment-anyt...
Huge fan of the SAM loss function. Thanks for making this.
Somewhat related: is there much research into how these models can be tricked or possible security implications?
Does it segment and describe or recognize objects? What "pipeline" would be needed to achieve that? Thanks.
What happened to text prompts that were shown as early results in SAM1? I assume they never really got them working well?
Stupid question from a noob: what exactly is object segmentation? What does your library actually do? Does it cut clips?
Given an image, it will outline where objects are in the image.
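Concretely, with the original SAM you give it an image plus a prompt (a click or a box) and get back pixel masks for the object under that prompt. A minimal sketch using names from the segment-anything repo (the checkpoint filename and click coordinates are placeholders):

    import numpy as np
    from PIL import Image
    from segment_anything import sam_model_registry, SamPredictor

    image = np.array(Image.open("photo.jpg").convert("RGB"))

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
    predictor = SamPredictor(sam)
    predictor.set_image(image)

    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),  # one click on the object
        point_labels=np.array([1]),           # 1 = foreground, 0 = background
    )
    # masks: boolean HxW arrays outlining the clicked object

SAM 2 adds the video part: it carries that mask forward across frames rather than cutting clips.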
And it extracts segments of images where the object is in the image, as I understand it?
A segment, then, is a collection of images that follow each other in time?
So if you have a video comprised of img1, img2, img3, img4, and the object shows in img1, img2, and img4:
can you catch that as the sequence img1, img2, img3, img4, and can you also catch just the object in img1, img2, img4 but get some sort of information that there is a break between img2 and img4 (number of images in the break, etc.)?
On edit: Or am I totally off about the segment possibilities and what it means?
Or can you only catch img1 and img2 as a sequence?
I'm not in the field and what SAM does is immediately apparent when you view the home page. Did you not even give it a glance?
Yes, I did give it a glance, polite and clever HN member; it showed an object in a sequence of images extracted from video, and evidently followed the object through the sequence.
Perhaps, however, my interpretation of what happens here is way off, which is why I asked in an obviously incorrect and stupid way, as you have pointed out to me without clarifying exactly why it was incorrect and stupid.
So anyway, there is the extraction of the object I referred to, but it also seems to follow the object through a sequence of scenes?
https://github.com/facebookresearch/segment-anything-2/raw/m...
So it seems to me that they identify the object and follow it through a contiguous sequence: img1, img2, img3, img4. Is my interpretation incorrect here?
But what I am wondering is: what happens if the object is not in img3 - like perhaps two people talking, and the viewpoint shifts from the person talking to the person listening? The person talking is in img1, img2, img4. Can you get that sequence, or is the sequence just img1, img2?
It says "We extend SAM to video by considering images as a video with a single frame," which I don't know what that means. Does it mean that they concatenated all the video frames into a single image and identified the object in them? In that case their example still shows contiguous images without the object ever disappearing, so my question still pertains.
So anyway, my conclusion is that what was said when addressing me was wrong, to quote: "what SAM does is immediately apparent when you view the home page", because I (the "you" addressed) viewed the homepage and still wondered about some things. Obviously wrong things that you have identified as being wrong.
And thus my question is: if what SAM does is immediately apparent when you view the home page, can you point out where my understanding has failed?
On edit: grammar fixes for last paragraph / question.
A segment is a visually distinctive... segment of an image; segmentation is basically splitting an image into objects: https://segment-anything.com. As such, it has nothing to do with time or video.
Now SAM 2 is about video, so they seem to add object tracking (that is, attributing the same object to the same segment throughout frames).
The videos in the main article demonstrate that it can track objects in and out of frame (the one with bacteria or the one with the boy going around the tree). However, they do acknowledge this part of the algorithm can produce incorrect results sometimes (the example with the horses).
The answer to your question is img1, img2, img4, as there is no reason to believe that it can only track objects in contiguous sequence.
Classification per pixel
Code, model, data and under Apache 2.0. Impressive.
Curious how this was allowed to be more open source compared to Llama's interesting new take on "open source". Are other projects restricted in some form due to technical/legal issues and the desire is to be more like this project? Or was there an initiative to break the mold this time round?
Yeah, but there's a CLA for some reason. I'm wary they will switch to a new license down the road.
So get it today. You can't retroactively change a license on someone.
The data is Creative Commons.
LLMs are trained on the entire internet, so loads of copyrighted data, which Meta can’t distribute and is afraid to even reference.
Will the model ever be extended to be able to segment audio (e.g. different people talking, different instruments in a soundtrack)?
Check out Facebook's Demucs, and newer: the Ultimate Vocal Remover project on GitHub.
There are a ton of models that do stemming like this. We use them all the time. Look up MvSep on Replicate.com.
That would be really cool to try out. I hope someone is doing that.
Thank you for sharing it! Are there any plans to move the codebase to a more performant programming language?
It's all C, C++ and Fortran(?) under the hood so moving languages probably won't matter as much as you expect.
Everything in machine learning uses Python.
It doesn't matter much because all the real computation happens on the GPU. But you could take their neural network and do inference using any language you want.
Oh, nice!
The first one was excellent. Now part of my Gimp toolbox. Thanks for your work!
How did you add it to gimp?
https://github.com/Shriinivas/gimpsegany
https://github.com/crb02005/gimp-segment-anything
Awesome model - thank you! Are you guys planning to provide any guidance on fine-tuning?
I've been supporting non-computational folks (i.e. scientists) in using and fine-tuning SAM for biological applications, so I'm excited to see how SAM 2 performs and how the video aspects work for large image stacks of 3D objects.
Considering the instant flood of noisy issues/PRs on the repo and the limited fix/update support on SAM, are there plans/buy-in for support of SAM 2 in the medium term beyond quick fixes? Either way, thank you to the team for your work on this and the continued public releases!
Grounded SAM has become an essential tool in my toolbox (for others: it lets you mask any image using a text prompt, only). HUGE thank you to the team at Meta, I can't wait to try SAM2!
Huge fan of the SAM work, one of the most underrated models.
My favorite use case is that it slays for memes. Try getting a good alpha mask of Fassbender Turtleneck any other way.
Keep doing stuff like this. <3