
SAM 2: Segment Anything in Images and Videos

nravi20
26 replies
19h20m

Hi from the Segment Anything team! Today we’re releasing Segment Anything Model 2! It's the first unified model for real-time promptable object segmentation in images and videos! We're releasing the code, models, dataset, research paper and a demo! We're excited to see what everyone builds! https://ai.meta.com/blog/segment-anything-2/

vivzkestrel
6 replies
15h5m

stupid question from a noob: what exactly is object segmentation? what does your library actually do? Does it cut clips?

j7ake
4 replies
14h34m

Given an image, it will outline where objects are in the image.

bryanrasmussen
3 replies
12h14m

and extract segments of images where the object is in the image, as I understand it?

A segment then is a collection of images that follow each other in time?

So if you have a video comprised of img1, img2, img3, img4 and the object shows up in img1, img2, and img4

Can you catch that as the sequence img1, img2, img3, img4, and can you also catch just the frames with the object, img1, img2, img4, but get some sort of information that there is a break between img2 and img4 (how many frames the break lasts, etc.)?

On edit: Or am I totally off about the segment possibilities and what it means?

Or can you only catch img1 and img2 as a sequence?

nsonha
2 replies
9h30m

I'm not in the field and what SAM does is immediately apparent when you view the home page. Did you not even give it a glance?

bryanrasmussen
1 replies
8h23m

Yes I did give it a glance, polite and clever HN member; it showed an object in a sequence of images extracted from video, and evidently followed the object through the sequence.

Perhaps however my interpretation of what happens here is way off, which is why I asked in an obviously incorrect and stupid way that you have pointed out to me without clarifying exactly why it was incorrect and stupid.

So anyway there is the extraction of the object I referred to, but it also seems to follow the object through a sequence of scenes?

https://github.com/facebookresearch/segment-anything-2/raw/m...

So it seems to me that they identify the object and follow it through a contiguous sequence: img1, img2, img3, img4. Is my interpretation incorrect here?

But what I am wondering is: what happens if the object is not in img3 - like perhaps two people talking and the viewpoint shifting from the person talking to the person listening. The person talking is in img1, img2, img4. Can you get that sequence, or is the sequence just img1, img2?

It says "We extend SAM to video by considering images as a video with a single frame," which I don't understand. Does it mean that they concatenated all the video frames into a single image and identified the object in them? In that case their example still shows contiguous images without the object ever disappearing, so my question still pertains.

So anyway my conclusion is that what you said when addressing me was wrong, to quote: "what SAM does is immediately apparent when you view the home page" - because I (the "you" addressed) viewed the homepage and still wondered about some things. Obviously wrong things, which you have identified as being wrong.

And thus my question is: If what SAM does is immediately apparent when you view the home page can you point out where my understanding has failed?

On edit: grammar fixes for last paragraph / question.

nsonha
0 replies
5h25m

> A segment then is a collection of images that follow each other in time?

A segment is a visually distinctive... segment of an image; segmentation is basically splitting an image into objects: https://segment-anything.com. As such it has nothing to do with time or video.

Now SAM 2 is about video, so they seem to add object tracking (that is, attributing the same object to the same segment throughout frames).

The videos in the main article demonstrate that it can track objects in and out of frame (the one with bacteria or the one with the boy going around the tree). However, they do acknowledge this part of the algorithm can produce incorrect results sometimes (the example with the horses).

The answer to your question is img1, img2, img4, as there is no reason to believe that it can only track objects in a contiguous sequence.
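
To make that concrete, here's a toy illustration (plain numpy, nothing to do with SAM's actual output format) of tracked per-frame masks keyed by object ID, where the same ID simply reappears after a gap:

```python
import numpy as np

# Toy per-frame tracking output: frame index -> {object_id: boolean mask}.
# Object 1 is missing from frame 3 (occluded / out of view), then reappears
# in frame 4 under the same ID -- that's "tracking through disappearance".
H, W = 4, 4
mask = np.zeros((H, W), dtype=bool)
mask[1:3, 1:3] = True  # a small square "object"

frames = {
    1: {1: mask},
    2: {1: mask},
    3: {},          # object not visible in this frame
    4: {1: mask},
}

frames_with_object = [f for f, objs in frames.items() if 1 in objs]
print(frames_with_object)  # [1, 2, 4]
```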

stabbles
0 replies
12h9m

Classification per pixel

robbomacrae
4 replies
19h16m

Code, model, and data, under Apache 2.0. Impressive.

Curious how this was allowed to be more open source compared to Llama's interesting new take on "open source". Are other projects restricted in some form due to technical/legal issues and the desire is to be more like this project? Or was there an initiative to break the mold this time round?

8organicbits
1 replies
13h13m

Yeah, but there's a CLA for some reason. I'm wary they will switch to a new license down the road.

phkahler
0 replies
10h48m

So get it today. You can't retroactively change a license on someone.

swyx
0 replies
17h39m

data is creative commons

Nesco
0 replies
17h22m

LLMs are trained on the entire internet, and so on loads of copyrighted data, which Meta can't distribute and is afraid to even reference.

acacac
3 replies
19h2m

Will the model ever be extended to be able to segment audio (e.g. different people talking, different instruments in a soundtrack)?

sagz
0 replies
15h4m

Check out Facebook's DeMucs and, more recently, the Ultimate Vocal Remover project on GitHub.

mrdjtek
0 replies
18h4m

There are a ton of models that do stemming like this. We use them all the time. Look up MvSep on Replicate.com.

TheHumanist
0 replies
18h8m

That would be really cool to try out. I hope someone is doing that.

madduci
2 replies
12h32m

Thank you for sharing it! Are there any plans to move the codebase to a more performant programming language?

cinntaile
0 replies
11h10m

It's all C, C++ and Fortran(?) under the hood so moving languages probably won't matter as much as you expect.

Legend2440
0 replies
12h17m

Everything in machine learning uses Python.

It doesn't matter much because all the real computation happens on the GPU. But you could take their neural network and do inference using any language you want.
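
For the "any language" route, the usual pattern is to export the network to a portable graph and run it with a cross-language runtime (SAM 1 shipped an ONNX export script for its web demo, IIRC; I haven't checked whether SAM 2 exports as cleanly). A rough sketch with ONNX Runtime from Python, where the file name and input shape are placeholders:

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime; has C++/C#/Java bindings too

# Load a graph exported from PyTorch (e.g. via torch.onnx.export) and run it.
# "segmenter.onnx" and the input shape are placeholders -- check the exported
# model's real signature with session.get_inputs() before relying on this.
session = ort.InferenceSession("segmenter.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 1024, 1024).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```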

Yoric
2 replies
8h41m

Oh, nice!

The first one was excellent. Now part of my Gimp toolbox. Thanks for your work!

jacooper
1 replies
8h24m

How did you add it to gimp?

ulrikhansen54
0 replies
16h43m

Awesome model - thank you! Are you guys planning to provide any guidance on fine-tuning?

sea-shunned
0 replies
8h18m

I've been supporting non-computational folks (i.e. scientists) in using and finetuning SAM for biological applications, so I'm excited to see how SAM2 performs and how the video aspects work for large image stacks of 3D objects.

Considering the instant flood of noisy issues/PRs on the repo and the limited fix/update support on SAM, are there plans/buy-in for support of SAM2 in the medium term beyond quick fixes? Either way, thank you to the team for your work on this and the continued public releases!

ed
0 replies
18h1m

Grounded SAM has become an essential tool in my toolbox (for others: it lets you mask any image using only a text prompt). HUGE thank you to the team at Meta, I can't wait to try SAM2!

benreesman
0 replies
19h16m

Huge fan of the SAM work, one of the most underrated models.

My favorite use case is that it slays for memes. Try getting a good alpha mask of Fassbender Turtleneck any other way.

Keep doing stuff like this. <3

minimaxir
17 replies
18h22m

The web demo is actually pretty neat: https://sam2.metademolab.com/demo

I selected each shoe as individual objects and the model was able to segment them even as they overlapped.

rkagerer
7 replies
14h26m

I guess the demo simply doesn't work unless you accept cookies?

sashank_1509
6 replies
12h4m

Are there people who don’t accept cookies?

Don’t most websites require you to accept cookies?

bazzargh
1 replies
9h52m

In many jurisdictions requiring blanket acceptance of cookies to access the whole site is illegal, eg https://ico.org.uk/for-organisations/direct-marketing-and-pr... . Sites have to offer informed consent for nonessential cookies - but equally don't have to ask if the only cookies used are essential. So a popup saying 'Accept cookies?' with no other information doesn't cut it.

afh1
0 replies
7h22m

lol

wongarsu
0 replies
5h48m

You don't need consent for functional cookies that are necessary for the website to work. Anything you are accepting or declining in a cookie popup shouldn't affect the user experience in any major way.

I know a lot of people who reflexively reject all cookies, and the internet indeed does keep working for them.

shreddit
0 replies
11h35m

If someone gives me the choice I don't.

brk
0 replies
4h22m

I reject cookies on the regular. Generally do not see any downsides for the things I browse.

SanderNL
0 replies
10h43m

Always refuse them, close to zero problems.

I can’t think of a technical reason a website without auth needs cookies to function.

ks2048
2 replies
16h56m

It is giving me "Access Denied".

rawrawrawrr
1 replies
14h55m

Might have issues if you're from Texas or Illinois due to their local laws.

swamp40
0 replies
1h39m

What is the Illinois law?

Edit: Found lower in thread: biometric privacy laws

vitorgrs
1 replies
10h43m

"The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."

barnabask
0 replies
5h43m

Same :(

Just a guess, maybe it's the VideoFrame API? It was the only video-related feature I could find that Chrome and Safari have and FF doesn't.

https://caniuse.com/mdn-api_videoframe

rvnx
0 replies
17h28m

I tried on the default video (white soccer ball), and it seems to really struggle with the trees in the background; maybe you could benefit from more such examples.

dhon_
0 replies
17h51m

Try tracking the table tennis bat

Lucasoato
0 replies
1h23m

> This research demo is not open to residents of, or those accessing the demo from, the States of Illinois or Texas.

Are there laws stricter than in California or the EU in those places?

gpjanik
9 replies
10h9m

Hi from Germany. In case you were wondering, we regulated ourselves to the point where I can't even see the demo of SAM2 until some other service than Meta deploys it.

Does anyone know if this already happened?

pavlov
5 replies
9h1m

It’s more like “Meta is restricting European access to models even though they don’t have to, because they believe it’s an effective lobbying technique as they try to get EU regulations written to their preference.”

The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.

These free models and apps are bargaining chips for Meta against the EU. Once the regulatory situation settles, they’ll do what they always do and adapt to reach the largest possible global audience.

phyrex
2 replies
4h30m

> The same thing happened with the Threads app which was withheld from European users last year for no actual technical reason. Now it’s been released and nothing changed in between.

No technical reason, but legal reasons. IIRC it was about cross-account data sharing from Instagram to Threads, which is a lot more dicey legally in the EU than in NA.

pavlov
1 replies
4h3m

It’s not like Meta doesn’t know how it works. They ship many apps that share accounts like FB + Messenger most prominently.

They’ve also had separate apps in the past that shared an Instagram account, like IGTV (2018 - 2022).

The Threads delay was primarily a lobbying ploy.

phyrex
0 replies
3h15m

No, it really was a legal privacy thing. I worked in privacy at Meta at that time. Everybody was eager to ship it everywhere, but it wasn't worth the wrath of the EU to launch without a clear data separation between IG and threads.

michaelt
0 replies
5h33m

> Meta is restricting European access to models even though they don’t have to

This video segmentation model could be used by self-driving cars to detect pedestrians, or in road traffic management systems to detect vehicles, either of which would make it a Chapter III High-Risk AI System.

And if we instead say it's not specific to those high-risk applications, it is instead a general purpose model - wouldn't that make it a Chapter V General Purpose AI Model?

Obviously you and I know the "general purpose AI models" chapter was drafted with LLMs (and their successors) in mind, rather than image segmentation models - but it's the letter of the law, not the intent, that counts.

bakje
0 replies
6h47m

Not saying you're wrong, but in this instance it might be a regulation specific to Germany since the site works just fine from the Netherlands.

maeil
0 replies
8h20m

Sounds like big tech's strategy to make you protest against regulating them is working brilliantly.

consumer451
0 replies
9h25m

Which German regulation prevents this? Is it biometric related?

It seems that https://mullvad.net is a necessary part of my Internet toolkit these days, for many reasons.

analyzethis
0 replies
6h54m

Looking at it right now from Denmark. You must have some other problem.

simonw
7 replies
19h11m

Has anyone built anything cool with the original SAM? What did you build?

totalview
1 replies
18h30m

We are using it to segment different pieces of an industrial facility (pipes, valves, etc.) before classification.

sobellian
0 replies
17h3m

Are you working with image data or do you have laser scans? If laser scans, how are you extending SAM to work with that format?

rocauc
0 replies
18h15m

One thing it's enabled is automated annotation for segmentation, even on out-of-distribution examples. E.g. in the first 7 months of SAM, users on Roboflow used SAM-powered labeling to label over 13 million images, saving ~21 years[0] of labeling time. That doesn't include labeling from self-hosting autodistill[1] for automated annotation either.

[0] based on comparing avg labeling session time on individual polygon creation vs SAM-powered polygon examples

[1] https://github.com/autodistill/autodistill

ed
0 replies
17h55m

Grounded SAM[1] is extremely useful for segmenting novel classes. The model is larger and not as accurate as specialized models (e.g. any YOLO segmenter), but it's extremely useful for prototyping ideas in ComfyUI. Very excited to try SAM2.

[1] - https://github.com/IDEA-Research/Grounded-Segment-Anything

daemonologist
0 replies
16h19m

I used it for segmentation for this home climbing/spray wall project: https://freeclimbs.org/wall/demo/edit-set

It does detection on the backend and then feeds those bounding boxes into SAM running in the browser. This is a little slow on the first pass but allows the user to adjust the bboxes and get new segmentations in nearly real time, without putting a ton of load on the server. Saved me having to label a bunch of holds with precise masks/polygons (I labeled 10k for the detection model and that was quite enough). I might try using SAM's output to train a smaller model in the future, but haven't gotten around to it.

(Site is early in development and not ready for actual users, but feel free to mess around.)
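
For anyone curious, the bbox-to-mask step is just SAM's box prompt; a rough sketch with SAM 1's published Python API (the checkpoint is the standard ViT-B download, and SAM 2's interface may differ):

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry  # pip install segment-anything

# Load SAM 1 and prompt it with a detector's bounding box (XYXY pixel coords).
# Requires the ViT-B checkpoint file downloaded from the SAM repo.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real RGB image
predictor.set_image(image)

box = np.array([100, 120, 300, 360])  # x0, y0, x1, y1 from the detector
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # (1, 480, 640) boolean mask + predicted IoU
```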

benreesman
0 replies
19h7m

As mentioned in another comment I use it all the time for zero-shot segmentation to do quick image collage type work (former FB-folks take their memes very seriously). It’s crazy good at doing plausible separations on parts of an image with no difference at the pixel level.

Someone who knows Creative Suite can comment on what Photoshop can do on this front these days (one imagines it's something), but the SAM stuff is so fast it can run in low-spec settings.

kajecounterhack
4 replies
12h21m

It looks like it’s working to me. Segmentation isn’t supposed to be used for tracking alone. If you add tracking on top, the uncertainty in the estimated mask for the white ball (which is sometimes getting confused with the wall) would be accounted for and you’d be able to track it well.
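
To spell out what "tracking on top" can mean in practice: at its simplest it's associating per-frame masks into tracks, e.g. greedily by IoU. A toy sketch, nothing to do with SAM's internals or any particular SOTA tracker:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def associate(prev_tracks: dict, new_masks: list, thresh: float = 0.5) -> dict:
    """Greedily assign each new mask to the best-overlapping existing track,
    or start a new track ID if nothing overlaps above the threshold."""
    tracks, next_id = {}, max(prev_tracks, default=0) + 1
    for mask in new_masks:
        best_id, best_iou = None, thresh
        for tid, old_mask in prev_tracks.items():
            iou = mask_iou(mask, old_mask)
            if iou > best_iou:
                best_id, best_iou = tid, iou
        if best_id is None:                 # no match: start a new track
            best_id, next_id = next_id, next_id + 1
        tracks[best_id] = mask
    return tracks
```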

phillypham
3 replies
12h11m

The blog post (https://ai.meta.com/blog/segment-anything-2/) mentions tracking as a use case. Similar objects are known to be challenging, and they mention it in the Limitations section. In that video I only used one frame, but in some other tests, even when I prompted in several frames as recommended, it still didn't really work.

kajecounterhack
2 replies
9h33m

Yeah, it's a reasonable expectation since the blog highlights it. I just figure it's worth calling out that SOTA trackers are able to deal with object disappearance well enough that, when used with this, it would handle things. I'd venture to say that most people doing any kind of tracking aren't relying on their segmentation process alone.

richard___
1 replies
8h1m

Reference?

ska
0 replies
3h14m

I’m not sure what you are looking for a reference to exactly, but segmentation as a preprocessing step for tracking has been one of the most typical workflows, if not the primary one, for decades.

mattigames
0 replies
13h20m

I bet it would do a lot better if it had more frames per second (or slow-mo).

vanjajaja1
4 replies
17h26m

Cool! Seems this is CUDA-only?

rawrawrawrr
3 replies
14h54m

Can run on CPU (slower) or AMD GPUs.

mnk47
2 replies
13h36m

What about Mac/Metal?

vanjajaja1
1 replies
6h36m

This is what I was getting at; I tried on my MBP and had no luck. Might just be an installer issue, but I wanted confirmation from someone with more know-how before diving in.

leodriesch
0 replies
5h57m

I got SAM 1 to work with the MPS device on my MacBook Pro M1; don’t know if it works with this one too.
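
For anyone else trying, the generic PyTorch device-selection pattern is the same either way (whether every SAM 2 op has an MPS kernel is something I haven't verified):

```python
import torch

# Pick the best available backend: CUDA (NVIDIA), MPS (Apple Silicon), else CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Setting PYTORCH_ENABLE_MPS_FALLBACK=1 in the environment lets any op without
# an MPS kernel fall back to CPU instead of erroring out on Apple Silicon.
model = torch.nn.Linear(4, 2).to(device)  # stand-in for the actual SAM model
print(device, model(torch.randn(1, 4, device=device)))
```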

swyx
4 replies
17h45m

I covered SAM 1 a year ago (https://news.ycombinator.com/item?id=35558522). Notes from a quick read of the SAM 2 paper: https://ai.meta.com/research/publications/sam-2-segment-anyt...

1. SAM 2 was trained on 256 A100 GPUs for 108 hours (SAM 1 was 68 hrs on the same cluster). Taking the upper-end ~$2/hr A100 price off gpulist (256 × 108 × $2 ≈ $55k) means SAM 2 cost ~$50k to train - surprisingly cheap for adding video understanding?

2. new dataset: the new SA-V dataset is "only" 50k videos, with careful attention given to scene/object/geographical diversity incl that of annotators. I wonder if LAION or Datacomp (AFAICT the only other real players in the open image data space) can reach this standard..

3. bootstrapped annotation: similar to SAM 1, a 3-phase approach where 16k initial annotations across 1.4k videos were then expanded to 63k+197k more with SAM 1+2 assistance, with annotation time accelerating dramatically (89% faster than SAM 1 only) by the end

4. memory attention: SAM 2 is a transformer with memory across frames! Special "object pointer" tokens are stored in a "memory bank" FIFO queue of recent and prompted frames (toy sketch below). Has this been explored in language models? whoa?

(written up in https://x.com/swyx/status/1818074658299855262)
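
Rough mental model of point 4, as a toy FIFO memory bank - purely illustrative, not SAM 2's actual code:

```python
from collections import deque
import torch

class MemoryBank:
    """Keep memory tokens + an "object pointer" token for the N most recent
    frames; the current frame's queries cross-attend to everything stored."""
    def __init__(self, max_recent: int = 6):
        self.entries = deque(maxlen=max_recent)  # FIFO: oldest frame drops out

    def add(self, memory_tokens: torch.Tensor, obj_ptr: torch.Tensor):
        # memory_tokens: (num_tokens, dim), obj_ptr: (dim,)
        self.entries.append(torch.cat([memory_tokens, obj_ptr[None]], dim=0))

    def context(self) -> torch.Tensor:
        # (total_tokens, dim) of stored memories for cross-attention
        return torch.cat(list(self.entries), dim=0)

bank = MemoryBank(max_recent=6)
for _ in range(10):                  # after 10 frames only the last 6 remain
    bank.add(torch.randn(16, 256), torch.randn(256))
print(bank.context().shape)          # torch.Size([102, 256]) = 6 * (16 + 1)
```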

alsodumb
2 replies
17h2m

I might be in the minority, but I am not that surprised by the results or the not-so-significant GPU hours. I've been doing video segment tracking for a while now, using SAM for mask generation and some of the robust academic video-object segmentation models for tracking the mask (see CUTIE: https://hkchengrex.com/Cutie/, presented at CVPR this year).

I need to read the SAM2 paper, but point 4 seems a lot like what Rex has in CUTIE. CUTIE can consistently track segments across video frames even if they get occluded or go out of frame for a while.

michaelt
0 replies
10h57m

Of course Facebook has had a video tracking ML model for a year or so - Co-tracker [1] - just tracking pixels rather than segments.

[1] https://co-tracker.github.io/

dingaling
0 replies
12h24m

Seems like there's functional overlap between segmentation models and the autofocus algorithms developed by Canon and Sony for their high-end cameras.

The Canon R1, for example, will not only continually track a particular object even if partially occluded, but will also pre-focus on where it predicts the object will be when it emerges from being totally hidden. It can also be programmed by the user to focus on a particular face to the exclusion of all else.

ulrikhansen54
0 replies
17h12m

A colleague of mine has written up a quick explainer on the key features (https://encord.com/blog/segment-anything-model-2-sam-2/). The memory attention module for keeping track of objects throughout a video is very clever - one of the trickiest problems to solve, alongside occlusion. We've spent so much time trying to fix these issues in our CV projects, now it looks like Meta has done the work for us :-)

zengineer
3 replies
11h4m

Would love to use it for my startup, but I believe it has to be self-hosted on a server with a GPU? Or is there an easy-to-use API?

pzo
0 replies
10h59m

The previous SAM v1 you can use e.g. here:

https://fal.ai/models

https://replicate.com/

You probably just have to wait a few weeks for SAM v2 to be available. Hugging Face might also have some offering.
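
Once someone publishes it there, usage is typically a few lines with the Replicate Python client. The model slug below is a placeholder, not a real ID:

```python
import replicate  # pip install replicate; needs REPLICATE_API_TOKEN set

# "some-owner/sam-2" is hypothetical -- search replicate.com for whichever
# SAM 2 (or SAM 1) deployment actually exists before running this.
output = replicate.run(
    "some-owner/sam-2",
    input={"image": open("photo.jpg", "rb")},
)
print(output)
```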

leodriesch
0 replies
6h20m

I ran it with 3040x3040px images on my MacBook M1 Pro in about 9 seconds + 200ms or so for the masking.

Gisbitus
0 replies
11h0m

It's OSS, so there isn't an "official" hosted version, but someone probably is gonna offer it soon.

gpm
3 replies
16h34m

> This research demo is not open to residents of, or those accessing the demo from, the States of Illinois or Texas.

Alright, I'll bite, why not?

ipsum2
1 replies
14h17m

It's because their biometric privacy laws are written in such a general way that detecting the presence of a face is considered illegal.

boppo1
0 replies
7h57m

I'm kinda on board with this.

daemonologist
0 replies
16h25m

I know Illinois and Texas have biometric privacy laws; I would guess it's related to that. (I am in Illinois and cannot access the demo, so I don't know what if anything it's doing which would be in violation.)

glandium
3 replies
12h53m

> We extend SAM to video by considering images as a video with a single frame.

I can't make sense of this sentence. Is there some mistake?

RobinL
2 replies
12h52m

Everything is a video. An image is the special case of a video with 1 frame.

glandium
1 replies
12h30m

Here's a sentence I would understand:

> We extend SAM to video and retrofit support for images by considering images as a video with a single frame.

As it is written, I don't see the link between "We extend SAM to video" and "by considering images as a video with a single frame".

ZephyrBlu
0 replies
10h46m

I read it like this:

- "We extend SAM to video", because is was previously only for images and it's capabilities are being extended to videos

- "by considering images as a video with a single frame", explaining how they support and build upon the previous image functionality

The main assumptions here are that images -> videos is a level up as opposed to being a different thing entirely, and the previous level is always supported.

"retrofit" implies that the ability to handle images was bolted on afterwards. "extend to video" implies this is a natural continuation of the image functionality, so the next part of the sentence is explaining why there is a natural continuation.

Imnimo
3 replies
18h2m

I think the first SAM is the open source model I've gotten the most mileage out of. Very excited to play around with SAM2!

djsavvy
1 replies
15h23m

What have you found it useful for?

snovv_crash
0 replies
13h53m

Annotating datasets so I can train a smaller more specialized production model.

pgt
2 replies
11h26m

Wonder if I can use this to count my winter wood stock. Before resuscitating my mutilated Python environment, could someone please run this on a photo of stacked uneven bluegum logs to see if it can segment the pieces? OpenCV edge detection does not cut it:

https://share.icloud.com/photos/090J8n36FAd0_lz4tz-TJfOhw
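
For anyone who does give it a go, SAM 1's automatic mask generator is presumably the starting point; a rough sketch with the published segment-anything API (the checkpoint is the standard ViT-B download, and the tuning parameters are guesses for small, densely packed logs):

```python
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Segment everything in the photo and count the pieces.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
generator = SamAutomaticMaskGenerator(sam, points_per_side=32, min_mask_region_area=500)

image = cv2.cvtColor(cv2.imread("woodpile.jpg"), cv2.COLOR_BGR2RGB)  # your photo
masks = generator.generate(image)  # list of dicts with "segmentation", "area", ...
print(f"{len(masks)} segments found")
```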

Havoc
1 replies
10h30m

Heads up: that link reveals your real name. Maybe edit it out if you care.

pgt
0 replies
7h32m

thx for the heads up :) full name is in my HN profile. Good to know iCloud reveals that.

pzo
1 replies
14h8m

Impressive. Wondering if this is now fast enough out of the box to run on an iPhone. The previous SAM had some community projects such as FastSAM, MobileSAM, and EfficientSAM that tried to speed it up. Wish the README, when reporting FPS, said what hardware it was tested on.

leodriesch
0 replies
10h55m

I’d guess the testing hardware is the same as the training hardware, so an A100. If it were on a mobile device they would definitely have said that.

nullandvoid
1 replies
7h41m

Anyone have any home project ideas (or past work) to apply this to / inspire others?

I was initially thinking the obvious case would be some sort of system for monitoring your plant health. It could check for shrinkage / growth, colour change etc and build some sort of monitoring tool / automated watering system off that.

jonnyscholes
0 replies
7h15m

I used the original SAM (alongside Grounding DINO) to create an ever growing database of all the individual objects I see as I go about my daily life. It automatically parses all the photos I take on my Meta Raybans and my phone along with all my laptop screenshots. I made it for an artwork that's exhibiting in Australia, and it will likely form the basis of many artworks to come.

I haven't put it up on my website yet (and proper documentation is still coming) so unfortunately the best I can do is show you an Instagram link:

https://www.instagram.com/p/C98t1hlzDLx/?igsh=MWxuOHlsY2lvdT...

Not exactly functional, but fun. Artwork aside, it's quite interesting to see your life broken into all its little bits. Provides a new perspective (apparently, there are a lot more teacups in my life than I notice).

ei8htyfi5e
1 replies
14h44m

Will it handle tracking out of frame?

i.e. if I stand in the center of my room and take a video while spinning around slowly for 5 seconds, then spinning back the other way for 5 seconds.

Will it see the same couch? Or will it see two couches?

snovv_crash
0 replies
13h51m

I think it depends on how long it is out of frame for; there is a cache whose size you might be able to tweak.

albert_e
1 replies
14h41m

How do these techniques handle transparent, translucent, or mesh/gauze/hair-like objects that interact with the background?

Splashing water or orange juice, snow spraying from skis, rain and snowfall, foliage, fences and meshes, veils, etc.

andy_ppp
0 replies
14h5m

State of the art still looks pretty bad at this IMO.

ximilian
0 replies
11h17m

Roughly how many fps could you get running this on a raspberry pi?

vicentwu
0 replies
16h37m

It's amazing!

unnouinceput
0 replies
8h8m

Trying to run https://sam2.metademolab.com/demo and...

Quote: "Sorry Firefox users! The Firefox browser doesn’t support the video features we’ll need to run this demo. Please try again using Chrome or Safari."

Wtf is this shit? Seriously!

shaunregenbaum
0 replies
12h21m

Very excited to give it a try, SAM has had great performance in Biology applications.

sails
0 replies
11h16m

Any use of this category of tools in OCR?

renewiltord
0 replies
19h5m

This is a super-useful model. Thanks, guys.

naitgacem
0 replies
11h26m

Anyone managed to get this to work on Google Colab? I am having trouble with the imports and not sure what is going on.

maxdo
0 replies
16h10m

How many days will it take to see this in military use killing people…

ks2048
0 replies
16h6m

I would like to train a model to classify frames in a video (and identify "best" frame for something I want to locate, according to my training data).

Is SAM-2 useful to use as a base model to finetune a classifier layer on? Or are there better options today?

j0e1
0 replies
18h58m

This is great! Can someone point me to examples of how to bundle something like this to run offline in a browser, if at all possible?

gpm
0 replies
15h59m

Interesting how you can bully the model into accepting multiple people as one object, but it keeps trying to down-select to just one person (which you can then fix by adding another annotated frame in).

doubleorseven
0 replies
11h37m

Thank you for this amazing work you are sharing.

I do have 2 questions: 1. Isn't addressing the video frame by frame expensive? 2. In the web demo, when the leg moves fast it loses track of the shoe. Does the memory part not apply some heuristics to overcome this edge case?

daemonologist
0 replies
16h4m

Nice! Of particular interest to me is the slightly improved mIoU and 6x speedup on images [1] (though they say the speedup is mainly from the more efficient encoder, so multiple segmentations of the same image presumably would see less benefit?). It would also be nice to get a comparison to original SAM with bounding box inputs - I didn't see that in the paper though I may have missed it.

[1] - page 11 of https://ai.meta.com/research/publications/sam-2-segment-anyt...

carbocation
0 replies
16h8m

Huge fan of the SAM loss function. Thanks for making this.

blackeyeblitzar
0 replies
11h40m

Somewhat related: is there much research into how these models can be tricked or possible security implications?

_giorgio_
0 replies
10h18m

Does it segment and describe or recognize objects? What "pipeline" would be needed to achieve that? Thanks.

Mxbonn
0 replies
12h31m

What happened to text prompts that were shown as early results in SAM1? I assume they never really got them working well?