
Meta's new AI image generator was trained on 1.1B Instagram and FB photos

cowboyscott
108 replies
3d

Is training with user-generated content a way to launder copyrighted images? That is, suppose I upload an image of Iron Man (or whatever) to my Facebook or Instagram page as a public post, and Meta trains their model on that data. Is there wording in my user agreement saying that I declare I own the content, which then gives Meta plausible deniability when it comes to training on copyrighted material?

glimshe
37 replies
3d

I think Meta is already assuming that there will be no liability for training with copyrighted material. I find it very unlikely that image owners will win the AI training battle.

lxgr
29 replies
3d

I'd be extremely surprised if the "Mickey Mouse standing on the moon" example image was a legitimate way to "launder copyright".

The interesting question is just who will be liable for the copyright violation: The party that hosts the AI service? The party that trained it on copyrighted images? The user entering a prompt? The (possibly different) user publishing the resulting image?

ryoshu
15 replies
3d

I can draw as many Disney characters as I want to and Disney has no recourse as long as I'm not publishing them somewhere.

JohnFen
10 replies
3d

Posting them on IG, Facebook, etc. is publishing them.

airstrike
9 replies
2d22h

Yes, but importantly, generating them with the AI trained on Mickey is not.

slaymaker1907
7 replies
2d21h

But is publishing a model which can generate images of Mickey a copyright violation? It's definitely a violation if the model is overfitted to the extent that you can, perhaps lossily, extract the original images.

mr_toad
2 replies
2d16h

But is publishing a model which can generate images of Mickey a copyright violation?

A photocopier can generate images of Mickey. Does that make a photocopier illegal?

slaymaker1907
1 replies
2d

A photocopier is extremely different because the user is providing the copyrighted material. In the AI case, it is much more like writing a Google search for the copyrighted material. If I ask an artist to draw a cartoon of Mickey Mouse in violation of copyright, the artist is in violation of copyright if they produce said drawing and give it to me. Are we to give special rights to AI that human artists don't enjoy?

JohnFen
0 replies
2d

When I've heard people talking about using copyright to defend against AI, they've always talked about it in the sense that their works being used to train the AI is where the copyright violation takes place.

That stance is clearly not supported by copyright law.

If, however, we're talking about copyright violations applying to the distribution of works generated by AI, that's an entirely different conversation. It's still not really clear-cut, but there are ways that could be in violation of copyright law.

It isn't the case that AI is being treated differently, though. The issues would be the same if a human were doing all of this stuff.

airstrike
1 replies
2d20h

But is publishing a model which can generate images of Mickey a copyright violation?

Is selling colored pencils that can draw images of Mickey a copyright violation?

The way I see it, the tool can't ever be at fault for its use, unless its sole use (or something close enough to it) is to infringe on copyright.

Besides, the safeguarding of copyright isn't the single variable we as a society should be solving for. General global productivity is way more valuable than guaranteeing Disney's bottom line.

shagie
0 replies
2d17h

The way I see it, the tool can't ever be at fault for its use, unless its sole use (or something close enough to it) is to infringe on copyright.

Even then, you could look at a tape recorder or a photocopier and one of their primary uses is to make a copy of a copyrighted work.

The question isn't "can it be used for" but rather "does it have valid non-infringing use" and "when it does infringe, is it the person who uses the tool or the tool that is at fault?"

ska
0 replies
2d20h

It's definitely a violation if

That is certainly not clear, unless its only purpose was to do that.

JohnFen
0 replies
2d20h

But is publishing a model which can generate images of Mickey a copyright violation?

I don't think that courts have ruled on that specifically (yet), but I seriously doubt that it would be. Taking the image of Mickey and distributing it would certainly be, though.

JohnFen
0 replies
2d21h

True. This is why I think it's pointless to try to use copyright law to defend yourself against AI companies. Right now, anyway, I don't see any law (or any other mechanism) that provides any protection. If I did, I wouldn't have had to remove all of my websites from the public web.

beAbU
2 replies
2d19h

You can't make revenue off those drawings. An AI generator will presumably make money off generating content that violates copyright.

merrywhether
1 replies
2d3h

Tattoo artists also make money off generating infringing content all the time. I thought the issue was not in the generation but in the subsequent usage. Outlawing generation borders on thoughtcrime.

beAbU
0 replies
1d6h

Well, that's my point, though.

Are tattoo artists breaking the law by creating tattoos of copyrighted material? I think they are. And if an artist becomes really popular for their Mickey Mouse tattoos, then they will probably be noticed by Disney and there will be consequences.

CharlesW
0 replies
3d

Clearly you're still living in a pre-Neuralink™ world.

darkwraithcov
7 replies
3d

Mickey Mouse will be in the public domain in January.

JohnFen
2 replies
3d

Unless Disney can engineer yet another oppressive extension to copyright durations.

RcouF1uZ4gsC
1 replies
2d23h

Not going to happen.

When Disney did their copyright extension last time, they had bipartisan influence.

Now Disney is in the middle of the culture war, and there is no Republican that will risk being primaried to support Disney.

Given that you de facto need 60 votes in the Senate, it is not happening.

JohnFen
0 replies
2d23h

I guess that's some sort of silver lining to the state of things today!

alphabettsy
1 replies
2d23h

Still protected by trademark depending on how it’s used.

fsckboy
0 replies
2d11h

The author of a novel has a copyright to the contents, but can't trademark the contents of the novel.

The same is true for artwork.

liotier
0 replies
3d

Some early versions will.

andreasmetsala
0 replies
2d23h

Only the first movie; the trademark is not expiring.

glimshe
2 replies
3d

Here the problem isn't that the AI was trained on Mickey, but that it generated Mickey. The generated images can still violate copyright if too similar to copyrighted artwork - if published.

I think AI companies are working hard on preventing generated images from being similar to training images unless the user very explicitly asks the result to look like some well known image/character.

alphabettsy
1 replies
2d23h

It can violate copyright, but equally important, companies have trademark protection on their characters and symbols.

JAlexoid
0 replies
2d22h

You can violate copyright by intentionally drawing Mickey Mouse; the medium is not relevant (AI can be considered a medium, just as a digital camera is a medium).

ska
0 replies
2d20h

The interesting question is just who will be liable for the copyright violation

I don't think this is going to be hard for courts. If you borrow your friend's copy of a copyrighted text, go to Kinko's and duplicate it, then distribute the results, you are the one violating copyright, not your friend or Kinko's.

The same will hold here I think, mutatis mutandis. This is all completely separable from the training issue.

__loam
0 replies
2d20h

The person getting sued there would be the user of the model, not meta, as much as I wish that wasn't how it is. If you use photoshop to infringe on copyright, you're at fault, not Adobe.

__loam
4 replies
2d20h

Ultra shitty corporate interests win again...

gumby
3 replies
2d20h

I don’t agree in this case. Well, maybe I agree on the ultra shitty corporate part. But these are public photos, and if I’d looked at one it could have some influence, probably tiny, on my own drawings. Seems reasonable that the same would be true of my tools.

If they were scanning my private messages, things would be different.

thfuran
2 replies
2d19h

So you think a model trained on only a single copyrighted image would be a violation but one trained on many copyrighted images isn't?

gumby
1 replies
2d17h

No. I mean two things:

1 - human experience ends up informing human ingenuity. A sketch of Wile E. Coyote comes from someone’s (Chuck Jones’s?) experience of dogs and seeing coyotes, plus innumerable experiences with things that are funny, constraints from experience of which features do or don’t work well on animation cels, etc. Perhaps a stray tweak in his ears comes from a Rembrandt seen as a child or from a glance at a sketch in progress by the person sitting at the next easel in a drawing class long ago.

In today’s jargon, our experiences are all part of our training set (though today’s massive neural models are infinitesimal by comparison).

And I think of my tools the same: a ton of inputs stirred together is fine by me.

2 - a difference is that fb’s model is made from public posts: posts offered for anyone to see. In the human case even my private experiences are part of my “training set.”

__loam
0 replies
2d

I don't think any argument in favor of these models that includes reasoning about how humans learn is any good. That's a completely separate process that has very little to do with how these systems work. The issue here is Facebook is creating a commercial system based on data their users have uploaded to their system. If artists had known their work would be used this way, I think they'd rethink using this platform. Facebook's monopoly power over internet content also makes it impractical for you not to have a social media presence if you're trying to make a living as an artist. So you either submit to bullshit like this or damn yourself to obscurity. The fact that it's only training on public content is irrelevant.

codingdave
1 replies
2d23h

It is in big bold letters right in instagram's terms of service: "We do not claim ownership of your content, but you grant us a license to use it."

This isn't about copyright, it is about the fact that most people don't realize that by posting photos, they are licensing those photos.

glimshe
0 replies
2d22h

A lot of the content posted there isn't owned by the people who post it, that's a big part of the problem.

sp332
23 replies
3d

They don't own the copyright, but they do have a "non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works". https://www.facebook.com/help/instagram/478745558852511

bee_rider
12 replies
3d

The user might upload something that they don’t have rights to.

Technically the user is the one misbehaving, but we, Facebook, and any reasonable court all know that users are doing that.

JAlexoid
11 replies
2d23h

That's why there is a safe harbor provision in the DMCA.

grogenaut
10 replies
2d22h

Does that provision allow them to build derivative works? When they get a DMCA request, do they retrain the AI after removing the copyrighted work?

luma
9 replies
2d20h

Copyright law as it exists today allows one to create transformative works. There is little to suggest that an AI trained on copyrighted works is in any way violating that copyright when inference is run.

TheDong
8 replies
2d17h

Copyright law as it exists allows a creative process to create transformative works.

Computers cannot create copyright. They are not creative. Just because I save your image as webp or jpeg or whatever doesn't mean I have changed the copyright. Just because I zip it up with a hundred other images doesn't mean the zipfile is free of your copyright.

Effectively, computers are executing math, and math by itself does not construct new copyright, since copyright is the result of a creative human process.

As far as I can tell, current AI are fundamentally not too different from wildly complex compression algorithms. You compress a billion images down to a model. The model now can reproduce a fraction or the whole of the copyrighted work with some low probability. Rote and probabilistic compression.

The creator of the AI might own the copyright for what it produces if constructing the AI was suitably creative, i.e. if you construct an AI that trains on random noise and produces images, those are clearly something you, the author of the AI's code, can claim copyright over... But current AI seems like math more than anything else. It's plausible that reinforcement learning or some other part of training does imbue creativity into the process, but that doesn't seem obviously true to me.

kolinko
5 replies
2d16h

The argument "it's just math underneath" is flawed. Photoshop also has math underneath - does that mean that if you use Photoshop, you're not engaged in a creative process?

Also, saying that it's math, ergo it's not creative, is something most people on HN would not agree with.

As for "it's just compression" - compression means that you can recover the original data - perhaps with a loss of quality, but still you can. With modern ML you mostly can't.
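That invertibility point is easy to demonstrate; a minimal Python sketch, with zlib standing in for any lossless codec:

```python
import zlib

original = b"Mickey Mouse standing on the moon"
compressed = zlib.compress(original)

# Lossless compression is invertible: decompressing returns the
# exact original bytes, which is why a zip of images is still a copy.
roundtrip = zlib.decompress(compressed)
assert roundtrip == original
```

Lossy codecs and generative models sit elsewhere on this spectrum: a JPEG still approximately recovers its input, while a diffusion model usually cannot recover any particular training image at all.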

TheDong
3 replies
2d13h

The human using photoshop is providing the creativity, and thus the human using photoshop owns the copyright, if they do sufficiently creative work (like actually drawing).

However, when using the current image gen AIs, the input you provide is a sentence of text and a couple parameters, a minimal amount of creativity.

This would be akin to opening photoshop and doing minimal work, such as choosing "resize image, apply blur filter".

If you open photoshop and do a few rote transformations, you indeed have not imbued enough creativity to create a new copyrighted work, the work retains its original copyright if you just open it in photoshop and resize it.

pyuser583
1 replies
2d10h

Is creativity a legal concept?

hnfong
0 replies
2d7h

Yes. But the bar for creativity is very low for a work to be considered copyrightable.

See eg https://www.copyright.gov/comp3/chap300/ch300-copyrightable-...

308.2 Creativity A work of authorship must possess “some minimal degree of creativity” to sustain a copyright claim. Feist, 499 U.S. at 358, 362 (citation omitted). “[T]he requisite level of creativity is extremely low.” Even a “slight amount” of creative expression will suffice. “The vast majority of works make the grade quite easily, as they possess some creative spark, ‘no matter how crude, humble or obvious it might be.’” Id. at 346 (citation omitted).

PeterStuer
0 replies
2d11h

"the input you provide is a sentence of text and a couple parameters, a minimal amount of creativity"

Have you tried creating art with AI? Usually it takes hundreds of iterations of text-to-image, image-to-image, inpainting, outpainting using dozens of different models.

"A sentence is all it takes" is like saying all it takes to make a million is crossing some numbers on a grid.

pritambaral
0 replies
2d15h

Photoshop also has math underneath - does it mean that if you use Photoshop, you're not doing a creative process?

By the mere act of using Photoshop, no. By the act of providing your own inputs to Photoshop, yes.

mr_toad
0 replies
2d16h

You’re confusing the training process with inference, you’re confusing the copyright status of a model with the copyright status of the model output, and you’re confusing compressed data with a compression algorithm.

incrudible
0 replies
2d3h

The model now can reproduce a fraction or the whole of the copyrighted work with some low probability. Rote and probabilistic compression

The VAE can be thought of as a codec, but the denoising process can recover images that are far removed from anything that is in the training data. Nobody has ever created an impressionist painting of Winston Churchill riding a purple lizard through the gates of retrofuturist Constantinople, yet almost infinite variations of that image exist in the latent space. If anything, it can be thought of as an intricate form of collage, which we do give special treatment for copyright purposes.

Bjartr
4 replies
2d23h

If they didn't have that (or something similar) they couldn't serve the image to other users. Well, they could, but without something like that someone will sue them for showing a picture they uploaded to someone they didn't want to see it (or any number of other gotchas).

They store the image or video (host/copy), distribute it over their network and to users (use/run), they resize it and change the image format (modify/translate), their site then shows it to the user (display/derivative work), and they can't control the setting in which a user might choose to pull up an image they have access to (the "publicly" caveat).

It sounds like a lot, but AFAIK that's what that clause covers and why it's necessary for any site like them.

thfuran
1 replies
2d19h

It certainly does cover the needs of hosting and display to other users, but it doesn't permit just that. It's expansive enough to let them do just about anything they could imagine with the pictures.

Bjartr
0 replies
1d20h

Only insofar as legal precedent has established it to mean that. If someone sues you for a use that hasn't been found in court to fall under this clause, it will be more difficult to win that case.

IANAL, and my jargon may be off, but I think that in the scenario where you get sued for something that has previously been litigated to fall under this clause, you can basically say "even if we assume the evidence and claims are accurate, it's obviously in the clear based on prior cases"; if the judge agrees, you win without going to trial, which I believe is a "summary judgement".

On the flip side, if someone is trying to apply the clause in a novel, not previously litigated way, you're way less likely to get that summary judgement and it will have to be argued in court.

It works the other way too, if I wrote a eula that used different phrasing than what's been established prior, say to make it more obviously cover just the normal stuff for user uploaded images, summary judgement is less likely to succeed because no court had ever weighed in on my novel phrasing as covering those actions in that way.

There's also the risk that if you make the phrasing too narrow (specifying resizing of the image), then when a new tech comes along that's reasonable to apply (e.g. some ML process to derive a 3D scene from images, or make them), exactly zero of the user-uploaded images you store at that point could benefit from it until you go back and ask the user to agree to that too. The question then becomes how worthwhile narrowing the wording is when you can accidentally paint yourself into a corner.

Or how about if the phrase "display on a monitor" had been used years back, in the pre-smartphone era? You could be sued for making user-uploaded media available to view on phones, since that wasn't in the license granted to you by your users!

When you cover all the little edge cases, you end up with the seemingly overbroad clause most companies use.

An important thing to remember is that the legal interpretation of a text can differ almost arbitrarily from the plain English meaning of the text as written.

anileated
1 replies
2d12h

Training generative ML tools is qualitatively different from showing on website, even if both are technically “derivative works”, so this is a massive bait-and-switch. Is it the first time something is acceptable by the letter of pre-existing law but not the spirit?

sgift
0 replies
2d7h

Is it the first time something is acceptable by the letter of pre-existing law but not the spirit?

Well... no. It happens every time Google et al. find a new way to use your data. It's what all of us German "privacy nuts" have warned people about for years, and the reason that the older German data protection laws and now EU regulations require you to state exactly what you are doing with data ("purpose limitation"). If companies can just write "oh well, we will use it for something", how can anyone evaluate whether they should accept without knowing the future? Right. They can't.

So, this could be another case of the EU kicking Facebook in the face. We'll see.

ezoe
2 replies
2d16h

You're just stating an agreement between Meta and cowboyscott. The copyright holder of the Iron Man image never agreed to it.

The problem here is that cowboyscott doesn't own the copyright of the Iron Man image. But his uploading of the image may fall under fair use in the US, or a similar copyright exemption in his country's law. It effectively works as copyright laundering.

sonicanatidae
0 replies
1d22h

Do we even do Fair Use in the US anymore?

DMCA takedowns seem to suggest that this is not a thing any longer.

fasthands9
0 replies
2d13h

You don't even really need the middleman - Disney has surely uploaded pictures of Iron Man to these sites, so Meta would have them either way.

But I don't know if it really launders anything. If you say "Hey Meta AI, make me a poster for my cookie company that has Iron Man eating my cookies", I'm pretty sure Disney could still sue you. It could still sue you if you instructed a human to draw a picture with Iron Man in it, so I don't even know if you need a new legal framework.

laylower
0 replies
2d12h

You forgot the "in perpetuity" /s

costcofries
0 replies
2d17h

This is why, when you download stories, you don't also get the music from them - there's no such agreement with Spotify.

KaiserPro
17 replies
3d

When an image is uploaded, it is re-licensed:

  > When you share, post, or upload content that is covered by intellectual property rights (like photos or videos) on or in connection with our Service, you hereby grant to us a non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings). This license will end when your content is deleted from our systems. You can delete content individually or all at once by deleting your account.

ROFISH
15 replies
3d

So if you delete your image the entire trained data set is invalid because they no longer have license to the copyright?

notatallshaw
7 replies
2d23h

If having copyright were a prerequisite for training data, this would be true.

But in the US this hasn't been tested in the courts yet, and there's reason to think from precedent that this legal argument might not hold (https://www.youtube.com/watch?v=G08hY8dSrUY - sorry, I don't have a written version of this).

And the lawsuits so far aren't faring well for those who think training should require holding copyright (https://www.hollywoodreporter.com/business/business-news/sar...)

JAlexoid
6 replies
2d22h

I would imagine if we use a very strict interpretation of copyright, then things like satire or fan-fiction and fan-art would be in jeopardy.

As well as learning, as a whole.

Unless there is literally a substantial copy of some particular piece of copyrighted material, it seems to be a massive hurdle to prove that analyzing something is copyright infringement.

slaymaker1907
3 replies
2d21h

Most people in the fanfiction community recognize that it's probably not strictly allowed under copyright. However, the community response has generally been to do it anyway and try to respect the wishes of the author. Hence you won't find Interview with the Vampire fanfiction on the major sites.

If anything, I think it severely hinders the pro-AI argument if fanfiction made by human authors is also bound by copyright.

ETA: I just tested it out and you can totally create Interview with the Vampire fanfiction with Bing Compose. That is presumably subject to at least as strong a copyright as human authors face, and is thus a copyright violation.

shagie
1 replies
2d17h

I would suggest also a read of https://en.wikipedia.org/wiki/Copyright_protection_for_ficti...

Copyright protection is available to the creators of a range of works including literary, musical, dramatic and artistic works. Recognition of fictional characters as works eligible for copyright protection has come about with the understanding that characters can be separated from the original works they were embodied in and acquire a new life by featuring in subsequent works.

Creating a work using Harry Potter or Darth Vader or Tarzan ("As of 2023, the first ten books, through Tarzan and the Ant Men, are in the public domain worldwide. The later works are still under copyright in the United States.") is a copyright infringement.

You may also find https://www.hollywoodreporter.com/business/business-news/dc-... interesting as well as the entire legal saga of Eleanor.

---

Creating Interview with the Vampire fan fiction with Bing: Bing didn't have any agency. The question of copyright infringement (I believe) should only apply to entities with the agency to ask (or not ask) for copyright-infringing works.

pr337h4m
0 replies
2d13h

Creating a work using Harry Potter or Darth Vader or Tarzan is a copyright infringement

Transformative works are a thing:

https://www.transformativeworks.org/faq/#:~:text=investments...

https://www.transformativeworks.org/faq/#:~:text=Open%20Door...

mr_toad
0 replies
2d16h

I just tested it out and you can totally create Interview with the Vampire fanfiction with Bing Compose.

That’s the output of the model, it doesn’t have much bearing on the copyright status of the model.

kjkjadksj
0 replies
2d22h

The difference is that when writing satire it's not strictly necessary to possess the work. You can merely hear of something and make a joke or a fake story. Training data, on the other hand, uses the actual material, not some derivative you gleaned from a thousand overheard conversations.

ClumsyPilot
0 replies
2d17h

if we use a very strict interpretation of copyright, then things like satire ... would be in jeopardy.

Satire, criticism, reviews and journalism are explicitly permitted under fair use.

If I wish to publicly express my disdain or praise for your art, it is necessary that I can show samples / pictures/ photos when I express whatever my deal is.

panarky
2 replies
2d20h

Let's say you post an image, and I learn something by viewing it, then you delete the image. Is my memory of your now deleted image wiped along with everything I learned from viewing it?

opello
0 replies
2d17h

Unfortunately for that argument, computer memory, unlike your memory, is easily wiped. Having the infrastructure in place to make sure that actually happens, on the other hand, is more like human memory.

dylan604
0 replies
2d18h

I have seen plenty of images on the internet where I would gladly accept this as a thing. Unfortunately, what's been seen can't be unseen.

KaiserPro
2 replies
2d23h

Now that is a multi-million dollar question.

How derived data is handled after a copyright license is revoked is a hard question to answer.

I suspect that the data will be deleted from the dataset, and any new models will not contain derivatives of that image.

How legal that is, is expensive to find out. I suspect you'd need to prove that your image had been used, and that its use contradicts the license that was granted. It would take a lot of lawyer and court time to find out. (I'm not a lawyer, so there might already be case history here. I'm just a sysadmin who looks after datasets.)

Postscript: something something GDPR. There are rules about processed data, but I can't remember the specifics. There are caveats about "reasonable".

grogenaut
1 replies
2d22h

s/m/tr/

klyrs
0 replies
2d16h

Now that is a trulti-million dollar question.

Huh? I think you want s/(?:m[^m]*)m/tr/
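For the curious, the group isn't even needed; a quick sketch with sed's extended regexes (assuming a GNU/BSD sed that supports -E):

```shell
# m[^m]*m matches "multi-m" (first m, a run of non-m, second m),
# so replacing it with "tr" turns "multi-million" into "trillion".
echo "Now that is a multi-million dollar question." | sed -E 's/m[^m]*m/tr/'
# prints: Now that is a trillion dollar question.
```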

dragonwriter
0 replies
2d23h

So if you delete your image the entire trained data set is invalid because they no longer have license to the copyright?

The portion of the training set might. The actual trained result -- the outcome of a use under the license -- would, at least arguably, not.

Of course, that's also before the whole "training is fair use and doesn't require a license" issue is considered, which if it is correct renders the entire issue moot -- in that case, using anything you have access to for training, irrespective of license, is fine.

carstenhag
0 replies
2d21h

Yeah, "derivative works" in this case AFAIK was always meant as "we can generate thumbnails etc." and not "we will train our AI with it". I am pretty sure this is illegal in many countries...

ezoe
5 replies
2d16h

Another method of copyright laundering is doing the ML training in a country where it isn't restricted under copyright law.

Personally, I'm on the side that using copyrighted data as machine-learning input doesn't violate copyright. Statistically, the learned model for generative AI doesn't retain even 1 bit of input. It's hard to say the NN model data infringes any copyright of the input source. Copyright applies to the expression, not the process. If the generative AI produces an image that's clearly a copy of a specific Iron Man image which existed before the image generation, that's copyright infringement.
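A back-of-envelope calculation illustrates the scale (the sizes are illustrative assumptions, not measurements: a roughly 2 GB set of model weights, and the 1.1 billion training images from the headline):

```python
# Hypothetical numbers for illustration only.
model_bytes = 2 * 10**9          # assume ~2 GB of model weights
training_images = 1_100_000_000  # 1.1B images, per the article title

bytes_per_image = model_bytes / training_images
# Under 2 bytes of model capacity per training image on average --
# nowhere near enough to store the images themselves wholesale.
print(round(bytes_per_image, 2))
```

This average doesn't rule out memorization of individual images (e.g. heavily duplicated ones), which is what the replies below dispute.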

abrookewood
1 replies
2d16h

"Learned model for generative AI doesn't retain even 1 bit of input". If that were true, it shouldn't be possible to trick the models into regurgitating their source material, but clearly that is possible [0].

[0] https://stackdiary.com/chatgpts-training-data-can-be-exposed...

Kubuxu
0 replies
2d4h

LLMs are quite different from diffusion models, though. The ratio of model size to training set size is skewed the other way.

lawlessone
0 replies
2d3h

learned model for generative Ai doesn't retain even 1 bit of input.

It does. The data is just obfuscated.

WanderPanda
0 replies
2d16h

I agree with you, but I think the argument is flawed. By that logic, h265 also just steals 10% (or whatever the compression ratio is) of a work.

AlotOfReading
0 replies
2d15h

Copyright doesn't require a single bit of input to be shared. You can't avoid copyright by using a paintbrush, for example; you're simply creating a derivative work. You might still be in violation even if you create an entirely new context around the copied elements or substitute for the original in the market, as was the case in Warhol v. Goldsmith.

Obviously not every generative output is a copyright violation, but it seems equally clear that there are outputs that would be if they were produced by humans.

onlyrealcuzzo
4 replies
3d

Is training with user-generated content a way to launder copyrighted images?

Doubt it. If you upload child porn to Instagram and they distribute it - it's still an Instagram problem, AFAIK.

dragonwriter
3 replies
3d

Child porn is not a copyright issue, so the DMCA safe harbor for UGC doesn't apply, and it's criminal, so the Section 230 safe harbor doesn't apply. So it's very much not an applicable example of whether use of UGC in other contexts is a way of leveraging safe harbor protections for content, whether for copyright or more generally.

onlyrealcuzzo
2 replies
3d

It's still an Instagram problem if someone uploads copyrighted info and Instagram distributes it...

whywhywhywhy
0 replies
3d

It literally/legally isn't, and that's one of the reasons the US is king for hosting services like IG. Read Section 230.

fragmede
0 replies
3d

As long as Instagram follows the DMCA and takes it down, they're covered by Section 230, so I don't know if it's a problem per se.

zeruch
3 replies
2d16h

"Is training with user-generated content a way to launder copyrighted images?" Pretty much.

incrudible
2 replies
2d3h

You are very, very unlikely to stumble upon something resembling a training image closely enough for copyright to take effect, and in any event this is not the purpose of these systems. You may be running into trademarked content, but in that case you can't speak of laundering, because you can't use a trademark even if the image is AI-generated.

zeruch
1 replies
1d22h

"You are very very unlikely to stumble upon something resembling a training image closely enough for copyright to take effect" That is definitely not the case, and is completely contingent on the prompt matching closely what the training set has in it.

incrudible
0 replies
1d21h

I think you misunderstand copyright and perhaps conflate it with a trademark. A given prompt may yield a result closely resembling some copyrighted work, but that in and of itself does not violate copyright. Getting a nearly identical result is very unlikely, perhaps with enough tries on a very famous painting. Even then, that is not the purpose.

raincole
3 replies
2d23h

At this point all big players assume it's okay to train on copyrighted materials.

If you "can" [0] crawl materials from other sites, why can't you crawl from your own site?

[0]: "can" in quotes

carstenhag
2 replies
2d21h

Because your users have agreed to terms of service that don't mention analyzing the images to train an AI model.

PeterisP
1 replies
2d20h

If their legal assumption is that it's not a copyright violation to train a model on some image, then it's logical that their ToS doesn't mention it, as they need the user's permission only for the scenarios where the law says they do.

kolinko
0 replies
2d16h

Under Polish (European?) legislation, an agreement on the use of copyright needs to explicitly state in which fields you are allowed to copy/use the copyrighted work. So, e.g., if an agreement didn't explicitly state that a company can use the work on TV (or radio, or something), then they don't have the right to do so.

When new mediums are invented (like internet), you need to sign an annex to the agreement extending it to this medium.

Having said that, I would still consider it a fair use to train model on given images, but using the trained model to replicate a specific style etc, would most likely be considered a new medium. (IANAL though)

SirMaster
3 replies
2d2h

What about all the photos of people at Disney taking pictures of themselves standing next to Mickey Mouse etc.

I don’t think there’s a question that people are allowed to upload photos like that.

JohnFen
2 replies
2d

I don’t think there’s a question that people are allowed to upload photos like that.

Technically, that's a copyright violation. Disney just opts not to enforce their rights for that sort of use.

Similarly, you technically can't take and post pictures of statues, paintings, some buildings, etc., and some rightsholders do enforce their copyright when people do those things.

dragonwriter
1 replies
1d23h

Technically, that's a copyright violation.

Anything outside the scope of fair use would be within Disney’s rights to restrict, but given the actual public policies and guidance on photography at Disney parks, I think there is a very strong case that noncommercial photography (for people present as paid guests) is permitted by implied license.

Similarly, you technically can't take and post pictures of statues, paintings, some buildings, etc., and some rightsholders do enforce their copyright when people do those things.

Well, not buildings if they are in or visible from a public place in the US, at least under copyright law. (Photography of some, particularly government, buildings may run afoul of other law.) This may be different in other countries.

JohnFen
0 replies
1d23h

Well, not buildings if they are in or visible from a public place in the US, at least under copyright law.

Ahhh, you're correct. This was apparently changed in 1990. I just hadn't updated my mental model in accordance with that change.

https://www.nolo.com/legal-encyclopedia/copyright-architectu...

PeterisP
1 replies
2d19h

It's not a legal way to "launder" copyrighted images: for things where copyright law grants exclusive rights to the authors, they need the author's permission, and having permission from someone else plus plausible deniability is not a defense against copyright violation. The only thing it can change is how damages are assessed; successfully arguing that the infringement wasn't intentional can mean paying ordinary damages rather than a punitive triple amount.

However, as others note, all the actions of the major IT companies indicate that their legal departments feel safe in assuming that training a ML model is not a derivative work of the training data, they are willing to defend that stance in court, and expect to win.

Like, if their lawyers weren't sure, they'd definitely advise management not to do it (explicitly, in writing, to cover their arses), and if executives wanted to take on large risks despite such legal warning, they'd do that only after getting confirmation from the board and shareholders (explicitly, in writing, to avoid major personal liability); and for publicly traded companies the shareholders are the public, so they'd all be writing about these legal risks in all caps in every public company report to shareholders.

rpdillon
0 replies
2d4h

However, as others note, all the actions of the major IT companies indicate that their legal departments feel safe in assuming that training a ML model is not a derivative work of the training data, they are willing to defend that stance in court, and expect to win.

I think the move will be to argue fair use, declaring the derivative work to be transformative, and possibly to point out that only a small amount (1%-3%) of the original data is retained.

sosodev
0 replies
3d

It seems like this is still very much a legal gray area. If it's concretely decided in court that generative AI cannot produce copyrighted work then I assume it makes no difference what the source of the copyrighted training material was.

caesil
0 replies
2d19h

Training on copyrighted content isn't a copyright violation. Sarah Silverman is currently learning that the hard way.

FpUser
0 replies
2d16h

It is no different from an actual live artist learning from the works of others.

xnx
25 replies
3d1h
tikkun
9 replies
3d

I tried it now.

My experience:

Took 4 minutes to log in and do one generation. (Login to FB, then it took me through a process to merge accounts with Meta, which didn't sound good, so I restarted with 'sign in via email' which ended up doing the same thing anyway, I think. Then I was logged in, did the generation.)

My at-a-glance ranking:

For image quality

1. Midjourney

2. Dall e 3

3. SDXL and this

For overall ease of use and convenience

1. Dall e 3

2. Midjourney

Of course, this is all biased personal opinion, and YMMV.

whywhywhywhy
8 replies
3d

Depends what you want, really. Midjourney and Dall-E 3 have specific looks to them, which kind of look cheap/tacky now that they're everywhere.

SDXL is reconfigurable and completely flexible, so really it's the only tool in the game for pure creativity.

brcmthrowaway
6 replies
3d

What is the best tool wrapping SDXL?

xnx
0 replies
2d16h

On Windows, StabilityMatrix (https://github.com/LykosAI/StabilityMatrix) is a very easy way to get any (or all) of those wrappers installed without conflicts.

stavros
0 replies
2d14h

Fooocus is fantastic for working out of the box.

loudmax
0 replies
2d23h

Depends what you mean by "best", but Fooocus is very accessible for getting started with Stable Diffusion.

danielbln
0 replies
2d23h

There is no best, it depends on your usecase. Auto1111 is popular, ComfyUI extremely flexible but complex, and there is a myriad of other wrappers, some with a focus on simplicity, some not so much.

Ologn
0 replies
2d21h

I find Automatic1111 better for point and click simplicity. ComfyUI has been good for custom flows.

Also Automatic1111 is more centralized, so you have to wait for something to make its way in (or a pull request for it anyhow), whereas people put up their ComfyUI custom JSON workflows. So I am doing Stable Diffusion video via ComfyUI right now, whereas it has not made its way into Automatic1111.

Der_Einzige
0 replies
2d18h

Btw, automatic1111 was made by a racist rimworld mod author, and directly cites 4chan as being a primary contributor!

Fun world we live in.

holoduke
0 replies
2d12h

For Hacker News users, definitely ComfyUI. Good place to start playing around with checkpoints, loras, controlnets and ipadapters.

Centigonal
6 replies
2d16h

I clicked the link to try generating an image.

Several beautiful modal dialogs later, my Meta account has been linked to my Facebook account, my Oculus profile is now my Horizon profile, and I have chosen a publicly viewable(!?) display name for my Horizon profile (a profile for a game I have never played and never intend to play). I have been informed that my Oculus friends are now Horizon followers, given the chance to select "how social [I] want to be," asked to invite my Facebook friends to join Horizon -- and I still haven't generated an image. I almost feel like this image generator is somehow a long con to get people to update their Meta accounts.

I want to find the group of product managers responsible for this user journey and just... shake them out of it! The design you shipped is really dumb! None of this makes sense outside of Meta! There's a whole world out here! Nobody cares about Horizon Worlds!

Invictus0
3 replies
2d16h

This is also called "shipping the org chart".

LargeTomato
2 replies
2d2h

People say that a lot around me. What does it mean?

wibblewobble125
0 replies
10h28m
isthispermanent
0 replies
1d

“Shipping the org chart” refers to a phenomenon in product development where the structure of an organization is reflected in its products. This concept suggests that the design and functionality of a product can inadvertently mirror the internal structure of the company that created it.

esafak
0 replies
2d13h

It's almost beginning to resemble the UX of an enterprise application!

dzink
0 replies
2d12h

Bet Horizon worlds will get great engagement metrics for the next earnings release on the topic.

floathub
3 replies
3d

Note that you need to "Log On" to Facebook/Meta/WhateverTheyCallThemselvesNow to try it. Kind of curious, but not curious enough to create yet another burner Facebook account.

[edit: still learning to spell]

vsnf
1 replies
2d13h

How are you guys even making burner facebook accounts? Every time I try (though granted, I haven't tried since 2016), I get stymied by a hard phone number requirement.

ruszki
0 replies
2d12h

You can get a phone number in almost every country relatively cheaply. I used one such service (I think it was Numero or something) to order from CVS, because I live in Europe and they need an American phone number (and an American bank card, but that's another story).

theonlybutlet
0 replies
3d

Thanks I should've read your post before opening the link and promptly having to close it.

misja111
2 replies
3d

"Not available in your location yet" (Switzerland)

JumpCrisscross
1 replies
3d

"Not available in your location yet" (Switzerland)

Have the GDPR questions around data provenance been resolved? I thought EU/EEA is currently off limits for publicly- or user-data-trained AI.

lxgr
0 replies
2d22h

ChatGPT (free and paid) are available in the EU, so I don't think there is a blanket ban.

Different companies might have very different interpretations of the legality of what they're doing, of course. I don't think there's any precedent, and no explicit regulations – there's an "AI act" being currently discussed in parliament, though.

mvdtnz
0 replies
3d

Not available in my region (New Zealand), darn.

lumost
19 replies
3d

This is almost certainly going to be used to generate actual pictures of real people in the nude etc.

wongarsu
9 replies
3d

That has been a thing since 2019's DeepNude, and the world hasn't ended. If anything it has been relegated to obscurity.

Manuel_D
3 replies
2d23h

I encountered the same stories of people's faces being photoshopped onto nude models when I was a kid back in the 2000s. Deepfakes are nothing new.

acdha
2 replies
2d18h

That’s like saying the invention of the rifle didn’t matter because your grandfather used a bow and arrow to shoot things. The skill level required to photoshop an image with the same quality as ML tools significantly limits who can do it and how many fakes they can produce. That changes it from something few people are likely to ever experience to something which can affect, say, most girls in a middle school class – and because the quality is higher, the consequences are worse because people are more likely to believe the fakes are real.

Manuel_D
1 replies
2d16h

Not really; the skill required to do a Photoshop fake is just copy-paste and a bit of the healing brush tool. This is considerably easier than a deepfake. I also disagree with the idea that the quality is superior: resolution is lower, and the seams are often visible. Though ultimately this is subjective: is a high-resolution still "better" than a low-resolution video?

The core complaint about deepfakes is, word for word, the exact same complaint about Photoshop: someone might use a computer to produce an image with someone's face pasted onto another person's body (presumably naked and doing a sexual act). People could, and no doubt some did, Photoshop classmates' faces onto nude women's bodies.

acdha
0 replies
2d15h

Not really; the skill required to do a Photoshop fake is just copy-paste and a bit of the healing brush tool

That works if you have very similar images: same pose, lighting angle, intensity, hue, etc. In most cases, people quickly recognize it as fake because it’s harder than it might seem to get those details right, which is where the skill requirement comes in (not to mention things like a huge image collection to search for compatible images). Things like tattoos, clothing, jewelry, birthmarks, etc. add to the challenge since it looks highly fake to just use the healing brush.

In contrast, the apps which are being built now allow the attacker to upload the images they have and generate more realistic images very quickly. Again, the concern here is availability and scale – while people certainly have misused Photoshop (and analog darkroom techniques before that) in the past, the reason we’re hearing about it more is that it’s much easier than it used to be. There are far more teenagers who’ll be terrible to a peer in the moment but aren’t going to spend days getting new software and learning how to use it.

KaiserPro
2 replies
3d

It's not obscure. There are a bunch of paid apps that let you "virtually undress" any image you upload.

Which is already causing pain for a bunch of people.

wongarsu
1 replies
3d

There are paid apps or websites for lots of obscure things, that's not really a high threshold to clear in today's world.

broscillator
0 replies
2d23h

Yeah the key take away from that sentence was the harm caused, not the obscurity.

GeoAtreides
0 replies
2d23h

If anything it has been relegated to obscurity.

oh man, if /b/ could read this they would be very upset right now

0cf8612b2e1e
2 replies
3d

Fake celebrity nudes pre-date the internet.

rchaud
1 replies
2d23h

Barriers to entry were a lot higher, and distribution capacity was a lot lower. Surely you can see how the change in that combination could make for a significantly different reality now.

rightbyte
0 replies
2d23h

I honestly don't see the problem, especially since any solution to the non-problem is censorship and Big Tech monopoly, since a FOSS model can't be censored.

An LLM won't be able to estimate the size of my wiener. I can always claim it's the wrong size in the picture.

soultrees
1 replies
2d23h

At this point, who cares, honestly? The more 'fake' generated nudes out there, the less of a novelty they'll be. And if everyone has the ability to generate an image of everyone naked, the value of 'real' nudes will go up, but it will also be good cover for people who get their nudes leaked.

merrywhether
0 replies
2d3h

I’ve wondered about a similar thing. If there was something automatically constantly generating nudes of everyone, surely the noise would desensitize people to the signal.

dopa42365
0 replies
2d22h

How's that any different from the gazillion more or less good "what would you look like older/younger", "what would your kids look like", "how would you look as Barbie" and whatnot tools? One click to generate a thousand waifus. It's not real; who cares.

delecti
0 replies
3d

Doesn't seem to be possible. I tried a variety of real people (Tom Hanks, George Bush, George Washington) and each time got the error "This image can't be generated. Please try something else." It did work with some fictional characters though, namely Santa and Mickey Mouse. I'd rather not try asking for nudes while at work, so I can't attest to that part either way. Though "Sherlock Holmes dancing" looked pretty clearly like Benedict Cumberbatch (though the face was pretty mangled looking).
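Meta hasn't published how its filter works, but the behavior described here (real names rejected, indirect descriptions accepted) is consistent with a simple name blocklist applied to the prompt text. A purely hypothetical sketch; the names and matching logic are illustrative, not Meta's actual implementation:

```python
# Hypothetical prompt filter: block prompts that literally name a person,
# while indirect descriptions slip through. Illustrative only; Meta has
# not published how its real filter works.
BLOCKED_NAMES = {"tom hanks", "george bush", "george washington", "taylor swift"}

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt contains a blocked name as a substring."""
    lowered = prompt.lower()
    return any(name in lowered for name in BLOCKED_NAMES)

print(is_blocked("Tom Hanks dancing"))                          # True
print(is_blocked("Sherlock Holmes dancing"))                    # False
print(is_blocked("a celebrity singer performing Blank Space"))  # False
```

This also illustrates why such filters leak: the model itself still associates "Sherlock Holmes" with Benedict Cumberbatch even though the prompt names no real person.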

btbuildem
0 replies
3d

I really really doubt that. If anything, it'll be nerfed into complete uselessness.

KaiserPro
0 replies
3d

For that to work, you need a dataset of nudes to start with.

Given that Instagram is pretty anti-nudity (well, women's nipples at least), I'd be surprised if there is enough data for it to work properly.

It's not impossible, but I'd be surprised.

TheCoreh
10 replies
3d1h

“Not available in your location

Imagine with Meta AI isn't available in your location yet. You can learn more about AI at Meta in the meantime and try again soon.”

I wonder why it's region-locked?

philipov
7 replies
3d

Which region is locked? That might give a clue.

fallensatan
1 replies
2d22h

Canada seems to be locked out as well.

philipov
0 replies
2d21h

Is there anyone outside the US that isn't locked out, or was this a US-only release? Could this possibly have to do with the sanctions on China?

triggerhappy77
0 replies
2d13h

Me to Meta: Please let me in(dia), we are locked out too

avallach
0 replies
3d

I got the same from the Netherlands

TheCoreh
0 replies
2d21h

Brazil. So it's unlikely to be GDPR-related, unless they're also treating our LGPD as a special case.

RowanH
0 replies
3d

New Zealand is locked out. (Normally we get first dibs on things being a small test market)

K5EiS
0 replies
3d

Norway is blocked, so probably some GDPR issues.

lxgr
1 replies
3d

Meta's AI stickers also only seem to be available in the US for now (or at least not in WhatsApp in the EU).

mvdtnz
0 replies
3d

AI stickers are in my region (not USA) but imagine is not.

WendyTheWillow
9 replies
3d

Because it’s trained on “real” people, will it be easier to generate ugly people? I have a hard time convincing DALL-E to give me ugly DnD character portraits.

PUSH_AX
4 replies
2d23h

In order for a model to understand what "ugly" is, someone or something has to tag training data as "ugly". I find this to be a complete can of worms.

Jerrrry
2 replies
2d22h

In order for a model to understand what ugly is, someone or something has to tag training data as “ugly”,

That is a very dated (circa 2008) concept.

The model "understands" that 50% of people are below/above the median.

Consequently, those that are not "OMG girl ur BEAUTIFUL"-tagged are horse-faced.

It understands that the girl with the profile picture with 200 likes and 2k friends is better looking than the girl with 4 likes and 500 friends.
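For what it's worth, nothing public confirms Meta derives labels this way; but the general idea being described, turning engagement signals into weak pseudo-labels instead of human tags, can be sketched. The ratio and threshold here are made up for illustration:

```python
# Hypothetical weak labeling: derive a pseudo-label for an image from
# engagement signals rather than human annotation. Nothing public
# confirms Meta does this; numbers are invented for illustration.
def pseudo_label(likes: int, followers: int, threshold: float = 0.05) -> str:
    """Label an image by its like-to-follower ratio."""
    if followers == 0:
        return "unlabeled"
    ratio = likes / followers
    return "high_engagement" if ratio >= threshold else "low_engagement"

print(pseudo_label(200, 2000))  # high_engagement (ratio 0.10)
print(pseudo_label(4, 500))     # low_engagement (ratio 0.008)
```

Whether such a signal actually tracks attractiveness, rather than posting habits or network size, is exactly what the replies below question.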

squigz
0 replies
2d18h

It understands that the girl with the profile picture with 200 likes and 2k friends is better looking than the girl with 4 likes and 500 friends.

I'm not very familiar with model training. How does it understand this? Is such information part of the training data?

PUSH_AX
0 replies
2d22h

I fine-tuned some checkpoints this year (2023), and that's exactly how it worked.

Unless your model is single-focus for humans and faces, I find it hard to believe there is specific business logic in the training process around inferring beauty from social engagement. Meta's model is general purpose.

Guillaume86
0 replies
2d20h

Put beautiful/pretty in the negative prompt, should get a similar result without the need for tagging ugly in the training set.

wobbly_bush
1 replies
2d23h

Aren't Insta images heavily edited?

rchaud
0 replies
2d12h

Yes, with filters supplied by Instagram, so they would still have the original camera images.

hbossy
0 replies
2d22h

Try asking for asymmetry. The more images of faces you average, the better they look.

doctorpangloss
0 replies
2d23h

Because it’s trained on “real” people, will it be easier to generate ugly people?

In the literature, testing concepts in image generation means asking human graders "which image do you prefer for this caption?", so the answer is probably no. You could speculate on all the approaches that would help this system learn the concept "ugly", and they would probably work, but it would be hard to measure.

junto
4 replies
2d22h

Before anyone tries it out from the EU, be warned: it will push you to make a Meta account and merge any Facebook/Instagram profiles together, and once you've finally bitten that bullet, it will tell you that it isn't available in your region.

CGamesPlay
1 replies
2d14h

Strangely, it appears to decide this based on IP geolocation. My account is listed as US-based, but the site does not work when using a non-US VPN.

phatskat
0 replies
1d23h

I wouldn’t be surprised if they have several layers of location checks and if any of them fail they bail. Typically with geolocation on projects I’ve done we will rely on the best available info - location permission, IP geolocation, or the user telling us where they are via form.
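The fail-closed, layered pattern described above can be sketched like this; the signal names are hypothetical, not Meta's actual checks:

```python
from typing import Optional

# Hypothetical fail-closed region gating: every available location signal
# must place the user in an allowed region, or access is denied.
ALLOWED_REGIONS = {"US"}

def region_allowed(account_region: Optional[str],
                   ip_region: Optional[str],
                   gps_region: Optional[str] = None) -> bool:
    """Deny access if any known signal falls outside the allowed set."""
    signals = [r for r in (account_region, ip_region, gps_region) if r is not None]
    if not signals:
        return False  # no location information at all: fail closed
    return all(r in ALLOWED_REGIONS for r in signals)

# A US account behind a non-US VPN exit is denied, matching the report above.
print(region_allowed("US", "DE"))  # False
print(region_allowed("US", "US"))  # True
```

Under this scheme a US-registered account fails the gate as soon as any one signal (here, IP geolocation) disagrees.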

mrits
0 replies
2d18h

It took me about 10 minutes to do what should have been a single click. They even wanted me to generate a new password even though I login with facebook. In US though and had some fun. It isn't as good as ChatGPT but very impressive.

kevincox
0 replies
2d21h

Same in Canada

tiffanyh
3 replies
2d18h

1.1B is tiny.

Given that FB & IG combined have ~0.5B photos uploaded daily, this effectively translates to training data from just a few days of user generated content.

https://www.brandwatch.com/blog/facebook-statistics/#:~:text....

https://www.zippia.com/advice/instagram-statistics/#:~:text=....
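The back-of-envelope arithmetic behind "just a few days", using the figures cited above:

```python
# Rough check of the parent's claim: 1.1B training images vs ~0.5B
# photos uploaded per day across FB + IG (figures from the linked
# statistics pages).
training_images = 1.1e9
uploads_per_day = 0.5e9

days_of_uploads = training_images / uploads_per_day
print(f"{days_of_uploads:.1f} days of combined uploads")  # 2.2 days
```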

ssss11
0 replies
2d9h

And once they notice no one gives a sh* they’ll throw 100b at it, then more…..

next_xibalba
0 replies
2d18h

But is it tiny with respect to the volume of data required to create a good model and the compute costs associated with the training and operation of that model?

acchow
0 replies
2d16h

They are using only the publicly available photos. Not the ones you share only with friends.

astrange
3 replies
2d18h

Doesn't do "hard prompts" better than other systems I've tried. Looks pretty similar to them too.

eg: "horse riding an astronaut", "upside-down mini cooper", "kanji alphabet soup".

ClumsyPilot
2 replies
2d17h

Ooh, so that's what they are called! I tried to do "bicycle in a jar" and they could do it, but when I did "car in a jar" or "toy car in a jar" all of them failed.

prawn
0 replies
2d15h

Sometimes you need to massage the prompt a bit to avoid it getting distracted. e.g., it took me a few tries to get a family having dinner inside a home aquarium. I'd specified it being a large feature wall aquarium, and it got hung up on showing the dining table in front of the feature wall, rather than literally inside the aquarium.

astrange
0 replies
2d17h

I don't know if that's the right kind of term, it's just a list of prompts I've noticed usually don't work in image generation models - specifically they ignore what you said and just make an image with some of the words in it.

This looks like a SD1.5-like latent diffusion model though. The giveaway is that it can't spell.

neilv
2 replies
2d14h

Interesting. Unlike some other popular image generation training, is there a chance that Meta technically got copyright permission for many/most of the images that were posted to its properties?

I'm thinking: When the user who uploaded the image was also the copyright holder, that might've been covered by an agreement that technically permitted this use by Meta.

(Copyright isn't the only legal issue, though. For example, a person in a photo that someone else uploaded doesn't necessarily lose right to their likeness being used for every purpose to which a generative AI service might be put.)

deegles
1 replies
2d14h

They probably added a clause to their terms of service retroactively granting them permission to use your images for this purpose.

neilv
0 replies
2d14h

I was thinking Meta might be on more solid copyright ground here than most large generative AI models are.

(Not that legal ground is going to stop anyone, in a generative AI race worth trillions of dollars.)

brucethemoose2
2 replies
2d23h

It can handle complex prompts better than Stable Diffusion XL, but perhaps not as well as DALL-E 3

This is an interesting statement, as Stable Diffusion XL implementations vary from "worse than SD 1.5" to "competitive with DALL-E 3."

sjfjsjdjwvwvc
1 replies
2d22h

It depends what you want to gen and what prompting style you prefer. I have found SD 1.5/6 to be far more flexible than SDXL. SDXL seems more „neutered“ and biased towards a specific style (like dalle/midj); but this may change as people train more diverse checkpoints and loras for SDXL.

brucethemoose2
0 replies
2d21h

See, this is totally my opposite experience. SDXL handles styles incredibly well... With style prompting.

Hence my point. SDXL implementations vary wildly. For reference I am using Fooocus.

nothrowaways
1 replies
3d

The title is misleading. It uses publicly available photos, which means it uses the same images as other AI models like GPT, Midjourney...

holoduke
0 replies
2d12h

Who is gonna use these heavily moderated generators? You can't even generate a nipple or a famous person. There is almost no control or finetuning. There are a zillion checkpoints, loras, controlnets and ipadapters out there to get almost anything with SD. No filters. You can literally generate whatever you like.

miguelazo
1 replies
3d

Wow, another reason to delete my accounts.

leptons
0 replies
2d23h

If nothing else they've done so far has convinced you to delete your accounts, then why would this? They've done worse before.

dmazzoni
1 replies
2d21h

If you ask it to generate an image of Taylor Swift, it refuses. But if you ask it to generate an image of a popular celebrity singer performing the song "Blank Space", it generates an image that looks exactly like Taylor Swift some fraction of the time.

a_wild_dandan
0 replies
2d17h

I wonder if celebrity doppelgangers can't find modeling work. Like, without EVER referencing your celebrity twin, how closely can your work implicitly approach Swifthood before your free expression gets violated? To dramatize for effect:

Can you act in films? Or model a company's products like a guitar/microphone? Or genuinely start a band? Can your credits/band name reference you, if your given name is coincidentally also "Taylor Swift"? Can Facebook AIs train on your Facebook images, and produce a "celebrity female singer" images (with/without a "Blank Space" reference)? What if your LLM's purpose is strictly "parody, caricature, and images whose likeness is purely coincidental"? Can generative AIs have intention? Let alone intention to break copyright?

The consequences are endless in both kind/degree when pretending that "likeness" is some unique fingerprint. Ditto for thought-policing what (artificial or human) neural networks can learn from without paying royalties or whatever. It's all absurd.

What's more, our society must face these issues. We can't dismiss them as all hyperbolic catastrophizing about slippery slopes. Our system is already subjective, inadequate, and incapable of sorting itself out. The situation becomes more dire each day. Given our trend of sacrificing public interest for private greed (e.g. Disney's hatchet job on copyright), I'm worried about our future.

RegW
1 replies
2d20h

I wonder what other purposes FB has used those 1.1B+ publicly visible photos to train models for?

mr_toad
0 replies
2d15h

Anything that classifies and/or recommends images will likely be a deep learning model these days.

zoklet-enjoyer
0 replies
2d12h

I'm not sure if they cut me off for generating too many images or because of the content of my images. Everything is now giving the response "This image can't be generated. Please try something else."

This only started after I put in the prompt manbearpig did 9/11. It was ok with some really weird stuff though

squigglydonut
0 replies
2d13h

Artists...leave.

seydor
0 replies
2d21h

So it's just faces?

nextworddev
0 replies
3d

I tried this and was floored by how good it was

neom
0 replies
2d13h

Really struggles with fingers, probably worse than any AI image generator I've seen so far. Maybe there aren't a lot of finger-showing images on IG and FB!

miked85
0 replies
2d13h

Yet another reason to steer far clear of anything Meta.

jafitc
0 replies
2d22h

All I can say is it’s really fast

andsoitis
0 replies
2d13h

The images of ourselves have now been absorbed into an AI.

An intelligence that knows a shit ton about a very very large number of people.

andrewstuart
0 replies
2d19h

And weirdly, every image it generates is sort of a combination of your grandma and an influencer on a beach on a tropical island.

al_be_back
0 replies
2d22h

To me these innovations seem akin to concept cars in the motor industry: there's some utility, until some executive takes it center stage and pisses off most of the core users.

The biggest value in these networks is real user-generated content; you can't beat billions of real users capturing real content and sharing habitually.

Even if wording in the terms permits certain research/usage, you've got market and political climates to consider.

__loam
0 replies
2d20h

This is extremely shitty to a lot of users.

Havoc
0 replies
2d20h

Meta is asking me to log in with my facebook account. Then after authenticating with my FB account meta says I don't have a meta account.

Is this all some sort of scam to get me to click accept on whatever godforsaken ToS comes with a meta account? If the FB account is good enough to freakin AUTHENTICATE me then just use that ffs.

FpUser
0 replies
2d15h

Canada. It asked me to create a Meta account, only to tell me that it is "not available in your region".

Fuck you Meta and fuck you Zuckerberg.