
A reality bending mistake in Apple's computational photography

Aurornis
165 replies
14h24m

Purists will always hate the idea of computational photography, but I love how well my iPhone captures handheld nighttime, low-light, and motion images. I’ve captured photos at night that I never would have thought possible from such a physically small sensor, given the laws of physics.

Is it 100% pixel perfect to what was happening at the time? No, but I also don’t care.

I’ve used HDR exposure stacking in the past. I’ve used focus stacking to work around shallow depth of field. I’ve even played with taking multiple photos of a crowded space and stitching them together to make an image of the space without any people or cars. None of them are pixel-perfect representations of what I saw, but I don’t care. I was after an image that captured the subject, and combining multiple exposures gets the job done.

autoexec
124 replies
13h42m

Is it 100% pixel perfect to what was happening at the time? No, but I also don’t care.

No photographer thinks the images they get on film are perfect reflections of reality. The lens itself introduces flaws/changes, as do film and developing. You don't have to be a purist to want the ability to decide what gets captured or to have control over how it looks, though. Those kinds of choices are part of what make photography an art.

In the end, this tech just takes control from you. If you're fine with Apple deciding what the subject of pictures should be and how they should look, that's fine, but I expect a lot of people won't be.

frogblast
24 replies
13h34m

In the end, this tech just takes control from you.

In these difficult scenarios, the alternative photo I'd get using such a small camera without this kind of processing would be entirely unusable. I couldn't rescue those photos with hours of manual edits. That may be "in control", but it isn't useful.

autoexec
19 replies
13h23m

For decades people have had the ability to get great photos using cell phones that included useful features like automatic focus, exposure, and flash, all without their phones inventing total misrepresentations of what the camera was pointed at.

I mean, at a certain point taking a less than perfect photo is more important than getting a fake image that looks good. If I see a pretty flower and want to take a picture of it, the result might look a lot better if my phone just searched for online images of similar flowers, selected one, and saved that image to my iCloud, but I wouldn't want that.

nucleardog
5 replies
12h20m

The case in the article is obviously one that any idiot could have taken without these tools.

But "in difficult scenarios", as the GP comment put it, your mistake is assuming people have been taking those photos all along no problem. They have not. People have been filling their photo albums and memory cards up with underexposed blurry photos that look more like abstract art than reality. That's where this sort of technology shines.

I'm pretty reasonable at getting what I want out of a camera. But at some point you just hit the limitations of the hardware. In "difficult scenarios" like a fairly dark situation, I can open the lens on my Nikon DSLR up to f/1.4 (the depth of field is so shallow I can focus on your eyes while your nose stays blurry, so it's basically impossible to focus), crank the ISO up to 6400 (basically more grain than photo at that point), and still not get the shutter speed up to something that I can shoot handheld. I'd need a tripod and a very still subject to get a reasonably sharp photo. The hardware cannot do what I want in this situation. I can throw a speedlight on top, but besides making the camera closer to a foot tall than not and upping the weight to like 4 lbs, a flash isn't always appropriate or acceptable in every situation. And it's not exactly something I carry with me everywhere.
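
To put rough numbers on that, here's a back-of-the-envelope sketch using the standard exposure relation N²/t = 2^EV × ISO/100; the scene brightness (EV at ISO 100) values are illustrative guesses, not measurements.

    import math

    def shutter_time(ev100, f_number, iso):
        """Shutter time (seconds) that correctly exposes a scene of brightness
        ev100 (exposure value at ISO 100) at the given aperture and ISO,
        from N^2 / t = 2^EV * ISO / 100."""
        return f_number ** 2 / (2 ** ev100 * iso / 100)

    # Illustrative scene brightnesses: a dim reception hall might sit around
    # EV 2, a dark bar or stage around EV 0 (rough guesses, not measured).
    for ev100 in (2, 0):
        t = shutter_time(ev100, f_number=1.4, iso=6400)
        print(f"EV {ev100}: ~1/{1 / t:.0f} s at f/1.4, ISO 6400")
    # EV 2: ~1/131 s  (borderline for a moving subject)
    # EV 0: ~1/33 s   (motion blur is almost guaranteed handheld)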

These photos _cannot_ be saved because there just isn't the data there to save. You can't pull data back out of a stream of zeros. You can't un-motion-blur a photo using basic corrections.

Or I can pull out my iPhone and press a button and it does an extremely passable job of it.

The right tool for the right job. These tools are very much the "right" tool in a lot of difficult scenarios.

lukeschlather
4 replies
9h17m

In circumstances where it really matters, having a prettied-up image might be worse than having no image at all. If you rely on the image being correct to make some consequential decision, you could convict someone of a crime, or if you were trying to diagnose some issue with a machine you might cause damage, whereas if the camera gave an honest but uninterpretable picture you would be forced to try again.

TeMPOraL
2 replies
8h43m

Couple other common cases:

- Photographing serial numbers or readouts on hard-to-reach labels and displays, like e.g. your water meter.

- Photographing damage to walls, surfaces or goods, for purpose of warranty or insurance claim.

- DIY / citizen science / school science experiments of all kind.

- Workshops, auto-repairs, manufacturing, tradespeople - all heavily relying on COTS cameras for documenting, calibrating, sometimes even automation, because it's cheap, available, and it works. Well, it worked.

Imagine your camera fighting you on any of that, giving you bullshit numbers or actively removing the very details you're trying to capture. Or insurance rejecting your claim on the possibility of that happening.

Also let's not forget that plenty of science and even military ops are done using mass-market cameras, because ain't anyone have money to spend on Dedicated Professional Stuff.

wizzard0
0 replies
4h4m

Photocopiers replacing digits on scanned financial reports with digits that compress better are a decade old already :)

http://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_...?

quesera
0 replies
2h59m

Can people taking documentary photos disable the feature? Obviously casual users won't be aware of the option, if it exists at all.

I've often wished for an image format that is a container for [original, modified]. Maybe with multiple versions. I hate having to manage separate files to keep things together.

kaba0
0 replies
5h58m

That’s a pretty hand-wavy argument. You can just frame a picture however you want to give a very different image of a situation; cue that overused media-satire picture of a foot vs a knife being shown.

bawolff
5 replies
12h49m

Are you saying that you're just bad at photography?

That is a bit harsh. The vast majority of people are worse at photography than Apple's algorithm.

autoexec
4 replies
12h37m

The claim was that any camera without this feature is "entirely unusable" and the photos couldn't be saved even with "hours of manual edits". The fact is that for decades countless beautiful photos have been captured with cell phone cameras without this feature, and most of them needed no manual edits at all. Many of those perfectly fine pictures were taken by people who would not consider themselves photographers.

Anyone who, by their own admission, is incapable of taking a photo without this new technology must be an extraordinarily poor photographer by common standards. I honestly wasn't trying to shame them for that though (I'll edit that if I still can), I just wasn't sure what else they could mean. Maybe it was hyperbole?

scott_w
0 replies
5h58m

The claim was that any camera without this feature is "entirely unusable" and the photos couldn't be saved even with "hours of manual edits".

You missed the "in certain situations" part, which completely changes the meaning of your "quotes."

oktoberpaard
0 replies
9h7m

The two of you might have a different threshold for what you consider to be usable photos, and that’s fine. However, there is no way around physics. Under indoor lighting, a single exposure from a cellphone camera will either be a blurry mess, a noisy mess, or both. Cellphones used to get around that by adding flashes and strong noise suppression, and it was up to the photographer to make sure that the subject didn’t move too much. Modern smartphones let you take pretty decent photos without flash and without any special considerations, by combining many exposures automatically. I think it’s quite amazing. The hardware itself has also improved a lot, and you can also take a better single-exposure photo than ever, but it won’t be anywhere near the same quality straight out of the camera.

And, yes, I took a lot of pictures with my Sony Ericsson K750i almost two decades ago and I liked them enough to print them back then, but even the photos taken under perfect lighting conditions don’t stand a chance against the quality of the average indoor photo nowadays. The indoor photos were all taken with the xenon flash and were very noisy regardless.

kaba0
0 replies
5h53m

The fact is that for decades countless beautiful photos have been captured with cell phone cameras

Which phones are you thinking of? Because you definitely have a very rose-tinted view if you are thinking literally of cell phones rather than smartphones, and even in the latter case, only in the past couple of years has the quality been acceptable enough to pass even a rudimentary review as a proper photograph. Everything else was just noise.

adgjlsfhk1
0 replies
12h30m

Traditionally, phones took good photos in good light, but as the light decreases so does photo quality (quickly). The point of the AI photography isn't to get the best photograph when you control the lighting and have 2 minutes beforehand to pick options; it's to get the best photograph when you realize you need the best possible photo in the next 2 seconds.

Toutouxc
5 replies
12h9m

For decades people have had the ability to get great photos using cell phones

Not in these light conditions. Simple as that. What iPhones are doing nowadays gives you the ability to take some photos you couldn’t have in the past. Try shooting a few photos with an iPhone and the app Halide. It can give you a single RAW of a single exposure. Try it in some mildly inconvenient light conditions, like in a forest. Where any big boy camera wouldn’t bat an eye, what the tiny phone sensor sees is a noisy pixel soup that, if it came from my big boy camera, I’d consider unsalvageable.

autoexec
4 replies
11h35m

Not in these light conditions.

Again, decades of people photographing themselves in wedding dresses while in dress shops (which tend to be pretty well lit) would disagree with you. Also, the things that help most with lighting (like auto-exposure) aren't the problem here. That's not why her arms ended up in three different positions at once.

amluto
2 replies
11h19m

Apple's horrible tech featured in the article had nothing to do with the lighting.

Of course it did.

iPhones take an “exposure” (scare quotes quite intentional) of a certain length. A conventional camera taking an exposure literally integrates the light hitting each sensor pixel (or region of film) during the exposure. iPhones do not; instead (for long enough exposures), they take many pictures, aka a video, and apply fancy algorithms to squash that video back into a still image.

But all the data comes from the video, with length equal to the “exposure”. Apple is not doing Samsung-style “it looks like an arm/moon, so paint one in”. So this image had the subject moving her arms such that all the arm positions in the final image happened during the “exposure”.

Which means the “exposure” was moderately long, which means the light was dim. In bright light, iPhones take a short exposure just like any other camera, and the effect in question won’t happen.

(Okay, I’m extrapolating from observed behavior and from reading descriptions from Google of similar tech and from reading descriptions of astrophotography techniques. But I’m fairly confident that I’m right.)
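
A minimal sketch of that squash-the-video idea (not Apple's actual pipeline, which is unpublished; sub-pixel alignment, tile-wise merging and tone mapping are all omitted, and reject_thresh is an arbitrary placeholder):

    import numpy as np

    def merge_burst(frames, reject_thresh=0.1):
        """Toy multi-frame merge: average a burst of already-aligned frames,
        dropping frames that differ too much from the first (reference) frame.
        frames: list of float arrays with values in [0, 1]."""
        ref = frames[0]
        kept = [f for f in frames if np.mean(np.abs(f - ref)) < reject_thresh]
        # Averaging N similar frames reduces sensor noise roughly as sqrt(N),
        # which is what makes the short individual exposures usable.
        return np.mean(kept, axis=0)

The dimmer the scene, the longer the burst, and the more time a moving subject has to end up in different positions across the frames being merged, which is exactly the window the wedding-dress photo fell into.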

astrange
0 replies
10h49m

It could also have been taken with too much motion (either the phone or the subject), meaning some of the closer-in-time exposures would be rejected because they're blurry/have rolling shutter.

Kye
0 replies
9h16m

This is probably harder for people to believe if they've never seen a progress video of stacking long exposures.

I've seen the kinds of images people get out of stacking images for astrophotography. Individually, the images are mostly noise. Put enough together and you get stuff like this: https://web.archive.org/web/20230512210222/https://imgur.com... (https://old.reddit.com/r/astrophotography/comments/3gx29m/st...)

The phone is operating under way less harsh conditions since there's usually quite a bit of light even in most night scenes.

The iPhone is actually too good at this. You can't do light trails: it over-weights the first image and removes anything too divergent when stacking, so you get a well-lit frozen scene of vehicles on the road. I can get around it by shooting in burst mode and stacking in something like Affinity Photo, but that's work.
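
The manual stacking workaround usually amounts to a "lighten" blend rather than an average; a rough sketch, assuming the burst frames are already aligned:

    import numpy as np

    def stack(frames, mode="mean"):
        """Stack aligned frames. 'mean' averages noise away and tends to freeze
        the scene (roughly what the phone's pipeline prefers); 'lighten' keeps
        the per-pixel maximum, so moving highlights such as headlights
        accumulate into trails instead of being rejected."""
        arr = np.stack(frames)  # shape: (num_frames, height, width[, channels])
        return arr.max(axis=0) if mode == "lighten" else arr.mean(axis=0)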

astrange
0 replies
10h47m

You can tell this scene isn't well lit - just look at her reflected face. It's too dark.

Sure, some of it is bright, but that just means it's backlit.

FalconSensei
0 replies
12h12m

You can still install any other photo app on your phone and use that instead.

andrei_says_
3 replies
9h24m

Is an argument about corner cases where computational improvement does make sense being made in good faith when it's used as an excuse to take away control in all cases?

kaba0
1 replies
5h52m

How is that different from a noise removal algorithm? Is it also taking away control from you?

andrei_says_
0 replies
15m

The Apple algorithm visibly and significantly changes color, contrast, and likely curves, for every image, including in RAW format. Without my consent and with no option to opt out.

The Apple camera app is accessible quickly without unlocking the phone and I use that feature most of the time when I travel.

So I end up with thousands of crappyfied images. Infuriating.

jack_pp
0 replies
9h15m

AFAIK there are photo apps on the iPhone that let you have all the control you want, so you can use those when you want control and use the default when you just want a quick photo, knowing the pros/cons.

TeMPOraL
20 replies
9h18m

No photographer thinks the images they get on film are perfect reflections of reality. The lens itself introduces flaws/changes, as do film and developing.

Don't fall into this trap. A lens and computational photography are not alike. One is a static filter, doing simple(ish) transformation of incoming light. The other is arbitrary computation operating in semantic space, halfway between photography and generative AI. Those are qualitatively different.

Or, put another way: you can undo effects of a lens, or the way photo was developed classically, because each pixel is still correlated with reality, just modulo a simple, reversible transformation. It's something we intuitively understand, which is why we often don't notice. In contrast, computational photography decorrelates pixels from reality. It's not a mathematical transformation you can reverse - it's high-level interpretation, and most of the source data is discarded.

Is this a big deal? I'd say it is. Not just because it rubs some the wrong way (it definitely makes something no longer be a "photo" to me). But consider that all camera manufacturers, phone or otherwise, are jumping on this bandwagon, so in a few years it's going to be hard to find a camera without built-in image-correcting "AI" - and then consider just how much science and computer vision applications are done with COTS parts. A lot of papers will have to be retracted before academia realizes they can no longer trust regular cameras in anything. Someone will get hurt when a robot - or a car - hits them because "it didn't see them standing there", thanks to camera hardware conveniently bullshitting them out of the picture.

(Pro tip for modern conflict: best not use newest iPhones for zeroing in artillery strikes.)

Ultimately you're right, though: this is an issue of control. Computational photography isn't bad per se. It being enabled by default, without an off-switch, and operating destructively by default (instead of storing originals plus composite), is a problem. It wasn't that big of a deal with previous stuff like automatic color corrections, because it was correlated with reality and undoable in a pinch, if needed. Computational photography isn't undoable. If you don't have the inputs, you can't recover them.

fsloth
7 replies
5h35m

Yeah, computational photography is actually closer to Terry Pratchett’s Discworld version of a camera - a box with an imp who paints the picture.

Artistic interpretation of a scene is often very nice.

But we would really need to be able to distinguish cameras that give you the pixels from the CCD from the irreversible kind.

In the worst case scenario it’s back to photographic film if you want to be sure no-one is molesting the data :D

jorvi
3 replies
4h35m

Yeah, computational photography is actually closer to Terry Pratchett’s Discworld version of a camera - a box with an imp who paints the picture.

I mean… you’ve pretty much described our brain. Your blue isn’t my blue.

People need to stop clutching pearls. For the scenario it is used in, computational photography is nothing short of magic.

In the worst case scenario it’s back to photographic film if you want to be sure no-one is molesting the data :D

Apple already gives you an optional step back with ProLog. Perhaps in the future they’ll just send the “raw” sensor data, for those who really want it.

fsloth
0 replies
3h19m

”Your blue isn’t my blue.”

Is there actually sufficient understanding of qualia to state this concretely? Brain neurology is unknown territory for me.

TheFuzzball
0 replies
4h21m

I think you might mean Apple ProRaw [1]. ProLog might mean ProRes Log [2], which is Apple's implementation of the Log colour profile, which is a "flat looking" video profile that transforms colours to preserve shadow and highlight detail.

[1]: https://support.apple.com/en-gb/HT211965 [2]: https://support.apple.com/en-gb/guide/iphone/iphde02c478d/io...

TeMPOraL
0 replies
2h10m

Your blue isn’t my blue.

Of course it is, in practical sense ("qualia" don't matter in any way). If it isn't, it means you're suffering from one of a finite number of visual system disorders, which we've already identified and figured out ways to deal with (that may include e.g. preventing you from operating some machinery, or taking some jobs, in the interest of everyone's safety).

Yes, our brains do a lot of post-processing and "computational photography". But it's well understood - if not formally, then culturally. The brain mostly does heuristics, optimizing for speed at the cost of accuracy, but it still gives mostly accurate results, and we know where corner cases happen, and how to deal with them. We do - because otherwise, we wouldn't be able to communicate and cooperate with each other.

Most importantly, our brains are calibrated for inputs highly correlated with reality. Put the imp-in-a-box in between, and you're just screwing with our perception of reality.

actionfromafar
2 replies
5h15m

I kinda like that scenario!

underlipton
0 replies
3h5m

That's the scenario where evidence of police/military misconduct gets dismissed out-of-hand because the people with the ability to capture it only had/could afford/could carry their smartphone, and not a film camera. Thanks, I hate it.

TeMPOraL
0 replies
2h22m

I don't, because the combined population of people who will buy selfie cameras if marketed heavily, plus artistic photographers, is orders of magnitude larger than the population of people who proactively know an image recorder could come in handy. And, as tech is one of the prime examples, if the sector can identify and separate out a specific niche, it can... tell it to go fuck itself, and optimize it away so the mass-market product is more shiny and more profitable.

bambax
4 replies
8h21m

Yes. Also, it seems inevitable that at some point photos that you can't publish on Facebook won't be possible to make. Is a nipple present in the scene? Then too bad, you can't press the shutter.

draugadrotten
2 replies
7h52m

Is a nipple present in the scene? Then too bad, you can't press the shutter.

Oh yes you can, and the black helicopters will be dispatched, your social score obliterated, and your credit rating will become skulls and bones. EU Chat Control will morph into EU Camera Control. Think of the children!

Majestic121
1 replies
4h21m

Nipple restriction is an American thing, not so much an EU thing

bambax
0 replies
1h46m

Yes but American puritanism is being imposed onto the rest of the world. It's not like there's a nipple-friendly version of Facebook for all non-US countries.

TeMPOraL
0 replies
8h7m

Or you can, but the nipple will magically become covered by a leaf falling down, or a lens flare, or subject's hand, or any other kind of context-appropriate generative modification to the photo.

Auto-censoring camera, if you like.

(Also don't try to point such camera at your own kids, if you value your freedom and them having their parent home.)

jcynix
3 replies
7h52m

While I agree with the gist of your comment, I cannot leave this detail uncommented

Or, put another way: you can undo effects of a lens, or the way photo was developed classically, because each pixel is still correlated with reality,

You cannot undo each and every effect. Polarizing filters (filters, like lens coatings, are part of a classical lens in my opinion), graduated filters, etc. effectively disturb this correlation.

As does classic development, if you work creatively in the lab (as I did as a hobby a long time ago in analog times) where you decide which photographic paper to use, how to dodge or burn, etc.

But yes, I agree that computational photography offers a different kind of reality distortion.

TeMPOraL
2 replies
7h16m

You cannot undo each and every effect.

Fair enough.

Polarizing filters

Yeah, I see it. This one is as pure a case of signal removal as it comes in the analog world. And they can, indeed, drop significant information - not just reflections, but also e.g. by blacking out computer screens - but they don't introduce fake information either, and lost information could in principle be recovered -- because in reality, everything is correlated with everything else.

But yes, I agree that computational photography offers a different kind of reality distortion.

A polarizing filter or choice of photographic paper won't make e.g. shadows come out the wrong way. Conversely, if you get handed a photo with wrong shadows, you not only can be sure it was 'shopped, but could use those shadows and other details to infer what was removed from the original photo. If you tried the same trick with a computational photograph, your math would not converge. The information in the image is no longer self-consistent.

That's as close as I can come to describing the difference between the two kinds of reality distortion; there's probably some mathematical framework to classify it better.

lloeki
1 replies
6h26m

The most extreme case being Samsung outright painting the moon in when it thinks a piece of the picture is semantically likely to be the moon.

Whether the detail data is encoded as a PNG or as AI weights is immaterial; it is adding data that is not there, by a long shot.

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...

PawgerZ
0 replies
2h34m

That's weird. Whenever I tried to take a picture of the moon, it would look great in the camera view on the screen, but look terrible once I actually took the picture.

kaba0
1 replies
6h6m

because each pixel is still correlated with reality, just modulo a simple, reversible transformation

It’s absolutely not reversible. Information gets lost all the time with physical systems as well.

Also, probably something like the creation of the image of the black hole is closer to computational photography than “AI”, and it seems a bit like yours is a populist argument against it.

TeMPOraL
0 replies
2h5m

Also, probably something like the creation of the image of the black hole is closer to computational photography than “AI”, and it seems a bit like yours is a populist argument against it.

I do have reservations about that image, and don't consider it a photograph, because it took lots of crazy math to assemble it from a weak signal, and as anyone in software who ever wrote simulations should know, it's very hard to notice subtle mistakes when they give you results you expected.

However, this was a high-profile case with a lot of much smarter and more experienced people than me looking into it, so I expect they'd raise some flags if the math wasn't solid.

(What I consider precedent to highly opaque computational photography is MRI - the art and craft of producing highly-detailed brain images from magnetic field measurements and a fuck ton of obscure maths. This works, but no one calls MRI scans "photos".)

oivey
0 replies
22m

Or, put another way: you can undo effects of a lens

No, you most definitely cannot. The roots of computational photography are in things like deblurring, which in general do not have a nice solution in any practical case (like non-zero noise). Same deal with removing film grain in low light conditions.

graypegg
14 replies
13h12m

To play devil's advocate here: the best camera is the one you have with you, and computational photography does mean you can just push a button without thinking and get a clear picture that captures a memory.

There’s obviously an argument to be made that Apple shouldn’t get to choose how your memories are recorded, but I know I’ve captured a lot more moments I would’ve otherwise missed because of just how automatic phone cameras are.

There’s a place for both kinds of cameras IMO.

autoexec
8 replies
12h58m

computational photography does mean you can just push a button without thinking and get a clear picture that captures a memory.

I'd argue that the woman trying to take a photo of herself in her wedding dress did not get a clear picture that captures a memory. She got a very confusing picture that captured something which never happened. There are lots of great automatic camera features which are super helpful, but don't falsify events. If I take a picture of my kid I want my actual child in the photo, not a cobbled-together AI-generated monstrosity of what Apple thinks my kid ought to have looked like in that moment.

Automatic cameras are great. Cameras that outright lie to you are not.

dahart
4 replies
11h40m

don’t falsify events […] Cameras that outright lie […] AI-generated monstrosity of what Apple thinks

Oh the irony of framing things (pun intended) so hyperbolically. Somehow it never seems to dawn on people who like to throw around the word ‘lie’ that they’re doing exactly what they’re complaining about, except intentionally, which seems way worse. Nobody sat down and said bwahahah, let’s make the iPhone create fake photos; the intent obviously is to use automated methods to capture the highest quality image while trying to be relatively faithful to the scene, which might mean capturing moving subjects at very slightly different times in order to avoid photo-wrecking smudges. When you blatantly ignore the stated intent and project your own negative assumptions onto other people’s motivations, that becomes consciously falsifying the situation.

Photographs are not and never have been anything but an unrepresentative slice of a likeness of a moment in time, framed by the photographer to leave almost everything out, distorted by a lens, recolored during the process, and displayed in a completely different medium that adds more distortion and recoloring. There is no truth to a photograph in the first place, automatic or not; it’s an image, not reality. Photos have often implied the wrong thing, ever since the medium was invented. The greatest photos are especially prone to being unrealistic depictions. Having an auto stitch of a few people a few milliseconds apart is no different in its truthiness from a rolling shutter or a pano that takes time to sweep, no different from an auto shutter that waits for less camera shake, no different from a time-lapse, no different from any automatic feature, and no different from manual features either. Adjusting my f-stop and focus is really just as much distorting reality as auto-stitching is.

Anyway, she did get a clear memory that was quite faithful to within a second; it just has a slightly funny surprise.

ClumsyPilot
3 replies
8h5m

Photographs are not and never have been anything but an unrepresentative slice of a likeness of a moment in time,

This is a complete absurdity

Photos are used daily to find suspects of crimes

to convict people to prison

to establish the level of devastation in a war

To find launch site for rocket attacks

To make scientific measurements of distance, size, etc. in architecture and war

By scientists to determine position of things in the sky, meteors, etc.

You are like a guy who collects knives and swords and has no idea what they are actually for

kaba0
1 replies
5h37m

And some of those use cases require special cameras in the first place. Photography is basically just a measurement of light at different positions - there are endless priorities to weigh. You don’t need to stack multiple photos for a scientific measurement of distance, as you would have proper illumination in the first place.

A smartphone camera works in a vastly different environment, and has to adapt to any kind of illumination, scene etc.

timeon
0 replies
3h16m

And some of those use cases require special cameras in the first place.

Yes, same as with tools like ChatGPT: it is OK for some uses, but you cannot use it for something where you need to have trust in the output.

Problem is that people are not making this distinction, in cases like when they are posting pictures as evidence. This was also the case with manual post-processing. However, now it happens by default.

dahart
0 replies
2h21m

It seems like you completely misunderstood what I said and decided to take it out of context and throw in a jab on top, because none of that contradicts my point. What kind of guy does that make you? ;) You are failing to account for how many photos have been used to implicate someone of a crime they did not commit, how many photos have been used to exaggerate or misrepresent the effects of war (a couple of the most famous photos in all of history were staged war photos), and how many photos suggested measurements and scientific outcomes that turned out to be wrong. Scientists, unlike the general public, are generally aware of all the distortions in time, space, color, etc., and they still misinterpret the results all the time.

The context here is what the parent was talking about, about the meaning and truth in casual photography, and your comment has made incorrect assumptions and ignored that context. I wasn’t referring at all to the physical process, I was referring to the interpretation of a photo, because that’s what the parent comment was referring to. Interpretation is not contained in the photo, it’s a process of making assumptions about the image, just like the assumptions you made. Sometimes those assumptions are wrong, even when the photo is nothing more than captured photons.

mplewis
2 replies
10h16m

Without the iPhone’s computational camera, she wouldn’t have this photo at all because she wouldn’t have a camera in her pocket that could get a good picture in this situation.

mirsadm
0 replies
7h7m

A 10-year-old smartphone would have taken a perfectly good photo in that scenario. There's nothing challenging about it.

TeMPOraL
0 replies
7h9m

On the contrary, she would have a perfectly good phone with a perfectly good camera making perfectly good pictures that don't fake reality and turn her into a vampire.

WWLink
2 replies
13h4m

I used to keep a PowerShot Digital ELPH in my pocket whenever I left the house. TBH I took way more pictures with it. I could blindly turn it on and snap a picture while driving, without ever taking my eyes off the road. There's no way in the world I could do that with an iPhone lol. I mean, I suppose if I happened to hit the bottom right corner button and then used the volume button, maybe? Maybe. It's way more likely I'd cock it up lol.

vladvasiliu
0 replies
3h47m

You can always configure it to start the camera by swiping left on the lock screen. Then, the volume buttons are big enough to easily hit one of them.

DonHopkins
0 replies
4h0m

I could blindly turn it on and snap a picture while driving, without ever taking my eyes off the road.

Isn't driving while blind illegal?

timeon
1 replies
4h25m

Here the iPhone is using a collage instead of showing a particular photo. Then there was Samsung taking a sharp picture of the moon that was just completely generated.

the best camera is the one you have with you

I’ve captured a lot more moments I would’ve otherwise missed

Soon these moments will be 'captured' so perfectly with simulation that there won't be any reason to take them. Generate them when you want to recall them, not when the moment is happening.

JIT 'photography'

TeMPOraL
0 replies
1h42m

Generate them when you want to recall them, not when the moment is happening.

At this point, photography becomes a shittier version of what our brains do, so why bother? Or maybe let's do that, but then let's also do the kind of photography that accurately represents photons hitting the imaging plate.

kazinator
13 replies
11h57m

No lens is going to produce a straight arm out of one that was bent at the elbow, or vice versa, while not disturbing anything else. It is not "photography".

eproxus
9 replies
9h23m

No lens is going to reproduce a perfectly straight line without lens compensation either. Lenses are bending light to distill a moment in spacetime into a 2D representation, which by definition will always be imperfect. What counts as “photography”, really?

GP> In the end, this tech just takes control from you.

In the end, any tech is always going to take control away from you one way or another. That’s the whole point of using it, so you can achieve things you wouldn’t otherwise be able to.

kazinator
7 replies
8h51m

Though it may be that no lens will reproduce a perfectly straight line, the transformation which warps the line is a relatively simple and invertible mapping which preserves local as well as global features.

It might not be a conformal mapping but, say, if the image of a grid is projected through the lens, the projected image will have the same number of divisions and crossings, in the same relation to each other.

We cannot say this about an AI-powered transformation which changes a body posture.

What counts as “photography” really?

Focusing light via a lens into an exposure plate, from which an image is sampled.
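
As a concrete sketch of that invertibility, here is a one-term radial distortion model (real lens profiles use more terms, and k1 = -0.2 is just an example value): the warp moves points around, but it can be numerically undone.

    def distort(x, y, k1):
        """One-term radial distortion around the image center, in normalized
        coordinates: each undistorted point maps to exactly one distorted point."""
        scale = 1 + k1 * (x * x + y * y)
        return x * scale, y * scale

    def undistort(xd, yd, k1, iters=10):
        """Invert the warp by fixed-point iteration; for realistic small k1 this
        recovers the original point, so no information has to be invented."""
        x, y = xd, yd
        for _ in range(iters):
            scale = 1 + k1 * (x * x + y * y)
            x, y = xd / scale, yd / scale
        return x, y

    xd, yd = distort(0.5, 0.3, k1=-0.2)
    print(undistort(xd, yd, k1=-0.2))   # ~(0.5, 0.3): the grid survives intact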

foldr
6 replies
6h52m

There’s no AI transformation altering her body posture. It’s just that the phone is blending multiple exposures during which she was posing in different ways. The phone doesn’t realize that the images of the woman in the mirrors correspond to the woman in the center, so it hasn’t felt it necessary to use the same keyframe as the reference for the different parts of the photo. People are jumping to crazy explanations for this when there is a much simpler explanation that’s obvious to anyone who’s ever done a bit of manual HDR.

Ironically, the solution to this problem is more AI, not less. The phone needs to understand mirrors in order to make smarter HDR blending decisions.

HN has a strangely luddite tendency when it comes to photography. Lots of people are up in arms about phones somehow ruining photography by doing things that you could easily do in a traditional darkroom. (It's not difficult to expose multiple negatives onto the same sheet of photo paper.)

TeMPOraL
5 replies
6h28m

Ironically, the solution to this problem is more AI, not less. The phone needs to understand mirrors in order to make smarter HDR blending decisions.

No, the solution is for the phone to stop blending multiple distinct exposures into one photo, because that's the crux of the problem - that's the "AI transformation" right there, decorrelating the photo from reality.

HN has a strangely luddite tendency when it comes to photography.

It's not about luddite tendencies (leaving aside the fact that Luddites were not against technology per se - they were people fucked over by capitalist business owners, by means of being automated away from their careers). It's about recognizing that photos aren't just, or even predominantly, art. Photos are measurements, recorded observations of reality. Computational photography is screwing with that.

foldr
2 replies
6h12m

No general-purpose camera, analogue or digital, functions particularly well as a measurement tool. They are simply not designed for that purpose.

Blending multiple exposures is not an "AI transformation". The first iPhone to do HDR was the iPhone 4, introduced in 2010.

If you dislike HDR for some reason, no-one is forcing you to use it. There are lots of iPhone apps that will let you shoot a single exposure. That's never going to be the default in a consumer point-and-shoot camera, though, because it will give worse results for 99% of people 99% of the time.

autoexec
1 replies
15m

Cellphones have been capable of HDR for a very very long time and yet none of those older phones were capable of producing a picture like the one in the article. The problem here was not HDR. The problem was using AI to create a composite image by selectively erasing the subject of the photo from parts of the image, detecting that same person found in other photos, then cutting the subject out of those other images and pasting them into the deleted parts of the original picture before filling in the gaps to make it look like it was all one picture.

This was absolutely an "AI transformation" as the original article correctly pointed out:

“It’s made like an AI decision and it stitched those two photos together,”

foldr
0 replies
9m

Cellphones have been capable of HDR for a very very long time and yet none of those older phones were capable of producing a picture like the one in the article.

I don't think this is true? You can easily get an image like this just by choosing a different keyframe for different areas of the photo when combining the stack of exposures. No AI needed. Nor is there any 'selective erasing'.

You talk about 'detecting the same person found in other photos', which suggests that you're not aware that HDR involves blending a stack of exposures (where the woman may have been in various different poses) that are part of the same 'photo' from the user's point of view (i.e. one press of the shutter button). There is no reason at all to think that the phone is actually scanning through the existing photo library to find other images.

kaba0
1 replies
5h42m

No, the solution is for the phone to stop blending multiple distinct exposures into one photo, because that's the crux of the problem - that's the "AI transformation" right there, decorrelating the photo from reality.

Thanks, but I will take a proper picture over a blurry noisy mess.

And please learn a bit about the whole field of measurement theory before you comment bullshit. Taking multiple measurements is literally the most basic way to get better signal to noise ratio. A blurry photo is just as much an alteration of reality; there is no real-world correspondence to my blurred face. Apple just does it more intelligently, but in rare cases messes up. It’s measurement. AI-generated stuff like a fake moon is not done by Apple, and is a bad thing I also disapprove of. Smart statistics inside cameras are not at all like that.
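
(That multiple-measurements point is easy to check numerically; with independent noise, averaging N measurements shrinks the random error roughly as sqrt(N). A quick sketch, nothing camera-specific:)

    import numpy as np

    rng = np.random.default_rng(1)
    # 10,000 trials of averaging n noisy measurements (sigma = 0.1 each):
    for n in (1, 4, 16, 64):
        estimates = rng.normal(1.0, 0.1, size=(10_000, n)).mean(axis=1)
        print(n, round(estimates.std(), 4))
    # The spread shrinks roughly as 0.1 / sqrt(n): ~0.1, ~0.05, ~0.025, ~0.0125.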

TeMPOraL
0 replies
1h45m

Taking multiple measurements is literally the most basic way to get better signal to noise ratio.

Yes. That is not a problem.

Stacking those and blending them in correctly isn't a problem either - information is propagated in well-defined, and usually not surprising ways. If you're doing it, you know what you'll get.

Selectively merging photos by filling different areas from different frames, and using ML to paper over the stitches, is where you start losing your signal badly. And doing that to unsuspecting people is just screwing with their perception of reality.

Yes, AI generated stuff like fake Moon is even worse, but my understanding of the story in question is that the iPhone did a transformation that's about halfway between the HDR/astrophotography stuff and inpainting the Moon. And my issue isn't with the algorithms per se - hey, I'd love to have a "stitch something nice out of these few shots" button, or "share to img2img Stable Diffusion" button. My issue is when this level of AI transformation happens silently, in the background, in spaces and contexts where people still expect to be dealing with photographs, and won't notice until it's too late. And secondarily, with this becoming the default, as the default with wide-enough appeal tends to eventually become the only thing available.

octacat
0 replies
2h38m

IDK, a Sigma 50mm Art will give you pretty straight lines. At least straight enough for people. Why would you even mention it? Are you taking photos of line meshes or something?

Life is imperfect. Lens specs are only good for reviews / marketing because it is hard to otherwise compare lenses for their real value.

I am fine with the control I am getting from the big camera.

kaba0
0 replies
5h52m

That’s basic photo-stacking.

dsego
0 replies
6h48m

If you use a long exposure, you will capture a blurry blob instead. I don't know about you, but I don't see blurry hands with my eyes (mostly because our brain also does computational photography).

addaon
0 replies
11h7m

Depends on the refractive index of the lens [1].

[1] https://gebseng.com/media_archeology/reading_materials/Bob_S...

willseth
10 replies
13h16m

Try to photograph a toddler without this feature. Good luck. Does the fact that the iPhone allows me to do this pretty reliably without any fuss mean I have more or less control?

leephillips
7 replies
13h13m

Odd. I have hundreds of great shots of my toddlers, none taken with these “features”.

willseth
6 replies
12h58m

I know you’re pretending to be dense on purpose, but taking a dozen pictures at once and automatically picking the best one can obviously save a lot of shots. Same with combining exposures.

autoexec
5 replies
12h48m

The camera in this case wasn't taking many shots and selecting one. It wasn't just combining exposures either. It did a bunch of shitty cut/paste operations to produce an altered composite image which showed something that never happened.

Some automatic features are wonderful. People have been able to take pictures of fast moving toddlers and pets for ages because of them. The camera "features" that secretly alter images to create lies and don't give you a photograph of what you asked them to capture are a problem.

willseth
3 replies
12h33m

I didn’t misunderstand what happened, and I know how the iPhone camera works. I was only noting a couple common cases where computational photography is really nice. I’m pretty picky about my photos and nothing I’ve shot with my iPhone has ever approached “lies”. I know those types of anomalies can happen, but they’re clearly design flaws, i.e. Apple doesn’t actually want their camera to make fantasy images. I don’t want that either.

TeMPOraL
2 replies
8h52m

I’m pretty picky about my photos and nothing I’ve shot with my iPhone has ever approached “lies”.

That you know of.

There could be many lies in those photos, both subtle and blunt, which you didn't notice at the moment because you weren't primed to look for them, and which you won't notice now, because you no longer remember the details of the scenes/situations being photographed, days or months ago.

Are you sure I'm wrong? Are you sure that there are no hard lies in your photos? Would you be able to tell?

willseth
0 replies
26m

If we’re going to be this pedantic about it: iPhone’s camera will not add or remove things that weren’t captured in its temporal window. It won’t change the semantic meaning of an image. It won’t make a sad face person look happy, make daytime look like nighttime, or change dogs into cats. It can make mistakes like from the OP, but the point is that Apple also sees this as a mistake, so I would expect the error rate to improve over time. This is a different approach from other computational photography apps, like Google’s, where it looks like they are more willing to push the boundaries into fantasy images. Apple’s approach seems to be more grounded, like they simply want to get great pro DSLR level images automatically.

kaba0
0 replies
5h28m

A blurry limb is just as much a lie.

kaba0
0 replies
5h30m

It’s literally photo stacking, which is as old as digital photography itself. Look up any astrophotographer, or any photographer that did an HDR shot.

This is a clickbait non-article, full of people’s bullshit comments written with no knowledge of cameras.

orbital-decay
1 replies
13h6m

It's solved by constantly taking pictures into a ring buffer, either compensating for the touchscreen lag automatically or letting you select the best frame manually with a slider after the shot. Most cameras can do that (if you disable the best-frame auto-selection).
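
A minimal sketch of that ring-buffer idea (the frame objects, buffer size and 0.15 s lag value are placeholders for illustration, not any particular camera's values):

    from collections import deque

    class FrameRingBuffer:
        """Continuously record recent frames so that, when the shutter is
        pressed, frames from slightly *before* the press are still available."""

        def __init__(self, size=30):          # e.g. about one second at 30 fps
            self.frames = deque(maxlen=size)  # oldest frames fall off automatically

        def on_new_frame(self, frame, timestamp):
            self.frames.append((timestamp, frame))

        def on_shutter(self, press_time, lag=0.15):
            # Candidates from just before the press (covering touchscreen/reaction
            # lag) up to the press itself; best-frame selection, automatic or
            # manual via a slider, happens on this list afterwards.
            return [f for t, f in self.frames if press_time - lag <= t <= press_time]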

kaba0
0 replies
5h32m

The two features are not mutually exclusive. For low-light situations you will need this feature either way; a bunch of noisy pictures doesn't help you.

bawolff
8 replies
12h47m

Those kinds of choices are part of what make photography an art.

Ah yes, for all those people using their iPhone to make true art instead of taking a selfie.

The market for cell phone cameras is generally casual users. High-end artists generally use fancy cameras. Different trade-offs for different use cases.

vlovich123
7 replies
12h21m

I think you’d be surprised how much phones have encroached on the fancy camera space. Also, the limiting factor in phone camera quality is more the lens than anything else. The SW backing it is much more powerful than what you get in standalone cameras, although those manufacturers are trying to keep up (that’s why phone cameras can be so competitive with more expensive ones). I expect within the next 10 years we’ll see meaningful improvements to the lenses used in phones, to the point where dedicated cameras will have shrunk even more (i.e. decoupling the size of the lens / sensor from the surface area on the back of the phone).

FalconSensei
6 replies
12h13m

The SW backing it is much more powerful than what you get in standalone cameras

That is largely due to camera makers not giving a sh*t, though. Maybe now they do, but it's overdue. And I'm not even talking about the phone apps for connection, or image processing, but more from a user interface and usability PoV, and also about interesting modes for the user.

Fuji and Ricoh are the only ones that I see trying to make things easier or more fun or interesting for non-professionals. Fuji has the whole user customization that people use for recipes and film simulation (on top of the film simulations they already have), and Ricoh is the only one (I know of) that has snap focus, distance priority, and custom user modes that are easy to switch to and to use. But even Fuji and Ricoh could still improve a lot, since there's always one detail or another where I'm like... why did they make it like that? or... why didn't they add this thing?

wkat4242
5 replies
12h6m

I don't think it's only that. The processing in a phone is expensive. If you built a high-end camera with a Snapdragon 8 Gen 3, it would raise the price a lot. In a phone that's not an issue, because you need the chip there anyway for other tasks. So there's just much more compute power available, not to mention a connection to the cloud with even more potential compute.

astrange
3 replies
10h46m

Also, many more people buy phones than standalone cameras, so they can afford a lot more software R&D.

wkat4242
2 replies
10h4m

Well yes but a lot of that can be reused across devices I would guess.

vlovich123
1 replies
9h20m

Naw, astrange is correct. At the peak in 2010, 121 million standalone cameras were shipped worldwide. By 2021, that number was down to ~8 million. By comparison, 83 million smartphones are shipped each quarter (~1.4 billion for the year). Those kinds of economies of scale mean there's more R&D revenue to sustain a larger amount of HW & SW innovation. Even though smartphone shipments should come down over the next few years, as the incremental jump each year is minimal & the market is getting saturated, there's always going to be more smartphone shipments.

Individual camera vendors just can't compete as much, and I don't think there was anything they could possibly have done to compete, because of the ergonomics of a smartphone camera. The poor UX and lack of computational photography techniques don't matter as much because the pro market segment is less about the camera itself and more about the lenses / post-processing. Even professional shoots that use smartphones (as Apple likes to do for their advertising) ultimately tend to capture RAW + desktop/laptop post-processing when they can because of the flexibility / ergonomics / feature set of that workflow. The camera vendors do still have the advantage of physical dimensions in that they have a bigger sensor and lenses for DSLR (traditional point & shoot I think is basically dead now), but I expect smartphones to chip away at that advantage through new ways of constructing lenses / sensors. Those techniques could be applied to DSLRs potentially for even higher quality, but at the end of the day the market segment will just keep shrinking as smartphones absorb more use-cases (interchangeable lenses will be the hardest to overcome).

Honestly, I'm surprised those device manufacturers haven't shifted their DSLR stack to just be a dumb CMOS sensor, lens, a basic processor for I/O, and a thunderbolt controller that you slot the phone into. Probably heat is one factor, the amount of batteries you'd need would go up, the BOM cost for that package could be largely the same, & maybe the external I/O isn't quite yet fast/open enough for something like that.

wkat4242
0 replies
8h54m

Good points.

Honestly, I'm surprised those device manufacturers haven't shifted their DSLR stack to just be a dumb CMOS sensor, lens, a basic processor for I/O, and a thunderbolt controller that you slot the phone into. Probably heat is one factor, the amount of batteries you'd need would go up, the BOM cost for that package could be largely the same, & maybe the external I/O isn't quite yet fast/open enough for something like that.

This is not really an option. There were actually clip-on cameras back in the day when cameras were new on phones. But there were several problems:

- Software support tended to lag with updates

- Phones change form factor very regularly, and phones in general are replaced much more often than camera hardware, leaving you with a highly expensive paperweight

- I don't think any phones have thunderbolt yet, just some tablets

- Dealing with issues is a nightmare because you don't control the hardware end to end

eproxus
0 replies
9h14m

Would be cool if there were ever a product like a large external sensor and lens hardware dongle for phones (essentially all the parts of a system camera without the computer) that would use the phone as a software processing platform. It would “just” ship the raw data over to the phone for post-processing.

ALittleLight
8 replies
13h38m

I think you have that backwards. The vast majority of people will prefer Apple's computational enhancements. A small number of photography enthusiasts will prefer manual control (and a smaller number will benefit from it).

sharkweek
5 replies
13h32m

Yeah, I 100% agree - when I pull my phone out to take a picture of the kids/my dog/something randomly interesting, all I want is to point my camera at whatever I want to capture and have it turn out.

Don’t care how it happens; I don’t want to think about settings.

thfuran
4 replies
5h55m

And you simply don't care if two different people are shown in the photo in poses that didn't happen at the same time? That doesn't seem to me like capturing whatever it was you wanted to capture.

kaba0
2 replies
5h33m

It won’t happen in direct sunlight, and yeah, I will definitely take it over not having that picture in a presentable state.

How is that different than creating a panorama photo?

thfuran
1 replies
5h27m

Panoramas are mostly taken of static scenes, or at least with the knowledge that it's going to mess things up if people move around. Regular photographs are generally both taken and viewed with the assumption that all the things in them happened at the same time (modulo shutter speed).

kaba0
0 replies
5h13m

Modulo that, in low-light, moving scenes you now actually get a decent photo. It doesn’t happen in good lighting, and this whole article is bullshit.

DonHopkins
0 replies
3h54m

How does that even affect your life or happiness? Do you have to sign an affidavit that the photo exactly depicts what happened, or you go to jail? Will your friends never speak to you again if they suspect the phone you bought off the shelf at the Apple Store is using computational photography? Do your religious beliefs prohibit photographing poses that didn't happen, or you go to hell? Or are you just being ultra pedantic to make a meaningless point?

I want a phone with computational photography that automatically renders Mormon Bubble Porn!

https://knowyourmeme.com/memes/mormon-porn-bubble-porn

katbyte
1 replies
13h23m

And for those who don't, there are a bunch of photo apps that allow far more control & iPhones can shoot RAW now.

orbital-decay
0 replies
13h17m

RAW in modern phones and apps is often stacked/processed as well. However, it always tries to stay as photometrically correct as possible (at least I'm not aware of any exceptions). All "dubious" types of processing happen after that.

colordrops
4 replies
12h39m

If this feature or similar features could be disabled, it wouldn't need to be an either/or situation. I don't have an iPhone though, so no idea if it's configurable. But it seems that it should be, assuming it's not implemented deep in hardware.

autoexec
2 replies
12h35m

I agree. I'd have no issue with this if it were something people could disable and it was clearly disclosed and explained as the default.

TeMPOraL
1 replies
8h36m

It doesn't even need to have an off-switch, if it would preserve the inputs that went into the AI stitching magic, so that one could recover reality if/when they needed it.

dsego
0 replies
6h37m

I think you can with Live Photos; basically you get a short video included.

kaba0
0 replies
5h26m

But why? Because out of the literally trillions of images shot with an iPhone, a tiny, tiny percentage comes out wrong, similarly to how panorama images will look wrong if you move? It unequivocally improves the quality of images in low light by a huge margin.

This is a non-issue.

nextaccountic
2 replies
9h52m

In the end, this tech just takes control from you. If you're fine with Apple deciding what the subject of pictures should be and how they should look, that's fine, but I expect a lot of people won't be.

Can't this be disabled?

And can't some other product with a similar technology offer knobs for configuring exactly how it behaves?

kaba0
1 replies
5h36m

It can make a RAW file as well, so just the usual clickbait article/comments.

TeMPOraL
0 replies
1h41m

It can, but does it? Defaults matter. I wouldn't want to discover that my camera was screwing with me only after I sent the photos of an accident to my insurer, a day after it happened.

bhpm
2 replies
2h51m

The tech doesn’t “just” take control from me. It also exponentially increases the chances that the photo I wanted to take will be the photo that is saved on my device and shared with my friends.

It’s just a picture of my dog. It’s not that serious.

TeMPOraL
1 replies
1h57m

It’s just a picture of my dog. It’s not that serious.

Until you want to send it to your vet, but you can't, because your phone keeps beautifying out the exact medical problem you're trying to image.

bhpm
0 replies
1h12m

When will that happen? I’ve sent photos to my dog’s vet (I go to a doctor for humans myself) but this hasn’t been a problem. If you could let me know when this will happen, I’ll make sure I don’t rely on it.

esskay
1 replies
8h2m

If you're fine with Apple deciding what the subject of pictures should be and how they should look, that's fine, but I expect a lot of people won't be.

The majority don't care, as the majority are not photographers, nor is it meant to be a product for photographers. The average Joe just wants a good photo of that moment, which it does exceptionally well.

The tech 'taking control from you' is its exact purpose, as again, not everyone is a photographer. The whole point is to allow 'normal people' to get a good photo at the press of a button, it'd be incredibly foolish and unreasonable to expect them to faff about with a bunch of camera settings so it does it for you.

alt227
0 replies
5h40m

The bride in the original article doesn't seem to be a photographer, yet she seems to care quite a bit that the photo is not an accurate representation of reality at that moment.

dumbfounder
1 replies
12h58m

But regular people expect it. And then they take a picture of the moon and they are disappointed.

astrange
0 replies
10h42m

It's easy to take a picture of the moon. It's always lit the same because it's in space.

ISO 100, f/11, manual focus at infinity, shutter speed 1/100.

Phones aren't good at this, partly because autofocus isn't designed for moons, partly because you want a really long lens and they don't have one.
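
For reference, those settings are the "Looney 11" variant of sunny-16 exposure arithmetic (the moon is a sunlit object). A minimal sketch of that arithmetic, using the standard exposure-value definitions (the helper functions here are only illustrative):

    import math

    def light_value(aperture, shutter_s, iso):
        # ISO-100-referenced light value: LV = log2(N^2 / t) - log2(ISO / 100)
        return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

    def equivalent_shutter(lv, aperture, iso):
        # Solve log2(N^2 / t) = LV + log2(ISO / 100) for t
        return aperture ** 2 / 2 ** (lv + math.log2(iso / 100))

    # The quoted settings: ISO 100, f/11, 1/100 s
    lv = light_value(11, 1 / 100, 100)          # ~13.6
    # The same exposure on a faster lens, e.g. f/2.8 at ISO 400
    t = equivalent_shutter(lv, 2.8, 400)        # ~1/6200 s
    print(f"LV ~ {lv:.1f}, equivalent shutter ~ 1/{1 / t:.0f} s")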

throw0101b
0 replies
5h4m

No photographer thinks images the they get on film are perfect reflections of reality.

More importantly than that: framing, exposure, aperture choices by anyone not simply shooting with their camera in the Auto mode.

* https://petapixel.com/digital-camera-modes-a-complete-guide/...

* https://en.wikipedia.org/wiki/Mode_dial

* https://photographylife.com/understanding-digital-camera-mod...

orbital-decay
0 replies
13h24m

>In the end, this tech just takes control from you.

1. "This tech" is too broad of a qualifier, as GP is talking about computational photography in general, which is many things at once. Most of those things work great; some others are unpredictable. There are plenty of custom camera apps besides Apple's default camera which will always try to stay as predictable as possible.

2. There is such a thing as too much control. Proper automation is good, especially if you aren't doing a carefully set up session. The computational autofocus in high-end cameras is amazing nowadays. You can nail it every time now without thinking, with rare exceptions.

lo_zamoyski
0 replies
12h11m

In the end, this tech just takes control from you.

Having control isn't always the best option. While photographers may appreciate having a camera over which they can have a high degree of control, they're not snobs about it. Photographers will tell you that having a handy point and shoot with good automation or correction of some kind is extremely useful when you're out and about and need to take a photo quickly. For everyday photos, it's what you'll want usually. If not, there are plenty of cameras on the market to choose from.

kaba0
0 replies
5h47m

Getting a roughly correct image instead of a blurry mess doesn’t take control away from you; this is just bullshitting about a non-issue.

easyThrowaway
0 replies
8h20m

The first thing they teach you at any photography course worth its money is that framing the picture itself is a distortion of reality. It's you deciding what's worth being recorded and what should be discarded.

There's no "objective photography", no matter how hard we try distinguishing between old and new tech.

afavour
15 replies
14h9m

I have no objection with what we have today either but it does feel slippery slope-y, especially in the AI era. Before we know it our cameras won’t just be picking the best frame for each person, it’ll be picking the best eye, tweaking their symmetry, making the skin tone as aesthetically pleasing as possible, etc etc etc. It was always this way to an extent but now that we’ve given up the pretence of photos being “real” it feels inevitable to me.

I’m reminded of that scene in WALL-E where it shows the captain portraits as people get fatter and fatter. It’s clearly inaccurate: over time the photos should show ever more attractive, chiseled captains. They’d still be obese in real life though.

easygenes
6 replies
13h38m

Phones over-beautifying faces by default already happened with the iPhone XS, and it wasn't received well. See #beautygate: https://www.imore.com/beautygate

astrange
5 replies
10h39m

"Over-beautifying" never happened. Noise reduction/smoothing happens in camera processing even if you don't try to do it, because it's trying to preserve signal and that's not signal. If you want that noise, you have to actually put in processing that adds it back.

saiya-jin
2 replies
7h51m

Sure it did: no wrinkles, no moles unless huge, skin tone like after 2 weeks of vacation in the Caribbean. What previously had to be done in Photoshop to make people look younger is now done automatically, for every photo, and you can't turn it off.

It's been called the 'instagram look' for quite some time. Apple is the worst among all phone manufacturers (in the sense of being furthest from actual ugly reality, though a lot of people got used to it and actually prefer it now), but all of them do it.

kaba0
0 replies
5h20m

It’s called noise reduction, and has been done by the most primitive cell phones since forever.

astrange
0 replies
6h12m

I promise there is nothing in the camera trying to make you look even a little better than normal. Especially not removing moles; people use their phones to send pictures to their doctors, so that would kill your customers with melanoma.

If you want pores you need a higher resolution sensor, but even most of those have physical low pass (blur) filters because most people don't like moiré artifacts. Could try an A7R or Fuji X-series.

Dah00n
1 replies
6h44m

Yes, it actually did. And as it says in the link:

".... if they can detect and preserve fabric, rope, cloud, and other textures, why not skin texture as well?"

The phone changed the image on skin and not on random other things that weren't a face. That is a filter, not normal image processing, when it happens only on X but not on Y. The only way to not get this filter on the phone is using RAW.

astrange
0 replies
6h6m

And as it says in the link

That is the opinion of an uninformed tech writer, and even besides that, he's not claiming it did happen but just that it could possibly happen.

In this case you do have someone who knows how it works, that someone being me.

The phone changed the image on skin and not on random other things that wasn't a face.

There are some reasons you'd want to process faces differently, namely that viewers look at them the most, and that if you get the color wrong people either look sickly or get annoyed that you've whitewashed them. Also, when tone mapping you likely want to treat the foreground and background differently, and people are usually foreground elements.

_kb
2 replies
13h59m

That already happens, in realtime. FaceTime uses eye gaze correction, Cisco is all in on AI codecs for image and audio compression, and other vendors are on similar tracks too.

When you talk to a client, a colleague or a loved one we’re on the verge of you conversing with a model that mostly represents their image and voice (you hope). The affordances of that abstraction layer will only continue to deepen from here too.

karlshea
0 replies
9h6m

This is nothing new. Very old examples are the Xerox machines that accidentally changed numbers on scanned documents, and speech coding for low-bit-rate digital audio over radio, phone, and then VoIP.

bigallen
0 replies
12h58m

When you talk to a client, a colleague or a loved one we’re on the verge of you conversing with a model that mostly represents their image and voice (you hope)

Wow, this threw me for a loop. There’s not much difference between talking to a heavily filtered image of a person and texting with them. Both the image and the words are merely representations of the person. Even eye-to-skin perceiving someone is a representation of their “person” to a degree. The important part is that “how” and “how much” the representation differs from reality is known to the observer

WWLink
1 replies
13h3m

Samsung phones (and I guess iphones too) already do this lmao.

https://www.insider.com/samsung-phones-default-beauty-mode-c...

My favorite one is the phones that put a fake picture of a moon in pictures where the moon is detected!

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...

astrange
0 replies
10h37m

My favorite one is the phones that put a fake picture of a moon in pictures where the moon is detected!

This is just about as wrong as saying Stable Diffusion contains a stock photo of the moon it spits out when you prompt it with "the moon".

They work the same way. Samsung's camera has a moon mode, so it gets the prior (a much, much lower quality camera raw than you think it's getting), it processes it with a bias (this noise is a Gaussian distribution centered on the moon), and you get a result (an image that looks like the moon).

thaumasiotes
0 replies
12h5m

I have no objection with what we have today either but it does feel slippery slope-y, especially in the AI era. Before we know it our cameras won’t just be picking the best frame for each person, it’ll be picking the best eye, tweaking their symmetry, making the skin tone as aesthetically pleasing as possible, etc etc etc.

What we have today already includes phone cameras that will add teeth to the image of your smiling newborn, or replace your image of what looks like the moon with stock photography of the moon.

I’m reminded of that scene in WALL-E where it shows the captain portraits as people get fatter and fatter. It’s clearly inaccurate: over time the photos should show ever more attractive, chiseled captains.

Interestingly, the opposite thing happened with official depictions of Roman emperors.

simias
0 replies
13h59m

While I don't use TikTok I often see videos from there and it's really spooky to me how aggressive and omnipresent filtering seems to have become in that community. Even mundane non-fashion non-influencer "vlog" style content is often heavily filtered and, even more scary IMO, I often don't notice immediately and only catch it if there's a small glitch in the algorithm for instance when the person moves something in front of their face. And sometimes you can tell that the difference from their real appearance is very significant.

I really wonder what's that doing to insecure teenagers with body issues.

kaba0
0 replies
5h23m

It’s not slippery slope-y, it is slippery slope.

This is not at all like DALL-e and midjourney, this is literally putting together multiple images taken shortly after another, but instead of dumbly putting it on another as it would manually happen in photoshop with transparent layers, it takes the first photo as a key, and merges info from the other photos into the first where it makes sense (e.g. the background didn’t move, so even this blurry frame is useful for improving that).

This is just dishonest to mix AI into that.

makeitdouble
2 replies
7h46m

How many "purists" are there in the wild ? I'd only see police and insurance agents needing pixel perfect depiction of reality as it's out of the sensor.

Photography as an art was never about purity, and I think most of us want photos that reflect what we see and how we see it, and will take the technical steps to get that rendered. If the moon is beautiful and gently lights the landscape, I want a photo with both the bright moon and shadowy background, and will probably need a lot of computation for that.

But the doppelganger brides, or the over-HDR'ed photos, or the landscapes with birds and pillars removed aren't what someone is seeing. They can be nice pictures, but IMO we're entering a different art than photography.

octacat
0 replies
2h29m

oh, there are a lot of people who like control though. Journalists and competition photographers would probably also count as purists.

netsharc
0 replies
5h7m

I think the police/insurance topic will be a big deal soon. I still remember the first building inspectors walking around with digital cameras, and of course when smartphones with "good enough" cameras came they started using these...

Now if society (including the courts) learn that the photos might not reflect reality, photo "evidence" could face issues being accepted by courts...

kazinator
2 replies
11h58m

Purists will always hate the idea of computational photography

A non-purist will hate it too, as soon as the technology re-imagines the positions of his hands such that they are around someone's throat.

TeMPOraL
0 replies
1h36m

Or moves the hill a little to the left, and the forest a little to the front, and the tanks a little to the side, and next thing you know, artillery is raining down on a nearby village.

ClumsyPilot
0 replies
7h49m

exactly, this tech will send innocent people to prison

tonmoy
1 replies
11h46m

I just want to be able to turn it off when I want to

HumblyTossed
0 replies
4h22m

Agreed. But since the 13 (12 was the last), there's no way to remove even the HDR processing.

cesaref
1 replies
7h39m

It's best to just think of it as a different art form.

B&W film photography + darkroom printing is an art form, as is digital photography + photoshop. These modern AI assisted digital photography methods are another art form, one with less control left to the photographer, but there's nothing inherently wrong with that. I wouldn't want to say which is better, it's not really an axis that you can use to compare art is it?

At the end of the day, do you generate an image which communicates something that the photographer had in mind at the time? If so, success!

TeMPOraL
0 replies
6h38m

The mistake is in thinking photography is only, or even primarily, art. It's also measurement. A recorded observation of reality.

People use their cameras - especially phone cameras - for both these purposes, and often which one is needed is determined after the fact. So e.g. I might appreciate the camera touching up my selfie for the Instagram post today, but if tomorrow I discover a rash on my face and I want to figure out how long it was developing, discovering that all my recent selfies have been automatically beautified would really annoy me.

Or, you know, just try being a normal person and make a photo of your kid's rash to mail/MMS to a doctor, because it's a middle of a fucking pandemic, your pediatrician is only available over the phone, and now the camera plain refuses to make a clear picture of the skin condition, because it knows better.

I'm also reminiscing about that Xerox fiasco, with copy machines that altered numbers on copied documents due some over-eager post-processing. I guess we'll need to repeat that with photos of utility meters, device serial numbers, and even scans of documents (which everyone does with their phone camera today) having their numbers computationally altered to "look better".

EDIT:

Between this and e.g. Apple's over-eager, globally-enabled autocorrect at some point auto-incorrecting drug names in medical documents written by doctors, people are going to get killed by this bullshit, before we get clear indication and control over those "magic" features.

_glass
1 replies
6h19m

I think the future will have digital mirrors with filters, so we don't have to deal with reality and our own imperfections anymore. The raw image/reflection of oneself will be a societal taboo.

rereasonable
0 replies
5h43m

Sounds a bit like the "transmittable tableau" from David Foster Wallace's infinite jest:

https://www.litcharts.com/lit/infinite-jest/chapter-27

unethical_ban
0 replies
9h25m

That last example is where I draw the line. It's one thing to enhance an image, or to alter the contrast across multiple frames to capture a more vibrant photo and a challenging lighting. But for our photography apps to be instantly altering the actual reality of what is occurring in a photo, such as whether someone is smiling or has their hands in a certain pose, or whether they're in a crowd or all alone next to a landmark, is not a feature that I think should be lauded.

underlipton
0 replies
3h10m

Is it 100% pixel perfect to what was happening at the time? No, but I also don’t care.

I do. Photos can be material to court cases where people's money, time, even freedom are at stake. They can sway public opinion, with far-reaching consequences. They can change our memories. At the very least, there should be a common understanding of exactly how camera software can silently manipulate photos that they present as accurate representations of reality.

stevage
0 replies
13h2m

I think purists are fine with it as long as you can turn it off.

saiya-jin
0 replies
7h55m

You are overblowing things. 99% of the photographers out there, and 99% of the professionals among those, don't share this sentiment, since it's a primitive emotional one and detrimental to any actual work. Maybe a few loud people desperate for attention give you a different impression of the state of affairs, but these days internet discussions on any topic can easily twist perception of reality and give a very wrong impression.

You simply have to be practical and use the best tool for the job. If you ever actually listen to photo artists talking among their peers about their art (e.g. Saudek), they practically never talk about technical details of cameras or lenses; those are just tools. If they go for analog photography it's because they want to achieve something that's easier for them that way, maybe due to previous decades of experience, not because of some elitist Luddite mindset. Lighting of the scene, composition, following the rules and then breaking them cleverly, capturing/creating the mood etc. are what interest them.

rmaccloy
0 replies
13h2m

It's honestly better than this on all fronts, since you can get ProRAW out of recent iPhones even in the default camera app and get RAW without DeepFusion out of different alternative camera apps.

I think I had to spend ~$1k to get my first DSLR with RAW support back in the 2000s. Adjusted for inflation, Halide + a recent iPhone feels like a pretty good deal.

philistine
0 replies
39m

Apple's obsession with hiding the magic trick is hurting them badly here. Just like Live pictures show you a video from before and after you pressed the shutter, every single picture should include an unprocessed, probably way too dark picture without any computational photography. That way regular users could truly grasp what their phone is doing.

pasabagi
0 replies
7h30m

I figure that digital photography is by its very nature 'computational', both in the obvious sense, and in the sense that the camera from hardware up imposes a set of signal-forming decisions on what is essentially just structured noise.

The problem is more one of what controls the camera exposes to the user. If you can just take one kind of picture: whatever picture the engineers decided was 'good', then it limits your expressive options.

hibikir
0 replies
13h1m

My issue with this has nothing to do with purism, but with how often the results are just no good, for reasons that have nothing to do with the sensor, but the choices of whatever model they run. Does it take a picture at night? Yes, but it's often unrecognizable compared to what my own sensor, my eyes, sees. It's not a matter of a slightly better reality, but the camera making choices about how light should go that have nothing to do with the composition in front of it.

You might remember an article about how there are many situations where the iPhone just takes bad portraits, because its idea of what good lighting is breaks down. Five-year-old phones often take pictures I like more than ones from the latest phones, and not because the hardware was better, but because the tuning is just bad.

Fun things also happen when you take pictures of things that are not often in the model: Crawlspaces, pipes, or, say, dentistry closeups. I've had results that were downright useless outside of raw mode, because computational photography step really had no idea of what it was doing. It's not that the sensors are limited, but that the things that the iphone does sometimes make the picture far worse than in the past, when it took fewer liberties.

dkarras
0 replies
13h23m

when you think about it, with rolling shutter, no two rows (or columns?) of pixels are from the same moment in a given picture unless you are shooting with a global-shutter camera - which is rare for consumer-type devices.

crotho
0 replies
13h30m

There's a big difference between looking at reality through a bad filter and looking at a completely different constructed reality.

alentred
0 replies
9h3m

As a layman in photography, I agree with you.

But it is easy to understand the artists. It is said that in art everyone needs to master the technique first, the tool of the trade, but true works of art are the expressions that are created with these various techniques. At this point, a tool - computational photography in this case - may get in the way. So, it is not about purism. Quite the contrary, it is about being able to use the tools and bend the reality the way an artist wants.

Having said that, I would think anyone would normally use *all* the tools available at their disposal, and the truth is that iPhone camera among else is a great one anyway.

adamredwoods
0 replies
12h32m

Why not both?

I am more in the purist camp, because when people take an iOS photo, I remind them that someone else made the decision about how the photo should look. Additionally, we are in an era of not trusting anything on the internet or in a photo anymore. Do we want photojournalism to go down that same path? I don't. So I enjoy being closer to "reality" than the computational photos, but for average entertainment photos, I don't mind.

leephillips
36 replies
14h32m

The article says that “The final composite image should be the best, most realistic interpretation of that moment.” But that doesn’t make any sense. If there were three real people, rather than one person and two reflections, the algorithm would have created an image of a moment that never existed, stitching together different images taken at different times. The only difference is that we might not notice and mistake the fake image for what we expect from an old-fashioned photograph. I find what Apple is doing repulsive.

tmalsburg2
19 replies
14h30m

Imagine such an image being used as evidence in a court case. E.g. showing someone pointing a gun at another person when that actually never happened.

_kb
7 replies
13h54m

That threat model has existed since the birth of photoshop et al.

autoexec
3 replies
13h50m

There's a huge difference between someone intentionally altering an image according to their wishes and someone not even being aware of changes that have been made.

Before, forensic experts could decide if an image had been altered in photoshop, but I guess the only sane conclusion now is that anything taken with an iphone is fake and untrustworthy.

KennyBlanken
2 replies
12h23m

As opposed to Samsungs which will take any vaguely circular bright white object and turn it into the moon?

https://www.reddit.com/r/Android/comments/11nzrb0/samsung_sp...

autoexec
0 replies
9h54m

Yeah, that's trash too. At this point I think it's fair to say you just can't trust a picture taken from a cell phone.

Dah00n
0 replies
6h28m

Yes, except it is called "Moon Mode" isn't it? The iPhone default mode isn't called "unReality Mode", so they aren't saying they are faking it. I don't know if Samsung state it is faked, but Apple definitely does not.

flashback2199
2 replies
13h41m

Scenario: It's a decade from today and phones are not just stitching together multiple photos but also using generative AI. Apple and all of the other phone makers insist the implementation is safe. The suspect appears in the photo doing something they never actually did. In the chaos of the crime, the phone was left at the crime scene where it was found by law enforcement, no chance the photo could have been photoshopped. The photo is shown as evidence in court. Without the photo, there is not enough evidence to convict. The entire case now hinges on the content of the photo. The suspect insists they never did what is shown in the photo, but it doesn't matter. The suspect is convicted, put on death row, and executed by the state. Thankfully, there is a silver lining: everyone's photos are a couple percent prettier, which helped bring more value to shareholders.

_kb
1 replies
8h36m

There was a thread here a little while back [0] on cryptographic proofs for photojournalism. Ultimately, that style of attestation seems the end game.

Journalists, security cameras, emergency service / military body cams and other safety systems provide a trust mechanism that enables provable authenticity from the point of capture (and some form of web of trust to weight or revoke that trust for different audiences). Anything else is assumed to be at least partial hallucination.

[0]: https://news.ycombinator.com/item?id=37398317

thfuran
0 replies
3h57m

That doesn't really help when the edits are happening inside the camera as part of the acquisition process rather than in later manipulation.

dylan604
4 replies
14h11m

images like this would easily be torn apart by experts brought in to testify against it. then again, for something this obvious, it probably wouldn't need an expert.

tmalsburg2
3 replies
14h7m

If this phenomenon is so obvious, why is this story the top post on HN? And why did it take a picture of the mirror scenario to make people aware of this issue? Hell, the article even implies that this is an issue only with images of mirrors, when that is of course completely false.

dylan604
2 replies
13h36m

You seem to think that all lawyers are dumb, and unable to defend against photographic evidence. If you think a lawyer would not be able to find a witness who fully understands how modern mobile device camera systems alter images, you're just not being honest with yourself. The Apple propaganda videos tout the fact that their cameras are doing something with "AI" even if they don't tell you exactly what. To assume that people are so unaware that it took this picture to make them notice is just not being honest with the conversation.

tmalsburg2
1 replies
9h48m

No, I do not think that all lawyers are dumb. What a bizarre thing to say. Why derail an interesting conversation in such an aggressive way?

dylan604
0 replies
2h10m

I guess we have different definitions of aggressive, but okay, it wasn't meant to be aggressive. Your comment about "so obvious" seems a bit obtuse to me and is where the conversation went off the rails. Why are you even questioning how obvious this is on a source image like the one from TFA? Just based on that, I rejected your premise of this conversation as not being very conducive to anything approaching a realistic discussion.

Aurornis
3 replies
14h20m

Seems extremely unlikely. The phone isn’t going to make drastic alterations to reality. It’s just combining multiple exposures. It can’t make someone point a gun if they’re not pointing a gun. You could try to imagine a scenario where someone was moving extremely fast through the frame while someone was whipping a gun around at high speed in the same direction and the phone just happened to stitch two moments a fraction of a second apart such that they look slightly closer together, but to get to that point you’d still need someone pointing a gun in the same direction that someone is going.

It’s really hard to imagine a scenario where two exposures a fraction of a second apart could be stitched together to tell a completely false story, but maybe it exists. In that case, I suspect the lawyers would be all over this explanation to try to dismiss the evidence.

tmalsburg2
0 replies
14h17m

It’s unlikely, true. But precisely that makes it so dangerous. If such a picture is presented as evidence in a murder case, the possibility that it is telling the wrong story will be discounted and someone may go to prison for the rest of their lives.

adastra22
0 replies
14h9m

What the comment you're replying to is saying could absolutely have happened. Imagine a "Han shot first" sort of situation: two people with guns, one shoots the other. The shooter claims it was self-defense, as the other guy went to fire first but was just slower. An iPhone picture captures the moment, but has the shooter firing, and the other guy's gun still at his side.

This is perfectly analogous to TFA--notice that the woman has enough time to move her arms into very different positions in the same composited moment.

0cVlTeIATBs
0 replies
14h5m

Well, the Rittenhouse case had a very important moment when the victim admitted that, while looking at a freeze-frame view of a video taken when he was shot, he had raised his pistol which was pointed at Rittenhouse only a fraction of a second before. [0]

That photo was critical for the defense getting him to say that.

There were also concerns over whether or not zooming in on an iPad should be allowed in that case--like if a red pixel next to a blue one could create a purple one, etc.

[0] https://www.denverpost.com/2021/11/08/shooting-victim-says-h...

PlunderBunny
1 replies
14h14m

As far as the phone knows, there were three people in the photo, and it captured the 'best' picture of each one and composited them.

thfuran
0 replies
3h54m

Yes, and when there actually are three people, the so-called photograph could, by the same mechanism, show them in an arrangement that never actually happened.

dylan604
10 replies
14h13m

How is what it is attempting to do any different from when someone takes multiple pictures of a group shot, and then uses the different shots to ensure no blinks and "best" expression from each subject?

There's a reason professional photogs use the multiple-snaps mode for non-sports. It used to be a lot of work in post, but a lot of apps have made it easier, and now it's a built-in feature of our phones.

autoexec
6 replies
13h54m

Choice. If I'm using a digital camera I can take 20+ shots of any subject, but then I get to choose which to keep.

If cameras don't give you the option for "reality" you're just left with whatever they choose for you no matter how many pictures you take.

dylan604
5 replies
13h43m

So use a non-mobile device with this feature. Nobody is forcing you to use the camera. You know what the camera does, but then continue to use it, and then complain about it doing exactly what you knew it would do. Doing the same thing over and over expecting a different result has a name

autoexec
4 replies
13h35m

I think the issue is that people don't know what the camera does. The woman who tried to take a picture of herself in her wedding dress had no idea she'd end up looking like three different people.

I expect that as more and more people come to learn that their iphone photos are all fakes we will see more people reaching for cameras instead of cell phones for the important things when they can.

dylan604
2 replies
11h30m

I think, like most things on HN, people are forgetting that the people here are a much smaller percentage of the population and that the majority of the world does not think like HN. Most people don't care one little bit about what the camera does. They only care that it shows them a picture that looks like what they saw. Does it hold up to further scrutiny? Maybe not, but these are also not the people that will be scrutinizing it. Unless they take a picture of their cat and it ends up with the head of a dog, the "your moms" of the world are not going to care.

Dah00n
1 replies
6h24m

They only care that it shows them a picture that looks like what they saw

Here it most definitely does not look like what they saw.

dylan604
0 replies
2h4m

Maybe, but how many "your moms" would even actually notice this on their own?

As far as computational imaging goes, I've seen way way way worse and much more offensive. Samsung's total replacement of the moon comes to mind as being much more offensive. This one just makes me laugh as a developer at the realization of what happened as being such a strange edge case. Other than that, I've seen similar results from intentional creative ideas, so it's not offensive at all to me.

It's just another example of how we can't believe anything we hear, and only half (maybe less) of what we see.

NavinF
0 replies
9m

Are you serious?

Lemme try: I expect that as more and more people come to learn that their iphone's auto mode is better than their DSLR's auto mode, we will see more people reaching for cell phones instead of cameras for the important things when they can.

leephillips
2 replies
14h5m

The difference is in what some photographers call “editorial integrity”. There’s nothing wrong with any kind of image manipulation, as long as it’s done knowingly and deliberately, and as long as the nature of the manipulation is known to the audience. But the typical iphone consumer is just taking snaps and sharing them around, and almost no one knows what’s really happening. It’s creepy and unethical.

Toutouxc
1 replies
11h52m

and almost no one knows what’s really happening

And almost no one cares, btw.

And what do you even propose? A mandatory “picture might not represent reality” watermark? Because the way I see it, you either take the computational features away from people, and prevent them from taking an entire class of pictures, or you add a big fat warning somewhere that no one will read anyway, or you keep things the way they are. Which one of these is the ethical choice?

leephillips
0 replies
10h58m

I’m not proposing anything mandatory. I would like it if the biggest corporation in the world would consider the effects of their technology on society, rather than solely on their profits. That they would at least let ethical considerations potentially influence their design decisions. But that’s not how they got to be the biggest company in the world, so I don’t expect this to happen.

drtgh
4 replies
13h34m

"interpretation of that moment" from a camera it is not a photograph, it's a fake image kidnapping the word "photograph".

Toutouxc
2 replies
11h34m

A photograph is always an interpretation. A photograph from, say, a modern $2000 big boy camera:

- Does not capture the color of every pixel, and merely infers it from the surrounding ones.
- Is usually viewed on an sRGB screen with shitty contrast and a dynamic range significantly smaller than the camera's, which is significantly smaller than that of human eyes, which is still significantly smaller than what we encounter in the real world.
- Does not capture a particular moment in time, but a variable-length period that’s also shifted between the top part of the image and the bottom part (a couple of ms for mechanical shutter, tens or hundreds of ms for electronic).
- Has no idea about the white balance of the scene.
- Has no idea about the absolute brightness of the scene.
- Usually has significant perspective distortion.
- Usually has most of the scene out of focus and thus misrepresents reality (buildings aren’t built and people aren’t born out of focus).

octacat
0 replies
2h24m

good enough. decompiling photography makes all the magic go away.

drtgh
0 replies
2h59m

Yep, of course: the colors, the sensor or film chemicals reacting to the frequencies of those photons, the angles of those photons through the lenses, all adapted to our range of vision and color perception, could be considered to sit right at the limit of physics within natural perception. That limit can also be crossed when the parameters and mechanics take certain values, but IMHO that is more or less acceptable, because our naked eyes can discern how those parameters and mechanics are being applied.

Nevertheless, the post-processing of the cameras described in the OP crosses the threshold of what has, until this moment in history, been considered a natural "photograph" that can be obtained analogically. Those are manipulated images, undetectable to the naked eye, that incorporate elements into a "composition" that cannot be obtained analogically. Those images are fakes, lies.

And from reading the comments, it seems the user cannot even disable such a composited interpretation.

autoexec
0 replies
13h7m

It's impossible to enforce, but it'd be more honest if people called them something else. iPictures maybe? As in "Hey, check out this iPicture™ of my kid taking his first steps!"

dgacmu
22 replies
14h32m

I had a related experience with my Pixel 7 a few months ago. In this picture of a deer:

https://photos.app.goo.gl/qChwaw9C29WVdAmr6

If you zoom in, you'll note that .. everything at a detail level looks like an oil painting, especially the deer's face and the wall behind it. Very weird effect, and that's certainly not what the wall or deer actually look like. No filters applied.

Computational photography is really awesome and modestly worrisome.

Aurornis
8 replies
14h18m

That’s denoising. Those mosaic-like patterns are more pleasing to the eye than a noisy photograph.

The laws of physics put a limit on how well a tiny phone lens and sensor can capture light. We’re not at the limit yet even though hardware is quite good. Still, you wouldn’t like it if your phone spat out raw, noisy images everywhere without any processing (beyond what’s necessary to translate it to the right pixel grid and color space).
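
A toy illustration of why phones stack multiple exposures instead of shipping one noisy frame (a rough sketch with made-up numbers, nothing to do with Apple's or Google's actual pipelines): averaging N frames with independent noise cuts the noise standard deviation by roughly sqrt(N).

    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.full((64, 64), 0.5)                       # a flat grey "scene"
    frames = scene + rng.normal(0, 0.1, (8, 64, 64))     # 8 noisy exposures

    print(f"single frame noise ~ {frames[0].std():.3f}")           # ~0.100
    print(f"8-frame average   ~ {frames.mean(axis=0).std():.3f}")  # ~0.035, i.e. 0.1/sqrt(8)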

kortilla
5 replies
14h9m

Is there any way to get the noisy version? Or is that all forced by the hardware.

Ambroos
3 replies
13h58m

Shooting in RAW gets you the real sensor data without any computational magic.

helf
0 replies
13h57m

That's not entirely true and depends heavily on the phone.

dharmab
0 replies
12h21m

My Fuji XA5 has denoising in the RAW files.

datadrivenangel
0 replies
13h22m

Incorrect. It gets you less editorial magic, but all cameras make decisions about how they create RAW images.

Even cameras that don't make any intentional decisions end up making decisions because of the physics of the sensor as an electrical device and how you read each sensor element.

helf
0 replies
13h57m

Depends. The DNG/Raw photos you can force out of gcam will sorta have the noise. Less processing /sorta/. HDR+ with gcam is going to be exceptionally "touched" no matter what.

You can modify the camera drivers etc on some devices to get nearly no processing output (other than the debayering color filtering etc by the ISP). I custom modded an LG V20 that I use for my mobile photography. I've either totally disabled or adjusted nearly every function of the signal processor as well as modified the camera program itself. I also use a custom modified gcam on it.

Some samples: https://imgur.com/gallery/W42TVGf

Look at my user (helforama) on imgur; pretty much every uploaded photo was taken and edited on my V20... I need to update that account with newer photos :) been using the phone since 2016 for photos!

throwaway290
0 replies
12h39m

Those mosaic like patterns are more pleasing to the eye than a noisy photograph.

Noise does not automatically look worse than this weirdness. It can look bad when resizing with fast algos (moire and all). And of course noise is super fine detail so preserving it blows up filesize.

actionfromafar
0 replies
4h39m

These patterns more pleasing? Not to me.

rafram
6 replies
14h28m

That’s due to basic denoising, not anything more advanced. I have photos from my iPhone 6s circa 2016 that look similar when you zoom in.

KennyBlanken
4 replies
12h29m

That's not "basic denoising" and photos from my 8 look nothing like that; the "watercolor" or "oil painting" look started around the iPhone 12.

Basic denoising is stuff like chroma or hue smoothing. This is very aggressive patterning. That's a daylight photo that the phone turned into an oil painting.

https://www.google.com/search?q=pixel+photo+oil+painting+loo...

dagmx
2 replies
10h47m

It’s just what happens when you really aggressively do noise reduction on small sensor data.

The noise pattern of any sensor that small will blotch in patterns that when combined with aggressive noise reduction, it will look this way.

It’s less apparent on bigger sensors but you’ll see the same if you really crank the noise reduction

panarky
1 replies
10h23m

Yes, it's denoising and sharpening due to extreme digital zoom that only uses a small portion of an already tiny sensor.

dgacmu
0 replies
9h31m

That deer image definitely had digital zoom (I wasn't standing next to the critter).

alt227
0 replies
5h37m

If you go into photoshop and mess around with filters for 5 mins, you will soon see that if you over aggressively apply any filter to a photo it soon starts looking like a cartoon or painting due to the way that image processing groups blocks of colour together for manipulation.

dgacmu
0 replies
14h26m

interesting - thank you!

zoklet-enjoyer
1 replies
14h15m

This is how all my bird and squirrel photos look. It really disappoints me

Toutouxc
0 replies
12h49m

Well, in the big camera world, birding and wildlife means huge and expensive lenses that are both fast (i.e. they let a lot of light in) and have a large focal length. The fact that you can even attempt to take similar photos on something as tiny as a phone camera and have the object come out recognizable is nothing short of amazing.

luuurker
1 replies
12h18m

Was that picture taken with zoom?

I ask because the Pixel 7 doesn't have a dedicated zoom camera, so any zoom was digital which reduces quality a lot. The processing part has to use low quality frames and what you have there is often the result.

Phone users shouldn't have to think about this, but from experience I think that on phones without a zoom camera it's often better to take a photo without zoom (or avoid going past 2x) and crop the image afterwards.

dgacmu
0 replies
9h30m

Yes, it was.

tmalsburg2
0 replies
14h27m

This effect likely resulted from aggressive noise filtering. You can probably reduce the amount of filtering in the settings.

jimmux
0 replies
14h26m

It looks like it's treating the deer and wall as the same subject, averaging the texture, and applying it to both.

xyztimm
15 replies
14h14m

So the mirror with the hands clasped?… is just completely made up? Because she’s clearly not doing that irl.

w-ll
12 replies
14h11m

2 mirrors, 3 different arm poses. "Photos" are not really photos any more; they seem to be few-second clips, and yeah, the processing merged 3 different poses.

threeseed
11 replies
13h35m

You are conflating two different aspects.

Live Photo is a feature where it captures a couple of seconds of video before/after the photo is taken. From the article that feature was not enabled.

The computational pipeline is where you press the shutter and it blends a few frames together in order to do focus stacking, HDR etc. Based on what I have tried with my iPhone, it is doing this in < 100ms, which is not enough time to produce these sorts of artefacts.

w-ll
8 replies
13h5m

maybe? you can also turn your iphone off, but it still broadcasts BTLE and cell signals. who knows what any feature toggle does anymore, let alone on/off. lol, do-not-track is tracked.

threeseed
6 replies
13h0m

You can tell because of the type of file that gets saved in your photo library.

Live Photos are clearly marked.

w-ll
5 replies
12h58m

why are they not marked as video?

Toutouxc
3 replies
12h40m

Because they’re photos, with a bit of video added for effect when you’re scrolling through them.

w-ll
2 replies
12h29m

is that not just a video with a still frame?

threeseed
0 replies
11h43m

Videos are recorded at 4K/8MP. Photos up to 48MP.

Toutouxc
0 replies
7h43m

No, they're regular photos, with some lower-quality video added for context.

dharmab
0 replies
12h20m

Because you can share them as photos and the composited image can be shared to the recipient.

astrange
0 replies
10h32m

Cell signals use way too much battery to be sending them with the phone turned off.

hnburnsy
1 replies
13h21m

Couldn't it be using frames from before the shutter was pressed? Could the processing take longer when people/animals are in the frame?

threeseed
0 replies
13h4m

Absolutely. But only something like 100ms worth at best.

The total latency of shutter + computational photography + save to disk is not greater than the time between her moving from one pose to another.

So the artefacts you would expect to see, if this whole pipeline was out of sync, would be both poses stitched on top of each other in the mirror. Not multiple ones in different mirrors. And definitely not with the whole image as clean as this.

quadrature
0 replies
14h9m

not made up as in generated. it was sampled from different instances of time. This is the article's explanation

Coates was moving when the photo was taken, so when the shutter was pressed, many differing images were captured in that instant.

Apple's algorithm stitches the photos together, choosing the best versions for saturation, contrast, detail, and lack of blur.
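
To make that quoted description a bit more concrete, here is a toy per-region "best frame" sketch. This is purely illustrative: Apple has not published its pipeline, and the gradient-variance sharpness metric here is an assumption. The point is that each tile can win from a different instant, so a subject that moved between frames can end up in several poses at once.

    import numpy as np

    def sharpness(patch):
        # Variance of image gradients: higher means more detail / less motion blur
        gy, gx = np.gradient(patch.astype(float))
        return float((gx ** 2 + gy ** 2).var())

    def stitch_best(frames, tile=64):
        # frames: same-sized greyscale arrays from one burst
        out = np.zeros_like(frames[0])
        h, w = out.shape
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                candidates = [f[y:y + tile, x:x + tile] for f in frames]
                out[y:y + tile, x:x + tile] = max(candidates, key=sharpness)
        return out
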
mleo
0 replies
14h7m

The arms are in three different positions with the processing composing three different frames based on which frame/composite it thought that “person” looked best.

khazhoux
11 replies
14h14m

I’m gonna call bullshit here. The difference in arm pose couldn’t be covered in any reasonable burst period.

threeseed
4 replies
13h43m

Surprised you were downvoted because this doesn't sound right at all.

The poses would be at least a few seconds apart which rules out anything in Apple's computational photography pipeline e.g. focus and dynamic range stacking. At least based on what they have communicated to date.

We know Google demonstrated an AI model that was capable of selecting human parts from multiple photos but that was a showcase feature not something quietly added.

bryanlarsen
2 replies
13h27m

It takes 250 ms for something to fall 1 foot under the weight of gravity, which is also approximately how long it takes to go from arms crossed to arms at side.

Which is well under the 2 seconds computational photography clip.
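
The arithmetic behind that 250 ms figure, for reference:

    import math

    g = 9.81                      # m/s^2
    h = 0.3048                    # one foot, in metres
    t = math.sqrt(2 * h / g)      # free-fall time from rest
    print(f"{t * 1000:.0f} ms")   # ~249 ms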

threeseed
1 replies
13h9m

There is no computational photography "clip".

Live Photos was not enabled and even still doesn't work like this.

Toutouxc
0 replies
12h33m

Why does everyone assume it was a Live photo? Could’ve been night mode, which comes on automatically in low light scenarios, and does the same thing (takes multiple exposures over a longer timeframe).

furyofantares
0 replies
10h59m

The poses would be at least a few seconds apart

Are those 3 poses she's doing, or did she have her hands clasped, then performed some gesture where she dropped her left arm then right arm?

The right definitely looks like a deliberate pose, at first glance the middle does too, but the left doesn't at all, it looks like a gesture. The splay of the hands among other things indicate this to me.

I think the middle isn't a pose either, just a transition. I think only the right is a pose, and it goes Right > Middle > Left in time. Her hand is splayed in the middle like it is in the left.

Filligree
2 replies
13h57m

The “burst” is actually a two second video. Plenty of time for that.

threeseed
1 replies
13h38m

She said she never used live photo.

And the latency for taking a standard photo is not 2 seconds.

Filligree
0 replies
12h26m

Indeed, since it looks at data from before you press the trigger as well. The camera is always active while you have the camera app; storing a few seconds of pictures is a trivial amount of memory.

Live Photo means storing the video in the image file. Turning it off doesn’t mean the video is no longer used, just that it isn’t stored.

rf15
0 replies
10h47m

Yeah this strikes me as half generated, especially since it's three different poses. This is the baby teeth thing all over again and it must suck knowing that your Wedding photos are just an AI's reinterpretation of the moment.

mdonahoe
0 replies
14h1m

Do you work at Apple?

charcircuit
0 replies
14h4m

I'm skeptical too. Her arms look so stiff that I would expect her to have been moving slowly.

sinuhe69
9 replies
8h23m

Of course, this is a one in a million chance, but it highlights very nicely IMO a much bigger issue: what is reality?

Some would say only the classical optical camera captures our reality faithfully. But does it? The reality of sunlight is a broad spectrum of electromagnetic radiation: UV, infrared and more. Does the optical camera capture these? No. So which reality does it capture? Our perceived reality? Others would argue: at least the optical system captures events in time faithfully. But does it? What would we see in a femtosecond? Certainly not the pictures we normally see. So the results of an optical system are also superimposed realities, not very different from the results of computational photography.

There is simply no single reality, only our perceived realities. But if so, can we still call it reality, or is it merely a product of our senses, our perception and hallucination?

belugacat
1 replies
7h19m

The classical optical camera does not capture anything. It is a light-sealed box, with a pinhole for a lens. As an optical system, it interacts with the electromagnetic waves that go through it; that's the only 'reality' you can really care about.

What captures an image is an imaging surface; traditionally a chemical emulsion on a piece of film, now a complex array of digital sensors.

This imaging surface is of human design, it therefore images what its designers designed it to image. But don't forget that it is a sampling of reality; by definition always partial, and biased (biased to the 400~700 nm range, for starters).

TeMPOraL
0 replies
6h23m

But don't forget that it is a sampling of reality; by definition always partial, and biased (biased to the 400~700 nm range, for starters).

This does not matter in any way. What matters is that, what comes out on the other end of filtering and bias, is highly correlated with what came in, and carries information about the imaged phenomenon.

This is what both analog films and digital sensors were designed for. The captured information is then preserved through most forms of post-processing, also by design. Computational photography, in contrast, is destroying that information, for the sake of creating something that "looks better".

Gigachad
1 replies
8h18m

Probably something closest to what your eye sees is ideal for most photos. But dumb optical cameras have all kinds of artefacts that eyes don’t. When I slightly bump the camera, the whole image comes out blurry; my eyes don’t do that.

Things like lens flares don’t exist either.

andreicap
0 replies
7h21m

all kinds of artefacts that eyes don’t.

Eyes do have lots of artefacts; your brain fills in the gaps, like the blind spot [1]. It's not that different from computational photography, really.

[1] https://en.wikipedia.org/wiki/Blind_spot_(vision)

visarga
0 replies
4h16m

Of course, this is one in a million chance but it highlights very nicely IMO a much bigger issue: what is reality?

What photo is admissible in court?

verisimi
0 replies
8h2m

what is reality?

Well, the only thing we can say is that we can eliminate Apple phones and their software from our enquiries!

sanroot99
0 replies
7h40m

But computational photography should at least be able to reproduce what an optical camera does

TeMPOraL
0 replies
8h11m

There is simply no single one reality, only our perceived realities.

This does not follow at all from your earlier paragraph.

The reality we're talking about here, which regular photography reflects while computational one doesn't, is the correlation of recorded data with the state of the world. The pixels of a regular photo are highly correlated with reality - they may have been subject to some analog and digital transformation, and of course quantization, but there's a straightforward and reversible (with some loss of fidelity) function mapping pixels to the photographed event. Computational photography, in contrast, decorrelates pixels from reality, and discards source measurements, leaving you with a recording of something that never happened, but is sort of similar to the thing that did.

I elaborated on this elsewhere in the thread, so let me instead point at another way of noticing the difference. Photogrammetry is the science and technology of recovering the information about reality from photos, and it works because pixels of regular photos are highly correlated with reality. Apply the same techniques to images made via computational photography, and the degree of uncertainty and fidelity loss will reflect the degree to which the computational photos are AI interpretations/bullshit.

ClumsyPilot
0 replies
7h51m

The reality of the sunlight is a broad spectrum of radio emissions: UV, infrared

That's irrelevant: radar reflects reality, Photoshop doesn't. Even children understand this distinction.

gnabgib
9 replies
14h23m

Is maybe the original article[0] (referenced by this short recap piece) a better source?

[0]: https://petapixel.com/2023/11/16/one-in-a-million-iphone-pho...

atomlib
8 replies
9h26m

Why not link directly to Instagram instead then? https://www.instagram.com/p/CzPGNmJIebC/

jahnu
5 replies
9h22m

Off topic, but does anyone know why when I click on the above link in Firefox the back history is gone as if it was opened in a fresh tab? IG doing sketchy stuff to discourage navigating back to where I came from? Or perhaps the Firefox Facebook container protecting me?

Kye
4 replies
8h48m

It opens Facebook and Instagram in a Facebook container by default. The container has no history.

justinclift
3 replies
8h41m

Ugh, that seems like a bug which should be fixed. It's pretty inconvenient for users. :(

Timshel
2 replies
8h29m

Probably not trivial, since the back button might be linked to history and the container is doing its job in isolation.

I would expect there is a setting to force opening the container in a new tab if you want to be able to go back.

jahnu
0 replies
7h55m

I think a new tab for the container and leaving the original tab open would be the least surprising/inconvenient behaviour.

Kye
0 replies
3h36m

There must be since that's how it works for me. Either I changed it, or it stopped being default at some point.

yorwba
1 replies
9h21m

For one, the Instagram post says "Full story in my highlights (THE MIRROR)" but I can't figure out how to actually view those highlights. (Without creating an account at least.)

TeMPOraL
0 replies
8h59m

Can't, those are login-walled.

PlunderBunny
8 replies
14h10m

I heard mention on a podcast recently [0] that if you hold down the button in the iPhone camera app it will capture a set of photos and then mark the one that it thinks is the 'best' (based on, for example, the photo where everyone has their eyes open). Not the same as what happened here of course. (I keep forgetting to try this, not least because I always try to get the people out of my photos!)

[0] This one, I think: https://podcasts.apple.com/nz/podcast/the-talk-show-with-joh...

macintux
6 replies
13h40m

Unfortunately they changed the behavior a few major releases ago: now holding the button down switches to video recording. Annoying for me, since I don't do video.

If you use the software button you can slide to the left for burst, but afaik there's no way to trigger burst photos from the volume buttons. Maybe the new programmable button on the iPhone 15 series.

kderbe
1 replies
13h1m

https://support.apple.com/guide/iphone/take-burst-mode-shots...

"Tip: You can also press and hold the volume up button to take Burst shots. Go to Settings > Camera, then turn on Use Volume Up for Burst."

macintux
0 replies
12h44m

Thanks, I glanced at that page but didn't notice the tip at the bottom.

fshbbdssbbgdd
1 replies
12h36m

I think if you have the Live photos option turned on (the default) it will automatically take a burst every time you hit the red button.

macintux
0 replies
11h54m

Video, although there might be a configuration option somewhere.

Historically Live Photos were of poorer overall image quality, so I only turn them on when I want to simulate a long exposure. Not sure whether that's still true.

dmix
1 replies
12h57m

That’s still a cool shortcut I didn’t know about. It not only switches to video but it “gates” the button press so it only records a video for as long as you press the button

macintux
0 replies
12h45m

You can also lock the video recording by sliding to the right.

carterschonwald
0 replies
13h40m

I think there’s also how the Live Photo feature works. Where taking a photo is literally a short video and it picks the best frame

ClassyJacket
6 replies
14h28m

So this wasn't just rolling shutter?

tmalsburg2
2 replies
14h24m

Rolling shutter could in principle explain this, but the readout is probably too fast to capture these three poses, which must be temporally rather far apart.

stephen_g
1 replies
14h16m

Not probably too fast - definitely too fast. Rolling shutter occurs over a single read-out of the sensor, so she'd have to have held all three poses during the 1/100th of a second or so exposure for it to be possibly related to rolling shutter.

tmalsburg2
0 replies
14h14m

Completely valid nitpick.

vlovich123
0 replies
14h22m

No. Rolling shutter should typically be too fast for a shot like that. Much more likely it's a result of 3 photos taken at slightly different times and automatically composited. Appears to be a well-known effect.

rjeli
0 replies
14h25m

No, the difference is too large for it to be rolling shutter. (Not sure whether the rolling shutter here is vertical or horizontal.) It seems like it takes a short video and stitches together the sections with the least motion blur from each frame.

kag0
0 replies
14h17m

No. A rolling shutter deals with different parts of the sensor being exposed at different times within one frame. It generally requires a much faster object (think airplane propeller or light with PWM). This is effectively multiple frames captured normally and a mistake in the algorithm stitching them into one frame.
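
To put rough numbers on this, here's a minimal back-of-the-envelope sketch in Python; the readout time and hand speed are assumed round figures, not measurements of any particular phone:

    # Assumed round numbers, not measurements of any specific sensor or scene.
    readout_time_s = 0.01      # full-frame rolling-shutter readout, roughly 1/100 s
    pose_change_time_s = 0.4   # time to move an arm into a clearly different pose
    hand_speed_m_s = 1.0       # rough hand speed during a casual pose change

    # The most a rolling shutter could smear: motion during one readout.
    skew_m = hand_speed_m_s * readout_time_s
    # Motion actually needed to show three distinct poses in one image.
    needed_m = hand_speed_m_s * pose_change_time_s

    print(f"rolling-shutter skew: ~{skew_m * 100:.0f} cm")   # ~1 cm
    print(f"distinct poses need: ~{needed_m * 100:.0f} cm")  # ~40 cm

So rolling shutter can smear an arm by a centimetre or two at most; three clearly different poses imply frames captured tenths of a second apart, i.e. a multi-frame composite.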

weird-eye-issue
5 replies
13h39m

With how much manipulation happens to photos by default, it seems like it would be easier to get them thrown out of court as evidence.

j16sdiz
3 replies
13h37m

Without better evidence, the court will keep accepting them.

Just look at how readily fingerprint evidence keeps being accepted despite how unreliable it is.

callalex
1 replies
10h49m

See also: polygraphs.

actionfromafar
0 replies
4h34m

I'd say, the courts will keep accepting bad evidence no matter what you do.

Frummy
5 replies
9h5m

I was in a butterfly house, and closeup photos removed the legs of the butterflies to keep only the wings lol.

e: image link https://ibb.co/nwbw5xY

londons_explore
4 replies
6h3m

That will be the denoising. Legs are thin and can easily be interpreted as noise - especially when using digital zoom and dim light so there are very few pixels and each pixel is very noisy.
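
If denoising really is the cause, here is a toy sketch of the failure mode (Python with NumPy/SciPy; a blunt median filter stands in for whatever the phone actually does, which is an assumption):

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)

    # Synthetic crop: a large bright "wing" and a one-pixel-wide "leg" on a dark background.
    img = np.zeros((40, 40))
    img[5:17, 5:17] = 1.0    # wing
    img[17:35, 11] = 1.0     # leg
    noisy = img + rng.normal(0, 0.3, img.shape)  # heavy noise, as in dim light / digital zoom

    # Aggressive 5x5 median filtering as a crude stand-in for a strong denoiser.
    denoised = median_filter(noisy, size=5)

    print("wing after denoise:", denoised[8:14, 8:14].mean().round(2))   # survives (~1.0)
    print("leg after denoise: ", denoised[22:32, 11].mean().round(2))    # close to background (~0.1)

A one-pixel structure never makes up a majority of any filter window, so it gets voted out as noise, while the large wing survives.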

cubefox
1 replies
2h4m

Denoising is often great, but sometimes it really destroys details. It is basically impossible to capture a snowy day (or heavy rain) with a digital camera. The denoiser will remove 90% of the snowflakes. Analog cameras, by contrast, remove nothing.

londons_explore
0 replies
1h46m

Interestingly, some phones now use a model of the camera sensor to detect how much noise there 'should' be (based on temperature, age of sensor, test results of the specific sensor from the factory, exposure time and light levels). They then only remove that much noise, hopefully leaving snow.

Every other phone just has an algorithm to estimate how much noise is in an image by sampling a few patches - and that will remove snow.
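
A tiny sketch of the difference between the two approaches (Python/NumPy; the numbers and the block-std estimator are illustrative assumptions, not any vendor's pipeline):

    import numpy as np

    rng = np.random.default_rng(1)

    sigma_sensor = 0.02  # calibrated read noise for this gain/exposure (from factory data, say)

    # Scene: dark sky with ~3% of pixels lit up by snowflakes, plus true sensor noise.
    scene = np.full((192, 192), 0.1)
    scene[rng.random(scene.shape) < 0.03] = 0.9
    frame = scene + rng.normal(0, sigma_sensor, scene.shape)

    # Naive patch-based estimate: median standard deviation of 8x8 blocks.
    blocks = frame.reshape(24, 8, 24, 8).std(axis=(1, 3))
    sigma_patch = float(np.median(blocks))

    print(f"calibrated sensor noise : {sigma_sensor:.3f}")
    print(f"patch-estimated 'noise' : {sigma_patch:.3f}")  # inflated by the snowflakes

A denoiser tuned to the inflated patch estimate will smooth hard enough to wipe out the flakes; one tuned to the calibrated figure leaves them alone.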

vachina
0 replies
1h46m

I love how confident you sound while being factually wrong in your assumptions.

It’s scary, because people like you mislead others, who then go on to mislead even more people.

gpas
0 replies
5h31m

More like the poor masking algorithm interpreted the legs as background and blurred them.

TeMPOraL
4 replies
9h33m

Tangent, but:

A U.K. comedian and actor named Tessa Coates

Why is it always some kind of celebrity that "discovers" stuff like this? Was it luck (yet again), or are those extreme failure modes of computational photography already somewhat known, just too nerdy to report on until they can be attached to a public person?

edent
3 replies
9h16m

Because the typical Instagram account is only followed by a handful of people, so anything unusual is unlikely to get any traction.

TeMPOraL
2 replies
9h3m

Might be my age showing, but I swear it wasn't like this just a few years ago - this kind of story would genuinely go viral from the first rando posting it on social media ("wtf is my camera broken?") or the second rando that adapted it into a goofy scene/video ("look I'm a vampire now").

londons_explore
1 replies
6h0m

Social media algorithms have changed. Viral things used to be semi-random (e.g. the 6:01pm-on-Sunday bug, which heavily promoted things initially published at 6:01pm on a Sunday, because that's when some weekly stats aggregation function ran and the age of the video in seconds was used as a divisor, so you got huge popularity boosts for the first week if you could post in the right second).
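
A hypothetical reconstruction of that kind of bug, just to show the mechanism; the formula is invented for illustration and is not any platform's real ranking code:

    # Invented for illustration only.
    def trending_score(views: int, age_seconds: int) -> float:
        # Dividing engagement by age-in-seconds hugely favours items that are
        # only seconds old when the weekly aggregation job happens to run.
        return views / max(age_seconds, 1)

    # Posted at 6:01pm Sunday, 5 seconds before the aggregation job runs:
    print(trending_score(views=50, age_seconds=5))            # 10.0
    # Posted a day earlier with vastly more engagement:
    print(trending_score(views=100_000, age_seconds=86_400))  # ~1.16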

actionfromafar
0 replies
4h33m

Yep. The algorithms have locked onto value stream extraction mode.

kazinator
3 replies
11h59m

Changing the positions of limbs is computational, but it's not photography.

Photography refers to the capturing of the reflected light to create an image, which then corresponds to how the scene actually was.

Bringing out detail in shadows being called "computational photography": that I could swallow.

hgomersall
1 replies
9h35m

Are colour filters allowed?

cubefox
0 replies
1h49m

The human eye does color temperature correction automatically, and digital cameras have done it basically since they first became available. So yes, that is obviously okay.

thfuran
0 replies
4h1m

It's not changing limb positions computationally, it's computationally jamming limb positions recorded at different instants into one 'photograph'.

doctoboggan
3 replies
12h51m

I don't think this is true. Apple hasn't openly said they do this level of manipulation (although that doesn't necessarily mean they don't), and I don't think the range of motions she would have to go through would be possible within a single capture. Even with "Live Photos" this wouldn't happen.

nickelpro
1 replies
5h18m

Ya, the uncritical acceptance of these very non-technical articles on HN is a little disappointing.

dfxm12
0 replies
1h59m

I think you have unrealistic expectations for this community.

tiltowait
0 replies
12h46m

I’m also skeptical. I want it to be true, because it’s pretty wild, but it seems a little too perfect. I suppose people must already be trying to prove or disprove it.

NikkiA
2 replies
8h23m

If the 'camera' is guessing as to intent, it's no longer a camera, sorry.

devnullbrain
0 replies
2h56m

Every digital photo interpolates or otherwise guesses the unknown data between sensors of each color during debayering.
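
A minimal sketch of that guessing step for one colour plane (Python/NumPy, plain bilinear averaging over an assumed RGGB layout; real demosaicing algorithms are far more sophisticated):

    import numpy as np

    def interpolate_red(bayer: np.ndarray) -> np.ndarray:
        """Fill in red values at pixels that only measured green or blue (RGGB layout assumed)."""
        h, w = bayer.shape
        red = np.zeros((h, w))
        known = np.zeros((h, w))
        red[0::2, 0::2] = bayer[0::2, 0::2]   # only these pixels actually measured red
        known[0::2, 0::2] = 1.0

        # Average whatever measured-red neighbours exist in each 3x3 window.
        pr, pk = np.pad(red, 1), np.pad(known, 1)
        win = lambda p, dy, dx: p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
        total = sum(win(pr, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        count = sum(win(pk, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        return np.where(known == 1, red, total / np.maximum(count, 1))

    mosaic = np.random.default_rng(0).random((6, 6))  # stand-in raw sensor readout
    print(interpolate_red(mosaic).round(2))           # 3 of every 4 red values are interpolated guesses

Three quarters of the red plane (likewise blue, and half of green) is interpolation before any "AI" even enters the picture.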

Gigachad
0 replies
8h14m

It isn’t guessing. It just stacked multiple shots. Everything in the photo is real, just at slightly different moments.

ClumsyPilot
2 replies
7h55m

So many comments, and not a single one pointing out that sooner or later someone will go to prison over a 'photo' like this, one that shows something that never actually happened.

PUSH_AX
1 replies
6h39m

We've been able to create images and video of things that didn't happen for decades.

Dah00n
0 replies
6h23m

This is not the same thing and you know it.

tomovo
1 replies
5h17m

Good luck capturing a sprint finish with that kind of camera.

actionfromafar
0 replies
4h36m

The photo shows they all ran over the line at the same time!

rompledorph
1 replies
11h5m

If it hasn't already happened, evidence will get thrown out of court because the image no longer represents reality. Too many filters and other AI improvements.

hnburnsy
0 replies
3h33m

Someone is writing an episode of Law & Order where AI photography gets the defendant off.

pluc
1 replies
3h50m

It's funny that the iPhone has half a dozen cameras and uses them as the main selling point (look at the posters!), yet the generated image essentially comes from a piece of code.

octacat
0 replies
2h26m

Code is important too. They sell a good combo.

luuurker
1 replies
12h43m

A modern Google Pixel starts saving frames as soon as you open the camera app. When you finally take the picture, it uses some of those older frames to give you what's essentially HDR stacking without the delay.

I wouldn't be surprised if a similar thing happened here. Different frames, processing picks the best exposure for each part of the picture and you get this effect.
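
A toy sketch of that "frames before the button press" idea (Python; the class and method names are invented for the sketch, and this illustrates zero-shutter-lag buffering in general, not Google's actual HDR+ pipeline):

    from collections import deque
    import numpy as np

    class ZeroShutterLagCamera:
        def __init__(self, buffer_size: int = 8):
            self.buffer = deque(maxlen=buffer_size)  # oldest frames fall off automatically

        def on_preview_frame(self, frame: np.ndarray) -> None:
            self.buffer.append(frame)                # runs continuously while the app is open

        def on_shutter(self, burst: int = 4) -> np.ndarray:
            frames = list(self.buffer)[-burst:]      # frames captured *before* the button press
            # Naive merge: average the frames to cut noise. A real pipeline would also
            # align, weight by exposure, and reject moving pixels - which is exactly
            # where composites of different moments can sneak in.
            return np.mean(frames, axis=0)

    cam = ZeroShutterLagCamera()
    rng = np.random.default_rng(0)
    for _ in range(10):
        cam.on_preview_frame(np.full((4, 4), 0.5) + rng.normal(0, 0.1, (4, 4)))
    print(cam.on_shutter().round(2))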

londons_explore
0 replies
6h6m

It's also why having the camera open and previewing destroys the battery. Some people I know keep the camera open for hours - especially when exploring a new place and wanting to take a photo of a cool thing at a moment's notice - and then moan when their battery dies at lunchtime.

karmakaze
1 replies
8h24m

It wasn't long before we hit this Xerox digits[0] moment in 'photos'. What features do you want in your photocopier or photos?

[0] https://www.theverge.com/2013/8/6/4594482/xerox-copiers-rand...

coremoff
0 replies
7h52m

Very different causes though; the Xerox one was an image compression bug, IIRC, while this is a "chop up multiple photos and stitch together the best composite".

Still got room for library bugs! (Wasn't the Samsung moon-picture thing more like the Xerox one?)

emptybits
1 replies
13h53m

If not already, I expect examples like this will be accumulated and trotted out as part of legal defenses. Valid or not, gross examples like this will probably nudge some judges and juries over their threshold for reasonable doubt. AI manipulations are happening and their bounds can be hard to predict.

ealexhudson
0 replies
8h44m

Imagine a politician photographed reacting to a member of the public in disgust, except the face was stitched in from moments after. Or, worse, someone captured at the scene of a bombing, reacting before the bomb went off?

We have an inbuilt set of assumptions about causality that this AI now violates. That's potentially huge in some very specific scenarios...

bmacho
1 replies
4h1m

This reminds me of printers, scanners and photocopiers that have a "compression" step and replace numbers with other numbers just for fun. Like, you have one job: keep the numbers correct. And you fail to do that.

cubefox
0 replies
1h59m

Yeah. The difference is that people get really angry when they hear scanners are doing this, while in the current case (look at the other comments) most people treat it like a perfectly acceptable curiosity.

STELLANOVA
1 replies
11h7m

Slightly off-topic, but I still can't believe Apple engineers are not able to fix the stupid green dot flare when shooting with direct sunlight in the frame... Super easy to fix computationally, but for some reason it's still there after years...

panarky
0 replies
10h20m

Pretty cool how the green dot became a green crescent during the last solar eclipse.

ISL
1 replies
12h28m

I hope, but am not certain, that by configuring my Pixel to save both a jpeg and a raw for each image, at least the raw would avoid these shenanigans.

Someone1234
0 replies
12h22m

Unless the raw is actually a stack of images, it may not. Both Android and iOS are taking multiple exposures and combining them into a single HDR image. This is before it hits the camera app.

This has nothing to do with "Live Photos" on iOS to be clear.

verytrivial
0 replies
7h16m

Defense lawyers take note.

verisimi
0 replies
7h55m

And, of course, there is the story where Samsung replaces the moon with its own high-res version.

I don't think photography from a phone can actually be trusted to be a faithful representation of reality, as it is not a purely mechanical series of actions. This will be even worse if AI is involved in the process.

I certainly think that there is an argument for photographic evidence to be inadmissible in court.

vachina
0 replies
1h59m

So can we say iPhone pictures are not admissible in a court of law? Since the photos are more like hand drawings (i.e. subject to interpretation by the algorithm) than a snapshot in time.

shrizza
0 replies
14h28m

This is Apple so we like to call it a reality-distortion effect.

seshagiric
0 replies
12h2m

I am always amazed at how people even catch (come across) issues like this... the chances are one in a million?

saiya-jin
0 replies
8h5m

Next time somebody here is bashing another manufacturer, say Samsung, for "clarifying" 50x zoom moon shots (like most of HN did when that was a topic for a day or two a few months ago), remember, kids, that this is what all manufacturers do. Or the half-reversed photo of a kitten in the grass that somebody mentioned.

po__studio
0 replies
12h44m

That’s fascinating

karmakaze
0 replies
8h39m

'photo' is now synonymous with photoshopped.

hnburnsy
0 replies
13h34m

How long is this burst (feels longer than 'short' in this case)? Does it start before you press the shutter button? Does this post processing apply to RAW? Does Apple document how this post processing works? More importantly, how does one turn it off or is there another camera app that allows one to turn it off?

gnicholas
0 replies
11h51m

In my family we like to take 'pano-rankenstein' photos, where you do a pano across a person's face as they are rapidly changing expressions in dramatic ways. The results are pretty hilarious, as the phone tries to stitch your face together into one cohesive image.

dusted
0 replies
8h4m

This happens more and more. I guess it's an unholy mix of "better compression as long as you don't actually look at the image" and "AI improvement"...

A few weeks ago, I took a really lovely picture of my son: composition, facial expression, focus, light - it was _PERFECT_. Except...

The algorithms in my phone had decided that it'd be better to scrape off his fucking skin and replace it with the texture of the wall behind him!

Of course it must be my fault for buying such a cheap phone; it's only a Galaxy S22 Ultra, and I'm sure the S23 Ultra is better... But it was not out when I changed phones...

Wtf^wtf..

So I go turn on RAW so I can at least salvage pictures in the future, except RAW only works in the "pro" camera mode, which is inconvenient to use and sometimes silently falls back to non-pro...

In the end I gave up and installed a third party camera application, I guess I just have to trust Mark, at least he hasn't actively messed up my photos. https://play.google.com/store/apps/details?id=net.sourceforg...

comfysocks
0 replies
12h51m

My best guess is that there was a brightness gradient across the scene, and this is the result of tone mapping from an EV bracketed burst. This might result in “time delay” instead of the more typical “ghosting” artifacts.
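
A toy sketch of how EV-bracketed fusion can smuggle different instants into one image (Python/NumPy; the Gaussian "well-exposedness" weight is an assumption standing in for whatever the real pipeline does, and real pipelines also align frames and reject ghosts):

    import numpy as np

    def fuse(frames: list[np.ndarray]) -> np.ndarray:
        stack = np.stack(frames)                                   # (n, h, w), values in [0, 1]
        weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))   # favour well-exposed pixels
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * stack).sum(axis=0)

    # Dark frame (short exposure, taken first) and bright frame (long exposure, taken later).
    short_exp = np.array([[0.05, 0.45], [0.05, 0.45]])
    long_exp  = np.array([[0.50, 0.98], [0.50, 0.98]])

    print(fuse([short_exp, long_exp]).round(2))
    # The dark left column is dominated by the later long exposure and the bright right
    # column by the earlier short exposure, so different regions of the fused image
    # end up representing different instants in time.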

causi
0 replies
10h49m

One is reminded of how Samsung cameras are trained to paste the details of the moon over anything that looks vaguely like the moon - but only one per frame, so if you photograph two blurry moons, just one of them will look like it was taken by Hubble.

Moldoteck
0 replies
8h46m

Computational photography is just taking its first steps. Google Photos on the Pixel 8 lets you replace faces. I expect this will eventually be done automatically to show you the "ideal" photo. And I expect some parents will be happy with it: an ideal photo with the ideal face of their children.