Purists will always hate the idea of computational photography, but I love how well my iPhone captures handheld nighttime, low-light, and motion images. I’ve captured some photos at night that I never would have thought possible from a physically small sensor due to the laws of physics.
Is it 100% pixel perfect to what was happening at the time? No, but I also don’t care.
I’ve used HDR exposure stacking in the past. I’ve used focus stacking in the past for shallow depth of field. I’ve even played with taking multiple photos of a crowded space and stitching them together to make an image of the space without any people or cars. None of them are pixel perfect representations of what I saw, but I don’t care. I was after an image that captured the subject and combining multiple exposures gets the job done.
No photographer thinks the images they get on film are perfect reflections of reality. The lens itself introduces flaws and changes, as do film and developing. You don't have to be a purist to want the ability to decide what gets captured, or to have control over how it looks, though. Those kinds of choices are part of what make photography an art.
In the end, this tech just takes control from you. If you're fine with Apple deciding what the subject of your pictures should be and how they should look, that's fine, but I expect a lot of people won't be.
In these difficult scenarios, the alternative photo I'd get using such a small camera without this kind of processing would be entirely unusable. I couldn't rescue those photos with hours of manual edits. That may be "in control", but it isn't useful.
For decades people have been able to get great photos using cell phones with useful features like automatic focus, exposure, and flash, all without their phones inventing total misrepresentations of what the camera was pointed at.
I mean, at a certain point taking a less than perfect photo is more important than getting a fake image that looks good. If I see a pretty flower and want to take a picture of it, the result might look a lot better if my phone just searched for online images of similar flowers, selected one, and saved that image to my icloud, but I wouldn't want that.
The shot in the article is obviously one that any idiot could have taken without these tools.
But "in difficult scenarios", as the GP comment put it, your mistake is assuming people have been taking those photos all along no problem. They have not. People have been filling their photo albums and memory cards up with underexposed blurry photos that look more like abstract art than reality. That's where this sort of technology shines.
I'm pretty reasonable at getting what I want out of a camera. But at some point you just hit the limitations of the hardware. In "difficult scenarios" like a fairly dark situation, I can open the lens on my Nikon DSLR up to f/1.4 (the depth of field is so shallow I can focus on your eyes while your nose stays blurry, so it's basically impossible to focus), crank the ISO up to 6400 (basically more grain than photo at that point), and still not get the shutter speed up to something I can shoot handheld. I'd need a tripod and a very still subject to get a reasonably sharp photo. The hardware cannot do what I want in this situation. I can throw a speedlight on top, but besides making the camera closer to a foot tall than not and upping the weight to around 4 lbs, a flash isn't always appropriate or acceptable in every situation. And it's not exactly something I carry with me everywhere.
These photos _cannot_ be saved because there just isn't the data there to save. You can't pull data back out of a stream of zeros. You can't un-motion-blur a photo using basic corrections.
Or I can pull out my iPhone and press a button and it does an extremely passable job of it.
The right tool for the right job. These tools are very much the "right" tool in a lot of difficult scenarios.
In circumstances where it really matters having a prettied up image might be worse than having no image at all. If you rely on the image being correct to make some consequential decision, you could convict someone of a crime, or if you were trying to diagnose some issue with some machine you might cause damage. While if the camera gave an honest but uninterpretable picture you would be forced to try again.
Couple other common cases:
- Photographing serial numbers or readouts on hard-to-reach labels and displays, like e.g. your water meter.
- Photographing damage to walls, surfaces or goods, for purpose of warranty or insurance claim.
- DIY / citizen science / school science experiments of all kind.
- Workshops, auto-repairs, manufacturing, tradespeople - all heavily relying on COTS cameras for documenting, calibrating, sometimes even automation, because it's cheap, available, and it works. Well, it worked.
Imagine your camera fighting you on any of that, giving you bullshit numbers or actively removing the very details you're trying to capture. Or insurance rejecting your claim on the possibility of that happening.
Also let's not forget that plenty of science and even military ops are done using mass-market cameras, because ain't anyone have money to spend on Dedicated Professional Stuff.
Photo copiers replacing digits on scanned financial reports with digits that compress better are a decade old already :)
http://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_...?
Can people taking documentary photos disable the feature? Obviously casual users won't be aware of the option, if it exists at all.
I've often wished for an image format that is a container for [original, modified]. Maybe with multiple versions. I hate having to manage separate files to keep things together.
That’s a pretty hand-wavy argument. You can just frame a picture however you want to give a very different image of a situation; cue that overused media-satire picture of a foot vs. a knife being shown.
That is a bit harsh. The vast majority of all people are worse at photography than apple's algorithm.
The claim was that any camera without this feature is "entirely unusable" and the photos couldn't be saved even with "hours of manual edits". The fact is that for decades countless beautiful photos have been captured with cell phone cameras without this feature, and most of them needed no manual edits at all. Many of those perfectly fine pictures were taken by people who would not consider themselves to be photographers.
Anyone who, by their own admission, is incapable of taking a photo without this new technology must be an extraordinarily poor photographer by common standards. I honestly wasn't trying to shame them for that though (I'll edit that if I still can), I just wasn't sure what else they could mean. Maybe it was hyperbole?
You missed the "in certain situations" part, which completely changes the meaning of your "quotes."
The two of you might have a different threshold for what you consider to be usable photos, and that’s fine. However, there is no way around physics. Under indoor lighting, a single exposure from a cellphone camera will either be a blurry mess, a noisy mess, or both. Cellphones used to get around that by adding flashes and strong noise suppression, and it was up to the photographer to make sure that the subject didn’t move too much. Modern smartphones let you take pretty decent photos without flash and without any special considerations, by combining many exposures automatically. I think it’s quite amazing. The hardware itself has also improved a lot, and you can also take a better single-exposure photo than ever, but it won’t be anywhere near the same quality straight out of the camera.
And, yes, I took a lot of pictures with my Sony Ericsson K750i almost two decades ago, and I did like them enough to print them back then, but even the photos taken under perfect lighting conditions don’t stand a chance against the quality of the average indoor photo nowadays. The indoor photos were all taken with the xenon flash and were very noisy regardless.
Which phones are you thinking of? You definitely have a very rose-tinted view if you are thinking literally of cell phones rather than smartphones, and even in the latter case, only in the past couple of years has the quality been acceptable enough to pass some rudimentary test as a proper photograph. Everything else was just noise.
Traditionally, phones took good photos in good light, but as the light decreases, so does photo quality (quickly). The point of AI photography isn't to get the best photograph when you control the lighting and have two minutes beforehand to pick options; it's to get the best possible photograph when you realize you need it in the next two seconds.
Not in these light conditions. Simple as that. What iPhones are doing nowadays gives you the ability to take some photos you couldn’t have in the past. Try shooting a few photos with an iPhone and the app Halide. It can give you a single RAW of a single exposure. Try it in some mildly inconvenient light conditions, like in a forest. Where any big boy camera wouldn’t bat an eye, what the tiny phone sensor sees is a noisy pixel soup that, if it came from my big boy camera, I’d consider unsalvageable.
Again, decades of people photographing themselves in wedding dresses while in dress shops (which tend to be pretty well lit) would disagree with you. Also, the things that help most with lighting (like auto-exposure) aren't the problem here. That's not why her arms ended up in three different positions at once.
Of course it did.
iPhones take an “exposure” (scare quotes quite intentional) of a certain length. A conventional camera taking an exposure literally integrates the light hitting each sensor pixel (or region of film) during the exposure. iPhones do not — instead (for long enough exposures), they take many pictures, aka a video, and apply fancy algorithms to squash that video back into a still image.
But all the data comes from the video, with length equal to the “exposure”. Apple is not doing Samsung-style “it looks like an arm/moon, so paint one in”. So this image had the subject moving her arms such that all the arm positions in the final image happened during the “exposure”.
Which means the “exposure” was moderately long, which means the light was dim. In bright light, iPhones take a short exposure just like any other camera, and the effect in question won’t happen.
(Okay, I’m extrapolating from observed behavior and from reading descriptions from Google of similar tech and from reading descriptions of astrophotography techniques. But I’m fairly confident that I’m right.)
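For the curious, the burst-merge idea described above is easy to caricature in code. This is a toy numpy sketch, not Apple's actual (unpublished) pipeline: it only shows how a merge that picks the "best" source frame per region can stitch different moments of a burst into one image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "burst": 8 noisy frames of the same 64x64 scene, with a bright
# square (the "subject") moving to the right between frames.
frames = []
for t in range(8):
    frame = rng.normal(0.1, 0.05, (64, 64))   # dim, noisy background
    x = 8 + 4 * t
    frame[24:40, x:x + 16] += 0.8             # moving subject
    frames.append(frame)
frames = np.stack(frames)

# Naive merge: average all frames. The moving subject smears into a
# half-brightness ghost trail.
naive = frames.mean(axis=0)

# Tile-wise merge: for each 16x16 tile, keep the single frame with the
# most local detail (a crude stand-in for "pick the sharpest frame").
tile = 16
merged = np.empty((64, 64))
for i in range(0, 64, tile):
    for j in range(0, 64, tile):
        patch = frames[:, i:i + tile, j:j + tile]
        best = np.argmax(patch.var(axis=(1, 2)))  # highest-contrast frame
        merged[i:i + tile, j:j + tile] = patch[best]

print(naive.max(), merged.max())
```

The tile-wise merge keeps full-brightness, un-smeared detail, but neighboring tiles may come from different moments in time, which is exactly how one subject can end up in several poses in a single "photo".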
It could also have been taken with too much motion (either the phone or the subject), meaning some of the closer-in-time exposures would be rejected because they're blurry/have rolling shutter.
This is probably harder for people to believe if they've never seen a progress video of stacking long exposures.
I've seen the kinds of images people get out of stacking images for astrophotography. Individually, the images are mostly noise. Put enough together and you get stuff like this: https://web.archive.org/web/20230512210222/https://imgur.com... (https://old.reddit.com/r/astrophotography/comments/3gx29m/st...)
The phone is operating under way less harsh conditions since there's usually quite a bit of light even in most night scenes.
The iPhone is actually too good at this. You can't do light trails: it over-weights the first image and removes anything too divergent when stacking, so you get a well-lit frozen scene of vehicles on the road. I can get around it shooting in burst mode and stack in something like Affinity Photo, but that's work.
You can tell this scene isn't well lit - just look at her reflected face. It's too dark.
Sure, some of it is bright, but that just means it's backlit.
You can still install any other photo app on your phone and use that instead.
Is an argument about corner cases where computational improvement does make sense an argument in good faith when used as an excuse to take away control in all cases?
How is that different from a noise removal algorithm? Is it also taking away control from you?
The Apple algorithm visibly and significantly changes color, contrast, and likely curves, for every image, including in RAW format. Without my consent and with no option to opt out.
The Apple camera app is accessible quickly without unlocking the phone and I use that feature most of the time when I travel.
So I end up with thousands of crappified images. Infuriating.
AFAIK there are photo apps on the iPhone that give you all the control you want, so you can use those when you want control and the default when you just want a quick photo, knowing the pros/cons.
Don't fall into this trap. A lens and computational photography are not alike. One is a static filter, doing simple(ish) transformation of incoming light. The other is arbitrary computation operating in semantic space, halfway between photography and generative AI. Those are qualitatively different.
Or, put another way: you can undo effects of a lens, or the way photo was developed classically, because each pixel is still correlated with reality, just modulo a simple, reversible transformation. It's something we intuitively understand, which is why we often don't notice. In contrast, computational photography decorrelates pixels from reality. It's not a mathematical transformation you can reverse - it's high-level interpretation, and most of the source data is discarded.
Is this a big deal? I'd say it is. Not just because it rubs some the wrong way (it definitely makes something no longer be a "photo" to me). But consider that all camera manufacturers, phone or otherwise, are jumping on this bandwagon, so in a few years it's going to be hard to find a camera without built-in image-correcting "AI" - and then consider just how much science and computer vision applications are done with COTS parts. A lot of papers will have to be retracted before academia realizes they can no longer trust regular cameras in anything. Someone will get hurt when a robot - or a car - hits them because "it didn't see them standing there", thanks to camera hardware conveniently bullshitting them out of the picture.
(Pro tip for modern conflict: best not use newest iPhones for zeroing in artillery strikes.)
Ultimately you're right, though: this is an issue of control. Computational photography isn't bad per se. It being enabled by default, without an off-switch, and operating destructively by default (instead of storing originals plus composite), is a problem. It wasn't that big of a deal with previous stuff like automatic color corrections, because it was correlated with reality and undoable in a pinch, if needed. Computational photography isn't undoable. If you don't have the inputs, you can't recover them.
Yeah, computational photography is actually closer to Terry Pratchett’s Discworld version of a camera - a box with an imp who paints the picture.
Artistic interpretation of a scene is often very nice.
But we would really need to be able to discern cameras that give you the pixels from the CCD from the irreversible kind.
In the worst case scenario it’s back to photographic film if you want to be sure no-one is molesting the data :D
I mean… you’ve pretty much described our brain. Your blue isn’t my blue.
People need to stop clutching pearls. For the scenario it is used in, computational photography is nothing short of magic.
Apple already gives you an optional step back with ProLog. Perhaps in the future they’ll just send the “raw” sensor data, for those who really want it.
”Your blue isn’t my blue.”
Is there actually sufficient understanding of qualia to state this concretely? Brain neurology is unknown territory for me.
I think you might mean Apple ProRaw [1]. ProLog might mean ProRes Log [2], which is Apple's implementation of the Log colour profile, which is a "flat looking" video profile that transforms colours to preserve shadow and highlight detail.
[1]: https://support.apple.com/en-gb/HT211965 [2]: https://support.apple.com/en-gb/guide/iphone/iphde02c478d/io...
Of course it is, in a practical sense ("qualia" don't matter in any way). If it isn't, it means you're suffering from one of a finite number of visual-system disorders, which we've already identified and figured out ways to deal with (that may include e.g. preventing you from operating some machinery, or taking some jobs, in the interest of everyone's safety).
Yes, our brains do a lot of post-processing and "computational photography". But it's well understood - if not formally, then culturally. The brain mostly does heuristics, optimizing for speed at the cost of accuracy, but it still gives mostly accurate results, and we know where corner cases happen, and how to deal with them. We do - because otherwise, we wouldn't be able to communicate and cooperate with each other.
Most importantly, our brains are calibrated for inputs highly correlated with reality. Put the imp-in-a-box in between, and you're just screwing with our perception of reality.
I kinda like that scenario!
That's the scenario where evidence of police/military misconduct gets dismissed out-of-hand because the people with the ability to capture it only had/could afford/could carry their smartphone, and not a film camera. Thanks, I hate it.
I don't, because the combined population of people who will buy selfie cameras if marketed heavily, plus artistic photographers, is orders of magnitude larger than that of people who proactively know an image recorder could come in handy. And, as tech is one of the prime examples, if the sector can identify and separate out a specific niche, it can... tell it to go fuck itself, and optimize it away so the mass-market product is more shiny and more profitable.
Yes. Also, it seems inevitable that at some point photos that you can't publish on Facebook won't be possible to take. Is a nipple present in the scene? Then too bad, you can't press the shutter.
Oh yes you can, and the black helicopters will be dispatched, your social score obliterated, and your credit rating becomes skulls and bones. EU Chat Control will morph into EU Camera Control. Think of the children!
Nipple restriction is an American thing, not so much an EU thing
Yes but American puritanism is being imposed onto the rest of the world. It's not like there's a nipple-friendly version of Facebook for all non-US countries.
Or you can, but the nipple will magically become covered by a leaf falling down, or a lens flare, or subject's hand, or any other kind of context-appropriate generative modification to the photo.
Auto-censoring camera, if you like.
(Also don't try to point such camera at your own kids, if you value your freedom and them having their parent home.)
While I agree with the gist of your comment, I cannot leave this detail uncommented:
You cannot undo each and every effect. Polarizing filters (filters, like lens coatings, are part of a classical lens in my opinion), graduated filters, etc. effectively disturb this correlation.
As does classic development, if you work creatively in the lab (as I did as a hobby a long time ago in analog times) where you decide which photographic paper to use, how to dodge or burn, etc.
But yes, I agree that computational photography offers a different kind of reality distortion.
Fair enough.
Yeah, I see it. This one is as pure a signal removal as it comes in the analog world. And they can, indeed, drop significant information - not just reflections, but also e.g. by blacking out computer screens - but they don't introduce fake information either, and lost information could in principle be recovered -- because in reality, everything is correlated with everything else.
A polarizing filter or choice of photographic paper won't make e.g. shadows come out the wrong way. Conversely, if you get handed a photo with wrong shadows, you not only can be sure it was 'shopped, but could use those shadows and other details to infer what was removed from the original photo. If you tried the same trick with computational photograph, your math would not converge. The information in the image is no longer self-consistent.
That's as close as I can come up to describing the difference between the two kinds of reality distortion; there's probably some mathematical framework to classify it better.
The most extreme case being Samsung outright painting the moon in when it thinks a piece of the picture is semantically likely to be the moon.
Whether the detail data is encoded as PNG or as AI weights is immaterial, it is adding data that is not there, by a long shot.
https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...
That's weird. Whenever I tried to take a picture of the moon, it would look great in the camera view on the screen, but terrible once I actually took the picture.
It’s absolutely not reversible. Information gets lost all the time with physical systems as well.
Also, probably something like the creation of the image of the black hole is closer to computational photography than “AI”, and it seems a bit like yours is a populist argument against it.
I do have reservations about that image, and don't consider it a photograph, because it took lots of crazy math to assemble it from a weak signal, and as anyone in software who ever wrote simulations should know, it's very hard to notice subtle mistakes when they give you results you expected.
However, this was a high-profile case with a lot of much smarter and more experienced people than me looking into it, so I expect they'd raise some flags if the math wasn't solid.
(What I consider precedent to highly opaque computational photography is MRI - the art and craft of producing highly-detailed brain images from magnetic field measurements and a fuck ton of obscure maths. This works, but no one calls MRI scans "photos".)
No, you most definitely cannot. The roots of computational photography are in things like deblurring, which in general do not have a nice solution in any practical case (like non-zero noise). Same deal with removing film grain in low light conditions.
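A toy illustration of why deblurring is ill-posed in the presence of noise (all numbers here are made up for illustration): invert a 1-D box blur with a naive inverse filter. With zero noise the inversion works; with even 1% noise, dividing by the kernel's near-zero frequencies amplifies that noise enormously.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sharp 1-D signal, blurred by a 15-tap box kernel (a crude stand-in
# for motion blur), then observed with a little sensor noise.
n = 256
signal = np.zeros(n)
signal[100:110] = 1.0

kernel = np.zeros(n)
kernel[:15] = 1 / 15

H = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
noisy = blurred + rng.normal(0, 1e-2, n)   # 1% noise

# With zero noise, the inverse filter recovers the signal essentially
# exactly: the blur alone is an invertible transformation.
clean_restore = np.real(np.fft.ifft(np.fft.fft(blurred) / H))

# With 1% noise, the same inverse filter produces garbage, because the
# box kernel's spectrum has near-zeros that blow the noise up.
noisy_restore = np.real(np.fft.ifft(np.fft.fft(noisy) / H))

print(np.abs(clean_restore - signal).max())  # essentially zero
print(np.abs(noisy_restore - signal).max())  # far larger than the 1e-2 noise
```

Practical deblurring methods (Wiener filtering, regularized deconvolution) only trade this blow-up for lost detail; the information destroyed by blur-plus-noise cannot be fully recovered.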
To play devil’s advocate here: the best camera is the one you have with you, and computational photography does mean you can just push a button without thinking and get a clear picture that captures a memory.
It’s obviously an argument to say that Apple shouldn’t get to choose how your memories are recorded, but I know I’ve captured a lot more moments I would’ve otherwise missed because of just how automatic phone cameras are.
There’s a place for both kinds of cameras IMO.
I'd argue that the woman trying to take a photo of herself in her wedding dress did not get a clear picture that captures a memory. She got a very confusing picture that captured something which never happened. There are lots of great automatic camera features which are super helpful but don't falsify events. If I take a picture of my kid, I want my actual child in the photo, not a cobbled-together AI-generated monstrosity of what Apple thinks my kid ought to have looked like in that moment.
Automatic cameras are great. Cameras that outright lie to you are not.
Oh, the irony of framing things (pun intended) so hyperbolically. Somehow it never seems to dawn on people who like to throw around the word ‘lie’ that they’re doing exactly what they’re complaining about, except intentionally, which seems way worse. Nobody sat down and said, bwahahah, let’s make the iPhone create fake photos; the intent obviously is to use automated methods to capture the highest-quality image while trying to be relatively faithful to the scene, which might mean capturing moving subjects at very slightly different times in order to avoid photo-wrecking smudges. When you blatantly ignore the stated intent and project your own negative assumptions of other people’s motivations, that becomes consciously falsifying the situation.
Photographs are not and never have been anything but an unrepresentative slice of a likeness of a moment in time, framed by the photographer to leave almost everything out, distorted by a lens, recolored during the process, and displayed in a completely different medium that adds more distortion and recoloring. There is no truth to a photograph in the first place, automatic or not, it’s an image, not reality. Photos have often implied the wrong thing, ever since the medium was invented. The greatest photos are especially prone to being unrealistic depictions. Having an auto stitch of a few people a few milliseconds apart is no different in its truthiness from a rolling shutter or a pano that takes time to sweep, no different from an auto shutter that waits for less camera shake, no different from a time-lapse, no different from any automatic feature, and no different from manual features too. Adjusting my f-stop and focus is really just as much distorting reality as auto-stitching is.
Anyway, she did get a clear memory that was quite faithful to within a second, it just has a slightly funny surprise.
This is a complete absurdity
Photos are used daily to find suspects of crimes
to convict people to prison
to establish the level of devastation in a war
To find launch site for rocket attacks
To make scientific measurements of distance, size, etc. in architecture and war
By scientists to determine position of things in the sky, meteors, etc.
You are like a guy who collects knives and swords and has no idea what they are actually for
And some of those use cases require special cameras in the first place. Photography is basically just a measurement of light at different positions - there are endless trade-offs to make. You don’t need to stack multiple photos for a scientific measurement of distance, as you would have proper illumination in the first place.
A smartphone camera works in a vastly different environment, and has to adapt to any kind of illumination, scene etc.
Yes same as with tools like ChatGPT, it is OK for some uses, but you can not use it for something where you need to have trust in the output.
The problem is that people are not making this distinction, for instance when they are posting pictures as evidence. This was also the case with manual post-processing. However, now it happens by default.
It seems like you completely misunderstood what I said and decided to take it out of context and throw in a jab on top because none of that contradicts my point. What kind of guy does that make you? ;) You are failing to account for how many photos have been used to implicate someone of a crime they did not commit, how many photos have been used to exaggerate or mislead the effects of war (a couple of the most famous photos in all of history were staged war photos), and how many photos suggested measurements and scientific outcomes that turned out to be wrong. Scientists, unlike the general public, are generally aware of all the distortions in time, space, color, etc., and they still misinterpret the results all the time.
The context here is what the parent was talking about, about the meaning and truth in casual photography, and your comment has made incorrect assumptions and ignored that context. I wasn’t referring at all to the physical process, I was referring to the interpretation of a photo, because that’s what the parent comment was referring to. Interpretation is not contained in the photo, it’s a process of making assumptions about the image, just like the assumptions you made. Sometimes those assumptions are wrong, even when the photo is nothing more than captured photons.
Without the iPhone’s computational camera, she wouldn’t have this photo at all because she wouldn’t have a camera in her pocket that could get a good picture in this situation.
A 10 year old smartphone would have taken a perfectly good photo in that scenario. There's nothing challenging about it.
On the contrary, she would have a perfectly good phone with a perfectly good camera making perfectly good pictures that don't fake reality and turn her into a vampire.
I used to keep a PowerShot Digital ELPH in my pocket whenever I left the house. TBH I took way more pictures with it. I could blindly turn it on and snap a picture while driving, without ever taking my eyes off the road. There's no way in the world I could do that with an iPhone lol. I mean, I suppose if I happened to hit the bottom right corner button and then used the volume button, maybe? Maybe. It's way more likely I'd cock it up lol.
You can always configure it to start the camera by swiping left on the lock screen. Then, the volume buttons are big enough to easily hit one of them.
Isn't driving while blind illegal?
Here the iPhone is producing a collage instead of a particular photo. Then there was Samsung taking a sharp picture of the moon that was just completely generated.
Soon these moments will be 'captured' so perfectly with simulation that there won't be any reason to take them at the time. Generate them when you want to recall them, not when the moment is happening.
JIT 'photography'
At this point, photography becomes a shittier version of what our brains do, so why bother? Or maybe let's do that, but then let's also do the kind of photography that accurately represents photons hitting the imaging plate.
No lens is going to produce a straight arm out of one that was bent at the elbow, or vice versa, while not disturbing anything else. It is not "photography".
No lens is going to reproduce a perfectly straight line without lens compensation either. Lenses are bending light to distill a moment in space time into a 2D representation which by definition will always be imperfect. What counts as “photography” really?
GP> In the end, this tech just takes control from you.
In the end, any tech is always going to take control away from you one way or another. That’s the whole point of using it, so you can achieve things you wouldn’t otherwise be able to.
Though it may be that no lens will reproduce a perfectly straight line, the transformation which warps the line is a relatively simple and invertible mapping which preserves local as well as global features.
It might not be a conformal mapping but, say, if the image of a grid is projected through the lens, the projected image will have the same number of divisions and crossings, in the same relation to each other.
We cannot say this about an AI-powered transformation which changes a body posture.
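To make the point concrete, here is a toy barrel-distortion model (hypothetical coefficient, not any real lens): the mapping is smooth and monotonic over the image radius, so it can be inverted numerically, and the ordering of points (the grid's topology) survives the round trip.

```python
import numpy as np

k = 0.1  # made-up distortion strength

def distort(r):
    # Simple barrel-distortion model: r' = r * (1 + k * r^2)
    return r * (1 + k * r**2)

def undistort(rd, iters=30):
    # No closed form, but a fixed-point iteration inverts it numerically.
    r = np.asarray(rd, dtype=float)
    for _ in range(iters):
        r = rd / (1 + k * r**2)
    return r

r = np.linspace(0.0, 1.0, 101)       # normalized image radii
round_trip = undistort(distort(r))

# The distorted radii preserve ordering (monotonic), so a projected grid
# keeps the same number of divisions and crossings, and the round trip
# recovers the original radii to machine precision.
print(np.abs(round_trip - r).max())
```

No analogous inverse exists for a semantic, multi-frame transformation that moves an arm: there is no per-pixel mapping to run backwards.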
Focusing light via a lens into an exposure plate, from which an image is sampled.
There’s no AI transformation altering her body posture. It’s just that the phone is blending multiple exposures during which she was posing in different ways. The phone doesn’t realize that the image of the woman in the mirror corresponds to the woman in the center, so it hasn’t felt it necessary to use the same keyframe as the reference for the different parts of the photo. People are jumping to crazy explanations for this when there is a much simpler explanation that’s obvious to anyone who’s ever done a bit of manual HDR.
Ironically, the solution to this problem is more AI, not less. The phone needs to understand mirrors in order to make smarter HDR blending decisions.
HN has a strangely luddite tendency when it comes to photography. Lots of people are up in arms about phones somehow ruining photography by doing things that you could easily do in a traditional darkroom. (It's not difficult to expose multiple negatives onto the same sheet of photo paper.)
No, the solution is for the phone to stop blending multiple distinct exposures into one photo, because that's the crux of the problem - that's the "AI transformation" right there, decorrelating the photo from reality.
It's not about luddite tendencies (leaving aside the fact that Luddites were not against technology per se - they were people fucked over by capitalist business owners, by means of being automated away from their careers). It's about recognizing that photos aren't just, or even predominantly, art. Photos are measurements, recorded observations of reality. Computational photography is screwing with that.
No general-purpose camera, analogue or digital, functions particularly well as a measurement tool. They are simply not designed for that purpose.
Blending multiple exposures is not an "AI transformation". The first iPhone to do HDR was the iPhone 4, introduced in 2010.
If you dislike HDR for some reason, no-one is forcing you to use it. There are lots of iPhone apps that will let you shoot a single exposure. That's never going to be the default in a consumer point-and-shoot camera, though, because it will give worse results for 99% of people 99% of the time.
Cellphones have been capable of HDR for a very very long time and yet none of those older phones were capable of producing a picture like the one in the article. The problem here was not HDR. The problem was using AI to create a composite image by selectively erasing the subject of the photo from parts of the image, detecting that same person found in other photos, then cutting the subject out of those other images and pasting them into the deleted parts of the original picture before filling in the gaps to make it look like it was all one picture.
This was absolutely an "AI transformation" as the original article correctly pointed out:
I don't think this is true? You can easily get an image like this just by choosing a different keyframe for different areas of the photo when combining the stack of exposures. No AI needed. Nor is there any 'selective erasing'.
You talk about 'detecting the same person found in other photos', which suggests that you're not aware that HDR involves blending a stack of exposures (where the woman may have been in various different poses) that are part of the same 'photo' from the user's point of view (i.e. one press of the shutter button). There is no reason at all to think that the phone is actually scanning through the existing photo library to find other images.
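To make the keyframe point concrete, here's a toy sketch (my illustration, not Apple's actual pipeline) of region-wise burst merging: each region of the final photo is copied from whichever frame scored sharpest there. The frame data, region names, and sharpness scores are all hypothetical; the point is that the center and the mirror regions can end up sourced from frames where the subject held different poses, with no AI involved.

```python
# Toy sketch (not Apple's actual pipeline): merging a burst by picking the
# best source frame independently for each region of the photo.
# Each "frame" records which pose the subject held, plus a hypothetical
# per-region sharpness score.

def merge_burst(frames, regions):
    """For each region, copy content from the frame that scored sharpest there."""
    composite = {}
    for region in regions:
        best = max(frames, key=lambda f: f["sharpness"][region])
        composite[region] = best["pose"]
    return composite

burst = [
    {"pose": "arms down",     "sharpness": {"center": 0.9, "left_mirror": 0.2, "right_mirror": 0.3}},
    {"pose": "arm raised",    "sharpness": {"center": 0.4, "left_mirror": 0.8, "right_mirror": 0.1}},
    {"pose": "hands on hips", "sharpness": {"center": 0.5, "left_mirror": 0.3, "right_mirror": 0.7}},
]

result = merge_burst(burst, ["center", "left_mirror", "right_mirror"])
# Each region now shows a different pose, even though the user pressed
# the shutter once.
print(result)
```

With these (made-up) scores, the composite takes the center from the first frame, the left mirror from the second, and the right mirror from the third: three poses in one "photo".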
Thanks, but I will take a proper picture over a blurry noisy mess.
And please learn a bit about the whole field of measurement theory before you comment bullshit. Taking multiple measurements is literally the most basic way to get better signal to noise ratio. A blurry photo is just as much an alteration of reality, there is no real-world correspondence to my blurred face. Apple just does it more intelligently, but in rare cases messes up. It’s measurement. AI-generated stuff like a fake moon is not done by apple, and is a bad thing I also disapprove of. Smart statistics inside cameras are not at all like that.
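The "multiple measurements improve signal to noise" claim is easy to demonstrate numerically. A minimal sketch, using made-up brightness and noise values: averaging N independent noisy readings of the same pixel shrinks the noise standard deviation by roughly 1/sqrt(N).

```python
# Sketch of why stacking exposures is "just measurement": averaging N
# independent noisy readings shrinks the noise std by about 1/sqrt(N).
import random
import statistics

random.seed(42)
TRUE_SIGNAL = 100.0   # hypothetical true pixel brightness
NOISE_STD = 10.0      # hypothetical per-frame sensor noise

def noisy_frame():
    return TRUE_SIGNAL + random.gauss(0, NOISE_STD)

def stacked_frame(n):
    # One "computational" pixel: the mean of n raw readings.
    return statistics.fmean(noisy_frame() for _ in range(n))

# Empirical spread of single frames vs. 16-frame stacks:
singles = [noisy_frame() for _ in range(2000)]
stacks16 = [stacked_frame(16) for _ in range(2000)]

print(statistics.stdev(singles))   # roughly NOISE_STD
print(statistics.stdev(stacks16))  # roughly NOISE_STD / 4, i.e. NOISE_STD / sqrt(16)
```

The stacked estimate is a strictly better measurement of the true signal than any single frame, which is the commenter's point: the failure mode in the article is a blending mistake, not a property of stacking itself.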
Yes. That is not a problem.
Stacking those and blending them in correctly isn't a problem either - information is propagated in well-defined, and usually not surprising ways. If you're doing it, you know what you'll get.
Selectively merging photos by filling different areas from different frames, and using ML to paper over the stitches, is where you start losing your signal badly. And doing that to unsuspecting people is just screwing with their perception of reality.
Yes, AI generated stuff like fake Moon is even worse, but my understanding of the story in question is that the iPhone did a transformation that's about halfway between the HDR/astrophotography stuff and inpainting the Moon. And my issue isn't with the algorithms per se - hey, I'd love to have a "stitch something nice out of these few shots" button, or "share to img2img Stable Diffusion" button. My issue is when this level of AI transformation happens silently, in the background, in spaces and contexts where people still expect to be dealing with photographs, and won't notice until it's too late. And secondarily, with this becoming the default, as the default with wide-enough appeal tends to eventually become the only thing available.
idk, a Sigma 50mm Art will give you pretty straight lines. At least straight enough for people. Why would you even mention it? Are you taking photos of line meshes or something?
Life is imperfect. Lens specs are only good for reviews / marketing because it is hard to otherwise compare lenses for their real value.
I am fine with control I am getting from the big camera.
That’s basic photo-stacking.
If you use long exposure, you will capture a blurry blob instead. I don't know about you but I don't see blurry hands with my eyes (mostly because our brain also does computational photography).
Depends on the refractive index of the lens [1].
[1] https://gebseng.com/media_archeology/reading_materials/Bob_S...
Try to photograph a toddler without this feature. Good luck. Does the fact that the iPhone allows me to do this pretty reliably without any fuss mean I have more or less control?
Odd. I have hundreds of great shots of my toddlers, none taken with these “features”.
I know you’re pretending to be dense on purpose, but taking a dozen pictures at once and automatically picking the best one can obviously save a lot of shots. Same with combining exposures.
The camera in this case wasn't taking many shots and selecting one. It wasn't just combining exposures either. It did a bunch of shitty cut/paste operations to produce an altered composite image which showed something that never happened.
Some automatic features are wonderful. People have been able to take pictures of fast moving toddlers and pets for ages because of them. The camera "features" that secretly alter images to create lies and don't give you a photograph of what you asked them to capture are a problem.
I didn’t misunderstand what happened, and I know how the iPhone camera works. I was only noting a couple common cases where computational photography is really nice. I’m pretty picky about my photos and nothing I’ve shot with my iPhone has ever approached “lies”. I know those types of anomalies can happen, but they’re clearly design flaws, i.e. Apple doesn’t actually want their camera to make fantasy images. I don’t want that either.
That you know of.
There could be many lies in those photos, both subtle and blunt, which you didn't notice at the moment because you weren't primed to look for them, and which you won't notice now, because you no longer remember the details of the scenes/situations being photographed, days or months ago.
Are you sure I'm wrong? Are you sure that there are no hard lies in your photos? Would you be able to tell?
If we’re going to be this pedantic about it: iPhone’s camera will not add or remove things that weren’t captured in its temporal window. It won’t change the semantic meaning of an image. It won’t make a sad-faced person look happy, make daytime look like nighttime, or change dogs into cats. It can make mistakes like the one from the OP, but the point is that Apple also sees this as a mistake, so I would expect the error rate to improve over time. This is a different approach from other computational photography apps, like Google’s, where it looks like they are more willing to push the boundaries into fantasy images. Apple’s approach seems to be more grounded, like they simply want to get great pro-DSLR-level images automatically.
A blurry limb is just as much a lie.
It’s literally photo stacking, which is as old as digital photography itself. Look up any astrophotographer, or any photographer that did an HDR shot.
This is a clickbait non-article, and people’s bullshit comments with no knowledge of cameras.
It's solved with constantly taking pictures into a ring buffer, either compensating for the touchscreen lag automatically or letting you select the best frame manually with a slider after the shot. Most cameras can do that (if you disable the best frame auto-selection).
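The ring-buffer approach described above can be sketched in a few lines. This is a generic illustration of the technique (zero-shutter-lag capture), not any particular vendor's implementation; the buffer size, lag compensation, and sharpness scores are all hypothetical.

```python
# Sketch of zero-shutter-lag capture: the camera continuously writes frames
# into a fixed-size ring buffer, and when the shutter fires it looks *back*
# in time to compensate for touchscreen latency, optionally auto-selecting
# the sharpest buffered frame instead.
from collections import deque

RING_SIZE = 8           # hypothetical number of frames kept in flight
SHUTTER_LAG_FRAMES = 2  # hypothetical touch-to-capture latency, in frames

ring = deque(maxlen=RING_SIZE)

def on_new_frame(frame):
    ring.append(frame)  # the oldest frame is evicted automatically

def on_shutter(auto_select=True):
    frames = list(ring)
    if auto_select:
        # Auto-select the sharpest frame in the buffer.
        return max(frames, key=lambda f: f["sharpness"])
    # Otherwise, step back to the frame that was current when the user
    # actually touched the screen.
    return frames[-1 - SHUTTER_LAG_FRAMES]

# Simulate a stream of frames with made-up sharpness scores.
for i in range(12):
    on_new_frame({"id": i, "sharpness": (i * 7) % 10})

print(on_shutter(auto_select=False))  # the frame from ~2 frames before the tap
```

Disabling auto-selection is exactly the "pick the best frame manually" mode the comment mentions: you'd present `list(ring)` to the user with a slider instead of calling `max`.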
The two features are not mutually exclusive. For low-light situations you will need this feature either way; a bunch of noisy pictures doesn’t help you.
Ah yes, for all those people using their iphone to make true art instead of for taking a selfie.
The market for cell phone cameras are generally casual users. High end artists generally use fancy cameras. Different trade offs for different use cases.
I think you’d be surprised how much phones have encroached on the fancy camera space. Also, the limiting factor in phone camera quality is more the lens than anything else. The SW backing it is much more powerful than what you get in standalone cameras although those manufacturers are trying to keep up (that’s why phone cameras can be so competitive with more expensive ones). I expect within the next 10 years we’ll see meaningful improvements to the lenses used in phones to the point where dedicated cameras will have shrunk even more (i.e. decoupling the size of the lens / sensor from the surface area on the back of the phone).
That is largely due to camera makers not giving a sh*t, though. Maybe now they do, but it's overdue. And I'm not even talking about the phone apps for connection, or image processing, but more about user interface and usability, and also interesting modes for the user.
Fuji and Ricoh are the only ones that I see trying to make things easier or more fun or interesting for non-professionals. Fuji has the whole user customization that people use for recipes and film simulation (on top of the film simulations they already have), and Ricoh is the only one (I know of) that has snap focus, distance priority, and custom user modes that are easy to switch to and to use. But even Fuji and Ricoh could still improve a lot, since there's always a detail or another that makes me go... why did they make it like that? or... why didn't they add this thing?
I don't think it's only that. The processing in a phone is expensive. If you'd build a high end camera with a Snapdragon 8 gen 3 it would raise the price a lot. In a phone that's not an issue because you need the chip there anyway for other tasks. So there's just much more compute power available, not to mention a connection to the cloud with even more potential compute.
Also, many more people buy phones than standalone cameras, so they can afford a lot more software R&D.
Well yes but a lot of that can be reused across devices I would guess.
Naw, astrange is correct. At peak in 2010, 121 million standalone cameras were shipped worldwide. By 2021, that number was down to ~8 million. By comparison, 83 million smartphones are shipped each quarter (~1.4 billion for the year). Those kinds of economies of scale mean there's more R&D revenue to sustain a larger amount of HW & SW innovation. Even though smartphone shipments should come down over the next few years as the incremental jump each year is minimal & the market is getting saturated, there's always going to be more smartphone shipments.
Individual camera vendors just can't compete as much, and I don't think there was anything they could possibly have done to compete, because of the ergonomics of a smartphone camera. The poor UX and lack of computational photography techniques doesn't matter as much because the pro market segment is less about the camera itself and more about the lenses / post-processing. Even professional shoots that use smartphones (as Apple likes to do for their advertising) ultimately tend to capture RAW + desktop/laptop post-processing when they can, because of the flexibility / ergonomics / feature set of that workflow. The camera vendors do still have the advantage of physical dimensions in that they have a bigger sensor and lenses for DSLR (traditional point & shoot I think is basically dead now), but I expect smartphones to chip away at that advantage through new ways of constructing lenses / sensors. Those techniques could be applied to DSLRs potentially for even higher quality, but at the end of the day the market segment will just keep shrinking as smartphones absorb more use-cases (interchangeable lenses will be the hardest to overcome).
Honestly, I'm surprised those device manufacturers haven't shifted their DSLR stack to just be a dumb CMOS sensor, lens, a basic processor for I/O, and a thunderbolt controller that you slot the phone into. Probably heat is one factor, the amount of batteries you'd need would go up, the BOM cost for that package could be largely the same, & maybe the external I/O isn't quite yet fast/open enough for something like that.
Good points.
This is not really an option. There were actually clip-on cameras back in the day when cameras were new on phones. But there were several problems:
- Software support tended to lag with updates
- Phones change form factor very regularly and phones in general are replaced much more often than camera hardware, leaving you a highly expensive paperweight
- I don't think any phones have thunderbolt yet, just some tablets
- Dealing with issues is a nightmare because you don't control the hardware end to end
Would be cool if there ever were a product like a large external sensor and lens hardware dongle for phones (essentially all parts of a system camera without the computer) that would use the phone as a software processing platform. They would “just” ship the raw data over to the phone for post processing.
I think you have that backwards. The vast majority of people will prefer Apple's computational enhancements. A small number of photography enthusiasts will prefer manual control (and a smaller number will benefit from it).
Yeah I 100% agree - When I pull my phone out to take a picture of the kids/my dog/something randomly interesting, all I want is to point my camera at whatever I want to capture and have it turn out.
Don’t care how it happens I don’t want to think about settings.
And you simply don't care if two different people are shown in the photo in poses that didn't happen at the same time? That doesn't seem to me like capturing whatever it was you wanted to capture.
It won’t happen in direct sunlight, and yeah, I will definitely take it over not having that picture in a presentable state.
How is that different than creating a panorama photo?
Panoramas are mostly taken of static scenes or at least with the knowledge that it's going to mess things up if people move around. Regular photographs are generally both taken and viewed with the assumption that all the things in it happened at the same time (modulo shutter speed).
Modulo now in low-light, moving scenes you actually get a decent photo. It doesn’t happen in good lighting, and this whole article is bullshit.
How does that even affect your life or happiness? Do you have to sign an affidavit that the photo exactly depicts what happened or you go to jail? Will your friends never speak to you again if they suspect the phone you bought off the shelf at the Apple store is using computational photography? Do your religious beliefs prohibit photographing poses that didn't happen or you go to hell? Or are you just being ultra pedantic to make a meaningless point?
I want a phone with computational photography that automatically renders Mormon Bubble Porn!
https://knowyourmeme.com/memes/mormon-porn-bubble-porn
And for those who don't, there are a bunch of photo apps that allow far more control, and iPhones can shoot RAW now.
RAW in modern phones and apps is often stacked/processed as well. However, it always tries to stay as photometrically correct as possible (at least I'm not aware of any exceptions). All "dubious" types of processing happen after that.
If this feature or similar features could be disabled, it doesn't need to be an either/or situation. I don't have an iPhone though, so no idea if it's configurable. But seems that it should be, assuming it's not implemented deep in hardware.
I agree. I'd have no issue with this if it were something people could disable and it was clearly disclosed and explained as the default.
Doesn't even need to have an off-switch, if it would preserve the inputs that went into AI stitching magic, so that one could recover reality if/when they needed it.
I think you can with live photos, basically you get a short video included.
But why? Because out of the literally trillions of images shot with an iphone a tiny tiny percentage comes out wrong, similarly to how panorama images will look if you move? It unequivocally improves the quality of images in low light by a huge margin.
This is a non-issue.
Can't this be disabled?
And can't some other product with a similar technology offer knobs for configuring exactly how it behaves?
It can make a RAW file as well, so just the usual clickbait article/comments.
It can, but does it? Defaults matter. I wouldn't want to discover that my camera was screwing with me only after I sent the photos of an accident to my insurer, a day after it happened.
The tech doesn’t “just” take control from me. It also exponentially increases the chances that the photo I wanted to take will be the photo that is saved on my device and shared with my friends.
It’s just picture of my dog. It’s not that serious.
Until you want to send it to your vet, but you can't, because your phone keeps beautifying out the exact medical problem you're trying to image.
When will that happen? I’ve sent phots to my dog’s vet (I go to a doctor for humans myself) but this hasn’t been a problem. If you could let me know when this will happen I’ll make sure I don’t rely on it.
The majority don't care, as the majority are not photographers, nor is it meant to be a product for photographers. The average Joe just wants a good photo of that moment, which it does exceptionally well.
The tech 'taking control from you' is its exact purpose, as again, not everyone is a photographer. The whole point is to allow 'normal people' to get a good photo at the press of a button, it'd be incredibly foolish and unreasonable to expect them to faff about with a bunch of camera settings so it does it for you.
The bride in the original article doesn't seem to be a photographer, yet she seems to care quite a bit that the photo is not an accurate representation of reality at that moment.
But regular people expect it. And then they take a picture of the moon and they are disappointed.
It's easy to take a picture of the moon. It's always lit the same because it's in space.
ISO 100, f/11, manual focus at infinity, shutter speed 1/100.
Phones aren't good at this, partly because autofocus isn't designed for moons, partly because you want a really long lens and they don't have one.
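The settings quoted above (ISO 100, f/11, 1/100) follow standard exposure arithmetic, which can be sketched quickly. This is my illustration of the general exposure-value formula EV = log2(N²/t), not anything specific to phones: any aperture/shutter pair with equal EV (at the same ISO) admits the same light, so e.g. f/11 @ 1/100 is equivalent to f/8 @ 1/200 (using exact full-stop apertures, where the f-number is sqrt(2) raised to the stop count).

```python
# Exposure value: EV = log2(N^2 / t), where N is the f-number and t the
# shutter time in seconds. Equal EV at equal ISO means equal exposure.
import math

def ev(f_number, shutter_s):
    return math.log2(f_number ** 2 / shutter_s)

# Exact full-stop apertures: "f/11" is nominally sqrt(2)^7 ~= 11.31,
# and f/8 is exactly sqrt(2)^6 = 8.
F11 = math.sqrt(2) ** 7
F8 = math.sqrt(2) ** 6

# Opening up one stop (f/11 -> f/8) while halving the shutter time
# (1/100 -> 1/200) leaves the exposure unchanged.
print(ev(F11, 1 / 100))
print(ev(F8, 1 / 200))
```

This is why the moon recipe is stable: the moon's illumination doesn't change, so once you have one correct (EV, ISO) combination you can trade aperture against shutter speed freely.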
More importantly than that: framing, exposure, aperture choices by anyone not simply shooting with their camera in the Auto mode.
* https://petapixel.com/digital-camera-modes-a-complete-guide/...
* https://en.wikipedia.org/wiki/Mode_dial
* https://photographylife.com/understanding-digital-camera-mod...
>In the end, this tech just takes control from you.
1. "This tech" is too broad of a qualifier, as GP is talking about computational photography in general, which is many things at once. Most of those things work great; some others are unpredictable. There are plenty of custom camera apps besides Apple's default camera which will always try to stay as predictable as possible.
2. There is such a thing as too much control. Proper automation is good, especially if you aren't doing a carefully set up session. The computational autofocus in high-end cameras is amazing nowadays. You can nail it every time now without thinking, with rare exceptions.
Having control isn't always the best option. While photographers may appreciate having a camera over which they can have a high degree of control, they're not snobs about it. Photographers will tell you that having a handy point and shoot with good automation or correction of some kind is extremely useful when you're out and about and need to take a photo quickly. For everyday photos, it's what you'll want usually. If not, there are plenty of cameras on the market to choose from.
Getting a roughly correct image instead of a blurry mess doesn't take control away from you; this is just bullshitting about a non-issue.
The first thing they teach you at any photography course worth its money is that framing the picture itself is a distortion of reality. It's you deciding what's worth being recorded and what should be discarded.
There's no "objective photography", no matter how hard we try distinguishing between old and new tech.
I have no objection with what we have today either but it does feel slippery slope-y, especially in the AI era. Before we know it our cameras won’t just be picking the best frame for each person, it’ll be picking the best eye, tweaking their symmetry, making the skin tone as aesthetically pleasing as possible, etc etc etc. It was always this way to an extent but now that we’ve given up the pretence of photos being “real” it feels inevitable to me.
I’m reminded of that scene in WALL-E where it shows the captain portraits as people get fatter and fatter. It’s clearly inaccurate: over time the photos should show ever more attractive, chiseled captains. They’d still be obese in real life though.
Phones over-beautifying faces by default already happened with the iPhone XS, and it wasn't received well. See #beautygate: https://www.imore.com/beautygate
"Over-beautifying" never happened. Noise reduction/smoothing happens in camera processing even if you don't try to do it, because it's trying to preserve signal and that's not signal. If you want that noise, you have to actually put in processing that adds it back.
Sure it did: no wrinkles, no moles unless huge, skin tone like after two weeks' vacation in the Caribbean. What previously had to be done in Photoshop to make people look younger is now done automatically, for every photo, and you can't turn it off.
They've called it the 'Instagram look' for quite some time. Apple is the worst among all phone manufacturers (in the sense of being furthest from actual, ugly reality, though a lot of people got used to it and actually prefer it now), but all of them are doing it.
It’s called noise reduction, and has been done by the most primitive cell phones since forever.
I promise there is nothing in the camera trying to make you look even a little better than normal. Especially not removing moles; people use their phones to send pictures to their doctors, so that would kill your customers with melanoma.
If you want pores you need a higher resolution sensor, but even most of those have physical low pass (blur) filters because most people don't like moiré artifacts. Could try an A7R or Fuji X-series.
Yes, it actually did. And as it says in the link:
The phone changed the image on skin and not on random other things that weren't a face. That is a filter, not normal image processing, when it happens only on X but not on Y. The only way to not get this filter on the phone is to use RAW.
That is the opinion of an uninformed tech writer, and even besides that, he's not claiming it did happen but just that it could possibly happen.
In this case you do have someone who knows how it works, that someone being me.
There are some reasons you'd want to process faces differently, namely that viewers look at them the most, and that if you get the color wrong people either look sickly or get annoyed that you've whitewashed them. Also, when tone mapping you likely want to treat the foreground and background differently, and people are usually foreground elements.
That already happens, in realtime. FaceTime uses eye gaze correction, Cisco is all in on AI codecs for image and audio compression, and other vendors are on similar tracks too.
When you talk to a client, a colleague or a loved one we’re on the verge of you conversing with a model that mostly represents their image and voice (you hope). The affordances of that abstraction layer will only continue to deepen from here too.
This is nothing new. Very old examples are the xerox machines that accidentally changed numbers on scanned documents, and speech coding for low bit rate digital audio over radio, phone, and then VoIP.
Wow, this threw me for a loop. There’s not much difference between talking to a heavily filtered image of a person and texting with them. Both the image and the words are merely representations of the person. Even eye-to-skin perceiving someone is a representation of their “person” to a degree. The important part is that “how” and “how much” the representation differs from reality is known to the observer
Samsung phones (and I guess iphones too) already do this lmao.
https://www.insider.com/samsung-phones-default-beauty-mode-c...
My favorite one is the phones that put a fake picture of a moon in pictures where the moon is detected!
https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...
This is just about as wrong as saying Stable Diffusion contains a stock photo of the moon it spits out when you prompt it with "the moon".
They work the same way. Samsung's camera has a moon mode, so it takes the input (a much, much lower-quality camera raw than you think it's getting), processes it with a strong prior (treat this noise as a Gaussian distribution centered on the moon), and you get a result (an image that looks like the moon).
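The "noise plus strong prior" argument can be made concrete with a toy example. This is my framing of the statistics, not Samsung's actual algorithm, and all the numbers are hypothetical: with a Gaussian likelihood and a Gaussian prior, the MAP estimate is a precision-weighted average, so the noisier the observation, the closer the result lands to the prior, i.e. "the moon".

```python
# Toy MAP estimate for a scalar "pixel": Gaussian observation, Gaussian
# prior. Weights are inverse variances (precisions).
def map_estimate(obs, obs_std, prior_mean, prior_std):
    w_obs = 1 / obs_std ** 2
    w_prior = 1 / prior_std ** 2
    return (w_obs * obs + w_prior * prior_mean) / (w_obs + w_prior)

PRIOR_MOON = 100.0  # hypothetical "what the moon looks like" value

# Clean observation, weak prior: the result stays near the data.
print(map_estimate(obs=40.0, obs_std=1.0, prior_mean=PRIOR_MOON, prior_std=50.0))
# Very noisy observation, strong prior: the result is pulled almost
# entirely to the prior, regardless of what the sensor actually saw.
print(map_estimate(obs=40.0, obs_std=50.0, prior_mean=PRIOR_MOON, prior_std=1.0))
```

In this framing, neither Stable Diffusion nor the moon mode "contains a stock photo"; both encode a prior that dominates when the observation carries little information.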
What we have today already includes phone cameras that will add teeth to the image of your smiling newborn, or replace your image of what looks like the moon with stock photography of the moon.
Interestingly, the opposite thing happened with official depictions of Roman emperors.
While I don't use TikTok I often see videos from there and it's really spooky to me how aggressive and omnipresent filtering seems to have become in that community. Even mundane non-fashion non-influencer "vlog" style content is often heavily filtered and, even more scary IMO, I often don't notice immediately and only catch it if there's a small glitch in the algorithm for instance when the person moves something in front of their face. And sometimes you can tell that the difference from their real appearance is very significant.
I really wonder what's that doing to insecure teenagers with body issues.
It’s not slippery slope-y, it is slippery slope.
This is not at all like DALL-E and Midjourney. This is literally putting together multiple images taken shortly after one another, but instead of dumbly layering one on another as you would manually in Photoshop with transparent layers, it takes the first photo as a key and merges info from the other photos into it where that makes sense (e.g. the background didn’t move, so even a blurry frame is useful for improving it).
This is just dishonest to mix AI into that.
How many "purists" are there in the wild? I'd only see police and insurance agents needing a pixel-perfect depiction of reality as it comes out of the sensor.
Photography as an art was never about purity, and I think most of us want photos that reflect what we see and how we see it, and will take the technical steps to get that rendered. If the moon is beautiful and gently lights the landscape, I want a photo with both the bright moon and shadowy background, and will probably need a lot of computation for that.
But the doppelganger brides, the over-HDRed photos, or the landscapes with birds and pillars removed aren't what someone is seeing. They can be nice pictures, but IMO we're entering a different art than photography.
oh, there are a lot of people who like control though. Journalists/competition photography probably also would go into purists.
I think the police/insurance topic will be a big deal soon. I still remember the first building inspectors walking around with digital cameras, and of course when smartphones with "good enough" cameras came they started using these...
Now if society (including the courts) learn that the photos might not reflect reality, photo "evidence" could face issues being accepted by courts...
A non-purist will hate it too, as soon as the technology re-imagines the positions of his hands such that they are around someone's throat.
Or moves the hill a little to the left, and the forest a little to the front, and the tanks a little to the side, and next thing you know, artillery is raining down on a nearby village.
Exactly, this tech will send innocent people to prison.
I just want to be able to turn it off when I want to
Agreed. But since the iPhone 13 (the 12 was the last), there's no way to disable even the HDR processing.
It's best to just think of it as a different art form.
B&W film photography + darkroom printing is an art form, as is digital photography + photoshop. These modern AI assisted digital photography methods are another art form, one with less control left to the photographer, but there's nothing inherently wrong with that. I wouldn't want to say which is better, it's not really an axis that you can use to compare art is it?
At the end of the day, do you generate an image which communicates something that the photographer had in mind at the time? If so, success!
The mistake is in thinking photography is only, or even primarily, art. It's also measurement. A recorded observation of reality.
People use their cameras - especially phone cameras - for both these purposes, and often which one is needed is determined after the fact. So e.g. I might appreciate the camera touching up my selfie for the Instagram post today, but if tomorrow I discover a rash on my face and I want to figure out how long it was developing, discovering that all my recent selfies have been automatically beautified would really annoy me.
Or, you know, just try being a normal person and make a photo of your kid's rash to mail/MMS to a doctor, because it's a middle of a fucking pandemic, your pediatrician is only available over the phone, and now the camera plain refuses to make a clear picture of the skin condition, because it knows better.
I'm also reminiscing about that Xerox fiasco, with copy machines that altered numbers on copied documents due to some over-eager post-processing. I guess we'll need to repeat that with photos of utility meters, device serial numbers, and even scans of documents (which everyone does with their phone camera today) having their numbers computationally altered to "look better".
EDIT:
Between this and e.g. Apple's over-eager, globally-enabled autocorrect at some point auto-incorrecting drug names in medical documents written by doctors, people are going to get killed by this bullshit, before we get clear indication and control over those "magic" features.
I think the future will have digital mirrors with filters, so we don't have to mess with reality anymore, and our own imperfections. The raw image/reflection of oneself will be a societal taboo.
Sounds a bit like the "transmittable tableau" from David Foster Wallace's infinite jest:
https://www.litcharts.com/lit/infinite-jest/chapter-27
That last example is where I draw the line. It's one thing to enhance an image, or to alter the contrast across multiple frames to capture a more vibrant photo in challenging lighting. But for our photography apps to be instantly altering the actual reality of what is occurring in a photo, such as whether someone is smiling or has their hands in a certain pose, or whether they're in a crowd or all alone next to a landmark, is not a feature that I think should be lauded.
I do. Photos can be material to court cases where people's money, time, even freedom are at stake. They can sway public opinion, with far-reaching consequences. They can change our memories. At the very least, there should be a common understanding of exactly how camera software can silently manipulate photos that they present as accurate representations of reality.
I think purists are fine with it as long as you can turn it off.
You are overblowing things; 99% of the photographers out there, and 99% of the professionals among those, don't have this sentiment, since it's a primitive emotional one and detrimental to any actual work. Maybe a few loud people desperate for attention give you a different impression about the state of affairs, but these days, for any topic, internet discussions can easily twist perception of reality and give a very wrong impression.
You simply have to be practical and use the best tool for the job. If you ever actually listened to photo artists talking among their peers about their art (e.g. Saudek), they practically never talk about technical details of cameras or lenses; it's just a tool. If they go for analog photography, it's because they want to achieve something that's easier for them that way, maybe due to previous decades of experience, not some elitist Luddite mindset. Lighting of the scene, composition, following the rules and then breaking them cleverly, capturing/creating the mood, etc. are what interest them.
It's honestly better than this on all fronts, since you can get ProRAW out of recent iPhones even in the default camera app and get RAW without DeepFusion out of different alternative camera apps.
I think I had to spend ~$1k to get my first DSLR with RAW support back in the 2000s. Adjusted for inflation, Halide + a recent iPhone feels like a pretty good deal.
Apple's obsession with hiding the magic trick is hurting them badly here. Just like Live Photos show you a video from before and after you pressed the shutter, every single picture should include an unprocessed, probably way too dark picture without any computational photography. That way regular users could truly grasp what their phone is doing.
I figure that digital photography is by its very nature 'computational', both in the obvious sense, and in the sense that the camera from hardware up imposes a set of signal-forming decisions on what is essentially just structured noise.
The problem is more one of what controls the camera exposes to the user. If you can just take one kind of picture: whatever picture the engineers decided was 'good', then it limits your expressive options.
My issue with this has nothing to do with purism, but with how often the results are just no good, for reasons that have nothing to do with the sensor, but the choices of whatever model they run. Does it take a picture at night? Yes, but it's often unrecognizable compared to what my own sensor, my eyes, sees. It's not a matter of a slightly better reality, but the camera making choices about how light should go that have nothing to do with the composition in front of it.
You might remember an article about how there are many situations where the iPhone just takes bad portraits, because its idea of what good lighting is breaks down. Five-year-old phones often take pictures I like more than the latest phones do, and not because the hardware was better, but because the tuning is just bad.
Fun things also happen when you take pictures of things that are not often in the model: crawlspaces, pipes, or, say, dentistry closeups. I've had results that were downright useless outside of raw mode, because the computational photography step really had no idea what it was doing. It's not that the sensors are limited, but that the things the iPhone does sometimes make the picture far worse than in the past, when it took fewer liberties.
When you think about it, with rolling shutter no two rows (or columns?) of pixels in a given picture are captured at the same moment, unless you are shooting with a global-shutter camera — which is rare for consumer-type devices.
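The point above is easy to see with a toy calculation. This is just a sketch with made-up numbers (a hypothetical 3000-row sensor with a ~30 ms readout, not any real sensor's spec): with a rolling shutter, each row begins its exposure a fixed delay after the previous one, so the top and bottom of the frame sample different moments.

```python
def row_capture_times(num_rows, readout_time_s):
    """Return each row's start-of-exposure offset (in seconds) from row 0.

    With a rolling shutter, rows are read out sequentially, so row r starts
    roughly r * (readout_time / num_rows) later than the first row.
    """
    line_delay = readout_time_s / num_rows
    return [row * line_delay for row in range(num_rows)]

# Hypothetical sensor: 3000 rows, 30 ms total readout.
times = row_capture_times(num_rows=3000, readout_time_s=0.03)
print(times[0])   # first row: 0.0 s
print(times[-1])  # last row: ~0.03 s later — a fast-moving subject has moved by then
```

That ~30 ms top-to-bottom skew is why fast-moving subjects (propellers, guitar strings) come out warped on rolling-shutter sensors.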
There's a big difference between looking at reality through a bad filter and looking at a completely different constructed reality.
As a layman in photography, I agree with you.
But it is easy to understand the artists. It is said that in art everyone needs to master the technique first, the tools of the trade, but true works of art are the expressions created with those various techniques. At this point, a tool — computational photography in this case — may get in the way. So it is not about purism. Quite the contrary: it is about being able to use the tools and bend reality the way an artist wants.
Having said that, I would think anyone would normally use *all* the tools available at their disposal, and the truth is that the iPhone camera, among other things, is a great one anyway.
Why not both?
I am more in the purist camp, because when people take an iOS photo, I remind them that someone else made the decision on how that photo should look. Additionally, we are in an era of not trusting anything on the internet or in a photo anymore. Do we want photojournalism to go down that same path? I don't. So I enjoy being closer to "reality" than the computational photos, but for average entertainment photos, I don't mind.