Motion blur all the way down (2022)

geor9e
39 replies
4d13h

The tradeoff of rendering or filming motion blur at finite refresh rates is that the audience can move their eyes to follow an object moving around the screen. In real life, that causes the object to become sharp. So, you either need to track eye motion and blur according to relative motion, or do no motion blur at an infinite refresh rate. Neither is practical with today's technology, so it's always going to look wrong. A good director or game designer will choose shutter rate or render blur according to how they expect the audience's eyes to be moving.

chaboud
19 replies
4d13h

However, the audience is used to shutter angle as part of the visual vernacular (e.g., narrow shutter angle for hyper edgy rap videos, long shutter angle for dreamy retro vibes). If rendered content can't speak the same visual language, a tool is missing, regardless of framerate (up to a point... At 400Hz, I'd be impressed by someone really seeing the difference).

What's interesting about rendered content is that it can extend that (e.g., a shutter angle beyond the duration of a frame), playing with something we thought we had a handle on.

orlp
13 replies
4d6h

At 400Hz, I'd be impressed by someone really seeing the difference.

Trivial. Drag your white cursor quickly across the screen against a black background. You will see clear gaps between the individual cursor afterimages on your retina.

Double the FPS, halve the size of the gaps. On a 240Hz monitor the gaps I see are so clear that even if they were halved they would still easily be visible. Ergo 400Hz would still easily be distinguishable from continuous motion.

To put numbers on this, consider a vertical line 1 pixel wide moving across a 4K screen in 1 second (that's not even that fast). At 480Hz that's a shift of 8 pixels per frame. So you'd need at least 8x the framerate for motion of this line to be continuous.
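
To make the scaling concrete, a quick Python sketch (the 3840 px width and the 1 s crossing are the assumptions from above):

    # Gap between successive afterimages: an object moving at v px/s
    # jumps v / hz pixels between frames at refresh rate hz.
    def pixels_per_frame(speed_px_per_s, refresh_hz):
        return speed_px_per_s / refresh_hz

    width = 3840  # 4K horizontal resolution, crossed in 1 second
    for hz in (240, 480, 960, 3840):
        print(f"{hz:>4} Hz: {pixels_per_frame(width, hz):.0f} px/frame")
    # 240 Hz: 16 px, 480 Hz: 8 px, 960 Hz: 4 px; only at 3840 Hz does
    # the 1 px line advance 1 px per frame, leaving no visible gaps.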

zozbot234
5 replies
4d5h

Trivial. Drag your white cursor quickly across the screen against a black background. You will see clear gaps between the individual cursor afterimages on your retina.

That's the outcome of aliasing, not of the FPS limitation itself. You could analytically supersample the motion in the time domain and then blur it just enough to remove the aliasing, and the distinct images would then disappear. Motion blur approximates the same result.
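
A rough sketch of that idea in one dimension (the sample count and sizes are made up for illustration):

    import numpy as np

    def dot_row(width, pos):
        # 1-px-wide white dot on a black row
        row = np.zeros(width)
        row[int(pos) % width] = 1.0
        return row

    def supersampled_frame(width, pos_at, t0, t1, n=64):
        # Average n sub-positions over the exposure window [t0, t1):
        # the discrete afterimages merge into a continuous smear.
        ts = np.linspace(t0, t1, n, endpoint=False)
        return np.mean([dot_row(width, pos_at(t)) for t in ts], axis=0)

    # A dot crossing 64 px within a single frame: a naive render lights
    # one pixel per frame (aliasing); this spreads 1/64 of the energy
    # over the whole traversed span, which is exactly motion blur.
    frame = supersampled_frame(64, lambda t: 64 * t, 0.0, 1.0)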

blauditore
2 replies
4d4h

That's not true at all. When moving multiple pixels per frame, there will always be visible jumps. Aliasing only becomes relevant at the pixel level.

zozbot234
1 replies
4d3h

"Pixels" and "frames" are the exact same thing analytically, only in the spatial vs. time domain. This is very much an instance of aliasing, which is why blur happens to correct it.

blauditore
0 replies
7h54m

Alright, but supersampling would just produce another approximation of (real) motion blur, which still suffers from the same issues (like not becoming sharp when you track it with your eyes).

01HNNWZ0MV43FF
1 replies
4d4h

Yeah but if your eyes were tracking it smoothly, it would not appear blurry. You could try to approximate _that_ with eye tracking but achieving such low latency might be even harder than cranking up the FPS

zozbot234
0 replies
4d3h

If your eyes were tracking the motion smoothly, the moving object would not appear blurry, but the static background absolutely would. So you'd need to apply anti-aliasing to the background while keeping the object images sharp. (Similar to how a line segment that's aligned with the pixel grid will still look sharp after applying line anti-aliasing. Motion tracking here amounts to a skewing of the pixel grid.)

Ruarl
5 replies
4d2h

Some experiments by a colleague a few years ago indicated 700Hz might be the limit. Will take some verification, obvs.

corysama
2 replies
4d1h

I read a study that put people in a dark room with a strobing LED and told them to dart their eyes left and right. 1000Hz was the limit before everyone stopped seeing glowing dashes and saw a solid line streak instead.

RedlineTriad
0 replies
3d7h

I was researching this because I was wondering how fast you can make an LED flicker for lighting effects before it looks like constant brightness.

I found most of the information on Wikipedia[0], and the limit seems to be at about 80 Hz, but together with movement, some people can see stroboscopic effects up to 10 kHz.

[0] https://en.wikipedia.org/wiki/Flicker_fusion_threshold#Strob...

Modified3019
0 replies
4d1h

I'm reminded of an old Microsoft input research video, where a 1 ms latency response is what's needed for the most lifelike touch drawing on a screen: https://m.youtube.com/watch?v=vOvQCPLkPt4

mathgradthrow
0 replies
4d2h

Try watching ping pong under a 1000Hz strobe.

geor9e
0 replies
3d14h

That's definitely not the limit. At 700Hz, a 700px wide screen with the 1px line crossing it every 1s would qualify as reaching the limit in that situation. But, speed that line up so it crosses every 0.5s, and it's no longer good enough. You've introduced an artifact - the object looks like multiple copies now, equally spaced apart by 1px gaps. It's not a smooth blur. The display never displayed a gap, but human eyes merge together an afterimage, so we see multiple frames at once over the last ~1/30s. Now go to 1400Hz, but double the speed of the line so it crosses in 1/4s. Now you need 2800Hz to eliminate the artifact. Or, you can artificially render motion blur, but then it won't look right if your eye follows the line as it crosses. So it's also a function of how fast your eye muscles can move across the screen. Thirdly, we can't limit ourselves to a 700px screen - a screen filling the human field of view would need to be 20-30 million pixels wide before one can no longer resolve a 1px gap between two vertical lines. There is eventually a completely artifactless limit, but it's way higher than 700Hz. Of course, 700Hz is nice, and if you fudge the criteria (how often do you see a 1px line moving across your field of view at high speed in real life?) you can argue it's good enough.
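
The scaling in this argument is easy to put into a formula (a sketch; the 25-megapixel width is an illustrative midpoint of the 20-30 million estimate above):

    # A 1 px object must move at most 1 px per frame to leave no gaps,
    # so the required refresh rate equals its speed in px/s.
    def min_hz_for_no_gaps(screen_px, crossing_time_s):
        return screen_px / crossing_time_s

    for w, t in [(700, 1.0), (700, 0.5), (700, 0.25), (25_000_000, 0.25)]:
        print(f"{w} px crossed in {t} s -> {min_hz_for_no_gaps(w, t):,.0f} Hz")
    # 700 Hz, 1,400 Hz, 2,800 Hz, and 100,000,000 Hz respectively.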

mrob
0 replies
4d4h

Another example is multiplexed LED 7-segment displays. These distort in different ways depending on how you move your eyes. Even 400Hz is too low to display a typical one realistically. If you use motion blur you'll lose the eye-movement-dependent distortion of the real thing.

harimau777
1 replies
4d5h

Can you recommend any resources on how shutter angle is used to communicate different messages in film? The Wikipedia article only gives some very basic examples (short shutter angle when you want to capture particles in the air or for action scenes).

drc500free
1 replies
4d13h

I first learned about shutter angle when reading about its use in Saving Private Ryan; the narrow 45 and 90 degree shutters made the opening scenes much more visceral with so much flying through the air.

https://cinemashock.org/2012/07/30/45-degree-shutter-in-savi...

arrowsmith
0 replies
4d10h

That was a really interesting article, thanks!

CarVac
0 replies
4d12h

It's also used for subject isolation in much the same way as depth of field.

Only objects not moving relative to the frame can be seen sharply.

cubefox
7 replies
4d7h

An alternative to infinite refresh rate is to show each frame for only a fraction of the normal frame time, i.e. to quickly "flash" the individual frames and show a black screen otherwise. This reduces the maximum screen brightness, and it requires a minimum frame rate to avoid flicker, but it reduces the type of "eye tracking blur" (persistence blur) which isn't present in real life. To be precise, to completely remove the tracking blur you would need to flash each frame for only an infinitesimal time period, which of course isn't realistic. But VR headsets do use this flashing/strobing approach.

By the way, this is also the reason why CRT and Plasma screens had much better motion clarity than LCD or OLED. The former flash each frame for a short time, while the latter "sample and hold" the frame for the entire frame time (e.g. 1/60th of a second for 60 Hz). 60 FPS on a CRT probably looks more fluid than 120 FPS on an OLED.
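
The persistence blur described here is simple to quantify (a sketch; the 2000 px/s tracking speed is an assumption):

    # While the eye tracks an object at v px/s, a frame held on screen
    # for t seconds is smeared across v * t pixels on the retina.
    def smear_px(track_speed_px_s, persistence_s):
        return track_speed_px_s * persistence_s

    v = 2000.0                   # assumed eye-tracking speed, px/s
    print(smear_px(v, 1 / 60))   # ~33 px smear: 60 Hz sample-and-hold
    print(smear_px(v, 1 / 120))  # ~17 px smear: 120 Hz sample-and-hold
    print(smear_px(v, 0.001))    # 2 px smear: ~1 ms flash (CRT/strobe)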

Another option for games is to indeed add a lot of frames by using reprojection techniques. This can approximate the real camera movements without needing to render a ton of expensive frames in the engine. This is also already used in VR, just not currently at very high frame rates. This great article goes into more detail:

https://blurbusters.com/frame-generation-essentials-interpol...

Something like 1000 FPS with reprojection is apparently quite realistic, which should solve the problem of tracking blur without reducing screen brightness.

account42
5 replies
4d6h

Something like 1000 FPS with reprojection is apparently quite realistic, which should solve the problem of tracking blur without reducing screen brightness.

Is a 1000 FPS screen more realistic than a screen capable of the higher maximum brightness needed to compensate for black frame insertion though? HDR screens are already a thing, and you could already gain persistence improvements there for LDR scenes without needing any new hardware by always driving pixels at max brightness but only for a reduced time depending on the target luminance.

Or just reduce the ambient light enough - I run even my LDR monitor at 10% brightness.

mrob
3 replies
4d4h

Low persistence only gives perfect motion when your eye motion perfectly matches the object motion[0]. For all other motion, low persistence causes false images to be generated on the retina, which can be seen as "phantom array effect"[1].

I think interpolation up to several kilohertz is the best solution, preferably starting from a moderately high frame rate (e.g. 200fps) to minimize the latency penalty and artifacts.

[0] https://en.wikipedia.org/wiki/Smooth_pursuit

[1] https://en.wikipedia.org/wiki/Flicker_fusion_threshold#Visua...

cubefox
2 replies
3d23h

I think the second effect is quite small compared to the first because CRTs are generally good at the first but don't suffer much from the second, as far as I'm aware.

mrob
1 replies
3d22h

CRTs suffer strongly from phantom array effect, but individual sensitivity to this effect varies. The people who notice it are the same ones who complain about PWM lighting.

cubefox
0 replies
3d1h

I think most people don't see this effect on CRTs, so I wouldn't say they (the CRTs) suffer much from it.

cubefox
0 replies
4d5h

As far as I understand, the only thing that matters for reducing unrealistic "tracking blur" is indeed how short a frame is displayed on screen. Which could be achieved by strobing / black frame insertion or by increasing frame rate. Or even both, as in some VR applications. So the effect should be the same, except for the brightness thing, which may not be a problem if there is a lot of screen brightness headroom anyway. For HDR LCD TVs the LED backlights get quite bright actually. OLED not so much.

One advantage of higher frame rates (as opposed to strobing) would be quicker input responses to, e.g., moving the camera. That's not overly important on a normal screen but quite significant on a VR headset where we expect head movements to be represented very quickly.

terribleperson
0 replies
4d

Many gaming monitors offer a 'blur reduction' feature using a strobing backlight.

nyanpasu64
6 replies
4d10h

Moving objects can become even sharper if each frame is displayed in a fixed location for a shorter period of time (reduced MPRT), preventing eye-tracking motion from smearing each displayed frame. This can be achieved through CRT/OLED scanout (often rolling) or LCD backlight strobing (usually full-frame by necessity). Unfortunately displaying each frame for a short time is unbearably flickery at 24 Hz (so movie projectors would show film frames 2-3 times), just barely tolerable at 50 (to the point some European CRT televisions frame-doubled 50 Hz video to 100 Hz, causing objects to appear doubled in motion), and ideally needs 70-75 Hz or above for optimally fluid motion and minimum eyestrain (which can't show 60 FPS recorded video without judder and/or tearing).

cubefox
3 replies
4d6h

Double images are already present in NTSC/PAL due to interlacing. So when doubling the Hz of PAL on a CRT, you get a quadruple image. Though this probably still looks better than non-interlaced content on LCD/OLED, which exhibits smearing instead.

mrob
2 replies
4d5h

Double images are only present in converted film content, which in the case of PAL is done by speeding it up from 24fps to 25fps, and splitting each frame into two fields. In the case of native PAL video content, each interlaced field shows image data from a different time, so you don't see double images.

cubefox
1 replies
3d23h

In the case of PAL video we don't see two duplicate frames, but we still see two different frames (fields) at the same time. That's the "double image" I meant.

mrob
0 replies
3d22h

On an interlaced display, the two different fields are not displayed at the same time. If you merge the two fields into a single non-interlaced frame and display it on a progressive scan display you might see a double image, but that's not inherent to interlaced video.

s4i
1 replies
4d7h

to the point some European CRT televisions frame-doubled 50 Hz video to 100 Hz, causing objects to appear doubled in motion

Hmm, this is not accurate (or I don't understand what you mean). 100Hz CRT TVs available in the 90s/00s did not interpolate frames to get smoother motion – they only existed to reduce flicker. I think such TVs also existed in NTSC markets (120Hz)?

Anyway, ever since the late 00s, pretty much all the TVs you can buy from a store do come with an interpolation algorithm to artificially display a higher frame rate image (e.g. 100Hz/120Hz) from a lower frame rate source (e.g. 23.976/24/29.97/30/50/59.94/60 fps) – which (personal opinion) looks terrible (and can be turned off from the settings – but the default is always on). This is an interesting side tangent when it comes to motion blur, because the blur is prebaked in the input signal and cannot be easily removed. Thus, the end result always has an artificial look.

For instance, if the source material is shot at 24fps with the typical 180 degree shutter angle, each frame spans 41.7ms, of which the shutter was open for 20.8ms. Then your TV interpolates that to 96Hz or whatever. The individual output frames still look (or, with the added artifacts etc., mostly look) like the shutter was open for 20.8ms per frame. However, each frame now spans only 10.4ms, which is shorter than the time the shutter was open!
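
For reference, that arithmetic as a sketch (96 Hz is just the example interpolation target from above):

    def frame_span_ms(fps):
        return 1000.0 / fps

    def shutter_open_ms(fps, shutter_angle_deg):
        # A 180-degree shutter is open for half of each frame's span.
        return frame_span_ms(fps) * shutter_angle_deg / 360.0

    print(frame_span_ms(24))         # 41.7 ms per source frame
    print(shutter_open_ms(24, 180))  # 20.8 ms of baked-in exposure
    print(frame_span_ms(96))         # 10.4 ms per interpolated frame,
                                     # shorter than the baked-in blur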

cubefox
0 replies
4d6h

As far as I understand, you get doubling not because of any interpolation but because you track objects with your eye. That means the tracked object is at rest in your vision, which means the screen actually moves relative to your vision, which means the object smears, where the pixel length of the smear is equal to the pixel distance the tracked object moves in the time the frame is visible on screen. On a CRT there is less smear, because the frames are only flashed for a short time on the otherwise dark screen. But if the FPS is not equal to the display Hz, but e.g. half of it, the tracked object appears at multiple (e.g. two) locations on the screen. Because you show the frame twice, and if you track an object, the screen has moved (relative to your vision) by a few pixels by the time you show the second frame. This happens e.g. if you play a 30 FPS game on a 60 Hz CRT, or when you watch 50 FPS content on a 100 Hz screen. Though the effect isn't usually as dramatic as it may sound here. Doubling is also not as bad as smearing, e.g. in the case of scrolling text.
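
The separation of the doubled image follows directly from this (a sketch; the 1200 px/s tracking speed is an assumption):

    # If each rendered frame is flashed twice on a display running at
    # display_hz, a tracked object's two copies land where the eye was
    # at each flash, i.e. track_speed / display_hz pixels apart.
    def copy_separation_px(track_speed_px_s, display_hz):
        return track_speed_px_s / display_hz

    print(copy_separation_px(1200, 60))   # 30 FPS game on a 60 Hz CRT: 20 px
    print(copy_separation_px(1200, 100))  # 50 FPS content at 100 Hz: 12 px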

Veedrac
2 replies
4d13h

You can't properly make a moving image sharp at a finite refresh rate (though you can approach it for sufficiently high ones), because the object can only move in a temporally juddered path, which doesn't match the eye's movement.

planede
1 replies
4d7h

You can with a strobed backlight.

Veedrac
0 replies
3d10h

Fair point.

pierrec
0 replies
4d13h

Exactly, while doing background research for this I read a really neat paper that describes what you're saying, and offers a solution based on trying to predict the viewer's eye motion: Temporal Video Filtering and Exposure Control for Perceptual Motion Blur (Stengel et al., 2015)

jvanderbot
18 replies
4d1h

Despite the promise of this making visual media feel more real, I can't help but feel that, for games, this makes video games look like a cheap approximation of over-edited movies. For things moving real fast, or real close, or especially with non-ego motion, this all makes sense. But it is over-used for things like "the character turns quickly".

When I snap my head (or eyes) around, I don't see a blurry image, I see the new image, and my brain drops the intermediate data. You can test this by looking at yourself in a mirror, focusing on one eye, then another. Do you see your eyes or face or anything blur?

Adding blur just delays the presentation of the new view when moving your POV in games. It's distracting and unrealistic.

sxp
10 replies
4d

You can test this by looking at yourself in a mirror, focusing on one eye, then another. Do you see your eyes or face or anything blur?

This isn't the best test since your brain compensates for saccades by censoring input for a while. And it has built-in motion stabilization which allows you to read a sign while walking down the street.

A better test would be to look at your finger or hand while waving it very quickly. You'll see motion blur as your finger moves. In certain cases, you can see individual "frames" as an afterimage. E.g., if you look at a modern car tail light, the LED isn't on consistently but uses PWM to blink really quickly. So if you move your eyes while looking at a tail light at night, you'll see a series of dots rather than a blurry image. Once you learn this trick, you can distinguish between analog lights and PWMs by noticing whether they are blurry or discrete.

jvanderbot
6 replies
4d

I'm referring more to the FPS obsession with blurring things when the character moves. For that, turn your head quickly: do you see a blurry room? I do not. Because your eyes must also move, and as you say, the brain censors because a "turn and look" includes an eye movement (I hypothesize).

For fast moving things outside of ego motion, I can see a reason to use blur (e.g., your hand waving).

SamBam
5 replies
3d22h

That's a good point, but how would turning your head in a first-person perspective ever look good in a game? As you say, when we turn our own heads our eyes naturally saccade. But I'm not sure having this done unnaturally -- e.g. by having a couple static intermediary images -- would look right at all. But smooth isn't right either.

jvanderbot
3 replies
3d22h

It would rely on the eye's natural saccade. My brain drops the intermediate frames already and I mostly just see the new perspective, just like real life. Adding blur makes the motion stand out.

SamBam
2 replies
3d22h

Except that your natural saccade is also wired to your motor neurons in your head and neck. For example, there's a strong connection between the neurons that fire to turn your head 10° to the right, and the neurons that fire to turn your eyes 10° to the left, which allows your eyes to fixate on something even as your head moves.

Indeed, it's almost impossible to turn your head slowly without your eyes fixing on points in your field of view, and then saccading to the next point. Try it. This is a hard-wired response. (You can perform some tricks to do it, like completely blurring your eyes, but that's not really the point.)

Meanwhile, if we watch a video with the camera turning, it's easy to keep our eyes staring straight ahead.

This shows that there is going to be a difference between a head naturally turning and watching a video of a head turning.

I think you're describing saccades as just being the "dropping out" of frames, and not including the extremely-relevant motion that our eyes do during a saccade.

jvanderbot
1 replies
3d21h

I appreciate the attention to detail in this thread.

I think the only thing I can say here is - I don't see anything weird when I turn my head, and when FPS games added blur to "character moves his own body", it felt much less natural, not more. So, to answer the question a few posts up, "how would we do it?" I'd just say "The way we always have before we got enough GPU to make everything look like a movie"

account42
0 replies
3d7h

The way we always have before we got enough GPU to make everything look like a movie

Actually I think motion blur is more of a (futile) attempt to hide low frame-rates because we don't have enough GPU to render the ever more demanding scenes.

account42
0 replies
3d7h

Just having a high frame-rate without (excessive) motion blur works pretty well. Perhaps it's not 100% realistic without eye movement but it's at least not as distracting as forced motion blur. I'm also not entirely sure that your eyes won't saccade in this case as they can lock onto moving objects just fine without body movement.

robertsdionne
1 replies
4d

The point is that ego-motion-blur is dumb. (It can also make players motion-sick.)

You say it yourself: "your brain compensates for saccades by censoring input for a while. And it has built-in motion stabilization which allows you to read a sign while walking down the street."

robertsdionne
0 replies
4d

I think the best implementation (without eye-tracking) would not motion-blur the entire scene when the player moves themselves (ego-motion-blur) but would blur moving objects. However, if the player's view motion happens to coincide with the motion of objects in the scene, for instance if the player is tracking the object (maybe another player so they can shoot it in an FPS) then the object-motion-blur should be reduced according to the degree of tracking. I'm sure some game engines already do this.

devilbunny
0 replies
3d20h

This was one of the interesting side effects of a small retinal hemorrhage. I had a fixed black spot (well, it actually looked a little like a thread) and the fact that it didn't move in my visual field made me realize just how many saccades you do.

Sohcahtoa82
3 replies
3d19h

Motion blur is a contentious subject in games because it's often done very poorly.

The three biggest crimes are blurring an object too far, blurring things that shouldn't be blurred, and blurring an entire scene (which I guess just results in #2, so maybe there are really only two crimes). The most important thing is that motion blur should be subtle. Properly-done motion blur doesn't make a game look blurry, it makes it look more real and smooth.

If an object is moving 50 pixels between frames, then the blur should not be more than 50 pixels wide. Really, it should likely be only 25 pixels to make the blur more subtle. But for some reason, racing games love to apply a full radial blur to an entire scene while going fast, and to me it just ruins it.

Likewise, an object that is not moving relative to the camera should have no motion blur at all. This is where so many games get it wrong. As you rotate, they blur the scene using post-processing. In a completely static scene, this makes sense and is a very computationally-efficient way to perform motion blur. But if you're rotating because you're tracking an object, the tracked object should not be blurred. If you're in a racing game and the car next to you is going the same speed as you, the other car should not be blurred.

Really, the only guaranteed way to get correct motion blur is to render multiple entire frames in a row and then blend them, but you need a LOT of samples to make this look good, or else something like a vertical line looks like a series of vertical bands as it flies across the screen, rather than a smooth blur.

The best is probably a hybrid approach. Render each object in the scene on its own and apply a post-process blur on a per-object basis based on which direction it is moving relative to the camera. Though Z-ordering could end up becoming a major challenge.
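
A minimal sketch of the brute-force blend described above (names are illustrative, not from any particular engine):

    import numpy as np

    def motion_blurred_frame(render, t0, t1, samples=32):
        # Brute force: average `samples` full renders spread across the
        # exposure window [t0, t1). With too few samples, a fast-moving
        # vertical line shows up as discrete bands instead of a smear.
        ts = np.linspace(t0, t1, samples, endpoint=False)
        acc = np.zeros_like(render(t0), dtype=np.float64)
        for t in ts:
            acc += render(t)
        return acc / samples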

account42
2 replies
3d7h

As you rotate, they blur the scene using post-processing. In a completely static scene, this makes sense

No, it doesn't make sense because due to eye movement that's not how turning looks in the real world. Any game that blurs the scene on camera rotation is doing it wrong even if nothing else is moving at the time.

It's impossible to do it correctly without very low latency and high precision eye tracking so the best you can do is to not blur anything that the player might be focusing on.

subb
1 replies
3d3h

Games are not the real world. When you play a game, you are looking at an image. For the current topic, motion blur in games is like motion blur with a camera, not like when you turn your head.

It's photorealism, not realism.

account42
0 replies
2d9h

Games however are not movies either. Specifically, they are interactive and thus have different requirements on their visual looks. Most games are also expending significant effort trying to make the experience as immersive as possible, and simulating looking through a film camera works against that. Camera motion blur, film grain, lens flares and other such effects should have no place in games where you play a human or human-like character rather than a robot.

throwaway8481
0 replies
4d1h

I feel like motion blur should be used like mouse acceleration. Like you said: If I whip my head around quickly that image should present clearly and immediately. However, if I'm in a car or flying through the air (in a game) this may distort the image as I'm glancing. This is the kind of drag-as-you-accelerate blur I'm expecting, and it can really make a fast-paced moment look that much more exciting. Motion blur is poorly executed everywhere.

Wowfunhappy
0 replies
3d22h

In video games, I find motion blur helps to compensate for lower framerates. It's particularly necessary at 30 fps, but nice even at 60 fps.

On the rare occasion I've been able to play a video game at 120 fps—my projector is limited to 60hz, sadly—I've found that I prefer to play without motion blur.

Log_out_
0 replies
4d

Actually you do not see a new image, you see an amount of detail, and your brain rewrites the history of what you saw so there is continuity.

causi
11 replies
4d4h

Motion blur makes sense for films. I have no idea why anyone wants it in a game. The limitations of display technology and of the human eye introduce all the blur I could ever want.

virtualritz
7 replies
4d4h

Motion blur makes sense for films. I have no idea why anyone wants it in a game.

TL;DR: motion blur always makes sense atm, no less so for games.

For games the difference it makes depends on the FPS and the distance an object moves within the FOV.

Estimates say humans can see max. 60FPS. Even if your game runs at 120FPS and something moves across the entire FOV -- that is two discrete samples of that object at most. You will get strobing.

Only if you render the geometry with motion blur will this look natural.

The alternative is probably to have multiple 10k FPS but that is wasteful.

If your biological limit is around 60FPS, it should be cheaper to generate motion-blurred frames at this rate (where more samples are used in areas with more motion blur) than rendering every pixel of a frame at multiple 10k FPS.
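
One way to read "more samples where there is more motion blur" (a sketch of adaptive temporal sampling; the budget numbers are assumptions):

    def temporal_samples(motion_px, base=4, per_px=0.5, cap=64):
        # Spend the sampling budget where motion vectors are long;
        # nearly static pixels need almost no temporal samples.
        return min(cap, base + int(per_px * motion_px))

    print(temporal_samples(0))    # 4 samples for a static pixel
    print(temporal_samples(40))   # 24 samples for 40 px of motion
    print(temporal_samples(500))  # capped at 64 for extreme motion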

causi
5 replies
4d3h

Human eyes do not work based on discrete frames at all. The entire "sampling" paradigm is completely wrong.

Estimates say humans can see max. 60FPS.

This is a ridiculous statement. You know that's wrong if you've ever switched your phone between 60 and 120hz. Even without personal experience, most HN readers would be aware that, for example, when the Oculus Rift folks were developing their headset they found 90hz to be the minimum to avoid visual disturbances and nausea in an immersive environment.

virtualritz
3 replies
4d1h

Ofc the human visual system does not work at discrete frames.

However, there is a threshold on how much information your brain can process and tests indicate this maxes out at the temporal detail you can fit into 60 'discrete' frames per sec. For many people it is actually much less.

That is: in general.

Specifically, this threshold differs based on certain variables, the most important one being available light (number of photons per unit of time).

I.e. the less light, the more motion blur there is because there is more temporal integration happening in your brain's visual system. Estimates say your brain goes down to a temporal information density of less than 10 FPS in very low light circumstances [1].

Your brain does perceive motion blurred frames differently. How do I know? I worked in VFX on many blockbuster movies. We preview stuff using realtime graphics (aka what game engines do) w/o motion blur at the same frame rates as the final frames (which are with motion blur, ofc).

Even on projects where we used 60 fps there is visible strobing w/o motion blur. And as I said: even when you preview stuff at 120 fps or more there is visible strobing with geometry that covers big distances in your FOV.

Motion from game engines w/o motion blur looks like it came from a game engine not least because of this. There is also a night and day difference between post-process motion blur, based on 'smearing' the image using a motion vector pass, and true 3D motion blur.

"The Hobbit" triology was done at 60FPS. But it still had motion blur, for obvious reasons.

[1] https://www.sciencedirect.com/science/article/pii/S004269890...

EDIT: typos/grammar

causi
1 replies
4d

The Hobbit was recorded at 48fps, not 60.

And as I said: even when you preview stuff at 120 fps or more there is visible strobing with geometry that covers big distances in your FOV.

Exactly. The game has no idea what is and isn't moving in my FoV because it doesn't know what I'm looking at, only what the camera is looking at. Subjectively, in my experience, games do a terrible job at predicting what I'm looking at and I'd prefer they stop messing up my experience by trying. Strobing is better than a smeary mess.

virtualritz
0 replies
3d6h

The Hobbit was recorded at 48fps, not 60.

True, but it doesn't really change the point.

TL;DR: gamers hating motion blur are not hating motion blur. They hate the way it's implemented. It's far from the real thing.

The actual misunderstanding here is that you are comparing apples to oranges. Motion blur for games is just as good and well-placed as it is for movies.

But only if it is actual true 3D motion blur that mirrors what happens in the real world (as in the parent article of this discussion).

In practice almost all games only blur the camera which blurs everything which leads to gamers then complaining about motion blur (and some even getting motion sickness/headaches).

Simple example: an object moves left across the frame and the player tracks it while strafing fast to the right. Real motion blur means the object has almost zero blur since the player is tracking it (keeping it approximately in the center of their FOV) while the BG is heavily blurred because of the strafing move the player is making.

When you only have camera blur, everything, including the tracked object is heavily blurred. That's what most games do and what gamers rightfully hate. But it is not the motion blur you see in film. Some games seem to do better now, I think this is called 'per pixel motion blur' in the games context (but I may be wrong).

Exactly. The game has no idea what is and isn't moving in my FoV because it doesn't know what I'm looking at [...]

That is a valid point if you had true motion blur and wanted to make this even better in a gaming context. See above.

I used to own and play around the FOVE VR headset (their SDK was absolutely terrible at the time and Windows-only, unfortunately, I sold it to a uni eventually) [1].

It had eye tracking. And that is indeed what you'd need to do the motion blur "better" for games where a player tracks fast moving objects accross the frame. Because you will have a slight discrepancy between the tracking of the eyes and the tracking of the look-at direction from motor latency and under-/overshoot.

I.e. calculate the difference in movement of eyes against final transform of geometry (including camera) to calculate how much motion blur you actually need.

For lack of eye-tracking hardware: I wonder if the game could guess what you're looking at and compensate the look-at vector with that to simulate fake eye tracking and get motion blur that is better received by some people.

That is the theory. In practice most games just do the "motion blur" wrong for starters. See above.

[1] https://en.wikipedia.org/wiki/Fove
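
The eye-tracking compensation described above reduces to subtracting the (estimated or tracked) gaze velocity before computing blur (a sketch; all names are hypothetical):

    import numpy as np

    def blur_vector(object_vel_px, gaze_vel_px):
        # Blur should follow motion relative to the eye, not the camera:
        # a perfectly tracked object gets (near) zero blur, while the
        # background gets the full counter-motion.
        return np.asarray(object_vel_px) - np.asarray(gaze_vel_px)

    gaze = [-300.0, 0.0]                     # eye tracking a left-mover
    print(blur_vector([-300.0, 0.0], gaze))  # tracked object: [0, 0], sharp
    print(blur_vector([0.0, 0.0], gaze))     # background: [300, 0], smeared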

scheeseman486
0 replies
3d5h

The Hobbit is interesting in that they made the choice to keep the shutter fully open, essentially maxing out the motion blur slider. It was a controversial choice, I'd argue the wrong one given the many complaints about the imagery during camera movement looking swimmy and indistinct.

virtualritz
0 replies
4d1h

See also https://news.ycombinator.com/item?id=39589268 for a test you can do yourself, if you have a 120fps screen.

If that screen had e.g. 12,000fps, you would see the mouse cursor as a motion-blurred streak, not individual images of the cursor next to each other.

kroltan
0 replies
4d1h

For (the vast majority of) games the player can be looking at anything on screen, not just a specific point.

In real life, eyes track objects in order to minimise blur.

But realtime motion-blur implementations just blur everything that moves relative to the camera, which is not correct, and makes the human looking at the screen lose track of the objects they were looking at.

If we had pervasive perfect low latency eye tracking then a game could pull off something that actually looks plausible, but that is not the case, so the only games that can get away with it are the ones that are largely a movie and can take advantage of the motion blur cinematographically.

jabroni_salad
0 replies
4d2h

You only have it on a moving element so that's okay.

What gamers really don't like is when you move the camera slightly and the entire scene turns into a deep fried oil painting.

ranger207
0 replies
4d2h

Yes, motion blur gets turned off immediately on every game that I have. I consider games that don't allow you to turn it off overproduced and likely to have other directorial choices I won't agree with. I would like to see the game environment, not a smeared mess of random colors

Rexxar
9 replies
4d11h

Shouldn't the torus and the latter sphere be partially transparent if they are built from motion blur? They seem to become opaque again at some point, and it feels strange to me.

xeyownt
4 replies
4d10h

I guess it cannot become transparent because you cannot see through an infinitely fast moving / bouncing object, because necessarily the object would intercept any ray of light crossing its course.

However, the light emitted by the moving object should be lower than that of the static object. So as the distance it covers increases, the object should become darker and darker.

vultour
2 replies
4d8h

Yes but that one is not moving infinitely fast. If it kept accelerating it would eventually become opaque.

willis936
0 replies
4d6h

If the ball takes up 45 degrees in the torus then it would be 7/8 transparent when moving infinitely fast. It would be another 1/8 transparent when spinning the torus into a sphere.

"Infinitely fast" being a stand-in for "sufficiently fast" and gated by the speed of light, of course.

CyberDildonics
0 replies
4d4h

There is no such thing as "moving infinitely fast", that's something the other person made up. What if light didn't fly straight and made loops and flowed like water and clustered up and stuck together?

If you're going to make up nonsense, you're not talking about physical reality, you're talking about cartoons or magic and fantasy.

When something moves fast in photography it has motion blur and that spreads its coverage over a larger area. In this demo the motion blurred 'torus' and 'sphere' both get made artificially more opaque to make the animation work. You can see it happen.

johanvts
1 replies
4d10h

Is light discrete or continuous?

account42
0 replies
4d5h

Yes.

pierrec
0 replies
4d4h

Yes, the torusphere concept is physically impossible. Because the whole thing is a loop, the object doesn't even have any theoretical substance, it's all made of (artificially thickened) motion blur. Hence the title, motion blur all the way down.

drewtato
0 replies
4d10h

It's definitely blending between sphere and torus, and then torus and sphere. Otherwise it would keep getting fainter as time went on.

Animats
7 replies
4d13h

This gets past trying to simulate a film camera and tries to simulate the human visual system. That's useful. Another step towards reality and away from simulating obsolete technology. Shutter-type motion blur may go the way of sepia-toned prints, 16 FPS black and white movies, and elliptical wheels from mechanical shutters.

subb
6 replies
4d12h

Let me know when we have the tech to render the real sun's power on your TV screen.

Reproduction of reality is not the goal, because it's unachievable.

kuschku
1 replies
4d4h

Modern HDR screens already cause the same subconscious perception as the sun would. You flinch, your eyes adjust, you even move a little bit back and feel a kind of warmth that isn't actually there.

subb
0 replies
4d4h

Anyone permanently burned their retina yet?

Animats
1 replies
4d11h

With enough money...

Larry Ellison used to have a TV projector in his house, with the light output for a drive-in movie theater but aimed at a small screen, so he could watch movies in broad daylight. That was before everyone got bright screens.

account42
0 replies
4d5h

I don't see how the result would avoid being either uncomfortably bright or still having shitty contrast due to the unavoidably high "black" levels from the ambient light.

astrodust
0 replies
4d2h

Watching Sunshine (2007) will be a real experience when that happens.

account42
0 replies
4d5h

Well HDR screens are becoming more common so we're moving in that direction.

Also, simulating realistic processes is not incompatible with tonemapping the result to be able to display it on limited screens.

pierrec
3 replies
4d13h

Ah, I assume this was posted because ambient.garden was just on the front page. Re-reading it now, I think it should really have been 2 articles.

I think I still like the first half, it's a reasonable dive into the detail of what motion blur is, or should be in theory. The second half is a slightly crazy, extremely condensed explanation of how the shader works for this particular "torusphere" animation based on motion blur. I find that part mainly useful because otherwise the code would be impenetrable at this point, at least to me. In retrospect the transition between the 2 parts feels a bit like jumping into a frozen lake, sorry about that!

chaboud
2 replies
4d13h

I read through it and got the sense that it just jumped into one particularly odd special case bag of crazy after providing a clean explanation of motion blur as a function of shutter-angle (exposure window). I'm glad it wasn't just the glass of wine I finished.

I've played with motion blur as a function of projection of 4D extrusions a few times, but the practical context (video, textures) ends up leaving it an impractical approach when compared to sampling in hardware with caching. This write up leaves me thinking "maybe just one more time".

pierrec
1 replies
4d11h

If you're working with triangle meshes, I think the approach of extruding triangles into prisms looks promising. It allows drawing arbitrarily long trails with OK performance. Sadly it did not match the constraints of cramming the whole scene into a shader, hence the weirder volume-based techniques.

Fast analytical motion blur with transparency: https://www.sciencedirect.com/science/article/pii/S009784932...

chaboud
0 replies
3d13h

I played with an approach like Gribel's in a software A-buffer engine years ago, where coverage fragment lists were already part of the picture. Combined with video textures, it's fair to say that my result was... slow. But I'm more and more inclined to take another crack at it.

modeless
3 replies
4d13h

The torus demo is really neat. I think it's interesting how high frame rate changes the perception of blur. I'm using a 240 Hz display and on figure 5 I can't see the discrete circles until around 12 rad/s. And even at 40 rad/s I can't see any difference between the traditional and sine shutter options (in motion).

I highly recommend 240 Hz if only for the smoothness and low latency of mouse movements. A 60 Hz mouse pointer is really, really hard to go back to.

iforgotpassword
1 replies
4d11h

I've used both, a 4k 24" display ("once you've used retina you can't go back") and a 144Hz display, and I can't be arsed to upgrade my main setup which is three old 24" 1920x1200 screens at 60Hz. Some people really don't care. Heck, I don't get those discussions about input latency and terminal emulator holy wars; I can't tell the difference between working on a framebuffer console and working via ssh with an additional 50ms.

I can tell the difference between a 30 and 60fps game, but only if I pay attention to it. As long as it's steady and not bouncing between the two I just don't care.

Avamander
0 replies
3d8h

I can tell the difference in basically all use-cases instantly. The first window I drag on the desktop or camera I pan in game and it's very noticeable. I find higher fps causes significantly less eye strain in my experience.

Microstutters also really annoy me. But it's too much to ask for if a lot of things still force 30fps scroll on me (for example AMD's own Radeon software).

Etherlord87
0 replies
4d8h

I agree the (in my case) 144 Hz cursor is nicely smooth, but I have no problem going back to a 60 FPS cursor.

I wonder, though, if maybe an interface with smoother transitions is more soothing and maybe having a higher refresh rate does affect one's mental state over time...

bjano
3 replies
4d12h

It seems to me that the demonstration is calculated in sRGB space, so with non-linear brightness values, and I suspect that most of the unnaturalness of the smear is due to that. To simulate the physics, this would need to be done with linear brightness values and only converted to sRGB at the end.

(unless some non-linear effects in human visual perception cancel all that out, but it should at least be mentioned if so)
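
A minimal sketch of the fix (blend in linear light, convert only at the edges; these are the standard piecewise sRGB transfer functions):

    import numpy as np

    def srgb_to_linear(c):
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        c = np.asarray(c, dtype=np.float64)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    # Averaging temporal samples (i.e. motion blur) belongs in linear light:
    samples = np.array([0.0, 1.0])  # black and white subframes, sRGB-encoded
    wrong = samples.mean()          # 0.5 in sRGB: the smear looks too dark
    right = linear_to_srgb(srgb_to_linear(samples).mean())  # ~0.735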

pierrec
0 replies
4d2h

Thanks for pointing that out. I've done color space conversion in other graphics applications but clearly haven't learned my lesson. I'll double check and update the interactive figures (and text) if it makes sense. The main "torusphere" shader should be fine because the motion blur is non-realistic and hand tuned, but the first couple of interactive figures are a direct application of theory so what you're saying applies to those. Overall I don't think it invalidates the main ideas in the text though.

WithinReason
0 replies
4d9h

Yes I noticed that the gamma is all wrong, it actually defeats the purpose of the article since you don't get smooth perceived motion blur with the figures. This could have been so much better.

Solvency
0 replies
4d3h

It's wild to me the author would overlook such a crucial thing (ignoring linear space). But then again, even Adobe barely supports linear and it's 2024.

zubairq
1 replies
4d11h

Wow, watching motion blur all the way down really made me think about string theory, atoms, and how the universe is made for some reason!

cjdell
0 replies
4d11h

It's an interesting metaphor. Electron shells are sometimes described as a cloud of probabilistic density, as in the shade is an indication of the probability of finding an electron at that point if one were to freeze time. Obviously it gets weirder the deeper you get.

nuclearsugar
1 replies
4d13h

Related: Gotta love the ReelSmart Motion Blur plugin for adding motion blur based on the automatic tracking of every pixel from one frame to the next. It's not always perfect but still does an amazing job. Sometimes I get experimental and crank the calculated motion blur way past the default value and it looks surreal!

CyberDildonics
0 replies
4d4h

That's not related at all. That is optical flow, which is a 2d filter technique that has been around for 28 years. It can't do this and has nothing to do with anything here except for the term 'motion blur'.

yeknoda
0 replies
4d4h

Motion is an invisibility cloak.

virtualritz
0 replies
4d8h

A good overview is in [1].

Curiously, before the classic paper on modeling shutter efficiency [2] came out in 2005, all renderers used in VFX production used box shutters. I.e. the shutter opens instantly, stays open for the specified duration and then closes instantly.

When you watch movies like "Jurassic Park" or "The Mask" and see scenes with extreme motion blur, that's PhotoRealistic RenderMan with a box shutter.

The first in-the-wild 1:1 implementation of the parameterization in [2] was done in [3] in the same year the paper came out (hasn't changed until today). It was first used on "Charlotte's Web" (only the spider character done by Rising Sun Pictures has it as they used [3] for it).

Pixar added it a few years later too and went a tad overboard with theirs in [4]. Most offline renderers nowadays have this feature and call it a "shutter curve".

[1] O. Navarro et al.: Motion Blur Rendering: State of the Art (https://citeseerx.ist.psu.edu/doc_view/pid/fc23fb525cafa8fe6...)

[2] Stephenson, Ian: Improving Motion Blur: Shutter Efficiency and Temporal Sampling (https://staffprofiles.bournemouth.ac.uk/display/journal-arti...)

[3] https://www.3delight.com/, see specifically https://nsi.readthedocs.io/en/latest/nodes.html

[4] https://renderman.pixar.com/resources/RenderMan_20/cameramod...

spondylosaurus
0 replies
4d13h

Wow, the live comparison demo is wild. When I was following along up to that point, I felt like I "got it" on an intellectual level, but it wasn't until I could toggle the motion blur on/off that the difference was really apparent!

dukeofdoom
0 replies
4d4h

I want to try this in my pygame game. Is there any Python version of this?
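
Not aware of a direct port, but a decaying-trail approximation (not true shutter integration) is easy in pygame. A minimal sketch; all parameters are made up:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    trail = pygame.Surface((640, 480), pygame.SRCALPHA)
    clock, x, running = pygame.time.Clock(), 0.0, True

    while running:
        for e in pygame.event.get():
            if e.type == pygame.QUIT:
                running = False
        # Instead of clearing, fade the previous frame's alpha a little,
        # so fast-moving objects leave a decaying smear behind them.
        trail.fill((0, 0, 0, 40), special_flags=pygame.BLEND_RGBA_SUB)
        x = (x + 12) % 640  # 12 px/frame: fast enough to show the effect
        pygame.draw.circle(trail, (255, 255, 255, 255), (int(x), 240), 10)
        screen.fill((0, 0, 0))
        screen.blit(trail, (0, 0))
        pygame.display.flip()
        clock.tick(60)
    pygame.quit()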

dinobones
0 replies
4d10h

Alternative title: Visual proof of the Rutherford model

de6u99er
0 replies
4d3h

I love these interactive tutorials. Since the method and its parameters can be changed, it would be interesting to see some form of performance impact.

Etherlord87
0 replies
4d7h

At first I was wondering why the shading of the ball is off - turns out it's because it's not a ball but a rotating "torus" (orbiting ball)... The ball probably would look better if there was a longer period between the last and first phases, but that could reduce the power of the surprise.

Anotheroneagain
0 replies
4d11h

It was noted early in the development of cinema that the required framerate could probably be much lower if (IIRC) the shutter were somehow replaced by blending the neighboring frames together, that is, the exposure would gradually shift from the first frame to the next.