
Show HN: Free e-book about WebGPU Programming

shmerl
22 replies
17h30m

What's the story with WebGPU in Firefox? Why is it still not enabled by default?

erichdongubler
15 replies
16h28m

Hey, member of the Firefox WebGPU team here. The short summary is: it's not yet ready for general consumption, but we're hoping to do so on the order of months, not years! Many things already work, and we'd encourage you to try it out on Nightly.

There is a _lot_ of work to do still to make sure we comply with the spec. in a way that's acceptable to ship in a browser. We're 90% of the way there in terms of functionality, but the last 10% of catching up with spec. changes from the last few years + being significantly more resource-constrained (we have 3 full-time folks; Chrome has/had an order of magnitude more humans working on WebGPU) means we've got our work cut out for us.

If you're interested in whether your use cases already work as a consumer of WebGPU in JS on Firefox, you can:

- Follow the `Platform Graphics: WebGPU` component in Bugzilla ([0]).

- CC yourself on the `webgpu-v1`[1] bug and its dependent meta bugs (which aggregate yet more work). These get updated every so often to recognize the (ahem) ever-growing list of things we have to do in Firefox.

- Try things out in Nightly, and file issues to help us prioritize things!

[0]: https://bugzilla.mozilla.org/buglist.cgi?component=Graphics%...

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=webgpu-v1

Joel_Mckay
11 replies
9h52m

Add proper game-pad and joystick calibration support.

The whole sandbox HID scope-capture issue is a core problem that needs to be resolved.

Best of luck =3

lukan
10 replies
9h22m

I don't think that is related to WebGPU ..

Joel_Mckay
8 replies
8h57m

If people want it to be relevant to its primary use case, then low-latency HID interface callbacks are certainly required... Otherwise you will have people fighting the mouse-in-browser scope issues, and partying like it's 1993 on the keyboard.

If people don't keep the IMU/navigation interfaces close to the graphics interface layer, then the project will develop difficult-to-debug syncing and/or lag issues in the VM layer.

I could be wrong about this issue, but it was the primary reason we culled the js based game engine project. =3

lukan
7 replies
8h22m

I did not say your wishes are unreasonable, just that here is not the place for them. The person you replied to works on WebGPU and is quite busy with that...

Joel_Mckay
6 replies
8h6m

These are not personal feature requests, but rather a historical comparison with why VRML/X3D failed to gain popularity.

Ignoring users and project architects is hardly a new problem by any stretch of the imagination. Leave the noted key features out, and the GPU API will remain vestigial. =3

erichdongubler
5 replies
6h7m

You're informing people (i.e., me and my team) who are working on implementing a spec. for a single piece of the web platform that their piece of the platform (graphics programming) is useless for a specific use case (gaming) without a very different piece of functionality being implemented on the web platform. That's valid feedback, but also difficult to act on in its current form and with this audience.

I think what lukan is trying to tell you is that if you're serious about your advice being taken, you will need to find a venue in which the right people in the web ecosystem can engage with it. Neither my team nor I are the right audience for that, unfortunately. I suggest you file an issue on Bugzilla, if you want to start there! I'm happy to assist in making that happen, if you want.

If you do actually follow up with the above, I think you need to answer first: What APIs already exist on the web platform, and why are they not sufficient? For example, the Gamepad API exists; were you aware of it before, and do you think it fulfills the stated use case? Why or why not?
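For context, the Gamepad API referenced above is polling-based rather than callback-based, which is part of what a latency complaint would need to engage with. A minimal browser-only sketch (standard web API, no WebGPU involved; input-handling logic is left as a placeholder):

```javascript
// Poll connected gamepads once per animation frame (Gamepad API).
// Note: pads typically only appear in getGamepads() after user input.
function pollGamepads() {
  for (const pad of navigator.getGamepads()) {
    if (!pad) continue; // empty slots are null
    const [leftX, leftY] = pad.axes;     // stick axes in [-1, 1]
    const fire = pad.buttons[0].pressed; // first button state
    // ...feed leftX/leftY/fire into input handling here...
  }
  requestAnimationFrame(pollGamepads);
}
requestAnimationFrame(pollGamepads);
```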

I will also push back on these statements:

If people want it to be relevant to its primary use case, then low-latency HID interface callbacks are certainly required...

Leave the noted key features out, and the GPU API will remain vestigial. =3

...because it appears to ignore valid use cases that exist outside of gaming. My team doesn't just serve the gaming use case (though we sure hope it will), and calling the API we're working on "vestigial [without supporting gaming]" is disrespectful to both people who need those other use cases filled _and_ people like me who are trying to make them possible. It also implies a responsibility for the platform as a whole that we can't possibly shoulder ourselves; the amount of expertise required to make a platform across all major OSes for just _one_ of these web platform APIs can be huge, and WebGPU is, indeed, very large.

Joel_Mckay
4 replies
5h35m

If the intended primary use case is Google Maps-specific, then they should not require your team to be volunteering their time (they should be sponsoring the project at minimum). The fact remains that simply click-capturing the mouse interface is an insufficient way to handle the GUI in most browsers, and for most general users (CAD/games/VR, etc.)

Your team needs to consider _why_ the history of VRML browser standards ended up fragmenting, becoming overly niche, or simply being replaced with something proprietary.

I am unaware how perceived disrespect is derived from facts. If you choose to be upset, then that is something no one except yourself can control.

Certainly APIs can grow in complexity with time, but this is often a result of unconstrained use-case permutation issues or the Second-system effect.

Have a great day, and certainly feel free to reach out if you have any questions, observations, or insights. =3

lukan
3 replies
3h55m

You can run LLMs via WebGPU today, among many other things. If you call this useless, you probably mean it is useless for your use case, and that is right.

pjmlp
1 replies
3h45m

In theory, this should have been possible long ago with WebGL Compute, had Google not given up on it and removed it from Chrome, citing WebGPU as the future, plus the OpenGL excuse on Apple platforms (an excuse, because they ended up switching to Metal for WebGL anyway, and use DirectX on Windows).

raphlinus
0 replies
52m

WebGL Compute was not viable, and only existed as an engineering prototype with lots of rough edges. There were a bunch of hard problems that needed to get solved to ship WebGPU. Perhaps in an alternate universe, that work could have been done in the context of WebGL, but it wasn't.

I'll give one example (as it's one I was personally invested in). Doing a barrier in non-uniform control flow is wildly unsafe undefined behavior (I've had it reboot my computer, and it's easy to believe it could be exploited by malicious actors). To make these barriers safe, WebGPU does a "uniformity analysis." However, that in turn required adding a uniform broadcast intrinsic to the shader language; otherwise a class of algorithms would be impossible to express.
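For readers unfamiliar with the issue, a small WGSL sketch of what the analysis forbids and what the broadcast intrinsic (`workgroupUniformLoad`, the form that shipped in WGSL) enables:

```wgsl
var<workgroup> flag: u32;

@compute @workgroup_size(64)
fn main(@builtin(local_invocation_index) i: u32) {
    if (i == 0u) {
        flag = 1u;
    }

    // Rejected by WebGPU's uniformity analysis: the barrier sits in
    // control flow that depends on a per-invocation value, so some
    // invocations could reach it while others don't (native UB).
    // if (i % 2u == 0u) { workgroupBarrier(); }

    // OK: workgroupUniformLoad performs a barrier and returns the
    // same (uniform) value to every invocation in the workgroup.
    let f = workgroupUniformLoad(&flag);
    if (f == 1u) {
        // safe: this branch is uniform across the workgroup
    }
}
```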

As I say, it's plausible this kind of work could have been done as extensions to WebGL, but I think the end result would have been a lot less compelling than what we have now.

Joel_Mckay
0 replies
3h29m

LLMs are still considered niche by most, but thank you for confirming my point about ignoring what users say. =)

The use cases we initially thought were worth testing out:

https://doc.babylonjs.com/features/featuresDeepDive/webXR

https://doc.babylonjs.com/features/featuresDeepDive/physics/...

https://doc.babylonjs.com/features/featuresDeepDive/mesh/gau...

The goofy game pad use case, and the user level experience: https://doc.babylonjs.com/features/featuresDeepDive/input/ga...

In the end, I culled that project due to simpler inexpensive engine options.

Have a great day, and good luck =3

pjmlp
0 replies
6h38m

It is related to the 3D Web being meaningful for anything besides shadertoy-like demos, data visualization, and ecommerce 360° visualizations.

shmerl
0 replies
15h38m

Good to know, thanks!

raphlinus
0 replies
15h37m

Cheering on the progress. As a long time user of wgpu in Vello, I'm impressed with the progress there. The advanced compute stuff has been working quite well, and it's also got some nice extensions like subgroups (so far only usable in native configurations, but I'm hopeful the extension will be standardized so it's available in browser as well).

koolala
0 replies
4h3m

Thanks for posting here! Very encouraging!

Can't wait to see WebGL / WebGL2 / WebGPU performance benchmark tables comparing all the browsers and platforms when it ships everywhere.

nox101
5 replies
16h31m

Because they haven't finished implementing it and lots of functionality is missing. Safari hasn't shipped it either, for the same reason: they aren't done.

Joel_Mckay
4 replies
10h25m

We investigated this many years ago:

https://github.com/BabylonJS/Babylon.js/

1. It works just fine for basic 3D functionality, but has limited support for tricks like fog FX etc. WebGL is actually quite capable when combined with simplified physics engines, and animated mesh packing formats.

2. In a business context it fails to check all the boxes, as people are not going to invest in a platform with near zero IP protection.

3. Steam + Unreal has all the advantages, and none of the problems/overhead of a browser, i.e. people's game pads, spatial audio, and shader cache buffers will work properly.

WebGL is just hitting the same VRML market that failed decades ago for the same reasons. =3

nox101
3 replies
8h17m

WebGL powers both Google Maps and Figma. Maybe you're thinking too small.

As for IP protection: which platforms do that?

https://www.models-resource.com/

Joel_Mckay
2 replies
8h3m

Then Google Maps is still in App form because... just kidding... we both already know why. =3

nox101
1 replies
7h17m

Confused. There is no native desktop app for Google Maps, and it does full 3D. Maybe you meant on mobile? But you brought up Steam, so clearly not mobile.

Earth is WebGL now too

https://earth.google.com

Joel_Mckay
0 replies
7h3m

Google Earth Pro download link:

https://www.google.com/earth/about/versions/

Probably a bad example, but it is not our concern if Google wants to repeat the VRML/X3D trajectory. Best of luck =3

koolala
11 replies
4h17m

WebGPU burned me out too when the Firefox developers got let go.

WebGPU is super slow on GPU and all the official benchmarks only care about CPU performance.

FragmentShader
10 replies
3h33m

WebGPU is super slow on GPU and all the official benchmarks only care about CPU performance.

omg I thought I was the only one that found that. I tried WebGPU (on a native context) and it was slowwwww. Only 10k non-overlapping triangles can bring my RTX GPU to its knees. It's not the shader, because it was only a color. It's not the overlapping (and I tried a depth prepass as well). It's not the draw calls. The API is slow, straight up.

In fact, you can try to open a WebGPU demo in your browser and check the GPU usage in the Task Manager. Close it, open a random WebGL Unity game, and you'll see how much a single WebGPU triangle takes compared to a full-fledged game.

On my computer, the average Unity game with shadows, shaders 'n stuff takes 5% GPU and a simple WebGPU demo takes 7%.

grovesNL
9 replies
3h13m

Only 10k non-overlapping triangles can bring my RTX GPU to its knees

Your benchmark doesn't match the experience of people building games and applications on top of WebGPU, so something else is probably going on there. If your benchmark is set up well, you should be limited by the fill rate of your GPU, at which point you should see roughly the same performance across all APIs.

On my computer, the average Unity game with shadows, shaders 'n stuff takes 5% GPU and a simple WebGPU demo takes 7%.

GPU usage isn't a great metric for performance comparisons in general because it can actually imply the inverse depending on the test case. For example, if the scenes were exactly the same, a lower GPU usage could actually suggest that you're bottlenecked by the CPU, so you can't submit commands fast enough to the GPU and the GPU is sitting idle for longer while it waits.

FragmentShader
8 replies
2h21m

Your benchmark doesn't match the experience of people building games and applications on top of WebGPU

Here's an example of Bevy WebGL vs Bevy WebGPU:

I get 50 fps on 78k birds with WebGPU: https://bevyengine.org/examples-webgpu/stress-tests/bevymark...

I get 50 fps on 90k birds with WebGL: https://bevyengine.org/examples/stress-tests/bevymark/

So you test the difference between them with technically the same code.

(They can get 78k birds, which is way better than my triangles, because they batch 'em. I know 10k drawcalls doesn't seem good, but any 2024 computer can handle that load with ease.)

Older frameworks will get 10x better results, such as Kha (https://lemon07r.github.io/kha-html5-bunnymark/) or OpenFL (https://lemon07r.github.io/openfl-bunnymark/), but they run at lower res and this is a very CPU-based benchmark, so I'm not gonna count them.

be limited by the fill rate of your GPU

They're 10k triangles and they're not overlapping... There are no textures per se. No passes except the main one, with a 1080p render texture. No microtriangles. And I bet the shader is less than 0.25 ALU.

at which point you should see roughly the same performance across all APIs.

Nah, ANGLE (OpenGL) does just fine. Unity as well.

a lower GPU usage could actually suggest that you're bottlenecked by the CPU

No. I have yet to see a game on my computer that uses more than 0.5% of my CPU. Games are usually GPU bound.

grovesNL
3 replies
1h31m

Here's an example of Bevy WebGL vs Bevy WebGPU

I think a better comparison would be more representative of a real game scene, because modern graphics APIs are meant to optimize typical rendering loops and might even add more overhead to trivial test cases like bunnymark.

That said though, they're already comparable which seems great considering how little performance optimization WebGPU has received relative to WebGL (at the browser level). There are also some performance optimizations at the wasm binding level that might be noticeable for trivial benchmarks that haven't made it into Bevy yet, e.g., https://github.com/rustwasm/wasm-bindgen/issues/3468 (this applies much more to WebGPU than WebGL).

They're 10k triangles and they're not overlapping... There are no textures per se. No passes except the main one, with a 1080p render texture. No microtriangles. And I bet the shader is less than 0.25 ALU.

I don't know your exact test case so I can't say for sure, but if there are writes happening per draw call or something then you might have problems like this. Either way your graphics driver should be receiving roughly the same commands as you would when you use Vulkan or DX12 natively or WebGL, so there might be something else going on if the performance is a lot worse than you'd expect.

There is some extra API call (draw, upload, pipeline switch, etc.) overhead because your browser executes graphics commands in a separate rendering process, so this might have a noticeable performance effect for large draw call counts. Batching would help a lot with that whether you're using WebGL or WebGPU.

FragmentShader
2 replies
1h13m

I think a better comparison would be more representative of a real game scene, because modern graphics APIs are meant to optimize typical rendering loops and might even add more overhead to trivial test cases like bunnymark.

I know, but that's the unique instance where I could find the same project compiled for both WebGL and WebGPU.

Either way your graphics driver should be receiving roughly the same commands as you would when you use Vulkan or DX12 natively or WebGL, so there might be something else going on

Yep, I know. I benchmarked my program with Nsight and calls are indeed native as you'd expect. I forced the DirectX 12 backend because the Vulkan and OpenGL ones are WAYYYY worse; they struggle even with 1000 triangles.

That said though, they're already comparable which seems great considering how little performance optimization WebGPU has received relative to WebGL (at the browser level).

I agree. But the whole internet is marketing WebGPU as the faster thing right now, not in the future once it's optimized. The same happened with Vulkan but in reality it's a shitshow on mobile. :(

There is some extra API call (draw, upload, pipeline switch, etc.) overhead because your browser executes graphics commands in a separate rendering process, so this might have a noticeable performance effect for large draw call counts. Batching would help a lot with that whether you're using WebGL or WebGPU.

Aha. That's kinda my point, though. It's "slow" because it has more overhead; therefore, by default, I get less performance with more usage than I would with WebGL. Except this overhead seems to be in native WebGPU as well, not only in browsers. That's why I consider it way slower than, say, ANGLE, or a full game engine.

So, the problem after all is that by using WebGPU, I'm forced to optimize it to a point where I get less quality, more complexity and more GPU usage than if I were to use something else, due to the overhead itself. And chances are that the overhead is caused by the API itself being slow for some reason. In the future, that may change. But at the moment I ain't using it.

grovesNL
1 replies
1h0m

It's "Slow" because it has more overhead, therefore, by default, I get less performance with more usage than I would with WebGL.

It really depends on how you're using it. If you're writing rendering code as if it's OpenGL (e.g., writes between draw calls) then the WebGPU performance might be comparable to WebGL or even slightly worse. If you render in a way to take advantage of how modern graphics APIs are structured (or OpenGL AZDO-style if you're more familiar), then it should perform better than WebGL for typical use cases.
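To make the contrast concrete, a hedged browser-side sketch. All names here (`device`, `pass`, `pipeline`, `bindGroup`, `instanceBuf`, `allTransforms`, `objects`) are hypothetical setup assumed to exist elsewhere:

```javascript
// OpenGL-style habit. This doesn't even do what it intends in WebGPU:
// queue.writeBuffer is ordered before the submitted commands execute,
// so every draw would see only the *last* write.
//
//   for (const obj of objects) {
//     device.queue.writeBuffer(uniformBuf, 0, obj.transform);
//     pass.draw(36);
//   }

// Modern-API style: upload all per-object transforms once, then issue
// a single instanced draw that indexes them in the shader.
device.queue.writeBuffer(instanceBuf, 0, allTransforms); // one upload
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup); // instanceBuf bound as a storage buffer
pass.draw(36, objects.length);   // one call, N instances
```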

FragmentShader
0 replies
32m

The problem is that it's gonna be hard to use WebGPU in such cases, because when you go that "high" you usually require bindless resources, mesh shaders, raytracing, etc, and that would mean you're a game company so you'd end up using platform native APIs instead.

Meanwhile, for web, most web games are... uhhh, web games? Mobile-like? So, you usually aim for the best performance where every shader ALU, drawcall, vertex and driver overhead counts.

That said, I agree on your take. Things such as this (https://voxelchain.app/previewer/RayTracing.html) probably would run way worse in WebGL. So, I guess it's just a matter of what happens in the future and WebGPU is getting ready for that! I hope that in 10 years I can have at least PBR on mobiles without them burning.

stanleykm
1 replies
1h30m

This looks to be CPU bound. I'm not getting full GPU utilization, but I am seeing the JavaScript thread using 100% of its time trying to render frames.

The WebGPU and WebGL APIs are pretty different, so I'm not sure you can call it “technically the same code”.

FragmentShader
0 replies
1h11m

The WebGPU and WebGL APIs are pretty different, so I'm not sure you can call it “technically the same code”.

Isn't Bevy using wgpu under the hood, and then they just compile it for both WebGL and WebGPU? That should be the same code Bevy-wise, and any overhead or difference should be caused by either the wgpu "compiler" or the browser's WebGPU.

kaibee
1 replies
2h14m

I have yet to see a game on my computer that uses more than 0.5% of my CPU.

Just a nitpick here: you probably have some multicore CPU, while the render-dispatch code is gonna be single-threaded. So that 0.5% you're seeing is the percent of total CPU usage, but you probably want the % usage of a single core.
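To make the nitpick concrete, the arithmetic with illustrative numbers (the 16-core count is an assumption for the example):

```javascript
// Task Manager style "% of total CPU" understates a single-threaded
// bottleneck on a multicore machine: one fully saturated core out of
// sixteen shows up as only 100 / 16 = 6.25% overall.
function singleCorePercent(totalPercent, coreCount) {
  return totalPercent * coreCount;
}

console.log(singleCorePercent(6.25, 16)); // 100: that core is maxed out
console.log(singleCorePercent(0.5, 16));  // 8: still a light load
```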

FragmentShader
0 replies
2h4m

Yeah, you're right. Sorry about that one.

mistercow
7 replies
15h32m

This is great to see. I’ve been working on a WebGPU project for a couple months now, learning as I go, and it’s rough how many things there are that take tons of digging to find even remotely straight answers about. There’s the basic WGSL language and JS API, which are, strictly speaking, well documented, and then there’s the stuff where you have to skip past documentation and find the one post answering a tangentially related question by someone with a singleminded interest in GPU programming.

jamesu
5 replies
8h11m

My biggest problem with WebGPU is the documentation, especially if you delve into the native or emscripten variants. Not even fancy AI tools will save you - in fact, in a lot of cases they make it worse, as they often suggest solutions based on older WebGPU variants or make grossly inaccurate assumptions about how your particular WebGPU implementation works. Figuring out how to properly set up the "native" version was a nightmare; I had to find a random GitHub project where someone thankfully made an SDL2 example. I didn't really feel there was an active community interested in pointing me in the right direction either, more like a bunch of people too absorbed with building an API to actually care about who is using it. Maybe things will improve in the future, but I remain skeptical.

pjmlp
4 replies
6h41m

That is to be expected; it is right there in the name: Web.

The problem is that people are reaching for it as a means to avoid the Vulkan mess, when the real alternative is middleware engines.

galangalalgol
1 replies
4h58m

I thought Vulkan was getting better? I'm mostly interested from a platform-agnostic compute perspective, and I've used some libraries that employed WGSL for that. I'd added it to the backlog of stuff to learn, but lately it seemed like Vulkan might be a better approach.

pjmlp
0 replies
3h54m

The extensions story isn't getting better than OpenGL's:

https://vulkan.gpuinfo.org/listextensions.php

Additionally, setting up the infrastructure, even with Vulkan 1.3, takes quite a bit of code.

johnnyanmac
0 replies
2h42m

when the real alternative is middleware engines.

Is there even a good middleware engine that can also target the web? My impression is that both Unity and Unreal have expressed interest in WebGPU support, but what's publicly available is pretty barebones.

Also, I imagine the people working with WebGPU to begin with are either hobbyists, looking to learn and work in a professional role with WebGPU, or are in fact making that future middleware (be it proprietary or open source).

grovesNL
0 replies
5h14m

There's plenty of room for both approaches: a lot of projects can benefit from using a platform-agnostic API like WebGPU (web or native) directly, others might want to use engines. Anecdotally I use WebGPU (through wgpu) in a commercial application for a visualization, and would've never bothered to apply Vulkan or DX12 for that otherwise.

Documentation will keep improving with time. There have already been a number of high-quality tutorials and references created over the past few years, for example:

https://webgpufundamentals.org/ for JavaScript/the web API

https://sotrh.github.io/learn-wgpu/ for Rust (web or native)

https://eliemichel.github.io/LearnWebGPU/ for C++

pjmlp
0 replies
11h44m

This has been the state of Web3D since WebGL 1.0; worse, after 10 years, browser vendors still don't see a need to have proper debugging in place.

dylanhu
2 replies
17h52m

This is super impressive and really exciting as I am looking to get deeper into WebGPU. Two quick notes on the playground before I dive into the content soon:

1. The playground code does not seem to fully work on Safari. The code is there and selectable but the glyphs are invisible.

EDIT: My Safari was just bugged, had to restart it and it works

2. Is the cover of the book on the right of the playground supposed to change depending on which example you are looking at? I think it could be nice if the book contents were rendered alongside the code if the user wanted, instead of the cover, which does not change.

nox101
1 replies
16h32m

Safari has not shipped WebGPU. Why would you expect it to work? Until Safari officially ships WebGPU, it's guaranteed to be buggy and missing features.

dylanhu
0 replies
12h46m

I was referring to the code editor in the playground which I assume is not powered by WebGPU. I noticed the issue on other sites as well which prompted me to restart, fixing the issue.

I have WebGPU enabled in Safari through a preview flag, and running the playground examples worked fine.

danroc
2 replies
4h15m

Nice work! Thank you for sharing.

I'd like to get excited about WebGPU as well, but I am lacking enough imagination here. Games? 3D animations? What else beyond these obvious applications?

Those who are excited about it: why? what are you planning to build with it?

gizmo
0 replies
4h3m

WebGPU makes sense for anything that needs to render quickly or that needs a lot of control over how things get rendered.

A slideshow program with custom font rendering & line wrapping and nice shader based transitions between slides for instance.

Even something like a terminal/tty is much nicer when it renders at > 60fps with smooth scrolling. Very hard to do with DOM nodes.

Because of the web, a generation of programmers has forgotten how fast and responsive applications can be. WebGPU can make the web feel fast again.

beardyw
0 replies
2h34m

I got interested in 3D animation so I took a look at this stuff. I quickly retreated to the safety of three.js!

nickpsecurity
1 replies
53m

Many of us are non-graphics programmers interested in learning to benefit from CUDA-style parallelization. I’ve read they’re mostly incompatible, though. I know there’s vendors supporting OpenCL and Vulkan. I didn’t see many AI projects using them, though. Makes me think cross-platform approaches aren’t great for some reason.

My questions: are there good, native implementations of WebGPU where learning it can replace learning CUDA or ROCm for good GPU utilization? If not, would a book like this teach us enough to easily pick up CUDA etc.? Or are they very different, to the point we're better off learning the GPU-specific libraries?

raphlinus
0 replies
48m

WebGPU is not yet performance competitive with CUDA, in part because the cooperative matrix multiplication ("tensor cores" in Nvidia-speak) extension is not done yet. That in turn depends on subgroups, which are pretty far along (but not yet shipping).

That said, you can do machine learning in WebGPU, and, if your goal is to ship in a browser, it is viable.

I personally think the performance gap can be closed with some engineering effort, and that WebGPU has the potential to become a real contender, but it's too early to say for sure. Certainly CUDA has a major head-start.
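For readers coming from CUDA, the compute model is recognizably similar; a hedged WGSL sketch of the classic vector-add kernel:

```wgsl
@group(0) @binding(0) var<storage, read>       a:   array<f32>;
@group(0) @binding(1) var<storage, read>       b:   array<f32>;
@group(0) @binding(2) var<storage, read_write> out: array<f32>;

// workgroup_size plays the role of CUDA's block size, and
// global_invocation_id.x corresponds to
// blockIdx.x * blockDim.x + threadIdx.x.
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    let i = gid.x;
    if (i < arrayLength(&out)) {
        out[i] = a[i] + b[i];
    }
}
```

Dispatch, buffer management, and readback happen on the host side (JS in a browser, or wgpu/Dawn natively), which is where most of the boilerplate difference from CUDA lives.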

ssahoo
0 replies
18h3m

Great book, thanks for writing it.

Just a low-hanging issue: the rendering in the mobile viewport, especially on Firefox, is not ideal. Navigation is broken and content does not scroll well.

sramam
0 replies
18h26m

This looks fantastic.

Just the notion of a hyperlinked code-playground is fantastic. Not to mention the content of the book.

And a side project at that? Wow. Congratulations and thanks for sharing.

pjmlp
0 replies
11h45m

It looks quite nice, great efforts.

pedrolins
0 replies
5h5m

This is awesome! I’ve given up learning graphics programming in the past due to the fragmented ecosystem of libraries. It just felt overwhelming. This seems exactly what I’ve been missing.

jay-barronville
0 replies
17h10m

Congratulations and well done!!!

However, I'm feeling burnt out and ready to call it finished, even though it may not feel completely done. Avoiding another abandoned side project has been my primary motivation in reaching this point.

Thank you for spending your time to produce this excellent resource and releasing it to us. Don’t feel too bad about it not being all the way where you’d like it to be. You can always improve it later, or even allow the community to help you improve it.

j45
0 replies
15h35m

This is a lot of good content for putting out there for free, thank you so much.

I know a young person who was quite interested in this and looking for a resource like this.

I love the focus you've placed on video, and on actually making it engaging. Subject-matter experts who undertake this are my favourite audience to be around and help.

If you might be interested in developing this content of yours into aligned educational content and delivery, including videos that could support your work financially, I'd be happy to chat and share what I do, as I like magnifying subject-matter experts doing things like this. All knowledge is yours, strictly to add value.

erichdongubler
0 replies
15h38m

I haven't had time to dig into all of these demos, but the material looks _delightful_.

In case it's interesting to anyone, I did just go on a big bug-filing spree for Firefox. There are a handful of issues (that were already on the Firefox team's radar) to resolve before all of these playgrounds work as-is: https://bugzilla.mozilla.org/show_bug.cgi?id=webgpu-unleashe...

decodingchris
0 replies
5h9m

Exactly what I was looking for! Thanks

MalcolmDwyer
0 replies
4h51m

This looks great. I can't wait to dive in. Thank you very much for sharing.

Formatting nitpick for the site: when viewed on mobile, the text width is set to screen width minus 1 margin, but the page is fixed at text width plus both margins, so the scrolling has a lot of sloppy side-to-side play and it's irritating to get the content lined up in the middle.

DrMiaow
0 replies
9h55m

Nice. I was just about to embark on a small game prototype in WebGPU to learn it.

I'm going to start by rampaging through this book.