
I sped up serde_json strings by 20%

zadokshi
75 replies
19h22m

serde_json has 3gb of dependencies once you do a build for debug and a build for release. Use serde on a few active projects and you run out of disk space. I don't know why JSON parsing needs 3gb of dependencies.

I'm all for code reuse, but serde for JSON is a bit of a dog's breakfast when it comes to dependencies. All you need is an exploit in one of those dependencies and half of the Rust ecosystem is vulnerable.

Rust should have JSON built in.

troad
26 replies
18h3m

Dependency bloat is an issue with Rust in general. The dependency trees for any meaty Rust project quickly become pretty horrifying. Auditing all these dependencies is infeasible, and my level of confidence in a lot of them is fairly low.

I worked with Rust for a few years, and with the benefit of a few years' experience, I don't think I'll be touching Rust again until the ecosystem matures a great deal (which will only come with significant corporate adoption), or if I need something for a no-std, no-deps, strictly-a-C-replacement kind of project. (Though Zig might edge out Rust for this use case once it stabilises.)

echelon
16 replies
14h49m

The dependency trees for any meaty Rust project quickly become pretty horrifying.

s/Rust//

This is really no different from any other language.

At least Rust, with Cargo, makes it easy to scan your dependencies. And many notable Rust projects attempt to keep third party dependencies to a minimum.

C++ gives you absolutely nothing to work with. Other languages with package managers don't keep dependency trees shallow. You're holding Rust up to a standard that nothing meets.

wiseowise
4 replies
10h57m

And yet with Python and JS I don’t need to pull a freaking third-party library for something as basic as JSON.

wtetzner
1 replies
5h40m

Nope, instead it's pulled into every project whether you need it or not.

wiseowise
0 replies
2h6m

Pulled how?

kelnos
0 replies
36m

Most of my Rust projects don't need a JSON parser. You probably haven't been a programmer that long if you think JSON is "basic". If it was 20 years ago, you'd be arguing for an XML parser in Rust's stdlib. In 20 years I'm sure it will be something else. File/data formats come and go. The stdlib of a systems programming language shouldn't be taking on a forever maintenance burden for something that likely won't be in widespread use for all that long.

ModernMech
0 replies
2h39m

If I don't need it, it's not basic, it's just useless.

runevault
4 replies
14h15m

I'm not sure languages without a package manager being the default are nearly as bad.

When it is trivial (a single command, or a single line in a file like Cargo.toml), a proliferation of dependencies is so easy it almost becomes guaranteed. But in languages like C++, where not everyone uses a package manager (even though several exist), adding a dependency to a library is a much bigger deal: you either have to use something like git submodules, manage the dependencies yourself, or make any user of your library go get the correct versions themselves.

wtetzner
0 replies
5h34m

I think it's important to figure out what you're comparing though.

A language that makes it easy to pull in dependencies also encourages breaking code into separate modules.

Languages that make dependency management hard tend to have larger, heavier dependencies. Two dependencies in such a language are more likely to have duplicate functionality between them, instead of sharing the functionality through another dependency.

Is it better to vet many smaller dependencies, or fewer large ones that likely duplicate a lot of stuff? It depends on what those dependencies are.

I don't think just looking at dependency counts is that useful. Many libraries that would be a single dependency in other languages are split into several because they are useful on their own.

throwup238
0 replies
12h53m

> I'm not sure languages without a package manager being the default are nearly as bad.

The autoconf or cmake list of included libraries for many non-trivial C++ projects is usually just as long as stuff in Cargo, especially when breaking up Boost, Qt, or other megaframeworks into individual libraries.

They do "make any user of your library go get the correct versions themselves" but that has been far less of a problem this century thanks to OS package management.

evilduck
0 replies
1h21m

I'm not sure languages without a package manager being the default are nearly as bad.

Javascript doesn't have a specified package manager.

Lvl999Noob
0 replies
12h48m

The thing is, when adding dependencies _isn't_ trivial, you end up vendoring. You aren't gonna thoroughly test your command line parser if it is a very small feature of your overall project. A dedicated command line parser lib will (more likely than you in this case) thoroughly test their implementation.

troad
3 replies
12h16m

And many notable Rust projects attempt to keep third party dependencies to a minimum.

I don't think this is true. The only two major Rust crates that manage to keep their dependencies light are tokio and serde, and these are highly atypical projects. For a more typical example, look at something like axum (running `cargo tree` for a project with a single dependency on axum returns 121 lines).

This is really no different from any other language.

You're holding Rust up to a standard that nothing meets.

Respectfully, I think you're creating a bit of a false dichotomy here. I'm not demanding perfection, I'm merely noting that I've found Rust dependency trees to grow noticeably faster than dependency trees in equivalent languages. You add two dependencies in Rust, and suddenly you have a dozen dependencies of dependencies of dependencies, including at least three different logging crates. In the world of C, which is what Rust is trying to displace, that's just not going to pass muster.

Rust is a very fine language with a bit of a dependency addiction (a dependency dependency?). I honestly don't see what service it does to the language to pretend otherwise.

wtetzner
0 replies
5h31m

You add two dependencies in Rust, and suddenly you have a dozen dependencies of dependencies of dependencies, including at least three different logging crates. In the world of C, which is what Rust is trying to displace, that's just not going to pass muster.

In the world of C, how many libraries just vendor their dependencies, so they're just not easily visible in a dependency tree?

burntsushi
0 replies
6h9m

tokio and serde are certainly not the only ones. You can put almost all of my crates into that category too.

The problem with your framing is that you look at this as a "dependency addiction." But that doesn't fully explain everything. The `regex` crate is a good case study. If it were a C library, it would almost certainly have zero dependencies. But it isn't a C library. It exists in a context where I can encapsulate separately versioned libraries as dependencies with almost no impact on users of `regex`. Namely, it has two required dependencies: regex-syntax and regex-automata. It also has two optional dependencies: memchr and aho-corasick.

This isn't a case of the regex crate farming out its core functionality to other projects. Indeed, it started as a single crate. And I split its code out into separately versioned crates that others can now use. And this has been a major ecosystem win:

* memchr is used in all sorts of projects, and it promises to give you exactly the same implementation of substring search that the regex crate (and also ripgrep) use in your own projects. Indeed, that crate is used here! What would you do instead? If you were in C-land, you'd re-roll all of the specialized SIMD that's in memchr? For x86-64, aarch64 and wasm32 right? If you haven't done that sort of thing before, good luck. That'll be a long ramp-up time.

* aho-corasick is packaged as a stand-alone Python library that is quite a bit faster than pyahocorasick: https://pypi.org/project/ahocorasick-rs/ There's tons of other projects on crates.io relying on aho-corasick specifically, separately from how it's used inside of `regex`.

* regex-syntax gives you a production grade regex parser. More than that, it gives you exactly the same parser used by the regex crate. People have used this for all sorts of things, including building their own regex engine without needing to re-create the parser (which is a significant simplification).

* regex-automata gives you access to all of the internal APIs of the regex engine. This is all the stuff that is too complex to put into a general purpose regex library targeting the 99% use case. As far as I know, literally no other general purpose regex engine has ever attempted this because most regex engines are written in C or C++ where you'd be laughed out of the room for suggesting it because dependency management is such a clusterfuck. Yet, this has been a big benefit to other folks. The Yara project uses it for example, and the Helix editor uses it to search discontiguous strings: https://github.com/helix-editor/helix/pull/9422 (Instead of rolling your own regex engine, which is what I believe vim does.)

This isn't dependency addiction. This is making use of separately versioned libraries to allow other projects to depend on battle tested components independent of their primary use case. Yet, if people repeat this kind of process---exposing internals like I did with the regex crate---then you wind up with a bigger dependency tree.

Good dependency management is a trade-off. On the one hand, it enables the above to happen, which I think is an objectively Good Thing. But it also enables folks to depend on huge piles of code so easily that it actively discourages someone from writing their own base64 implementation. But as should be obvious, it doesn't prevent them from doing so: https://github.com/BurntSushi/ripgrep/blob/ea99421ec896fcc9a...

Good dependency management is Pandora's box. It has been opened and it is never going to get closed again. Just looking on and calling it an addiction isn't going to take us anywhere. Instead, let's look at it as a trade-off.

alexchamberlain
0 replies
11h12m

I think the GP's point was that all modern languages (and some of the older ones) have this problem - in fact, JS is infamous for it. Therefore, I don't think it's really a pro or a con on its own - only in the context of your business and problem.

rafaelmn
0 replies
6h38m

This is really no different from any other language.

There are languages with big standard libraries and first party frameworks.

I can build a complex web app in C# using only packages published by Microsoft in ASP.NET and EF.

Python ships with a lot of "batteries included".

Not saying I expect that from rust considering the funding/team size discrepancy and language targets - but I disagree that every language is same in this regard - JS/Node is notoriously bad, Rust is around C++ level, and plenty of higher level languages have first pary/standard library stacks.

devjab
0 replies
11h17m

It’s different from Go, but then, Go is probably not the language you’re going to replace Rust with.

(I know it’s exactly different from Go’s dependency management, but you frankly rarely need any thing outside of the STL in Go.)

haberman
6 replies
11h38m

A medium-sized Rust project can easily tip the scales at 2-300 crates, which is still rather more dependencies than anything I’ve looked at here, but that’s explained by the simple fact that using libraries in C is such a monumental pain in the ass that it’s not worth trying for anything unless it’s bigger than… well, a base64 parser or a hash function.

It's odd that this article spends so much time arguing that Rust is no different than C or C++, only to concede at the end that Rust projects do have more dependencies.

wiseowise
5 replies
10h48m

Because Rust has Cargo which makes it trivial to include dependencies.

C developers constantly reinvent because C dependency management is such a joke that no one bothers.

anta40
2 replies
4h38m

I wonder if Go tooling does the job, umm... "better" here. At least the build is faster and the debug build is much smaller.

At my current company, we handle payments, transactions etc. with Go (some Fiber, some Echo). None of the projects reaches 100 MB, and my pkg folder is around 2.5 GB-ish. Those are the dependencies of all of my Go codebases. Not bad.

Compare that with building a Rust sqlite web API, which easily eats 3 GB of disk. 10 similar projects may eat at least 30 GB... :D

Disclaimer: I don't use Rust for work... yet. Only for personal tinkering.

charrondev
1 replies
3h51m

To me the size of the codebase after installing dependencies isn't that relevant. Are you starved for 30 GB at work? My current company has projects in PHP, Rust, and a bunch of frontend projects with all the modern build tooling.

The largest service we deploy is ~300 MB, maybe 200 MB if you exclude the copy of Monaco that we ship in the dist folder. That web server will have terabytes of database storage behind it and 32 or 64 GB of Redis/Memcached behind it. If we add in Elasticsearch we've got another terabyte and a ton of memory.

If those dependencies aren't checked into version control or being shipped to prod, does it really matter?

anta40
0 replies
3h40m

For an application developer like me? No. It's technically not a dealbreaker. But probably more like a question for compiler devs.

uecker
0 replies
3h35m

C has no built-in dependency management, though there are various package managers you can use with C. I am quite happy with my Linux distribution's package manager.

But I have to say it clearly: cargo is a supply chain disaster, and ever-changing dependencies are a major problem for Rust. Rust programs having many dependencies is not a good thing.

carlmr
0 replies
8h53m

This is so on-point. Number of dependencies is correlated with ease of package management.

If you have a well-working package manager you're more likely to use a dependency than just rewrite that little part of your code.

For little programs written in C or C++ this usually means that people write their own buggy CLI parsing that doesn't produce a proper help message and segfaults on the wrong combination of arguments. In Rust people use clap, and they just need to derive on their CLI arguments struct.

And this process happens for every "small" dependency. In the end you're faster developing with cargo, you get a more professional result, probably you can even generate an executable for ARM and Intel without changing anything.

But OMG, you have a dependency tree.

samatman
0 replies
2h1m

Though Zig might edge out Rust for this use case once it stabilises.

Zig has a meaningful advantage in the context of this discussion: lazy compilation. The compiler won't even semantically analyze a block of code unless the target requires that to happen.

Currently, dependencies are more eager than they really need to be, but making them just as lazy as compilation is on the roadmap. Lazy compilation means no tree shaking is needed, and it means that the actual build-graph of a program can be traced on a fine-grained level. We might not be able to audit a hundred dependencies, but auditing the actual used code from all those dependencies might be more practical.

This is well positioned to handle a common pattern: `small-lib` provides some useful stuff, but also has extensions for working with `huge-framework`, and it needs to have `huge-framework` as a dependency to do so. Currently this means that the build system will fetch `huge-framework`, but if we can get it lazy enough, even that won't have to happen unless the code consuming `small-lib` touches the `huge-framework` dependency, which won't happen unless the program itself needs `huge-framework`.

Existing build systems don't do such a great job with that kind of structure, and the culprit is eagerness.

purplesyringa
12 replies
18h44m

Rust should have JSON built in.

I don't think this is a reasonable approach. That's just a way to introduce bloat. Importantly, std does not differ from other crates, except for stability guarantees, so there would be no positive here. All it does is link the library's release cycle to the compiler's. (In fact, rustc-serialize used to be built-in, so Rust tried to go that way.)

But also, serde_json isn't large by default. I'm not sure where you are getting those numbers from. serde_json isn't large, serde isn't large. They both have very low MSRVs few other crates support, so in all truth they can't even have many dependencies.

wiseowise
8 replies
10h55m

I don't think this is a reasonable approach. That's just a way to introduce bloat.

Can this meme die already? The fact that out of the box install of Rust can’t parse JSON is a joke, and you know it.

hombre_fatal
2 replies
2h27m

I don’t want to wait on language releases to get updates to json, regex, etc. Nor do I want a crappy stdlib impl of something to become widespread just because it comes out of the box like Go’s worst of breed html templating and http “routing”.

wiseowise
1 replies
2h11m

Somehow Python and JS can get away with JSON in the std lib, but a thing that builds binaries can't?

How often does it even need to be updated to parse freaking JSON?

jessekv
0 replies
1h52m

Python's json module is an often-quoted example of why not to put things in the standard lib. There are some bad defaults that no one can fix for stability reasons. Though I admit it does come in handy sometimes.

In production, I've lately seen serde_json-backed Python implementations; this makes sense for performance and memory safety.

DougBTX
1 replies
9h44m

Rust has a great package manager, so moving libs into std doesn’t bring much benefit.

On the other hand a change like this perf improvement can be released without tying it to a language version, that’s good too.

wiseowise
0 replies
2h8m

On the other hand a change like this perf improvement can be released without tying it to a language version, that’s good too.

And you pay for that by having literally no way to parse something as ubiquitous as JSON out of the box, relying on either installing a third-party lib (which is yet another security attack vector, requires yet another approval for upgrades, and whose API can change on a whim of the maintainer, among other cans of worms) or using another language.

wtetzner
0 replies
5h45m

Out-of-the-box you can add serde_json to your Cargo.toml file in a single line and have JSON parsing.

    serde_json = "*"
I'm not sure I see the problem.

umanwizard
0 replies
5h24m

Repeating the same claim more incredulously isn’t really a good debating tactic.

kelnos
0 replies
51m

If Rust were a "web language", sure, I'd think it would have to have JSON support built in.

Rust is a systems programming language. If Rust had JSON support built in, I'd take it much less seriously. JSON is a fad, just like XML was 20 years ago. In 20 years, when JSON goes the way of XML, the Rust stdlib team should not have to continue maintaining a JSON parser.

An out of the box install of C can't parse JSON either. Do you think C is a joke? C++? Java?

IshKebab
1 replies
10h47m

Std definitely differs from other crates:

1. There's only one version so you can't end up with multiple copies of the crate.

2. It is precompiled, so it doesn't bloat your target directory or compile time.

3. It is able to use unstable features without using the nightly compiler.

It's a totally reasonable approach. Many other languages have JSON support in their standard libraries and it works fine. I'm not sure I'd want it, but I wouldn't say it's an obviously bad idea.

kelnos
0 replies
55m

Many other languages have JSON support in their standard libraries and it works fine. I'm not sure I'd want it, but I wouldn't say it's an obviously bad idea.

I would say it's a bad idea. JSON is, for lack of a better (less derogatory) term, a data-format fad. If Rust had been designed back in 2000 we'd be having this discussion about XML. Hell, consider Javascript (where JSON comes from), with XHR: remember that stands for "XMLHttpRequest"! Of course it can be used with data payloads other than XML; fortunately the people who added it weren't that short-sighted, but the naming is an interesting historical artifact that shows what was only fleetingly dominant at the time as an API data format.

In another 20 years, when Rust is hopefully still a relevant, widely-used language, we may not be using JSON much at all (and oof, I really hope we aren't), and yet the Rust team would still have to maintain that code were it in the stdlib.

Consider also that Rust's stdlib doesn't even have a TOML parser, even though that seems to be Rust's configuration format of choice.

kelnos
0 replies
48m

That's just a way to introduce bloat.

I don't think "bloat" is the issue; as I'm sure you know, Rust programs only contain code for the features of the stdlib they use; if it had JSON support and it wasn't used, the linker would omit that code from the final binary. (Granted, that would make the linker work a little harder.)

More at issue is maintenance burden. Data formats come and go over time. In 10 or 20 years when JSON has faded to XML's level of relevance, the Rust stdlib team shouldn't have to continue maintaining a JSON parser.

hermanradtke
10 replies
19h7m

Please show your work. I cannot reproduce "3gb of dependencies".

Here is my test:

Cargo.toml

   [package]
   name = "serde-test"
   version = "0.1.0"
   edition = "2021"
   
   [dependencies]
   serde = { version = "1.0.208", features = ["derive"] }
   serde_json = "1.0.127"
src/main.rs

    use serde::Deserialize;
    
    #[derive(Deserialize)]
    struct Foo {
        bar: String,
    }
    
    fn main() {
        let foo: Foo = serde_json::from_str("\"bar\": \"baz\"").unwrap();
    
        println!("{}", foo.bar);
    }
$ cargo build && cargo build --release && du -sh target

    ...

    78M target

cedws
6 replies
14h2m

Jesus, 78MB is still a lot for such a simple program.

ComputerGuru
5 replies
13h46m

That’s not the size of the program but the size of the build artifacts folder that includes all the intermediate files like .o in a C project and more.

what
4 replies
13h30m

That still seems like a lot of build artifacts for a 10 line program?

loa_in_
2 replies
8h53m

It deserializes a Unicode string into a custom structure. Do not mistake it for C character-shuffling hello-world programs.

Edit: s/to JSON/to a custom structure/

maccard
1 replies
8h46m

I’m on mobile so I can’t check at the moment. But I’d be shocked if the equivalent go binary was anywhere near as big, or took anywhere near as long to build.

I’ll check later

pezezin
0 replies
4h55m

I don't know what you consider big and long, but on my computer (Ryzen 5700X) it took 7 seconds to build, and the resulting binary is 556 kB.

commodoreboxer
0 replies
12h50m

A 10 line program with two dependencies and all their transitive dependencies.

tredre3
2 replies
17h2m

I arrive at almost the same result as you, with 76MB.

I've also checked .cargo, .rustup, and my various cache folders (just in case) and haven't found any additional disk usage.

OP is clearly mistaken.

inferiorhuman
1 replies
16h42m

The first thing that jumps out is that the code example doesn't work.

The next thing is that the example merely calls cargo build. Using an IDE of any sort will typically invoke rust-analyzer which will bloat the target directory quite a bit. I've also found that stale build artifacts tend to chew up a lot of space (especially if you're trying to measure the typically smaller release builds).

Beyond that, none of the serde features that will tend to generate a ton of code are being used.

So yeah, a minimal example won't use a lot of space, but if you start to use the bells and whistles serde brings, you will definitely bloat your target directory. I expect a typical Rust project to take around 3–4 gigs for build artifacts, depending on usage.

hermanradtke
0 replies
16h5m

The first thing that jumps out is that the code example doesn't work.

Good catch. I forgot the braces. It does not change the target directory size in a significant way.

As for your other comments: sure! We can have a real conversation about rust-analyzer and other serde features (though I am not sure which specific features you are referring to) causing the target directory to increase drastically in size. However, a sensationalist comment that claims the _dependencies_ are 3gb appears to be misleading at best.

jwells89
6 replies
18h13m

Built in JSON encoding/decoding is one of the things I’ve enjoyed about Swift. It’s nice when it’s not necessary to shop around for libraries for common needs like that.

throwup238
5 replies
17h39m

Almost nobody is shopping around Rust JSON libraries unless they need some specific feature not provided by serde and serde_json. They are the default everyone reaches for.

wiseowise
4 replies
10h50m

The moment you NEED to include a library to parse some basic JSON file - you’ve lost already.

ModernMech
2 replies
2h35m

Based on your other reply about JSON being a "basic" feature I assume you do a lot of work with JSON.

What you need to understand is not everyone works with JSON, and for them it's a feature to not have JSON parsing code in their binaries. It's not a loss for them.

wiseowise
1 replies
2h3m

Where did you get the notion that JSON parsing code will end up in a binary if it's not used? Or is the Rust compiler so obtuse that it can't tree-shake unused code?

ModernMech
0 replies
49m

How did you get that from what I said? JSON isn't included in Rust binaries because of Rust's ability to bring in only what's needed, and my ability as a developer to specify that at a fine-grained level at compilation time.

Using a language where you don't bring things in as needed means they're built-in at the interpreter level or some other scheme like a large required standard library.

Maybe in those languages your compiler is smart enough to filter out unused code, maybe you don't even have a compiler and you have to distribute your code with an interpreter or a virtual machine with all the batteries. Either way, languages where libs are brought in as-needed are at an advantage when it comes to compiler tech to generate such binaries.

tcfhgj
0 replies
10h33m

Why have I lost what exactly?

INGSOCIALITE
2 replies
17h49m

Can Rust use the json-c library?

rapsey
0 replies
11h24m

People use rust for its memory safety.

makeitshine
0 replies
17h5m

I'd assume you could use bindgen and create bindings no problem.

duped
1 replies
14h27m

The value prop isn't serde_json, it's automatically generated serializers and deserializers for structured data without needing an extra codegen step like with protobufs/capnproto, plus all that machinery decoupled from the actual data format you're reading.

It essentially generates a massive amount of code that you need to write anyway, at the cost of code size and compile time. And a lot of people are happy to make that trade off.

I wouldn't call that a "bloated monster" because of that. Also, none of those options are alternatives to serde_json, unless you restrict yourself to serde_json::Value - which no one does in practice.

38
0 replies
13h50m

none of those options are alternatives to serde_json, unless you restrict yourself to serde_json::Value - which no one does in practice.

Check your facts: all the above options have derive support; serde is not special in that.

skitter
0 replies
18h46m

merde_json should also be relatively small.

pornel
3 replies
18h20m

Rust emits an unreasonable amount of debug information. It's so freakishly large, I expect it's just a bug.

Anything you compile will dump gigabytes into the target folder, but that's not representative of the final product (after stripping the debug info, or at least using a toned-down verbosity setting).

khuey
0 replies
8h41m

Rust emits an unreasonable amount of debug information. It's so freakishly large, I expect it's just a bug.

Rust relies on the linker (via -ffunction-sections and -gc-sections) to delete functions that aren't ever used but the linker isn't capable of removing the corresponding debug info.

https://github.com/rust-lang/rust/issues/56068

hinkley
0 replies
15h12m

Does it need a more compact representation of its debug info?

duped
0 replies
14h43m

Most of your target folder isn't debug info, but stale build artifacts because Cargo doesn't do any garbage collection.

sweca
2 replies
18h58m

I swear the target folder for literally any project of any scale is at least several GB in size.

gnuvince
0 replies
5h58m

I get a progress bar when I run `cargo clean` because it's so large.

aldanor
0 replies
5h54m

For a work project, my recent cargo clean removed 90 GB.

tinrab
0 replies
17h22m

From crates.io, `serde` is a 76.4 KiB dependency. And from what I've seen looking through the code, it's pretty minimal.

ninkendo
0 replies
19h14m

It has 5 dependencies, one of which is optional, and another is serde itself: https://github.com/serde-rs/json/blob/master/Cargo.toml

    indexmap = { version = "2.2.3", optional = true }
    itoa = "1.0"
    memchr = { version = "2", default-features = false }
    ryu = "1.0"
    serde = { version = "1.0.194", default-features = false }
I don’t think you’re measuring what you think you’re measuring when you say it has 3GB of dependencies. But I can’t say for sure because you don’t provide any evidence for it, you just declare it as true.

If I were to guess, I’d say you’re doing a lot of #[derive(Serialize, Deserialize)] and it’s generating tons of code (derive does code generation, after all) and you’re measuring the total size of your target directory after all of this. But this is just a guess… other commenters have shown that a simple build produces code on the order of tens of MB…

Klonoar
0 replies
18h32m

What kind of machine are you developing on that runs out of space that quickly…?

zorked
9 replies
9h0m

Teaching to _think_ is just as important as teaching to code, but this is seldom done

Oh, the arrogance of thinking that the other person doesn't think.

loa_in_
5 replies
8h49m

It isn't what the author argues at all. It's about teaching how to think: you have to do some research, and it's not a step you can skip; and having done your research doesn't free you from having to draw your own conclusions too. Skipping either of those steps is easy but wrong.

ramon156
4 replies
8h36m

is seldom done

It's this part

purplesyringa
3 replies
8h3m

Perhaps the wording was off on my part. What I meant is not that people don't think, it's that people seldom teach others to think, at least in web articles.

Most posts of such format I have seen are "we did this and got this", not "we tried this, it failed because of this, then we figured out something else might work and it worked after these modifications".

chipdart
2 replies
5h13m

Most posts of such format I have seen are "we did this and got this", not "we tried this, it failed because of this, then we figured out something else might work and it worked after these modifications".

That doesn't resemble anything remotely related to teaching how to think. You're just logging your trial and error process, which is exactly what each and every single developer goes through on a daily basis.

What exactly do you think other developers do?

tpmoney
0 replies
4h13m

In context they're very clearly talking about blogging/write-ups/presentations of technical things. A lot of the material we're presented with in life about making and fixing things is a finished product: the results clean and tidy, and the steps to accomplish the result obvious with the benefit of someone else telling you what they are. It's much less common to see even a glimpse of the effort it took to get there, or for someone to document the process, including dead ends and false starts.

Even here, we can imagine that had the author failed to actually make anything faster, they might not have written anything at all. And yet, wouldn't that still have had benefit to people? To see things attempted that didn't work, to understand why those things didn't work? Maybe it wouldn't have been as interesting to as wide an audience, but it's important to see failure. Both as a way of learning from others to not repeat the same efforts, but also because its really easy to fall into the trap of assuming you're incapable if you do fail when everyone around you always seems to be succeeding.

Or perhaps as an analogy, almost everyone creates some art in life, and certainly every artist struggles to create that art. Yet it would be a disservice to only ever present art to learning artists as complete master works and paint by numbers replications. We need to see the "happy little accidents" of Bob Ross, the sketch books of iterations on a design, the piles of failed clay firings. Not because no one experiences these things, but because they are instructive on their own in a way that only seeing success is not.

__s
0 replies
5h3m

Again, they aren't saying developers don't think. They're talking about blogging

I had this issue at PeerDB where we'd blog about some dev work: when I wrote, it'd be a stream of consciousness trying to communicate the mood, frustration, and flailing process. It wouldn't get published, in favor of blogs with clearer product messaging

wtetzner
2 replies
5h49m

I don't think there's any arrogance in the statement. It doesn't assume others don't think. It's simply observing that most blog posts and how-to articles show the final result, but not necessarily the steps that were needed to get there.

7bit
1 replies
3h33m

I don't see what one has to do with the other.

wtetzner
0 replies
2h15m

I'm not really sure I understand what you're saying and/or asking.

As far as I can tell, the statement in question was simply saying that showing the steps to come to a solution is rare, and also that it helps to teach how to think (how to solve a problem). I guess don't see where the arrogance lies.

s_Hogg
4 replies
6h12m

Very strong jart feel about this person's blog, that was a nice read

We would need to reinvent the wheel, but this is quite neat if you think about it.

Is this real or ironic, though? I read it and started laughing at the writer, but the rest of the page seems quite heavy on self-deprecation

vecplane
2 replies
6h2m

What does jart mean?

IggleSniggle
0 replies
3h12m

HN username for an extremely talented software engineer and software-engineering communicator, justine.lol. Probably most known around here for her cross-platform C code (cosmopolitan) and, relatedly, redbean: a zip file that is also an executable web server, hosting its own contents and able to produce more such self-hosting, cross-platform executable zip file servers.

wtetzner
0 replies
5h53m

I think it means the approach is quite neat, not the fact that it requires reinventing the wheel.

Sytten
4 replies
17h38m

The utf-8 tricks make me very nervous, since I have seen too many attacks based on parser confusion. I go with serde for correctness, not speed. I hope this was fuzzed all the way with a bunch of invalid utf-8 strings.

hsbauauvhabzb
1 replies
10h39m

Any bugs you can point to that come to mind of this class?

eesmith
0 replies
3h20m

https://en.wikipedia.org/wiki/UTF-8#Invalid_sequences_and_er...

Many of the first UTF-8 decoders would decode these, ignoring incorrect bits and accepting overlong results. Carefully crafted invalid UTF-8 could make them either skip or create ASCII characters such as NUL, slash, or quotes. Invalid UTF-8 has been used to bypass security validations in high-profile products including Microsoft's IIS web server[26] and Apache's Tomcat servlet container.[27] RFC 3629 states "Implementations of the decoding algorithm MUST protect against decoding invalid sequences."
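To make the overlong-encoding problem concrete, here is a minimal sketch using only Rust's standard library (not from the thread): the two-byte sequence `0xC0 0xAF` is an overlong encoding of `/` (`0x2F`). A lenient decoder that skips the shortest-form check would emit a slash, which is exactly the kind of bypass the IIS bug exploited; a conforming decoder must reject the sequence.

```rust
fn main() {
    // 0xC0 0xAF is an overlong (two-byte) encoding of '/' (0x2F).
    // RFC 3629 requires decoders to reject it; Rust's std does.
    let overlong = [0xC0u8, 0xAF];
    assert!(std::str::from_utf8(&overlong).is_err());

    // The shortest-form encoding of '/' is the single byte 0x2F.
    assert_eq!(std::str::from_utf8(&[0x2Fu8]).unwrap(), "/");
}
```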
hinkley
0 replies
15h34m

This is the sort of space where I’d like to see a fuzzer.
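A differential fuzz loop is the usual shape here: throw random bytes at the optimized routine and assert it agrees with a trusted oracle. The sketch below is hypothetical and dependency-free (the `fast_validate` function is a stand-in for an optimized fast path, and `xorshift` is a toy PRNG so no crates are needed); in practice you would point cargo-fuzz at the real parser instead.

```rust
// Toy "fast path": all-ASCII input short-circuits; anything else
// falls back to the reference validator. Stands in for a SIMD routine.
fn fast_validate(bytes: &[u8]) -> bool {
    bytes.iter().all(|b| *b < 0x80) || std::str::from_utf8(bytes).is_ok()
}

// Tiny xorshift PRNG so the sketch needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let mut state: u64 = 0x9E3779B97F4A7C15;
    for _ in 0..100_000 {
        let len = (xorshift(&mut state) % 16) as usize;
        let bytes: Vec<u8> = (0..len).map(|_| xorshift(&mut state) as u8).collect();
        // Oracle: std's validator must agree with the fast path on every input.
        assert_eq!(fast_validate(&bytes), std::str::from_utf8(&bytes).is_ok());
    }
}
```

The key design point is the oracle: random bytes alone only find crashes, but comparing against a reference implementation also finds silent acceptance of invalid input, which is the dangerous case for UTF-8.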

dwattttt
0 replies
16h20m

Luckily, utf-8's structure is _very_ simple compared to the average parser's. Not to say there can't be bugs, but the internal state of a utf-8 parser is small and can be exhaustively tested.
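"Exhaustively tested" is literal at short lengths. A sketch (my own, using only `std`): all 65,536 two-byte inputs can be enumerated and checked against Rust's standard validator in milliseconds, with the expected answer written out as a one-line rule.

```rust
fn main() {
    // Enumerate every 2-byte input. A valid one is either two ASCII
    // bytes, or a lead byte 0xC2..=0xDF (0xC0/0xC1 would be overlong)
    // followed by a continuation byte 0x80..=0xBF.
    for b0 in 0u8..=255 {
        for b1 in 0u8..=255 {
            let two_ascii = b0 < 0x80 && b1 < 0x80;
            let two_byte_char =
                (0xC2..=0xDF).contains(&b0) && (0x80..=0xBF).contains(&b1);
            let expected = two_ascii || two_byte_char;
            let actual = std::str::from_utf8(&[b0, b1]).is_ok();
            assert_eq!(expected, actual, "disagree on {:#04x} {:#04x}", b0, b1);
        }
    }
}
```

The same enumeration extends to 3- and 4-byte inputs (2^24 and 2^32 cases), which is still feasible for a one-off CI job, so a UTF-8 validator really can be verified over its entire input space per-character.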

spense
2 replies
19h33m

awesome that serde moves so quickly. i just ran across simdutf8 and realized the pr for simd-enabled utf-8 parsing is coming up on 5 years:

https://github.com/rust-lang/rust/issues/68455

fyrn_
0 replies
13h16m

The parent is comparing the slow pace of improvement in Rust's std to the faster pace of serde.