
Jiff: Datetime library for Rust

sushibowl
44 replies
12h47m

Overall this looks nice, but I found myself stumbling over the ToSpan syntax:

    let span = 5.days().hours(8).minutes(1);
It feels sort of weird how the first number appears in front, and then all the other ones are function arguments. I suppose if you don't like that you can just write:

    let span = Span::new().days(5).hours(8).minutes(1);
at the expense of a couple characters, which is not too bad.

Galanwe
16 replies
12h36m

If only Rust had named function parameters, you could write what is IMHO the most readable option:

    Span::new(days=5, hours=8, minutes=1)

DemocracyFTW2
6 replies
12h28m

Could you do that with `struct` / `record` fields? In JavaScript, which doesn't have named function parameters either, I often write functions that take a single `cfg` parameter and are called like `f({ hours: 2, seconds: 53 })`, which I find nice because it reuses existing data structures.

devanl
2 replies
11h53m

In Rust, you can't implicitly omit fields when instantiating a struct, so it would have to be a bit more verbose, explicitly using Rust's analog to the spread syntax.

It would have to look something like:

  f({ hours: 2, seconds: 53, ..Default::default() })

The defaults could come from some value / function with a name shorter than Default::default(), but it would be less clear.
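
For the record, a small self-contained sketch of that pattern; the `Config` struct and its fields are made up purely for illustration:

    // Hypothetical config struct; the names are made up for illustration.
    #[derive(Debug, Default)]
    struct Config {
        hours: u32,
        minutes: u32,
        seconds: u32,
    }

    fn f(cfg: Config) {
        println!("{cfg:?}");
    }

    fn main() {
        // Struct update syntax fills in the remaining fields from `Default`.
        f(Config { hours: 2, seconds: 53, ..Default::default() });
    }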

estebank
1 replies
3h55m

Adding support for struct default field values would allow for

- leaving some fields mandatory

- reducing the need for the builder pattern

- letting the above be written as f(S { hours: 2, seconds: 53, .. })

If that feature ever lands, coupled with structural/anonymous structs or struct literal inference, you're getting everything you'd want from named arguments without any of the foot guns.

kibwen
0 replies
57m

Has anyone ever proposed it? It's such a straightforward feature with such obvious semantics ("default field values are const contexts") and impossible-to-bikeshed syntax (`a: i32 = 42`) that I've been meaning to write up an RFC myself for around, oh, ten years now...

4hg4ufxhy
1 replies
11h42m

Kind of inefficient. Also it's less ergonomic, since every struct is its own type, so you need to have the signature on both sides.

benmmurphy
0 replies
10h42m

It should compile to the same code, because a struct passed by value is loaded into registers the same way individual arguments are.

Galanwe
0 replies
11h59m

Well, the beautiful thing about software engineering is that pretty much everything is possible; it essentially boils down to "but should you really?" :-)

_flux
5 replies
11h26m

Yes, that could have been borrowed almost as-is from OCaml, particularly as Rust doesn't have partial application, so optional arguments would work out of the box as well.

berkes
4 replies
10h37m

Are named arguments on the roadmap somewhere? Or is it a won't-fix?

PoignardAzur
2 replies
9h40m

The feature is controversial enough that it's basically a wontfix.

darby_nine
0 replies
8h34m

I'm guessing that general support doesn't translate to support for a specific syntax with changes to the calling convention. I wouldn't put money on this coming together any time soon.

_flux
0 replies
9h42m

I haven't seen anything for or against in actual roadmaps. But there is, of course, at least one pre-proposal:

https://internals.rust-lang.org/t/pre-rfc-named-arguments/16...

It doesn't consider optional arguments (but somehow does include overloading). Somewhat relatedly, it doesn't permit reordering arguments at the call site, which I consider a downside:

> Reordering named arguments when calling

> No it is not possible. Just like unnamed arguments and generics, named arguments are also position-based and cannot be reordered when calling: register(name:surname:) cannot be called as register(surname:name:).

> Reordering them at the definition site is an API break, just like reordering unnamed arguments or generics is an API break already.

The rationale for this expressed in the comments is that it's incompatible with overloading, but I don't see why named arguments and overloading should go hand in hand, or indeed how desirable overloading is in the first place, or why overloading would need to cover that kind of scenario. The other reasons given don't really seem like problems at all.

    > fn func2(pub name: u32, name hidden: u32) { /* ... */ }
    > fn func3(name hidden1: u32, name hidden2: u32) { /* ... */ }
> func2 and func3 could work in theory: named arguments as proposed in this RFC are position-based and their internal names are different: just like two arguments can have the same type without ambiguity, those functions could be allowed.

Maybe there are technical reasons that make things simpler when considering type compatibility between function types that do and don't have labeled arguments? It seems the proposal has misunderstood something about OCaml labeled arguments when placing it under https://internals.rust-lang.org/t/pre-rfc-named-arguments/16... , though.

In addition, the proposal doesn't seem to have a neat syntax for forwarding named parameters. When constructing records you can fill in a field called foo just by mentioning its name by itself, or, in OCaml, you can have

    let foo ~bar = bar + 1
    let baz ~bar = foo ~bar

    let main () = baz ~bar:42
If it used the .-prefix as mentioned as an idea elsewhere, then this too could be naturally expressed.

Maybe there are other ideas for how to go about labeled arguments, though that one seems pretty well thought out.

One thing I've enjoyed with Python (and Mypy) is the ability to require the caller to use named arguments with the asterisk marker in the parameter list. This idea is mentioned in the proposal.

nurettin
2 replies
10h7m

I'm all for named parameters. C++ is sorely lacking that feature as well.

I'm currently using VS Code with C++, and I like how it handles the missing language feature by adding a grayed-out parameter name before the value for function calls and initializers. Maybe there is something like that for Rust.

umanwizard
0 replies
8h43m

Yes, editors can be configured to do the same thing for Rust.

hypeatei
0 replies
7h3m

These are called "inlay hints" and exist for most editors/languages.

DemocracyFTW2
10 replies
12h31m

I stumbled over

    use jiff::{Timestamp, ToSpan};

    fn main() -> Result<(), jiff::Error> {
        let time: Timestamp = "2024-07-11T01:14:00Z".parse()?;
I seem to remember Rust does that thing with interfaces instead of classes, is it that? How come I import a library and all of a sudden strings have a `parse()` method that, despite its generic name, results in a `Timestamp` object? Or is it the left-hand side that determines which meaning `str.parse()` should have? What if I have two libraries, one for dates and one for, say, Lisp expressions, that both augment strings with a `parse()` method? Why use this syntax at all, why not, say, `Timestamp.parse( str )`? I have so many questions.

nindalf
2 replies
11h50m

All of these options work and are equivalent.

- let time = Timestamp::parse("2024-07-11T01:14:00Z")?;

- let time: Timestamp = "2024-07-11T01:14:00Z".parse()?;

- let time = "2024-07-11T01:14:00Z".parse::<Timestamp>()?;

You’re free to choose whatever you prefer, although the compiler needs to be able to infer the type of time. If it can’t, it’ll let you know.

So a fourth option is allowed, as long as the subsequent lines make the type of time unambiguous.

- let time = "2024-07-11T01:14:00Z".parse()?;

This is a direct consequence of Timestamp implementing the FromStr trait.

Sharlin
1 replies
11h0m

  let time = Timestamp::from_str("2024-07-11T01:14:00Z")?;
I think you meant :)

nindalf
0 replies
9h42m

Haha, yes I did. If only the HN textbox integrated with rust-analyzer, it would have caught the mistake.

csomar
2 replies
12h2m

It's implied. Here is the full syntax.

let time: Timestamp = "2024-07-11T01:14:00Z".parse::<Timestamp>()?;
dhosek
1 replies
11h51m

That `::<TYPE>` thing at the end is called a turbofish. It is rarely necessary to give it explicitly (but sometimes you do, when the compiler cannot infer the return type on its own; thus far in my own Rust coding I've needed it exactly once).
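
A common place you do need the turbofish is `collect`, where nothing else pins down the collection type; a small standalone example (not Jiff-specific):

    fn main() {
        // Nothing here tells the compiler what to collect into,
        // so the turbofish names the target type explicitly.
        let numbers = "1 2 3 4"
            .split_whitespace()
            .map(|s| s.parse::<i32>().unwrap())
            .collect::<Vec<i32>>();
        assert_eq!(numbers, vec![1, 2, 3, 4]);
    }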

bluejekyll
0 replies
5h21m

It’s useful to know the full syntax, I’ve definitely encountered needing it more than one time.

nrabulinski
0 replies
12h17m

It's because Timestamp implements the FromStr trait, which is one of the first traits everyone learns about when learning Rust. So when you say that your value is a Timestamp and the expression is string.parse()?, the compiler knows that it has to use the implementation which returns a Timestamp.

There will never be two libraries that clash, because of Rust's orphan rule: you can only implement a trait you define on any type, or a foreign trait on a type you define, so there's no way for some random library to also ship an implementation of FromStr for Timestamp.
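
To illustrate the mechanism with a made-up type (this is not part of Jiff, just the same FromStr idea):

    use std::str::FromStr;

    // A made-up type, just to show how `.parse()` dispatches on the
    // inferred return type via `FromStr`.
    struct Meters(f64);

    impl FromStr for Meters {
        type Err = std::num::ParseFloatError;

        fn from_str(s: &str) -> Result<Self, Self::Err> {
            s.trim_end_matches('m').parse().map(Meters)
        }
    }

    fn main() -> Result<(), std::num::ParseFloatError> {
        // The annotation on the left is what tells `parse()` which impl to use.
        let distance: Meters = "12.5m".parse()?;
        println!("{} meters", distance.0);
        Ok(())
    }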

kam
0 replies
12h18m

`parse` is actually an inherent method on `str` that always exists: https://doc.rust-lang.org/core/primitive.str.html#method.par...

Its return type is generic, and here it's inferred from the left hand side. It's implemented using the `FromStr` trait, and you can equivalently write `Timestamp::from_str(t)`.

You're thinking of the "extension trait" pattern, which uses traits to add methods to existing types when the trait is in scope, but that's not what's going on here. Jiff's `ToSpan` mentioned above is an example of that pattern, though: https://docs.rs/jiff/latest/jiff/trait.ToSpan.html
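
A rough sketch of that extension-trait pattern, with a hypothetical trait of my own (the same shape as `ToSpan`, but not Jiff's actual trait):

    // Hypothetical extension trait: the method is only callable once the
    // trait is in scope, which is how `ToSpan` adds `.days()` etc. to integers.
    trait ToDozens {
        fn dozens(self) -> i64;
    }

    impl ToDozens for i64 {
        fn dozens(self) -> i64 {
            self * 12
        }
    }

    fn main() {
        // In another module this would require `use some_crate::ToDozens;`.
        assert_eq!(5i64.dozens(), 60);
    }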

Tigress8780
0 replies
12h19m

Rust will determine what `parse` does based on the inferred return type (which is being explicitly set to `Timestamp` here). This is possible when the return type implements the `FromStr` trait.

0x457
0 replies
1h24m

> I have so many questions.

Not being snarky, but I suggest starting by reading at least a little about traits? None of your questions are really about this library - it's just FromStr and the orphan rule.

frereit
5 replies
12h28m

I agree. Personally, I'd prefer

    let span = 5.days() + 8.hours() + 1.minutes();

wging
1 replies
2h33m

That isn't an implementation of addition between Spans and other Spans. It looks like there isn't one in the library right now. `impl<'a> Add<Span> for &'a Zoned` means a borrow of Zoned is on the left hand side, and a Span on the right. So it says that if z is a Zoned (not a Span) and s is a Span, you can do `&z + s` to add a span to a Zoned. There are a bunch of implementations there, DateTime + Span, Date + Span, Time + Span, Offset + Span. All with Span on the right, but none for Span + Span (nor Span + &Span, or &Span + &Span, ...).

burntsushi
0 replies
1h48m

This is correct. You can't do a `span1 + span2`. You'd have to use `span1.checked_add(span2)`. The main problem I had with overloading `+` for span addition is that, in order to add two spans with non-uniform units (like years and months), you need a relative datetime. So `+` would effectively have to panic if you did `1.year() + 2.months()`, which seems like a horrific footgun.

It would be plausible to make `+` for spans do _only_ component wise addition, but this would be an extremely subtle distinction between `+` and `Span::checked_add`. To the point where sometimes `+` and `checked_add` would agree on the results and sometimes they wouldn't. I think that would also be bad.

So I started conservative for the time being: no `+` for adding spans together.
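
For reference, a minimal sketch of the fallible route, using only the `checked_add` call mentioned above; it assumes `Span::checked_add` accepts another `Span` directly, and sticks to uniform units (hours/minutes), which don't need a relative datetime:

    use jiff::ToSpan;

    fn main() -> Result<(), jiff::Error> {
        // Uniform units like hours and minutes can be summed without a
        // relative date.
        let total = 5.hours().checked_add(30.minutes())?;
        println!("{total}");

        // Calendar units (years, months) would need a relative datetime,
        // which is exactly why `+` is not overloaded for Span + Span.
        Ok(())
    }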

mijoharas
0 replies
11h21m

Have you checked the API to see if that works? I imagine it does.

csomar
0 replies
11h57m

I wonder if OP will accept a PR for such a change. Your proposal is much more readable and flexible (it's not clear from the docs if you can add random time ranges together). Plus, you'll be able to create your own ranges like `1.decade` or `1.application_timeframe` and add/subtract them.

Sharlin
4 replies
11h2m

Yeah, or there could simply be a `days()` free function (and equivalents of the other methods too). No need for struct constructors to be associated functions.

creata
3 replies
10h26m

I haven't tried it (so I'm sorry if it's wrong or not what you're talking about) but can't you get a freestanding days function by

    use jiff::ToSpan::days;

the_mitsuhiko
2 replies
9h51m

You cannot import trait methods as free standing functions. I'm not sure if there was a discussion about making this a possibility but it's definitely not something you can do today.

dathinab
0 replies
11m

Multiple discussions have happened about this and I don't quite remember the outcome.

But it's much less simple than it seems.

Because `use Trait::method` would not name one (potentially generic) method but a group of them, so it would be its own kind of thing, working differently from free functions etc. Furthermore, as generics might be on the trait, you might not be able to fill them in with `::<>`, and even if you fill them in, you also wouldn't be able to get a function pointer without a way to also specify the type the trait is implemented on.

All of this (and probably more issues) is solvable AFAIK, but in the context of this being a minor UX benefit it's IMHO not worth it, 1. due to additional compiler complexity but also 2. due to additional language complexity. Though maybe it will happen if someone really cares about it.

Anyway, until then you can always define a free function which just calls the method, e.g. `fn default<T: Default>() -> T { T::default() }`. (Which is probably roughly how `use` on a trait method would work if it were a thing.)
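
For example, a free `days` wrapper in that style (a sketch; it assumes Jiff's `ToSpan` is implemented for `i32`, which the `5.days()` examples above suggest):

    use jiff::{Span, ToSpan};

    // A free function that just forwards to the trait method, as suggested above.
    fn days(n: i32) -> Span {
        n.days()
    }

    fn main() {
        let span = days(5).hours(8).minutes(1);
        println!("{span}");
    }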

creata
0 replies
8h34m

Oh, sorry about that then.

waterhouse
1 replies
12m

Or 0.days(5).hours(8).minutes(1)?

tempodox
0 replies
0m

Cool, that gives us 0-days.

tomas789
0 replies
12h35m

I agree with that. My thought process is to specify what I'm doing first and only then the details. This is the other way around. When reading the code, it would be better to see up front that I'm dealing with a span.

gpderetta
0 replies
9h52m

Can you do 5.days() + 8.hours() + 1.minutes()?

coldtea
0 replies
2h39m

I like your version's consistency.

The original looks like something Ruby would do.

arijun
21 replies
12h32m

A little off topic but does anyone know the purpose of dual licensing MIT and the UNLICENSE? It seems like the second should already allow anyone to do whatever they want…

GolDDranks
10 replies
12h24m

From what I gather about the author's thoughts on that, he isn't a fan of copyright in general, and uses UNLICENSE as an ideological statement, plus a practical way of saying "do whatever you want with this", but also slaps on the option to use MIT as "something almost as good" because non-standard licenses deter corporate types, which kind of defeats the original "do whatever you want" purpose of UNLICENSE :D

globular-toast
9 replies
11h15m

That's what I gather from Unlicense too (in fact, this is confirmed in a linked bug thread where the author says he "hates copyright").

I think the author is actually looking for the GPL but doesn't realise it yet. Unlicense can't make something free forever, no matter how hard the author wishes it. GPL can. In other words, Unlicense/MIT is idealistic, GPL is pragmatic. You can't turn off copyright, but you can make it work for the people instead of against them.

rahkiin
8 replies
10h40m

Not at all. If this library was GPL, any software using it also needs to be GPL. This means all code needs to be open source, which severely limits freedom of makers of end-user software.

And almost the whole Rust ecosystem is MIT.

richrichardsson
3 replies
10h23m

> severely limits freedom of makers of end-user software

Ironic for a "free" software license.

It would be great if there was a license somewhere in between GPL and MIT: you'd be required to upstream (or make available) any changes you made to the parts of other people's code you're making use of, but not required to open your entire codebase.

jstarks
0 replies
9h52m

I think the MPL attempts to be that license.

agosz
0 replies
8h24m

MPL or CDPL

GoblinSlayer
0 replies
8h28m

That's LGPL.

globular-toast
3 replies
9h15m

> which severely limits freedom of makers of end-user software

And thereby severely guarantees the freedom of said end-users.

The freedom to deny the freedom of another person is not a freedom worth discussing.

The author expressly dislikes copyright. GPL is still the only real cure to copyright. "Permissive" licences are corporate friendly. They allow corporations to take what they want and give back nothing. In this day and age it's more important than ever to empower individuals and limit the growth of corporations/oligopolies.

usea
2 replies
8h39m

> The freedom to deny the freedom of another person is not a freedom worth discussing.

If that was true, you wouldn't be doing just that.

globular-toast
1 replies
8h30m

So, to be clear, your argument is that the freedom to deny the freedom of other people is a freedom that should be protected? How do you deal with issues like slavery and, in particular, its abolishment?

gjm11
0 replies
7h30m

usea's argument is clearly not that but only that you can't literally think something is "not worth discussing" while you are actually discussing it.

The person who was explicitly defending non-GPL licences was rahkiin. I don't know how they'd respond to your challenge, but here is how I would:

"The freedom to deny the freedom of other people" is impossibly vague, because "the freedom of other people" can mean zillions of things. It's also confusing to talk about since we have two separate freedoms here, so let's talk about the freedom(1) to deny the freedom(2) of other people.

Suppose we put "the freedom to kill other people" in the freedom(2) slot. Most of us think that isn't a freedom people are entitled to, so the freedom(1) to deny that particular freedom(2) would be a good thing.

Suppose we put "the freedom to breathe the air" in the freedom(2) slot. Most of us think that is a freedom people are entitled to, so the freedom(1) to deny that particular freedom(2) would be a bad thing.

In the present case, what goes in the freedom(2) slot is something more complicated and less clear-cut -- it isn't a Super-Obvious Fundamental Human Right like the right to go on breathing, but it also isn't a Right To Do Very Evil Things like the right to murder.

It's something like "the freedom to read and modify the source code of a particular piece of software". We demonstrably don't presently have that freedom as regards many widely-used pieces of software; the world's legal systems pretty much unanimously agree that if you put this in the freedom(2) slot then the freedom(1) to deny it is worth having.

Why? Well, the usual arguments would be (1) that creating something gives you some rights to limit what other people do with it, and (2) that giving creators some such rights is a good thing overall because it increases the incentives for people to create nice things.

Of course you might disagree! (And, also of course, even if you agree with #1 and #2 in the abstract you might think that "intellectual property" law as currently implemented across the world is a very bad way to get #1 and #2.) But I hope your reasons are a matter of thinking carefully about the tradeoffs involved, not just of saying "yay freedom" and therefore denying every instance of "it's good for X to have the freedom(1) to deny Y's freedom(2) to do Z".

Not least because you literally can't consistently do so in every case -- if you say no one should ever have the freedom(1) to deny freedom(2) to others, whatever specific freedom(2) may be, then what you are calling for is precisely to deny that freedom(1) to others.

Xylakant
6 replies
12h15m

The Unlicense is considered problematic in various jurisdictions, among them Germany: under German law, you cannot relinquish certain rights that are associated with the author at all. Dedicating something to the public domain is not a valid concept here. This means the whole license could be declared invalid in court. Other jurisdictions may be similarly problematic, thus the fallback to MIT.

There’s a stackoverflow post that discusses some of the issues https://softwareengineering.stackexchange.com/questions/1471...

rsynnott
2 replies
8h44m

Never understood why anyone uses this one; it's just too potentially messy, and a permissive license like 0BSD provides the intended effect without the risk.

Xylakant
0 replies
6h0m

Part of the intended effect is making a political/societal point about the copyright system, something the 0BSD license does not do. I personally believe that legal documents are a bad place to make these points, but obviously people differ.

chrismorgan
2 replies
12h2m

My own summary and collection of information about the problems with the Unlicense: https://chrismorgan.info/blog/unlicense/

(I collected that mostly because I didn’t find all the relevant information in one place, or explanation of the reasonable alternatives.)

treeshateorcs
0 replies
7h56m

i love your site!!

Xylakant
0 replies
10h26m

That looks like a pretty comprehensive overview. I'll bookmark this for further reference :)

nsajko
0 replies
12h26m

The "Unlicense" is not considered as serious.

nindalf
0 replies
12h23m

Burntsushi has written many important crates in the Rust ecosystem. He started with licensing under Unlicense exclusively, until people requested a dual license with MIT. See this issue from 2016 for more details - https://github.com/BurntSushi/byteorder/issues/26

Almost all of the Rust ecosystem is dual licensed under MIT/Apache 2.0, so this combination is a bit unusual. But the presence of MIT means that it hasn’t been a problem in practice.

goodpoint
0 replies
5h24m

Should have used GPL or at least LGPL

magnio
15 replies
5h18m

I have seen many people downplaying the complexity of a datetime library. "Just use UTC/Unix time as an internal representation", "just represent duration as nanoseconds", "just use offset instead of timezones", and on and on

For anyone having that thought, try reading through the design document of Jiff (https://github.com/BurntSushi/jiff/blob/master/DESIGN.md), which, like everything burntsushi does, is excellent and extensive. Another good read is the comparison with (mainly) chrono, the de facto standard datetime library in Rust: https://docs.rs/jiff/latest/jiff/_documentation/comparison/i...

Stuff like DST arithmetic (that works across ser/de!), roundable durations, timezone-aware calendar arithmetic, retrospective timezone conflict detection (!), etc. all contribute to making the library correct, capable, and pleasant to use. In my experience, chrono is a very comprehensive and "correct" library, but it is also rigid and not very easy to use.

TacticalCoder
5 replies
4h20m

I love burntsushi's ripgrep and certainly use it all the time, calling it directly from my beloved Emacs (and I do invoke it all the time). I was using ripgrep already years before Debian shipped rg natively.

I was also using JodaTime back when some people still thought Eclipse was better than IntelliJ IDEA.

But there's nothing in that document that contradicts: "just represent duration as nanoseconds".

Users need to see timezones and the correct hour depending on DST, sure. Programs typically do not. Unless you're working on stuff specifically dealing with different timezones, it's usually a very safe bet to "represent duration as milliseconds/nanoseconds".

That humans have invented timezones and DST won't change the physics of a CPU's internal clock ticking x billion times per second.

Just look at, say, the kernel of an OS that didn't crash on half the planet a few days ago: there are plenty of timeouts in code expressed as milliseconds.

Your comment could be misinterpreted as: "We'll allow a 30-second cooldown, so let's take the current timezone, add 30 seconds to that, save that time as a string with the time 30 seconds from now, complete with its timezone, DST, 12/24-hour representation, and while we're at it maybe add extra code logic to check if there's going to be a leap second or not to make sure we don't wait 29 or 31 seconds, then let the cooldown happen at the 'correct' time." Or you could, you know, just use a freakin' 30-second timeout/cooldown expressed in milliseconds (without caring about whether a leap second happened or not, btw, because we don't care if it actually happens after 29 seconds as seen by the user).

throwawaymaths
1 replies
4h10m

> That humans have invented timezones and DST won't change the physics of a CPU's internal clock ticking x billion times per second.

Increasingly we are programming in distributed systems. One milli or nano on one node is not a milli or nano on another node, and that is physics that is more inviolable.

tracker1
0 replies
4h0m

In which case, does being off by a few millis actually matter that much in any significant number of those distributed instances? No precision is exact, so near enough should generally be near enough for most things.

It may depend in some cases, but as soon as you add network latency there will be variance regardless of the tool you use to correct for variance.

tijsvd
0 replies
4h5m

Of course you don't need a calendar library to measure 30 seconds. That's not the use case.

Try adding one year to a timestamp because you're tracking someone's birthday. Or add one week because of running a backup schedule.

coldtea
0 replies
2h41m

Unless you're just using time information to implement a stopwatch in your program, anything you do with time will eventually have to deal with timezones, and DSTs, and leap seconds, and tons of other intricacies.

Even something as simple as scheduling a periodic batch process.

burntsushi
0 replies
3h51m

I'm not sure what the issue is here exactly, but there are surely use cases where a `std::time::SystemTime` (which you can think of as a Unix timestamp) is plenty sufficient. ripgrep, for example, uses `SystemTime`. But it has never used a datetime library. Just because Jiff exists doesn't all of a sudden mean you can't use `SystemTime`.

But there's a whole world above and beyond timestamps.

J_Shelby_J
2 replies
4h13m

> (that works across ser/de!)

Ugh, I can't believe it took me this long to realize why the Serde crate is named that!

kibwen
0 replies
2h7m

Once you've gathered yourself, allow me to blow your mind as to where "codec" and "modem" come from. :P

dist1ll
0 replies
4h2m

The abbreviation is also used by EE folks, e.g. SerDes [0]. The capitalization makes it a bit more obvious.

[0] https://en.wikipedia.org/wiki/SerDes

devman0
1 replies
3h27m

Java had a pretty comprehensive rewrite of its own time handling library and it was much needed. Time is hard because time zones are not engineering; they are political and arbitrary.

So yeah, keeping things in Unix time is great if all you're doing is reading back timestamps for when an event occurred, but the moment you have to schedule things for humans, everything is on fire.

soperj
0 replies
2h39m

Didn't they just incorporate JodaTime? I thought the changes were even made by the JodaTime developer.

power78
0 replies
2h28m

> I have seen many people downplaying the complexity of a datetime library.

Where? Maybe people downplay storing dates but not making a library.

fridder
0 replies
3h15m

If someone wants an entertaining and approachable dive into the insanity that is datetime, Kip Cole did a great talk at ElixirConf in 2022: https://www.youtube.com/watch?v=4VfPvCI901c

coldtea
0 replies
2h45m

> I have seen many people downplaying the complexity of a datetime library. "Just use UTC/Unix time as an internal representation", "just represent duration as nanoseconds", "just use offset instead of timezones", and on and on

Anyone thinking datetimes are easy should not be allowed near any scheduling or date processing code!

LaffertyDev
0 replies
3h29m

Thank you for pointing me towards the design document. It's well written, and I missed it on my first pass through the repository. I genuinely found it answered a lot of my questions.

zokier
14 replies
7h9m

While this does seem to be an improvement in general, I find it extremely disappointing that we now have another greenfield library that ignores leap seconds and continues the propagation of UNIXy time. I appreciate that it was at least an informed decision, and it seems to have been a tough call to make. So full respect to burntsushi nevertheless.

That makes it mostly uninteresting to me; nice api is nice to have, but I'd personally appreciate correct results more.

From the wider ecosystem, C++ std::chrono seems like the only one that shows some promise on this front. Last I checked, the implementations were not quite there yet though, and the API definitely didn't seem all that pleasant. Maybe in a couple of years we'll see how it works out.

Hifitime 4.0 seems like almost the only option at this point, and it is in early alpha still.

I recall using astropy at one point just for time calculations, but it is quite an overkill solution.

The quest for perfect datetime lib (for any language) continues.

burntsushi
6 replies
4h37m

Can you say why you wouldn't want to use a TAI time zone? Like, generate the TZif data for TAI and then just do `jiff::tz::TimeZone("TAI", tzif_data)`. Then you'll get leap second accurate durations.

Can you also say why you need the precision? Like, what's the use case? What happens if your program computes durations that are off because of leap seconds?

spenczar5
3 replies
4h12m

I work in astronomy, on detection of asteroids. Catalogs of historical asteroid detections may be reported in UTC from some observatories for historical reasons.

Finding a trajectory that matches several candidate detections is called “linking” and it is very sensitive to time. Being off by even one second will result in a predicted position which is far off course, and so a candidate asteroid detection will not be linked.

Linking is not quite sensitive enough to demand a relativistic time scale, but definitely sensitive enough to require correct leap seconds.

burntsushi
2 replies
3h43m

Right, but I address this in the issue linked elsewhere in this thread: https://github.com/BurntSushi/jiff/issues/7

Like yes, scientific applications are a very valid use case for this. But scientific applications usually want other things not afforded by general purpose datetime libraries, like large time ranges and high precision. What I ask in that issue, and what I don't understand, is why folks who want leap second support aren't happy with using specialized libraries for that task, and instead request that leap second support be added to general purpose datetime libraries.

spenczar5
1 replies
3h40m

Yeah, I use the specialized libraries. But in Python, this has been painful: the good astronomy library is Astropy’s Time, but everyone uses datetime. So if I want to use a third library - for my database, or for making plots, or whatever - it will use datetime, and now I have to think really hard about how to do conversions. You can imagine how hard that is to get right!

Since Jiff hopes to be ubiquitous (I think? Seems that way) it would be nice if this sort of thing could be avoided. Time is such a fundamental in many APIs that having one common library is very important.

burntsushi
0 replies
3h14m

I think I would rather see this supported by paved path conversions to-and-from the specialized library. It's very hard to be all things to all people because there are irreducible trade-offs. The linked issue does a tortured tour through the trade-offs. I found it very difficult to wire in leap second support in a way that was satisfying. And even if Jiff supported leap seconds, that doesn't mean it would be well suited for scientific applications. Do you need more precision than nanoseconds? Do you need a bigger range than -9999 to 9999? If so, those come at the cost of everyone else by using bigger representations. They _could_ be opt-in crate features, but now we're talking about non-trivial additional maintenance/testing burden.

IDK, maybe there is a way to unify everything in a satisfying way, but they seem way too different to me.

zokier
1 replies
3h12m

Going through TAI is probably the best way for me, I'll have to play around with Jiff to see how practical that is. I'm glad if there is good support for TAI though!

One random use-case (I was reminded of it by another thread on the front page) is that I occasionally have needed to analyze some logs and get some stats from the data. For example, having logs like "2024-07-24T14:46:53.123456Z Foo id=42 started" and "2024-07-24T14:46:54.654321Z Foo id=42 finished", and wanting to get a histogram of how long "Foo" took.

Sure, ideally you'd have some explicit metrics/tracing system or some other measurements for getting that data, but unfortunately in practice that is not always the case and I have to make do with what I have.

Or even more simply, I just want to sort a bunch of events to establish a timeline. UNIX-style time handling makes that difficult.

NTP adjustments can also cause problems in these sort of cases, but at least the systems I work with are usually kept in relatively tight sync so the "window of uncertainty" is much less than 1s.

burntsushi
0 replies
2h47m

I think Jiff should handle that log use case pretty fine? That seems pretty standard. Just parse into a `jiff::Timestamp` and then you can sort or whatever.
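
Something like this minimal sketch (assuming the timestamps have already been extracted from the log lines, and relying on `Timestamp` implementing `FromStr` and `Ord`):

    use jiff::Timestamp;

    fn main() -> Result<(), jiff::Error> {
        // Timestamps pulled out of log lines, possibly out of order.
        let mut events: Vec<Timestamp> = [
            "2024-07-24T14:46:54.654321Z",
            "2024-07-24T14:46:53.123456Z",
        ]
        .into_iter()
        .map(|s| s.parse())
        .collect::<Result<_, _>>()?;

        // Sort to establish a timeline.
        events.sort();
        println!("{events:?}");
        Ok(())
    }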

fanf2
3 replies
6h18m

Leap seconds are being abolished. The current rotation speed of the earth is very close to 24h/day and is changing very slowly, so it is not very likely there will be another leap second before they are abolished.

zokier
1 replies
5h41m

That doesn't change the situation of past leap seconds in any way, those still need to be accounted for.

spenczar5
0 replies
4h10m

The nice thing is that you would have a static table of leap seconds, and would not need to poll a URL to check for new leap second data (as Astropy does, for example, on import!).

weinzierl
0 replies
4h53m

While leap seconds are planned to be abolished, there is no plan to give up the coupling of UTC and the Earth's angle.

Leap seconds are just to be replaced by a yet to be defined adjustment, likely leap minutes.

If you don't like leap seconds and don't care about a small (but increasing) deviation from Earth's angle you can do so today: Just use TAI.

zokier
0 replies
5h37m

Yes, that was what I was referring to with

> I appreciate that it was at least an informed decision, and it seems to have been a tough call to make

but you are right, having that link here helps others.

binarycoffee
0 replies
4h11m

TBH I consider the C++ `std::chrono` to be the worst possible design. `tai_clock::now` does not actually take into account leap seconds. Unless it does, who knows ("Implementations may use a more accurate value of TAI time."). Likewise, `tai_clock::from_utc/to_utc` does not correct for leap seconds. It just translates the UTC epoch to the TAI 1958 epoch.

I found Hifitime to be very opinionated and to give a false sense of security due to its automatic computation of leap seconds based on historical tables. Yes, leap seconds are announced some ~6 months in advance, but what if you don't update the library regularly? Or if you can't, because it is deployed on an embedded system?

In the end I wrote my own minimalistic TAI timestamp library [1] and made the conscious decision to let the user take the responsibility to deal with leap seconds in UTC conversion.

[1] https://github.com/asynchronics/tai-time

drtgh
10 replies
8h28m

IMHO, unwrap(), expect() and company have infected the Rust language so deeply that one wonders when (and not "if") a library will panic and crash the whole program.

How those panic! methods, used in most of Rust's libraries, could ever be erased is something that may be beyond the possible? Besides, they are promoted in all of Rust's tutorials and reference code.

So much correctness in the Rust language, only to then promote to the whole community crashing the program from libraries without handling the error, is something I cannot understand.

I hope this philosophy does not reach the Linux kernel.

aw1621107
3 replies
7h48m

> How those panic! methods, used in most of Rust's libraries, could ever be erased is something that may be beyond the possible?

It's arguably quite possible, though not as straightforward as one may hope. For example, there's no_panic, which results in a linker error if the compiler cannot prove a function cannot panic [0], albeit with some caveats.

> So much correctness in the Rust language, only to then promote to the whole community crashing the program from libraries without handling the error, is something I cannot understand.

Is there that much "promoting" of unchecked unwrap()/expect()/etc. going on? How do you distinguish that from "genuine" cases of violations of the programmer's assumptions?

I ask because Result/? along with libraries like thiserror/anyhow/etc. are right there and arguably easier/more concise, so unwarranted unwrap()/etc. would seem "harder" to write/justify than the alternative. The main exception I can think of are more one-off cases where the author is intentionally sacrificing robust error handling for the sake of speed/convenience, but that's a more language-agnostic thing that pretty much "doesn't count" by definition.

> I hope this philosophy does not reach the Linux kernel.

IIRC this is being worked on, especially given Linus's position on panics in the kernel.

[0]: https://github.com/dtolnay/no-panic
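
For illustration, a tiny sketch of how that crate is typically used (my own example, not taken from its docs):

    use no_panic::no_panic;

    // If the optimizer can't prove this function never panics, the build
    // fails with a linker error. (It typically needs optimizations enabled,
    // e.g. `cargo run --release`, per the crate's caveats.)
    #[no_panic]
    fn first_byte(s: &str) -> Option<u8> {
        s.as_bytes().first().copied()
    }

    fn main() {
        assert_eq!(first_byte("jiff"), Some(b'j'));
    }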

drtgh
2 replies
6h6m

> Is there that much "promoting" of unchecked unwrap()/expect()/etc. going on? How do you distinguish that from "genuine" cases of violations of the programmer's assumptions?

It's more that it's promoted indirectly, I think: by being used widely in reference code and tutorials, programmers absorb it as a familiar, quick-to-write method without planning much. And at the same time, it is not actively promoted that such methods should not be used within a library's runtime code, at least, because many people do not see it as wrong, which turns it into a philosophy, I guess.

When the dependency chain of a library is pulled in, almost every time I've checked, some unwrap ends up in the program's runtime path. So distinguishing whether those are genuine cases of violated assumptions (IMHO they can't be genuine if a lib can panic the program), or just an unfinished prototyping part, etc., is not exactly important at the individual level; what matters is that it has reached the level of generalized behavior across the language's libraries, and is even seen in some of the community's programs.

> IIRC this is being worked on, especially given Linus's position on panics in the kernel.

That is good news.

aw1621107
1 replies
21m

> It's more that it's promoted indirectly, I think: by being used widely in reference code and tutorials, programmers absorb it as a familiar, quick-to-write method without planning much.

For references/documentation/tutorials I think the use of unwrap() and friends is a tradeoff. It (arguably) allows for more focused/self-contained examples that better showcase a particular aspect, though with the risk that a reader uses those examples as-is without taking other factors into consideration. There's also the fact that documentation examples can be used as tests, in which case use of unwrap() in docs/examples/etc. is arguably a good thing.

> And at the same time, it is not actively promoted that such methods should not be used within a library's runtime code, at least, because many people do not see it as wrong, which turns it into a philosophy, I guess.

I think it might depend on where you're looking. For example, the Rust book has a section titled "To panic! or Not to panic!" [0] which outlines some things to consider when deciding whether to panic/call unwrap()/etc. Not sure if that counts as active promotion, but the fact it's in official docs should count for something at least.

> IMHO they can't be genuine if a lib can panic the program

I feel this is a rather strong position to take. Given how panics are intended to be used (handling unexpected state/precondition violations/etc.), it seems akin to saying "just don't write bugs", which would certainly be nice but isn't really realistic for the vast majority of development. I suppose one could hypothetically bubble up every possible error, but that comes with its own maintainability/readability/etc. costs.

In addition, that stance seems similar to stating that there are no "genuine" assertion failures or similar in libraries, which seems... bold? What would the alternative be?

> or just an unfinished prototyping part, etc.

At least in Rust there's todo!() and unimplemented!() which more directly convey meaning.

[0]: https://doc.rust-lang.org/book/ch09-03-to-panic-or-not-to-pa...

burntsushi
0 replies
5m

> it seems akin to saying "just don't write bugs"

That is indeed exactly what is being said as far as I can tell. And yes, it's exactly as ridiculous as you think it is.

I'll link my blog on the topic again because I think it might help here as well: https://blog.burntsushi.net/unwrap/

`unwrap()` contains an assertion. Just like `slice[i]` or even `Box::new(whatever)`. The way to avoid these in C is to commit UB instead of panicking. I've seen arguments that seem understandable for why this is maybe appropriate in the Linux kernel ("I'd rather continue executing with garbage than shut down the user's system"), but I don't think it applies much beyond that. And to be clear, I'm not saying I agree with that either.

db48x
2 replies
7h58m

What does this have to do with anything?

hypeatei
0 replies
7h12m

It's done in bad faith. Some are vehemently against Rust because of the "culture" around criticizing other languages' memory safety models, namely C/C++.

diggan
0 replies
7h37m

Trying to figure this out as well... Tests have a bunch of .expect and .unwrap (which is to be expected), but the core logic of the library doesn't seem to have any that would get in the way?

the_mitsuhiko
0 replies
7h31m

That is not my experience at all. It’s very rare that libraries unwrap. Beyond example code and tests I rarely see unwrap.

burntsushi
0 replies
7h47m

Eh? There isn't a single unwrap/expect in the examples at the top level crate documentation. There should be very few overall. But there are hundreds of executable doctests, so there are certainly some unwraps.

But I've already opined on this topic: https://blog.burntsushi.net/unwrap/

bigstrat2003
0 replies
3h53m

> I hope this philosophy does not reach the Linux kernel.

Well, I hope it does. Although it almost certainly will not, because Linus is opposed to it. But ever since I read Joe Duffy's blog posts on the Midori research project at MS, I have been convinced that using panics leads to increased reliability, not decreased. From his blog[1]:

"Given that bugs are inherently not recoverable, we made no attempt to try. All bugs detected at runtime caused something called abandonment, which was Midori’s term for something otherwise known as “fail-fast”."

And:

"Abandonment, and the degree to which we used it, was in my opinion our biggest and most successful bet with the Error Model. We found bugs early and often, where they are easiest to diagnose and fix."

I think that the Midori team's work shows that a practice of "there's a bug, stop everything" leads to more reliable software. Sure, there's an initial period of pain where you're fixing a ton of bugs as they cause the software to panic. But you reap the rewards of that effort. I don't think Linux will ever move towards a model like this, but I think it would be beneficial in the end if they did.

1: https://joeduffyblog.com/2016/02/07/the-error-model/#bugs-ar...

alfiedotwtf
9 replies
12h15m

I've been dealing with time and timezones for a long time, but this is the first time I have ever seen the "[Olson/Name]" suffix. Is that actually standard?

alfiedotwtf
5 replies
11h54m

Ah, thank you.

Now that I'm back at my desk I had to check: ISO 8601 does not include that suffix. However, it does look like there's an extension, RFC 9557, which looks like it's still in a proposed state.

I would personally caution against using these suffixes until there's wider adoption, because AFAIK the Olson database names themselves are not standardised on non-POSIX systems (i.e. you might have a hard time on Windows).

teohhanhui
0 replies
1h56m

That's in addition to RFC meaning "Request for Comments", when in fact they're standards lol

alfiedotwtf
0 replies
2h0m

Oh no... I've been treating "Proposed" as "looks nice but I'll look at it when it gets out of proposed." Doh!

alfiedotwtf
0 replies
2h1m

Ah. Yep, that makes sense. Thanks!

demurgos
8 replies
3h52m

The main issue I have with existing time libraries, in Rust or other ecosystems, is poor support for leap seconds. This is mostly caused by using UNIX timestamps instead of TAI internally, and this lib is unfortunately no different. There seems to be some way to support it with TZif files, but it does not have first-class support.

Here is the relevant Jiff issue with more details: https://github.com/BurntSushi/jiff/issues/7

UNIX timestamps don't use the SI second definition (instead a second is 1/86400th of the current day, so not all UNIX seconds have the same duration), which breaks correct duration computations. I understand that the tradeoff was to inherit compat with older time tracking methods and enable faster calendar formatting; I disagree that it was the right trade-off. UNIX timestamps mix representation concerns with data. In my opinion, leap seconds should be treated exactly like the 29th of February or time zones.

burntsushi
7 replies
3h48m

Why do you want it? What's your use case? And why doesn't a specialized scientific library like `hifitime` work for your use case?

Whenever people talk about leap seconds, it always seems to be in some abstract notion. But it's very rare to see folks connect them to real world use cases. I get the scientific use case, and I feel like that's well served by specialized libraries. Do we need anything else? I'm not sure that we do.

demurgos
3 replies
3h17m

This is about the "pit of success", being correct and predictable by default. A difference of timestamps not returning the corresponding elapsed wall-time is _very_ surprising.

I want to be able to compute durations between timestamps stored in the DB, received from API calls or retrieved from the system and get the right duration "out of the box". Computing these durations lets me apply business logic relying on it. A message can be editable for x amount of time, a token is valid for y amount of time, a sanction expires after z amount of time, etc.

For example, I want to issue some token valid for 60s. What should I set the expiry time to? `now + 60s`. Except if `now` is 2016-12-31T23:59:30Z, then most libs will return a time 61s in the future.

1 second may not be a big error, but it's still an error and depending on context it may be relevant. This is a systematic error unrelated to time sync / precision concerns, so it's pretty frustrating to see it being so common. It seems though that we won't have any new leap seconds in the near future so eventually it will just become a curiosity from the past and we'll be stuck with a constant offset between UNIX and TAI.

> I feel like that's well served by specialized libraries.

Agreed that you need a specialized lib for this, but my point is that _you shouldn't have to_ and the current situation is a failure of software engineering. Computing `t2 - t1` in a model assuming global synchronized time should not be hard. I don't mean it as a personal critique; this is not an easy problem to solve since UNIX timestamps are baked in almost everywhere. It's just disappointing that we still have to deal with this.

burntsushi
2 replies
2h51m

What I'm not clear on, though, is what the failure mode is in your scenario. What happens when it's wrong? Does something bad happen? If something is one second longer or shorter than what it ought to be on very rare occasions, then what goes wrong? I would, for example, imagine that the business use case of "editable for x amount of time" would be perfectly fine with that being plus-or-minus 1 second. It's not just about correctness, it's about the error and what it means.

A few months ago, Jiff did have leap second support. It worked. I know how to do it, but none of the arguments in its favor seem to justify its complexity. Especially when specialized libraries exist for it. You can't look at this in a vacuum. By making a general purpose datetime library more complex, you risk the introduction of new and different types of errors by users of the library that could be much worse than the errors introduced by missing leap second support.

demurgos
1 replies
1h58m

> You can't look at this in a vacuum. By making a general purpose datetime library more complex, you risk the introduction of new and different types of errors by users of the library that could be much worse than the errors introduced by missing leap second support.

Agreed, I can easily imagine that it could cause a situation where some numeric value is not interpreted correctly and it causes a constant offset of 37 seconds. UNIX timestamps are entrenched, so deviating from them introduces misuse risks.

Regarding my use-cases, I agree that these ones should still work fine. I could also come up with issues where a 1s error is more meaningful, but they would be artificial. The main problem I can see is using some absolute timestamp instead of a more precise timer in a higher frequency context.

Overall, it's the general discussion about correctness VS "good enough". I consider that the extra complexity in a lib is warranted if it means less edge cases.

burntsushi
0 replies
1h39m

> Overall, it's the general discussion about correctness VS "good enough". I consider that the extra complexity in a lib is warranted if it means less edge cases.

Yeah I just tend to have a very expansive view of this notion. I live by "all models are wrong, but some are useful." A Jiff timestamp is _wrong_. Dead wrong. And it's a total lie. Because it is _not_ a precise instant in time. It is actually a reference to a _range_ of time covered by 1,000 picoseconds. So when someone tells me, "but it's not correct,"[1] this doesn't actually have a compelling effect on me. Because from where I'm standing, everything is incorrect. Most of the time, it's not about a binary correct-or-incorrect, but a tolerance of thresholds. And that is a much more nuanced thing!

[1]: I try hard not to be a pedant. Context is everything and sometimes it's very clear what message is being communicated. But in this context, the actual ramifications of incorrectness really matters here, because it gets to the heart of whether they should be supported or not.

LegionMammal978
2 replies
2h19m

It's not really in Rust (at least, not unless I choose to rewrite it in WASM), but there is one case where a lack of leap-second support has really been a thorn in my side. I've been working on and off on a JS web application that performs certain high-precision astronomical calculations, and leap seconds are a big challenge w.r.t. future correctness. Ephemerides don't change very much, but leap seconds can.

I can embed a big list into my application, but then I have to remember to periodically update it until 2035, or maybe longer if they decide to keep leap seconds after all. So I can try to download a leap-seconds.list from somewhere, except that only a few sources set "Access-Control-Allow-Origin: *", and none promise to keep it set indefinitely.

The annoying part is, applications on both Windows (since 10) and Unix-like systems generally have access to up-to-date leap-second information; at worst, browsers can be expected to be updated regularly. But since the JS standards devs want to stick strictly to POSIX time, they provide no means to obtain this information from the inside, through Date, Intl.Locale, or the proposed Temporal.

This is all to say, I would really love to always have some way available to get "all leap-second info that the current system knows about, if it does have any such info". For its part, hifitime either downloads from a fixed URL (which doesn't have that Access-Control-Allow-Origin header IIRC), consults a fixed list, or gets the user to locate the info, which isn't nearly as general or foolproof as it could be.

burntsushi
1 replies
1h45m

Yeah the way Jiff's leap second support worked (before I ripped it out) is that it would just read from `/usr/share/zoneinfo/leap-seconds.list` (or `/usr/share/zoneinfo/leapseconds`, whichever was available) and use that with some rudimentary caching. That way, Jiff isn't responsible for keeping the list up-to-date. The sysadmin is. Just like what we do for tzdb.

LegionMammal978
0 replies
30m

Indeed. The problem is, in the absence of one good solution like that, there are no good solutions, as is the case with web JS. Currently (on the desktop or laptop), the best bet is hifitime's leap-seconds parser, but then the programmer is still responsible for coming up with a sane fallback path for all their target systems. (Which can be tricky for individuals to do, e.g., I have no way to tell whether the tz database is in an expected location on macOS or iOS.)

sam0x17
7 replies
12h49m

Not to mention that BurntSushi is the author of the entire rust regex ecosystem

dhosek
6 replies
11h49m

I remember getting into a debate with him on Reddit about something or other and then realizing who I was debating with and saying never mind.

ramon156
5 replies
11h44m

That shouldn't stop you from settling an argument (given it was respectful). The best way to learn is to be wronged by smarter people.

qup
1 replies
10h55m

I feel very certain that's not the best way to learn.

sam0x17
0 replies
5h21m

training data is training data

sam0x17
0 replies
5h22m

Yeah for example I've given him crap for years for not having the will to bring back compile-time regex in Rust even though all the pieces for it are there in his `regex-automata` crate ;)

komadori
0 replies
10h56m

Also, just because someone is famous/important/etc. doesn't mean they're always right. In fact, one of the dangerous things about being famous is that people stop being willing to disagree with you, and that can lead to becoming detached and warped as a person.

dhosek
0 replies
4h13m

Well, the flip side of it was that in benchmarking our equivalent codebases for Unicode segmentation, my code was significantly faster, to both of our surprises as it turned out.

sva_
6 replies
10h9m

It's pronounced 'Giff' (with a hard 'G')

sapiogram
5 replies
7h51m

Non-native speaker here, what does "hard G" even mean? G as in "go", or G as in "gin"?

Smaug123
2 replies
6h49m

"hard G" is the sound denoted "g" in IPA. The other "g" is "dʒ".

n_plus_1_acc
1 replies
6h36m

That is the convention in English at least. Other languages may differ.

Smaug123
0 replies
6h32m

Fortunately, since we are explicitly talking about English here, that convention is indeed the relevant one to use!

stonemetal12
0 replies
3h50m

Hard G is "go". The name of the format is a play on words: in the US there is a brand of peanut butter called "Jif". The file format is supposed to be pronounced the same way (soft G). Some people, like the OP, claim that the G is supposed to be pronounced like the G in graphics (hard G). IDK why anyone cares.

Vinnl
0 replies
7h15m

"G" as in "gadverdamme".

(Native speaker here - but of Dutch, the language known for its proper hard Gs. Unless you're from the South.)

noxs
5 replies
10h51m

The fact that we need such a complicated datetime library just means so many unncessary artificial complexities were introduced before (yes, daylight saving, leap seconds, etc.)

berkes
1 replies
10h27m

> unncessary artificial complexities

The very fact they are there, still used, and so on, contradicts "unnessary [sic]". Sure, it might be outdated now, or technically better alternatives might be there.

But, in the end, software that deals with "The Real World" is going to be a complex, illogical mess. Because the real world is a complex, illogical mess. We could make a time standard that is global and counts the resonant frequency of atoms. While technically superior, I will continue saying "The job took me 3h25 minutes" and not "The job took me 113,066,170,771,000,000 cycles", or even "The job took me 113066 Tera-cycles" or such. Messy, illogical and complex is often simply more practical. If only because "everyone does it that way".

nindalf
0 replies
8h59m

And we're going to say "let's do this a day from now", leaving the software to decide whether that's 24, 23 or 25 hours from now. It could be any of those things, depending on where it was said and the DST changes for that timezone.

Or conversely, is a specified instant considered "tomorrow"?
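
To make the 23-vs-24-hour case concrete, here is a small sketch using Jiff around the 2024 US spring-forward transition; the `civil::date(..).at(..)` constructor and the singular/plural unit methods follow the other examples in this thread, so treat the exact calls as assumptions:

    use jiff::{civil::date, ToSpan};

    fn main() -> anyhow::Result<()> {
        // Noon on the day before the US springs forward (2024-03-10 at 2am).
        let zdt = date(2024, 3, 9).at(12, 0, 0, 0).intz("America/New_York")?;
        // "A day from now": same wall-clock time tomorrow, 23 real hours later.
        let a_day_later = zdt.checked_add(1.day())?;
        // Exactly 24 elapsed hours: lands at 13:00 local time instead.
        let a_full_day_later = zdt.checked_add(24.hours())?;
        println!("{a_day_later}");
        println!("{a_full_day_later}");
        Ok(())
    }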

GoblinSlayer
1 replies
8h2m

A time library can be simple; it's just that Rust libraries tend to be philosophical for some reason. But that's only one of many design approaches.

Smaug123
0 replies
6h46m

They can certainly be simple and incomplete, or simple and incorrect; do you have an example of a simple, complete, and correct time library?

KingMob
0 replies
2h39m

Why don't we standardize on kilosecs and megasecs!?

7bit
5 replies
12h5m

So, ist it pronounced jiff or gif?

runiq
0 replies
4h55m

We're only minutes away from GAYPEG, an image format which you pronounce with a soft g, like in GIF.

sschueller
0 replies
11h57m

The author gets to decide IMO. Gif was always pronounced "jif" according to the creator so I hope the creator of this new tool will pronounce Jiff as "gif" to avoid confusion...

runiq
0 replies
11h41m

Imma pronounce it biff, it's a time handling library after all.

neallindsay
0 replies
5h24m

Yes

airstrike
4 replies
3h48m

> Jiff is pronounced like "gif" with a soft "g," as in "gem."

Over my dead body!

faitswulff
0 replies
3h34m

Well, it had a good run at the top of hackernews for a minute, but I think we can all see its glaring flaws now. I'll stick with chrono †

† pronounced "ch" as in "chase" of course, i.e. "trono"

aceazzameen
0 replies
2h15m

Huh, I thought Jiff was pronounced with a hard "J."

JohnTHaller
0 replies
3h22m

You pronounce it GIF instead of GIF?!?

mijoharas
3 replies
11h16m

This looks like a cool library.

Does anyone know why burntsushi is making this new library? I haven't messed around with times in Rust much, but do the existing libraries have performance problems? Or are the existing APIs awkward to use? Or is he just doing it for fun, or some other reason?

tikkabhuna
1 replies
5h54m

In Java they had the same problem. The standard library implementation wasn't great, so Joda-Time came along to address those issues. Java 8 then introduced a new date/time API that was heavily influenced by Joda-Time, with the benefit that, being in the standard library, it can be more widely adopted by library writers.

https://www.joda.org/joda-time/ https://www.baeldung.com/java-8-date-time-intro

WorldMaker
0 replies
3h41m

There's an additional related stepping stone here (it's name-dropped in the library's design document as well): TC39 has been hard at work on a proposal to standardize similar APIs in ECMAScript (JS), called Temporal: https://tc39.es/proposal-temporal/docs/

Temporal benefits from the JodaTime/Java 8+ date work, but also includes more recent IETF and IANA standards as other influences.

jcgrillo
3 replies
3h14m

What does it mean for a Span to be negative? This is one thing I really like about Durations: they can't be negative and therefore match physical reality.

burntsushi
2 replies
3h4m

ISO 8601-2:2019 defines it as:

    negative duration:
    duration in the reverse direction to the proceeding time scale
Matching "physical reality" is a non-goal. What's important is modeling the problem domain in a way that makes sense to programmers and helps them avoid mistakes. A negative span doesn't give you any extra expressivity that "subtract a span from a datetime" doesn't already give you. Both are a means to go backwards in time. And they line up with human concepts that talk about the past. So for example, if I say, "I went camping 1 year ago." I am expressing a concept that is modeled by a negative span.

And there are also just practical benefits to allowing a span to be signed. For example:

    use jiff::{ToSpan, Unit, Zoned};

    fn main() -> anyhow::Result<()> {
        let now = Zoned::now().intz("Europe/Kyiv")?;
        let next_year = now.checked_add(1.year())?;
        let span = now.since((Unit::Month, &next_year))?;
        println!("{span}");
        Ok(())
    }
Has this output:

    $ cargo -q r
    -P12m
If negative spans weren't supported, what would the behavior of this routine be? It could return an error. Or maybe an out-of-band sign. I'm not sure. But this seems like the most sensible thing.

And of course, Temporal has negative durations too. This design was copied from them.

jcgrillo
1 replies
2h33m

Thanks for the explanation. I agree signed spans make it easier to express concepts like "1 year ago" vs "1 year from now". And clearly, if the concept of a negative duration has made it into the standard, then it makes sense to support it. But I do wonder if there's still some value in being precise: if I think I've measured a negative duration (e.g. I look at my watch and write down t0, wait a bit, look at my watch again and write down t1, and find t0 > t1), that's something surprising that probably warrants further investigation. The explanation likely isn't "well, time just went backwards for a bit there" :P. This happens frequently in computers, though, so maybe making the programmer handle an error each time is excessive.

burntsushi
0 replies
1h35m

Yeah, it's hard for me to get a sense of where that would really matter in practice, or even when it would be correct to do so. You could add an `assert!(!span.is_negative());`. Although you have to be careful, because system time can change arbitrarily.

Your watch example is actually an important use case that specifically isn't served by Jiff. You should be using monotonic time (in Rust, that's `std::time::Instant`) for things like that. In that case, you really do get a guarantee that the duration between two instants (one in the past and one in the future) is non-negative. And if it weren't, then that would be worthy of an assert. But you get no such guarantees with system time.
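
For reference, the watch-style measurement with monotonic time needs nothing from Jiff; here is a minimal sketch with just the standard library, where the elapsed `Duration` cannot be negative:

    use std::time::{Duration, Instant};

    fn main() {
        let start = Instant::now();
        // ... the work being timed ...
        std::thread::sleep(Duration::from_millis(50));
        // `Instant` is monotonic, so `elapsed` never goes backwards the way
        // wall-clock (system) time can.
        let took: Duration = start.elapsed();
        println!("took {took:?}");
    }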

nsajko
2 replies
12h24m

The title doesn't conform to the HN guidelines, dang.

croemer
0 replies
10h40m

Came here to say this :)

The original tag line is "Jiff is a datetime library for Rust that encourages you to jump into the pit of success."

carbonatom
0 replies
6h22m

The title doesn't conform to the HN guidelines, dang.

+1

I don't understand why people are downvoting your comment. This title is absolutely a violation of HN guidelines. And a very blatant one no less!

Do people not read https://news.ycombinator.com/newsguidelines.html anymore?

tomas789
1 replies
12h27m

The state of calendar libraries in Rust is less than ideal. When working with Pandas, there is .tz_convert() and .tz_localize(), and that is basically it for timezone conversions. My benchmark for this is: given a date, get the first hour of the CET/CEST day in UTC. In Pandas this is a very simple operation. In Chrono, you have to have a NaiveDate, convert it to DateTime<FixedOffset>, and then to DateTime<Utc>. And I couldn't find any pattern in those conversions: sometimes it is a member function, other times a static method on the timezone object.

I hope somebody will rectify this at some point. Jiff seems like a step in the right direction, but the syntax is sometimes weird. I guess I'd welcome something more predictable.

burntsushi
0 replies
7h35m

That should be `date(y, m, d).intz("Europe/Rome")?`. If you want to get UTC from there, then add `with_time_zone(TimeZone::UTC)`.
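
Spelled out for the benchmark above, a minimal sketch (the `jiff::civil::date` and `jiff::tz::TimeZone` import paths are assumptions based on the snippets in this thread):

    use jiff::{civil::date, tz::TimeZone};

    fn main() -> anyhow::Result<()> {
        // First instant of the civil day 2024-10-27 in Rome (CEST at midnight).
        let start_of_day = date(2024, 10, 27).intz("Europe/Rome")?;
        // The same instant, viewed in UTC.
        let in_utc = start_of_day.with_time_zone(TimeZone::UTC);
        println!("{start_of_day} -> {in_utc}");
        Ok(())
    }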

> Jiff seems like a step in the right direction, but the syntax is sometimes weird. I guess I'd welcome something more predictable.

Can you say more? How can Jiff be better?

hardwaresofton
1 replies
2h30m

"babe wake up, the new burntsushi just dropped"[0]

On a serious note, any rustaceans in here know the reason the crate doesn't use something like `tracing`? A bit too heavyweight maybe -- it is about half the size on crates?

`log` is of course fine, and I don't know that tracing down to the calls for tz operations is a normal use case, but always interested to know if there was a specific why here.

[0] https://knowyourmeme.com/memes/wake-up-babe

burntsushi
0 replies
1h53m

I think `log` is the lowest common denominator, right? And `tracing` interops with it just fine? That's why I used it in Jiff, I guess. And it's also been the thing I've been using since the birth of crates.io itself. Jiff doesn't have any requirements other than emitting messages at a specific log level.

But no, I'm sure `tracing` would have worked fine too.
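
For what it's worth, a downstream app only needs some `log` consumer to surface those messages; here is a minimal sketch using `env_logger` (whether Jiff's logging sits behind a feature flag, and what it actually emits, are assumptions here):

    fn main() {
        // Run with e.g. RUST_LOG=jiff=trace to see whatever Jiff logs via the
        // `log` facade; `tracing` users can bridge the same records instead.
        env_logger::init();

        let now = jiff::Zoned::now();
        println!("{now}");
    }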

returnfalse
0 replies
10h12m

Looks cool, thank you burntsushi for this. I have similar complaints about existing date-time libraries in Rust. I'll replace chrono/time with Jiff in my projects.

raasdnil
0 replies
10h41m

Has to be pronounced gif right?

pkulak
0 replies
2h56m

Thank you! Dealing with time, calendars and durations has always been the hardest part of Rust for me. And yes, I’m including the borrow checker and async!

It’s so painful to come from something like the JVM, where time operations are formed in near-natural language. From a quick glance, this looks even better than java.time.

mrbluecoat
0 replies
4h22m

bonus: it's pronounced like 'gif' ;)