Dada, an experimental new programming language

happens
92 replies
6h17m

It's weird, I want pretty much the exact opposite of this: a language with the expressive type system and syntax of Rust, but with a garbage collector and a runtime, at the cost of performance. Basically Go, but with Rust's type system.

I'm aware that there are a few languages that come close to this (Crystal, iirc), but in the end it's adoption and the ecosystem that keep me from using them.

dartos
18 replies
5h59m

That sounds… bad?

The whole point of Rust's type system is to try to ensure safe memory usage.

Opinions are opinions, but if I’m letting my runtime handle memory for me, I’d want a lighter weight, more expressive type system.

afavour
5 replies
5h30m

I like Rust’s type system just fine but for me it’s types combined with language features like matching that draw me to Rust. When I was still learning I made an entire project using Arc<> with no lifetimes at all and it was actually a great experience, even if it’s not the textbook way to use Rust.

sanderjd
4 replies
4h46m

Honestly, I think syntax for Arc (and/or Rc or some generalization of the two) and more "cultural" support for writing in that style would have benefitted rust back when 1.0 was being finalized. But I think the cow is out of the barn now on what rust "is" and that it isn't this.

cmrdporcupine
2 replies
4h35m

Yes, if you think about it, it's a bit weird that async gets first-class syntactic treatment in the language but reference counting does not. A similar approach of adding a syntactic form but not mandating a particular impl could have been taken, I think.

Same for Box, but in fact Rust went the opposite way and turfed the Box ~ sigil.

Which I actually feel was a mistake, but I'm no language designer.

zozbot234
0 replies
4h19m

Async has to get first-class treatment in the syntax because the whole point of it is a syntax-level transformation, turning control flow inside out. You can also deal with Future<> objects manually, but that's harder. A special syntax for boxed variables adds nothing over just using Box<> as part of the type, and similarly for Rc<> (note that in any language you'll have to disambiguate between, e.g., cloning the Rc reference itself vs. duplicating its contents, except that Rust does this without any special syntax).
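
Concretely, the two operations look roughly like this in Rust (a minimal sketch):

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(vec![1, 2, 3]);

        // Clones the Rc handle: bumps the reference count, copies no data.
        let b = Rc::clone(&a);

        // Clones the contents: allocates a fresh Vec with copied elements.
        let c: Vec<i32> = (*a).clone();

        println!("{} {:?} {:?}", Rc::strong_count(&a), b, c); // 2 [1, 2, 3] [1, 2, 3]
    }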

sanderjd
0 replies
3h14m

Yeah, but personally I think Rc/Arc is more deserving of syntax than Box!

steveklabnik
0 replies
2h41m

A long time ago, it did have specialized syntax! We fought to remove it. There’s a variety of reasons for this, and maybe it would make sense in another language, but not Rust.

umanwizard
4 replies
5h54m

I’m assuming by rust’s type system they mean without lifetimes. In which case it’s existed in lots of GC languages (OCaml, Haskell) but no mainstream ones. It isn’t really related to needing a GC or not.

gpderetta
2 replies
5h26m

You still want RAII and unique references, but rely on GC for anything shared, as if you had a built-in reference-counted pointer.

I do also believe this might be a sweet spot for a language, but the details might be hard to reconcile.

umanwizard
1 replies
3h45m

I haven’t used Swift so I might be totally wrong but doesn’t it work sort of like you describe? Though perhaps with ARC instead of true GC, if it followed in the footsteps of Objective-C.

gpderetta
0 replies
2h27m

Possibly, yes. I haven't used swift either though. Does it have linear/affine types?

Edit: I would also prefer shared nothing parallelism by default so the GC can stay purely single threaded.

dartos
0 replies
2h43m

Without lifetimes, Pins, Boxes, Clone, Copy, and Rc (Rc as part of the type itself, at least)

wongarsu
3 replies
5h4m

Rust's type system prevents bugs far beyond mere memory bugs. I would even go as far as claiming that the type system (together with the way the standard library and ecosystem use it) prevents at least as many logic bugs as memory bugs.

dartos
1 replies
2h44m

The type system was built to describe memory layouts of types to the compiler.

But I don’t think it prevents any more logic bugs than any other type system that requires all branches of match and switch statements to be implemented. (Like elm for example)

speed_spread
0 replies
1h46m

It prevents a lot more than that. For example, it prevents data races through Send/Sync trait propagation.

kaba0
0 replies
23m

Besides preventing data races (but not other kinds of race conditions), it is not at all unique. Haskell, OCaml, Scala, F# all have similarly strong type systems.

sanderjd
1 replies
4h49m

The whole point of Rust's type system is to try to ensure safe memory usage.

It isn't though. The whole trait system is unnecessary for this goal, yet it exists. ADTs are unnecessary to this goal, yet they exist. And many of us like those aspects of the type system even more than those that exist to ensure safe memory usage.

dartos
0 replies
2h41m

It is the first and foremost goal of every language choice in rust.

I think traits muddy that goal, personally, but their usefulness outweighs the cost (Box<dyn ATrait>)

I should’ve probably said “the whole point of Rust's type system, other than providing types and generics to the language”

But I thought that went without saying

BWStearns
0 replies
3h50m

The whole reason I got interested in Rust in the first place was because of the type system. I viewed it as "Haskell types but with broad(er) adoption". The fact that it also has this neat non-GC but memory safe aspect was cool and all but not the main sell for me.

gary17the
11 replies
5h31m

If you do not want to mess with the Rust borrow checker, you do not really need a garbage collector: you can rely on Rust reference counting. Use 1.) Rust reference-counted smart pointers[1] for shareable immutable references and 2.) Rust interior mutability[2] for non-shareable mutable references checked at runtime instead of compile time. Effectively, you will be writing a kind of verbose Golang with Rust's expressiveness.

[1] https://doc.rust-lang.org/book/ch15-04-rc.html

[2] https://doc.rust-lang.org/book/ch15-05-interior-mutability.h...
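
A minimal sketch of that style (Rc for shared ownership, RefCell for runtime-checked mutation):

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Rc gives shared ownership; RefCell moves the borrow checks to runtime.
        let scores = Rc::new(RefCell::new(vec![10, 20]));

        let also_scores = Rc::clone(&scores);
        also_scores.borrow_mut().push(30);

        println!("{:?}", scores.borrow()); // [10, 20, 30]
    }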

amw-zero
6 replies
5h15m

A language has a paved road, and when you go off of that road you are met with extreme annoyance and friction every step of the way.

You’re telling people to just ignore the paved road of Rust, which is bad advice.

gary17the
2 replies
4h45m

No, not really. Firstly, there is no significant "friction" to using Rust smart pointers and interior mutability primitives, as those constructs have been added to Rust for a reason: to solve certain borrow checker edge cases (e.g., multiply interconnected data structures), so they are treated by the Rust ecosystem as first-class citizens. Secondly, those constructs make a pretty good educational tool. By the time people get to know Rust well enough to use those constructs, they will inevitably realize that mastering the Rust borrow checker is just one book chapter away, to be read out of passion or boredom.

wredue
1 replies
3h15m

I find quite a lot of friction in being demanded to understand all of the methods, what they do, when you’d use them, why you’d choose one over another that does a slightly different thing, but maybe still fits.

The method documentation alone in reference counting is more pages than some entire programming languages. That’s beside the necessary knowledge for using it.

rcxdude
0 replies
3h44m

Reference counting and locks are often the easy path in Rust. It may not feel like it because of the syntax overhead, but I firmly believe it should be one of the first solutions on the list, not a last resort. People get way too fixated on trying to prove to the borrow checker that something or another is OK, because they feel like they need to make things fast, but it's rare that the overhead is actually relevant.

dewbrite
0 replies
4h54m

I strongly disagree that smart pointers are "off the paved road". I don't even care to make specific arguments against that notion, it's just a terrible take.

Groxx
0 replies
4h34m

It's telling people to avoid the famously hard meme-road.

Mutexes and reference counting work fine, and are sometimes dramatically simpler than getting absolutely-minimal locks like people seem to always want to do with Rust.
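
For instance, a single coarse Arc<Mutex<...>> around the whole structure is often all you need (a minimal sketch):

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // One coarse lock around the whole map: no lifetimes, no fine-grained locking.
        let counts = Arc::new(Mutex::new(HashMap::new()));

        let handles: Vec<_> = (0..4)
            .map(|i| {
                let counts = Arc::clone(&counts);
                thread::spawn(move || {
                    *counts.lock().unwrap().entry(i % 2).or_insert(0) += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{:?}", counts.lock().unwrap());
    }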

zozbot234
1 replies
4h47m

This is what Swift does, and it has even lower performance than tracing GC.

(To be clear, using RC for everything is fine for prototype-level or purely exploratory code, but if you care about performance you'll absolutely want to have good support for non-refcounted objects, as in Rust.)

gary17the
0 replies
4h29m

An interesting point, but I would have to see some very serious performance benchmarks focused specifically on, say, RC Rust vs. GC Golang in order to entertain the notion that an RC PL might be slower than a GC PL. Swift isn't, AFAIK, a good yardstick of... anything in particular, really ;) J/K. Overall PL performance is not only dependent on its memory management, but also on the quality of its standard library and its larger ecosystem, etc.

MuffinFlavored
1 replies
2h26m

Can you help me understand when to use Rc<T> instead of Arc<T> (atomic reference counter)?

Edit: Googled it. Found an answer:

The only distinction between Arc and Rc is that the former is very slightly more expensive, but the latter is not thread-safe.

gary17the
0 replies
2h6m

The distinction between `Rc<T>` and `Arc<T>` exists in the Rust world only to allow the Rust compiler to actually REFUSE to even COMPILE a program that uses a non-thread-safe primitive such as a non-atomic (thus susceptible to thread race conditions) reference-counted smart pointer `Rc<T>` with a thread-bound API such as `thread::spawn()`. (Think 1-AM-copy-and-paste from a single-threaded codebase into a multi-threaded codebase that crashes or leaks memory 3 days later.) Otherwise, `Rc<T>`[1] and `Arc<T>`[2] achieve the same goal. As a general rule, many Rust interfaces exist solely for the purpose of eliminating the possibility of particular mistakes; for example, `Mutex<T>` `lock()`[3] is an interesting one.

[1] https://doc.rust-lang.org/rust-by-example/std/rc.html

[2] https://doc.rust-lang.org/rust-by-example/std/arc.html

[3] https://doc.rust-lang.org/std/sync/struct.Mutex.html
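
To make that compile-time refusal concrete, a minimal sketch (swap Arc for Rc and the program is rejected, because Rc is not Send):

    use std::sync::Arc;
    use std::thread;

    fn main() {
        let shared = Arc::new(42);
        let worker = {
            let shared = Arc::clone(&shared);
            // Swap Arc for Rc here and this no longer compiles: the compiler
            // rejects it because `Rc<i32>` cannot be sent between threads (not `Send`).
            thread::spawn(move || println!("{}", shared))
        };
        worker.join().unwrap();
        println!("{}", shared);
    }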

overstay8930
10 replies
5h56m

You have awoken the ocaml gang

sanderjd
7 replies
4h51m

Yeah, ocaml is awesome! Frankly, if it had a more familiar syntax but the same semantics, I think its popularity would have exploded in the last 15 years. It's silly, but syntax is the first thing people see, and it is only human to form judgments during those moments of first contact.

zozbot234
2 replies
4h41m

Frankly, if it had a more familiar syntax but the same semantics

That's what ReasonML is? Not quite "exploding" in popularity, but perhaps more popular than Ocaml itself.

sanderjd
0 replies
2h1m

Interesting! I'm actually unaware of this, but will look into it.

ericjmorey
0 replies
2h36m

Don't forget ReScript

anentropic
2 replies
3h21m

Funny, because the semicolons and braces syntax is one of the things that puts me off Rust a bit, and I was not excited to see it in Dada

sanderjd
0 replies
2h5m

It isn't necessarily my preference either, but it's the most familiar style of syntax broadly, and that matters more for adoption than my personal preferences do.

estebank
0 replies
2h22m

Syntax in programming languages is a question of style and personal preference. At the end of the day, syntax is meant to help programmers communicate intent to the compiler. More minimalist syntax trades away redundancy and specificity for less typing and reading. More verbose, even redundant, syntax is in my opinion better for languages, because it gives the compiler and humans "flag posts" marking the intent of what was written. For humans that can be a problem, because when there are two things that need to be written for a specific behavior, they will tend to forget one of them; but for compilers that's great, because it gives them a lot of contextual information for recovery and for more properly explaining to the user what the problem was.

Rust could have optional semicolons. If you go and remove random ones in a file, the compiler will tell you exactly where to put them back, 90% of the time, when it isn't ambiguous. But in an expression-oriented language you need a delimiter.

bmitc
0 replies
4h9m

F# has better syntax but is ignored. :(

galangalalgol
1 replies
5h22m

That is probably the closest, especially if they add ownership. That was the Rust inventor's original goal, not just safety at minimal performance cost. I think ownership should be a minimal requirement for any future language, and we should bolt it on to any that we can. Fine-grained permissions for dependency trees as well. I like static types mostly because they let me code faster, not for correctness, though strong types certainly help with that. JIT makes static types have some of the same ergonomic problems as dynamic ones, though. I think some sort of AGI enslaved to do type inference and annotate my code might be ok, and maybe it could solve FFI for complex types over the C ABI while it is at it.

rpeden
9 replies
5h41m

You might enjoy F#. It's a lot like OCaml (which others have mentioned) but being part of the .NET ecosystem there are libraries available for pretty much anything you might want to do.

jug
8 replies
4h12m

Yes, F# is an often forgotten gem in this new, brighter cross-platform .NET world. :)

asplake
7 replies
3h8m

:-) Is F# a contender outside the .NET world?

bmitc
3 replies
1h11m

What do you mean by "outside the .NET world"? F# is a .NET language (more specifically a CLR language). That question seems to be like asking "are Erlang and Elixir contenders outside of the BEAM world?" or "is Clojure a contender outside of the JVM world?".

F# being on top of the CLR and .NET is a benefit. It is very easy to install .NET, and it comes with a huge amount of functionality.

If you're asking if the language F# could be ported to another VM, then I'd say yes, but I don't see the point unless that VM offered similar and additional functionality.

You can use F# as if C# didn't exist, if that's what you mean, and by treating .NET and CLR as an implementation detail, which they effectively are.

kaba0
1 replies
25m

You are generally right, but Clojure is a bad example, it is quite deliberately a “hosted” language, that can and does have many implementations for different platforms, e.g. ClojureScript.

bmitc
0 replies
5m

Yeah, that's true. I forgot about that. I did think of Clojure CLR, but I don't get the impression that it is an all that natural or widely used implementation, so I ignored it. ClojureScript is obviously much more used, although it is still a "different" language.

https://github.com/clojure/clojure-clr

neonsunset
0 replies
41m

This conversation could be referring to https://fable.io/

Other than that, the question is indeed strange and I agree with your statements.

posix_monad
2 replies
2h26m

There aren't many languages that can do server-side and browser-side well. F# is one of them!

asplake
1 replies
1h31m

Non .NET server-side?

posix_monad
0 replies
0m

You can do Node.js with F#

But these days .NET is a great server-side option. One of the fastest around, with a bit of tuning.

BoppreH
8 replies
5h24m

I've always wondered if global type inference wouldn't be a game changer. Maybe it could be fast enough with caching and careful language semantics?

You could still have your IDE showing you type hints as documentation, but have inferred types to be more fine grained than humans have patience for. Track units, container emptiness, numeric ranges, side effects and idempotency, tainted values for security, maybe even estimated complexity.

Then you can tap into this type system to reject bad programs ("can't get max element of potentially empty array") and add optimizations (can use brute force algorithm because n is known to be small).

Such a language could cover more of the script-systems spectrum.

bananapub
5 replies
5h20m

One of the other reasons global inference isn't used is that it causes weird spooky action at a distance: changing how something is used in one place will break other code.

BoppreH
4 replies
5h16m

I've heard that, but never seen an example*. If the type system complains of an issue in other code after a local change, doesn't that mean that the other code indeed needs updating (modulo false positives, which should be rarer with granular types)?

Or is this about libraries and API compatibility?

* I have seen examples of spooky-action-at-a-distance where usage of a function changes its inferred type, but that goes away if functions are allowed to have union types, which is complicated but not impossible. See: https://github.com/microsoft/TypeScript/issues/15114

nu11ptr
2 replies
4h49m

Try writing a larger OCaml program and not using interface files. It definitely happens.

BoppreH
1 replies
3h31m

I've never used OCaml, so I'm curious to what exactly happens, and if language design can prevent that.

If I download a random project and delete the interface files, will that be enough to see issues, or is it something that happens when writing new code?

nu11ptr
0 replies
3h11m

If you delete your interface files and then change the type used when calling a function it can cascade through your program and change the type of the function parameter. For this reason, I generally feel function level explicit types are a fair compromise. However, making that convention instead of required (so as to allow fast prototyping) is probably fine.
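
Rust mostly avoids this by requiring explicit signatures on items, but the same spooky-action effect shows up in the one place it does infer from usage, closures. A minimal sketch:

    fn main() {
        // The parameter type of `add_one` is inferred from its first use...
        let add_one = |x| x + 1;
        let a = add_one(1u8);

        // ...so adding a use with a different type elsewhere is what gets flagged,
        // far from the definition: uncommenting this yields "expected `u8`, found `u32`".
        // let b = add_one(2u32);

        println!("{a}");
    }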

Narishma
0 replies
3h0m

If the type system complains of an issue in other code after a local change, doesn't that mean that the other code indeed needs updating

The problem is when it doesn't complain but instead infers some different type that happens to match.

mattgreenrocks
1 replies
3h29m

Type inference is powerful but probably too powerful for module-level (e.g. global) declarations.

Despite type systems being powerful enough to figure out what types should be via unification, I don't think asking programmers to write the types of module declarations is too much. This is one area where forcing work on the programmer is really useful to ensure that they are tracking boundary interface changes correctly.

BoppreH
0 replies
1h29m

People accept manually entering types only at a relatively high level. It'd be different if types were "function that takes a non-empty list of even numbers between 2 and 100, and a possibly tainted non-negative non-NaN float in meters/second, returning a length-4 alphanumeric string without side effects in O(n)".

cmrdporcupine
6 replies
5h59m

... so OCaml or StandardML then

bradrn
5 replies
5h53m

Or Haskell!

sanderjd
3 replies
4h40m

Ocaml, yes, but not haskell. It does include these things the parent wants, but similar to how Rust ends up being quite "captured" by its memory semantics and the mechanics necessary to make them work, haskell is "captured" by laziness and purity and the mechanics necessary to make those work.

Also, syntax does actually matter, because it's the first thing people see, and many people are immediately turned off by unfamiliarity. Rust's choice to largely "look like" c++/java/go was a good one, for this reason.

cmrdporcupine
2 replies
4h29m

I learned SML/NJ and OCaml a bit over 20 years ago and liked them, but when I tried my hand at Haskell my eyes glossed over. I get its power. But I do not like its syntax, it's hard to read. And yes, the obsession with purity.

sanderjd
1 replies
4h3m

Exactly right. I quite like haskell in theory, but in practice I quite dislike both reading and writing it.

But I like ocaml both in theory and practice (also in part due to having my eyes opened to SML about 20 years ago).

cmrdporcupine
0 replies
3h42m

I actually preferred SML/NJ when I played with writing it, but OCaml "won" in the popularity contest. Some of the things that made OCaml "better" (objects, etc.) haven't aged well, either.

Still with OCaml finally supporting multicore and still getting active interest, I often ponder going back and starting a project in it someday. I really like what I see with MirageOS.

These days I just work in Rust and it's Ok.

aloisdg
0 replies
5h28m

or F#

iainmerrick
5 replies
5h57m

TypeScript maybe?

actionfromafar
3 replies
5h31m

If we are going that far, I suggest hopping off just one station earlier at Crystal-lang.

sanderjd
2 replies
4h44m

Yep, I think Crystal is the thing that is making a real go at essentially this suggestion. And I think it's a great language and hope it will grow.

iainmerrick
1 replies
4h2m

Do you know how Crystal compares with Haxe? That's another one that might fit the requirements nicely.

actionfromafar
0 replies
1h43m

I don't understand the Haxe documentation but it seems to also have some kind of algebraic data type.

k__
0 replies
2h49m

Maybe ReScript?

quadrature
2 replies
6h4m

You’ve just described scala.

sanderjd
1 replies
4h45m

Ha, no. Scala does contain this language the parent described, but alongside the huge multitudes of other languages it also contains.

kaba0
0 replies
24m

Scala is actually a small language. It is just very expressive, but its complexity is quite different from, say, C++'s, which has many features.

malermeister
1 replies
5h58m

You might like Kotlin. It'll also give you access to the entire JVM ecosystem.

helsinki
0 replies
2h44m

Is that a blessing or a curse?

keeperofdakeys
1 replies
3h36m

The funny thing is that rust used to have things like garbage collection. For the kind of language Rust wanted to be, removing them was a good change. But there could always be a world where it kept them.

https://pcwalton.github.io/_posts/2013-06-02-removing-garbag...

weatherlight
0 replies
2h41m

Check out Gleam.

thegeekpirate
0 replies
6h8m

Right? One day... sigh

sanderjd
0 replies
4h53m

Totally agree! But I think it's a "both and" rather than an "either or" situation. I can see why people are interested in the experiment in this article, and I think your and my interest in the other direction also makes sense.

posix_monad
0 replies
2h28m

There are a bunch of languages that fit-the-bill already. F#, OCaml, Haskell and Scala all come to mind.

You might have to lose a few parens though!

naasking
0 replies
4h14m

Isn't that just the Boehm GC with regular Rust?

mattgreenrocks
0 replies
3h36m

Kotlin scratches that itch well for me. My only complaints are that exceptions are still very much a thing to watch for, and that ADT declarations are quite verbose compared with purer FP languages.

Still, the language is great. Plus, it has Java interop, JVM performance, and Jetbrains tooling.

lawn
0 replies
3h32m

Take a look at Gleam!

For me it seems like the perfect match.

geodel
0 replies
2h42m

but in the end it's adoption and the ecosystem that keeps me from using them.

Well, since you can't really use a language without high adoption, even if something comes along with all the features you want, you still won't be able to use it for decades or longer.

efficax
0 replies
5h7m

use scala

bmitc
0 replies
4h10m

Isn't that F#?

GardenLetter27
0 replies
6h14m

Yeah, same for a scripting language too - something like Lua but as expressive as Rust.

There is Rune, but like you mentioned the issue is adoption, etc.

yamrzou
57 replies
6h39m

Their Hello, Dada! example:

print("...").await

I'm coming from Python, and I can't help but ask: If my goal as a programmer is to simply print to the console, why should I care about the await? This already starts with non-zero complexity and some cognitive load, like the `public static void main` from Java.

baq
18 replies
6h34m

I'd say you want something like 'debug_msg()' for this.

'print()' should be async because it does IO. In the real world most likely you'd see the output once you yield.

DinaCoder99
10 replies
6h28m

Huh, typically print is the debug message function vs explicitly writing to stdout

dartos
6 replies
5h58m

I don’t think so.

Normally print isn’t a debug message function, people just use it like that. (it normally works on non debug builds)

couchand
4 replies
5h41m

Printing directly to the console, even in a console app, is for debug purposes only.

If your console app is writing output to any device, it must, for instance, handle errors gracefully.

That means, at least in Rust, write! rather than print!.
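
A minimal sketch of the difference (println! panics if stdout goes away, e.g. on a broken pipe, while writeln! hands back an io::Result you can handle):

    use std::io::{self, Write};

    fn run() -> io::Result<()> {
        let stdout = io::stdout();
        let mut out = stdout.lock();

        // writeln! returns an io::Result, so a closed pipe becomes an error we
        // can handle, instead of the panic println! produces when stdout fails.
        writeln!(out, "primary program output")?;
        out.flush()
    }

    fn main() {
        if let Err(e) = run() {
            eprintln!("could not write output: {e}");
            std::process::exit(1);
        }
    }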

lkirkwood
3 replies
5h12m

What makes you say that? I almost always use println! over write!.

From the docs: "Use println! only for the primary output of your program. Use eprintln! instead to print error and progress messages."

couchand
2 replies
4h41m

What makes me say that a well-built program properly handles errors?

myrmidon
1 replies
3h53m

Panic is a perfectly proper way for a well-built program to stop execution.

There is no point in juggling around Result types if a failure means that you can not recover/continue execution. That is in fact exactly what panic! is intended for [1].

[1]: https://doc.rust-lang.org/book/ch09-03-to-panic-or-not-to-pa...

couchand
0 replies
2h28m

Panic is perfectly fine in certain cases, but it's absolutely not a general error-handling mechanism for Good Programs (TM). (Some contexts excluded, horses for courses and all that)

You can and should recover from bog standard IO failures in production code, and in any case you'd better not be panicking in library code without making it really clear that it's justified in the docs.

If your app crashes in flames on predictable issues it's not a good sign that it handles the unpredictable ones very well.

nine_k
0 replies
5h26m

Production builds should retain all debug prints, only hide them behind a flag. This helps you preserve sanity when troubleshooting something.

wongarsu
1 replies
4h58m

In Rust, the common debug message function would be log::info! or log::debug!, with two lines of setup to make logs print to stderr. Or for something more ad hoc, there's dbg! (which adds context about what you are printing, and doesn't care about your logging config). Not that people don't use print for the purpose, but it's basically never the best choice. I assume Dada is conceived with the same mindset.
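
Something like this, as a sketch (assuming the log facade with env_logger as one common backend choice):

    // Sketch assuming the `log` facade plus `env_logger` (one common backend) in Cargo.toml.
    fn main() {
        env_logger::init(); // logs go to stderr, filtered by the RUST_LOG env var

        log::info!("starting up");          // shown with RUST_LOG=info (or more verbose)
        log::debug!("detailed state here"); // shown with RUST_LOG=debug

        // dbg! prints the file, line, and expression along with its value, to stderr,
        // regardless of any logging configuration.
        let answer = dbg!(6 * 7);
        log::info!("answer = {answer}");
    }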

DinaCoder99
0 replies
1h39m

I disagree—println! is far more common for everyday printf debugging than the log crate is. Do I have any evidence of this? No, but it takes less to type and is harder to mess up with log levels while working just as effectively.

jeroenhd
0 replies
5h17m

Print is a debug function in some languages, but it's usually just stdout. You can add all kinds of logging libraries that wrap around print() with prefixes and control sequences to add colours, but I generally don't bother with those myself. In those circumstances, I would use something like logger.debug() instead of plain print(), though.

I personally find myself using print debugging as a last resort when the debugger doesn't suffice.

est
2 replies
5h42m

gevent handles async fine without the explicit async/await

.NET core will introduce something similar

nine_k
1 replies
5h28m

The cost of it is that when you need to make something explicitly async, you have to wrap it into a greenlet in a much more involved way.

JavaScript lets you do it much more ergonomically.

gpderetta
0 replies
5h19m

Doesn't seem very involved; for example (cilk inspired syntax):

    let f = spawn { print() }  // fork
    ...
    wait f // join. f is a linear type

You only pay the complexity cost if you need it.

MatthiasPortzel
2 replies
4h40m

'print()' should be async because it does IO

What if I want to do synchronous IO?

n2d4
0 replies
4h25m

Just from glancing over the docs, that doesn't seem supported:

> Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.

Good riddance, IMO — never been a fan of blocking IO. Dada does have threads though, so I wonder how that works out. (Forcing async/await makes a lot more sense in JavaScript because it's single-threaded.)

dahart
0 replies
3h12m

Then use .await?

yamrzou
0 replies
5h10m

“The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.” — Brian Kernighan, co-creator of Unix

Alifatisk
16 replies
6h32m

The reasoning with await is valid, it's an I/O call, but the await should maybe be hidden inside the print then?

mtsr
5 replies
6h20m

Maybe it’s actually a non-leaky abstraction because it makes the async-nature explicit. The alternative is hiding it, but it’s still going to affect your code, making that effectively a leaky abstraction.

luke-stanley
3 replies
5h43m

Maybe there could be something like an aprint() wrapper, if the authors wanted to make the async nature explicit? Or something else; probably not this, for one of the most common things a programmer must do.

steveklabnik
2 replies
2h35m

Why is aprint “non leaky” but print.await “leaky”?

luke-stanley
1 replies
2h16m

Hey Steve, I wouldn't say that "print.await" is a leaky abstraction. I think "print.await" is explicit and that's good; it communicates its abstraction fairly clearly, presumably following a pattern used commonly in this imagined language.

I suppose that a wrapper like "aprint" (a convenience function labelled async, like with an "a" prefix) would be a bit better than having people continually try using print, not await it, and not get the expected output in stdout (or whatever stream it's sent to) while they are in the middle of trying to test something or otherwise get something working, because I'm of the opinion that common things should be easy. Maybe "people would generally expect a print function to just work and not return a promise or something" is an abstraction? "aprint" might actually be the wrong name; I'm not sure I've really thought about it right.

steveklabnik
0 replies
1h35m

I agree with you personally on print.await; maybe I replied to the wrong person on this thread, ha!

gpderetta
0 replies
2h17m

Why is I/O so special that it needs to be explicitly marked across the call stack? What about memory allocation, which can arbitrarily delay a process? Should allocating functions be transitively annotated? What about functions that lock a mutex or wait on some synchronisation primitive? What about those that signal a synchronization primitive? What about floating points, which can raise exceptions? What about panicking functions?

Either all side effects should be marked or none should. Retconning await annotations as a useful feature instead of a necessary evil is baffling.

klabb3
4 replies
6h14m

To be precise: the contract depends on the implementation. Here’s an example:

I write an in-memory kv cache. It's in memory, so no async needed. Now I create a trait and implement a second version with file backing. Now the children are crying because async needs to be retroactively added, and also "why? this makes no sense", etc.
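
A minimal sketch of that scenario (hypothetical trait and type names):

    use std::collections::HashMap;

    // Hypothetical in-memory cache behind a synchronous trait.
    trait KvCache {
        fn get(&self, key: &str) -> Option<String>;
    }

    struct MemCache(HashMap<String, String>);

    impl KvCache for MemCache {
        fn get(&self, key: &str) -> Option<String> {
            self.0.get(key).cloned()
        }
    }

    fn main() {
        let mut data = HashMap::new();
        data.insert("answer".to_string(), "42".to_string());
        let cache = MemCache(data);
        println!("{:?}", cache.get("answer"));
    }

    // A file-backed implementation would want `async fn get`, which means changing the
    // trait, and therefore every caller, to async: the retroactive change described above.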

naasking
3 replies
3h35m

It does make sense if you want other types of resources, like time and memory, to also be part of the contract. Async annotations let you do this but hiding the asynchrony does not.

klabb3
2 replies
3h6m

"Makes sense" might be an overstatement, but ok. Then why do functions with sync syscalls (i.e. file, timer or mutex ops) not expose the same contractual differences? They're just regular functions in most languages, including Rust.

Perhaps anything involving syscalls should be exposed and contractual. I doubt it, but maybe it's important for some obscure ownership-of-resources reason. But then why the inconsistency between traditional and pooled syscalls? The only difference is whether the runtime sits in the kernel or in user space. The only ones who should care are the runtime folks.

My take has been for years that this is throwing complexity over the fence and shaming users for not getting it. And even when they do get it, they Arc<Mutex> everything anyway, in which case you are throwing the baby out with the bathwater (RAII, single ownership, static borrowing).

naasking
1 replies
2h39m

Then why do functions with sync syscalls (ie file, timers or mutex ops) not expose the same contractual differences? They’re just regular functions in most languages including Rust.

Because the kernel doesn't expose that contract, so they don't have that behaviour.

The only difference is whether the runtime sits in the kernel or in user space.

In other words, what contracts you have control over and are allowed to provide.

My take has been for years that this is throwing complexity over the fence and shaming users for not getting it.

I'm sure how we got here would seem baffling if you're going to just ignore the history of the C10K problem that led us to this point.

You can of course paper over any platform-specific quirks and provide a uniform interface if you like, at the cost of some runtime overhead, but eliminating as much of this kind of implicit runtime overhead as possible seems like one of Rust's goals. Other languages, like Go, have a different set of goals and so can provide that uniform interface.

It's probably also possible to have some of that uniform interface via a crate, if some were so inclined, but that doesn't mean it should be in the core which has a broader goal.

klabb3
0 replies
1m

I'm sure how we got here would seem baffling if you're going to just ignore the history of the C10K problem that led us to this point.

I am not unaware of pooled syscalls. I worked on the internals of an async Rust runtime, although that should not matter for critiquing language features.

The archeological dig into why things are the way they are can come up with a perfectly reasonable story, yet at the same time lead to a suboptimal state for a given goal - which is where the opinion space lies - the space where I'm expressing my own.

but eliminating as much of this kind of implicit runtime overhead as possible seems like one of Rust's goals

Yes, certainly. And this is where the perplexity manifests from my pov. Async is a higher-level feature, with important contractual ecosystem-wide implications. My thesis is that async in Rust is not a good solution to the higher-level problems, because it interacts poorly with other core features of Rust, and because it modularizes poorly. Once you take the event loop(s) and lift it up into a runtime, the entire point (afaik - I don't see any other?) is to abstract away tedious lower-level event and buffer maintenance. If you just want performance and total control, it's already right there with the much simpler event loop primitives.

In short, I fail to see how arguments for async can stand on performance merits alone. Some people disagree about the ergonomics issues, which I am always happy to argue in good faith.

withinboredom
3 replies
5h58m

It MIGHT or might NOT be valid, it depends. In a lot of cases, I might just want to print, but not yield "right here," only later (if at all in the current method). Further, writing to I/O is usually non-blocking (assuming the buffers are big enough for whatever you are writing), so in this case the await literally makes no sense.

nine_k
1 replies
5h23m

A smart print() implementation may check if there's enough output buffer, and, if so, quickly return a Future which has already completed. A smart scheduler can notice that and not switch to another green thread.
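
A rough sketch of that fast path in Rust (the pending slow path and waker registration are omitted; assuming the futures crate purely for an executor):

    use std::future::{ready, Future};

    // Sketch: a "smart" print that returns an already-completed future when the
    // buffered write succeeds immediately. A scheduler that sees a ready future
    // need not switch to another green thread.
    fn smart_print(msg: &str) -> impl Future<Output = ()> {
        print!("{msg}");
        ready(())
    }

    fn main() {
        // Assuming the `futures` crate for a minimal executor.
        futures::executor::block_on(async {
            smart_print("hello\n").await;
        });
    }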

withinboredom
0 replies
5h15m

One can argue that in the VAST majority of instances, you'll never ever be printing so much that you'll fill the buffer. If you need that kind of control, just get a direct stream to stdout, otherwise make print() block if it needs to.

couchand
0 replies
5h38m

The canonical reason a language adds a utility like print over just offering the services of the underlying console is to make the Hello, World example as terse as possible.

IO is inherently extremely complicated, but we always want people to be able to do their simplified form without thinking about it.

secondcoming
3 replies
6h10m

Surely `public static void main` has less cognitive load than

    if __name__ == "__main__":
        main()

abdusco
1 replies
6h0m

You don't have to do this, though. You can have an entrypoint.py that simply calls `main()` without that if. You don't even need to have modules if you want to, so you can write your functions and call them right after.

wredue
0 replies
2h54m

So in python, you need to understand not 1, but at least 3 different versions of “an entry point”, and to you, this is “less cognitive load”?

I had the same issue with Swift. There’s 30 ways to write the exact same line of code, all created by various levels of syntax sugar. Very annoying to read, and even more annoying because engaging different levels of sugar can engage different rulesets.

mike_ivanov
0 replies
53m

It should have been implemented as `def __main__(): ...`

sanderjd
3 replies
4h36m

Also it immediately makes me wonder what `await` is... Is it a reference to a field of whatever the `print()` method is returning? Is it calling a method? If it's a method call without parentheses, how do I get a reference to a method without calling it?

(These kinds of questions are just unavoidable though; everyone will have these little pet things that they subjectively prefer or dislike.)

spankalee
0 replies
1h40m

I wonder why not do `await print()` though? It reads more naturally as "wait for this" and is more clearly not a property access.

sanderjd
0 replies
4h7m

Yeah but for a new language I haven't seen before, I immediately wonder!

wtetzner
2 replies
5h5m

If my goal as a programmer is to simply print to the console, why should I care about the await?

Because that isn't ever anyone's actual goal? Optimizing a language design for "Hello World" doesn't seem like a particularly useful decision.

pas
0 replies
4h53m

sure, but it seems useful to be able to opt-in/opt-out of async easily. ie.

if I want a short/simple program it would be cool to put a stanza on the top of the file to auto-await all futures.

MatthiasPortzel
0 replies
4h44m

It’s not an end goal, maybe, but if I’m writing a complex program and I want to print to the console for logging or debugging or status, I shouldn’t have to think about the design of that print-call. I would like to be able to focus on the main complexity of the program, rather than worry about boiler-plate complexity every time I want to print.

spankalee
2 replies
2h4m

The other problem I see here is that starting and awaiting the task are too coupled.

In JavaScript calling the function would start the task, and awaiting the result would wait for it. This lets you do several things concurrently.

How would you do this in Dada:

    const doThings = async () => {
      const [one, two, three] = await Promise.all([
        doThingOne(),
        doThingTwo(),
        doThingThree(),
      ]);
    };

And if you wanted to return a thunk to delay starting the work, you would just do that yourself.

janderland
1 replies
1h47m

I assumed dada is using promises under the hood, just as JS is. If this is the case it could provide a static method for waiting on multiple promises, just as JS does.

spankalee
0 replies
1h41m

This seems to say otherwise (specifically the "but not running" part):

Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.

From https://dada-lang.org/docs/dyn_tutorial

wredue
1 replies
2h58m

Just going to be honest here:

“Zero complexity print to the screen”

Is, quite possibly, the dumbest argument people make in favour of one language over another.

For experienced people, a cursory glance at the definitions should be enough. For new programmers, ignoring that part “for now” is perfectly fine. So too is “most programming languages, even low level ones, have a runtime that you need to provide an entry point to your program. In Java, that is public static void main. We will go over the individual aspects of this later.” This really is not that difficult, even for beginners.

Personally, I find more “cognitive load” in there not being an explicit entry point. I find learning things difficult when you’re just telling me extremely high level *isms.

lamontcg
0 replies
36m

This does surface the fact that it's another await/async red/green function language though.

If they're already making it gradually typed and not low-level, I don't understand why they don't throw away the C ABI-ness of it and make it more like Ruby with fibers/coroutines that don't need async/await.

I'd like parametric polymorphism and dynamic dispatch and more reflection as well if we're going to be making a non-low-level rust that doesn't have to be as fast as humanly possible.

(And honestly I'd probably like to keep it statically typed with those escape hatches given first-class citizen status instead of the bolted on hacks they often wind up being)

pama
1 replies
1h32m

In Python, if you carelessly print within a multiprocess part of an application, you may end up getting a nonreproducible mess on stdout, with multiple streams merged at random points. So the cognitive load in this example is there because this new language is meant for multithreaded coding and can make multithreading easy compared to other languages.

bmitc
0 replies
1h17m

That's a great example of the "simplicity" of Python being anything but.

raverbashing
0 replies
2h49m

Yeah I'm not so sold, but mainly, I don't understand the logic here

If I'm declaring an async function, why do I need to await inside it?

like, if the return of an async function is a promise (called a thunk), why can't I do

    async async_foo() { return other_async_foo(); }
and it will just pass the promise?

Then you await on the final async promise. Makes sense?
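
For what it's worth, Rust already allows exactly this: an async fn awaits internally, while a plain fn can hand the future back untouched. A minimal sketch (assuming the futures crate for block_on):

    use std::future::Future;

    async fn other_async_foo() -> u32 {
        42
    }

    // Awaits internally: callers get the resolved value once they await this.
    async fn async_foo() -> u32 {
        other_async_foo().await
    }

    // Passes the future along untouched; the caller awaits it once, at the end.
    fn pass_through_foo() -> impl Future<Output = u32> {
        other_async_foo()
    }

    fn main() {
        // Assuming the `futures` crate for a minimal executor.
        futures::executor::block_on(async {
            println!("{} {}", async_foo().await, pass_through_foo().await);
        });
    }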

pmontra
0 replies
3h55m

I'm coming from several languages (C, Perl, Java, JavaScript, Ruby, Python) and I strongly dislike the async/await thing.

At least let people change the default. For example

  await {
    // all the code here
    // runs synchronously 
    async {
      // except this part where
      // async methods will return early
      print("but not me!").await()
    }
  }

However, the remark I make to people advocating for statically typed Ruby holds for this language too: there are already languages like that (in this case, await by default); we can use them and let Dada do its own thing.

jlouis
0 replies
4h7m

It is a fundamental question in your language design. Some languages make side-effects explicit one way or the other. Other languages handle side-effects in a more implicit fashion. There's a tradeoff to be made here.

benrutter
35 replies
5h23m

I love the idea of a "thought experiment language" - actually creating a working language is a big overhead, and it's really fun to think about what an ideal language might look like.

The crazy thing with reading this and the comments is that it seems like we all have been daydreaming about completely different versions of a "high level rust" and what that would look like. For me, I'd just want a dynamic runtime + simpler types (like "number" or a single string type), but it looks like other people have a completely different list.

Some of the additions here, like a gradual type system, I would really not want in a language. I love gradual type systems for stuff like Python, TypeScript and Elixir, but those are cases where there's already so much untyped code written. I would way prefer the guarantees of a fully statically typed codebase from day one when that's an option.

sanderjd
15 replies
4h56m

In college, my programming languages class used a language called "Mystery" (I believe created by my professor), which was configurable. Assignments would be like "write some test programs to figure out whether the language is configured to use pass-by-value or pass-by-reference". And there were a bunch of other knobs that could be turned, and in each case, the idea was that we could figure out the knob's setting by writing programs and seeing what they did.

I loved this, both as a teaching aid, and as an eye-opener that programming languages are just an accumulation of choices with different trade-offs that can all go different ways and result in something that works, perhaps a bit better or perhaps worse, or perhaps just a bit more toward or away from one's own personal taste.

This is sort of the lisp idea of "create the language that is natural to write the application in, then write the application". Or Ruby's take on that idea, with more syntax than lisp but flexible and "human" enough to be DSL-ish.

But somewhat to my sadness, as I've progressed in my career, I've realized that the flip side of this is that, if you're building something big it will require lots of people and all those people will have different experiences and personal preferences, so just picking one standard thing and one standard set of defaults and sticking with that is the way to go. It reduces cognitive overhead and debate and widens the pool of people who can contribute to your effort.

But for personal projects, I still love this idea of thought experimentation around the different ways languages and programming environments could work!

couchand
10 replies
4h34m

Don't be so quick to discount DSLs. Sure, you don't want a half-baked DSL when some simple imperative code would do. But if you watch your API evolve into an algebra and then don't formalize it with a DSL you might be leaving powerful tools for understanding on the table.

A poor-fitting language is terrible for abstract thinking, on the other hand an internally-consistent and domain appropriate language can unlock new ways of looking at problems.

I'd highly recommend Martin Fowler's work on DSLs to see how you can apply these techniques to large projects.

jacobr1
3 replies
4h13m

A related notion is that you need strong, well-thought out, and when the system is changing, regularly refactored abstractions. You might not need a DSL but your class/type/trait designs needs to be sane, your API needs to be solid, etc ... DDD principles are key here.

couchand
2 replies
4h9m

Yes, eat your vegetables!

A question of philosophy: If you have all that, don't you already have a DSL, using a deep embedding in the host language?

seanc
0 replies
3h56m

I certainly think so. Or at least I find it very helpful to think about interface design that way. It's DSLs all the way down.

jimbokun
0 replies
1h36m

Yes, but the language in which you create your framework can do a lot of the heavy lifting. For example, if your main interface is a REST API, there is a large body of knowledge of best practices, educational resources, and existing tools for interacting with it.

With a new DSL, you need to create all of that yourself.

wredue
2 replies
3h20m

The problem a lot of people have with DSLs is… well, just look at a prime example: SAS.

If you’re an experienced programmer coming in to SAS, your vocabulary for the next LONG time is going to consist primarily of “What The Fuck is this shit?!?”

smallnamespace
0 replies
2h42m

What do you mean? Computations very naturally organize into batches of 40 cards each.

purist33
0 replies
2h1m

I hated SAS with a passion when I was forced to work with it for 2 years. One of the biggest problems I faced was, it would take me a long time to find out if something was doable or almost impossible in that language.

It wanted to be more than just SQL, but the interoperability with other languages was awful; we couldn't even work with it like SQLite.

jimbokun
1 replies
1h39m

Yes, but then you need to be able to market your DSL and get buy-in. Otherwise you will forever be just a team of one. And then need to sell to all the stakeholders of the project the idea of trusting one person for all the development.

So in addition to the skill of creating a DSL, you need the skills of thoroughly documenting it, training other people to use it, creating tools for it, and explaining the benefits in a way that gets them more excited than just using an existing Boring Old Programming Language.

Which is certainly possible. You can get non developers excited if they can use it for answering their own questions or creating their own business rules, for example. But it's a distinct skill set from cranking out code to solve problems. It requires a strong understanding of the UX (or DX) implications of this new language.

travisjungroth
0 replies
21m

I’m of the mindset that API and DSL are more of a continuum than categories. As soon as you write your first abstraction, you’re making a little language.

In the same way, what you listed isn’t a distinct skill set from cranking out code to solve problems. What happens is those skills are now levered. Not the good vibes “leveraged”. I mean in the “impact to success and failure is 100x baseline” sense. If those skills are in the red, you get wiped out.

rugina
0 replies
2h28m

Given that the GitHub repo is almost three years old, I expect Martin Fowler to already have Dada Patterns, Refactoring in Dada, Dada Distilled, Dada DSL and Dada Best Practices ready to publish.

sanderjd
0 replies
1h58m

Yep :) I thought there would probably be some folks here who would recognize this.

thesz
8 replies
5h6m

actually creating a working language is a big overhead

Languages with first-class values, pattern matching, rich types, type inference, and even a fancy RTS can often be embedded in Haskell.

For one example, it is very much possible to embed into Haskell a Rust-like language, even with borrow checking (which is type-checking time environment handling, much like linear logic). See [1], [2] and [3].

  [1] http://blog.sigfpe.com/2009/02/beyond-monads.html
  [2] https://www.cs.tufts.edu/comp/150FP/archive/oleg-kiselyov/overlooked-objects.pdf
  [3] http://functorial.com/Embedding-a-Full-Linear-Lambda-Calculus-in-Haskell/linearlam.pdf

The work in [3] can be expressed using results from [1] and [2]; I cited [3] as an example of what a proper type system can do.

These results were available even before the work on Rust began. But, instead of embedding Rust-DSL into Haskell, authors of Rust preferred to implement Rust in OCaml.

They do the same again.

ericyd
5 replies
4h57m

Are you suggesting that creating a new programming language from scratch is a trivial exercise? If yes, wow. If no, I think the intention of your comment could be more clear, particularly regarding the quote you took from the original comment.

couchand
3 replies
4h30m

I suspect the GP was merely suggesting a less costly alternative. Perhaps building a complete standalone compiler or interpreter is hard, but we're all designing APIs in our programming language of choice day in and day out.

Both strategies are very hard, but one of them is "build a prototype in a weekend" hard and one of them is "build a prototype in a month" hard.

jacobr1
2 replies
4h10m

It is interesting to consider how much the lower abstraction influences the higher abstraction. If you are building on an existing language/runtime/framework then you can inherit more functionality and move faster, but you also implicitly inherit many of the design decisions and tradeoffs.

convolvatron
0 replies
1h11m

Totally. For me, the interplay between the host language and the target language is the hardest thing to manage when bringing up a new environment. It really doesn't seem like it should be a big deal, but it comes down to the sad reality that we operate by rote a lot of the time, and completely switching semantic modes when going between one world and the other is confusing and imposes a real cost.

I'm still not that good at it, but my best strategy to date is to try to work in a restricted environment of both the host and the target that are nearly the same.

arethuza
0 replies
4h5m

Creating a "new" programming language isn't that difficult - creating something that is interesting, elegant and/or powerful requires a lot of thought and that is difficult.

pas
1 replies
5h2m

instead of embedding Rust-DSL into Haskell, authors of Rust preferred to implement Rust in OCaml

why? and how much does it matter, if the goal is to have a compiler/interpreter? (as I assume is the case with Dada, and was with Rust)

argiopetech
0 replies
4h50m

R&D. Bootstrapping.

pas
3 replies
5h5m

Runtime would be nice, but ... that's basically what Tokio and the other async frameworks are. What's needed is better/more runtime(s), better support for eliding stuff based on the runtime, etc.

It seems very hard to pick a good 'number' (JS's is actually a double-precision 64-bit IEEE 754 float, which almost never feels right).

benrutter
2 replies
4h54m

Yes, that's true - "number" is probably more broad than I'd really want. That said, Python's "int", "float" and "decimal" options (although decimal isn't really first class in the same way the others are) feel like a nice balance. But again, it's interesting the way even that is probably a bias towards the type of problems I work with vs other people who want more specification.

tialaramex
0 replies
2h14m

"Number" implies at least the reals, which aren't computable so that's right out. Hans Boehm's "Towards an API for the Real Numbers" is interesting and I've been gradually implementing it in Rust, obviously (as I said, they aren't computable) this can't actually address the reals, but it can make a bunch of numbers humans think about far beyond the machine integers, so that's sometimes useful.

Python at least has the big num integers, but its "float" is just Rust's f64, the 64-bit machine integers again but wearing a funny hat, not even a decimal big num, and decimal isn't much better.

jacobr1
0 replies
4h6m

The key though is probably to have a strong Number interface, where the overhead of it being an object is compiled away, so you can easily switch out different implementations, optimize to a more concrete type at AOT/JIT time, and have clear semantics for conversion when different parts of the system want different concrete numeric types. You can then have any sort of default you want, such as an arbitrary precision library, or decimal or whatever, but easily change the declaration and get all the benefits, without needing to modify downstream code that respects the interface and doesn't rely on a more specific type (which would be enforced by the type system and thus not silent if incompatible).
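
A rough sketch of the idea in Rust terms, with a hypothetical Number trait (monomorphization compiles the generic down to the concrete type):

    // Hypothetical numeric interface: code is written against the trait, and the
    // concrete representation (f64, a decimal type, a bignum, ...) is swappable.
    trait Number: Copy {
        fn zero() -> Self;
        fn add(self, other: Self) -> Self;
    }

    impl Number for f64 {
        fn zero() -> Self { 0.0 }
        fn add(self, other: Self) -> Self { self + other }
    }

    impl Number for i64 {
        fn zero() -> Self { 0 }
        fn add(self, other: Self) -> Self { self + other }
    }

    // Downstream code respects only the interface; swapping the concrete type is a
    // call-site change, and monomorphization removes the abstraction overhead.
    fn sum<N: Number>(xs: &[N]) -> N {
        xs.iter().copied().fold(N::zero(), N::add)
    }

    fn main() {
        println!("{} {}", sum(&[1.5f64, 2.5]), sum(&[1i64, 2, 3]));
    }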

samatman
0 replies
3h13m

The gradual type systems you're referring to are bolted onto the languages, or in Elixir's case, the runtime. If you want to see what a language with a deeply integrated gradual type system is like, take a look at Julia. I've found it to be both expressive and precise.

rayiner
0 replies
2h19m

The challenge of thought experiments, in a statically typed language, is ensuring soundness. The first version of Java Generics, for example, was unsound: https://hackernoon.com/java-is-unsound-28c84cb2b3f

obeavs
0 replies
52m

For what it's worth, Moonbit (https://www.moonbitlang.com/) is a really nice take on this. Designed by the guys who created ReScript for OCaml, but for a WASM-first world.

jimbokun
0 replies
1h48m

Isn't Rust a "high level rust"?

brainzap
0 replies
5h7m

I agree, fantasy and play is needed. Since we humans have a brain area for play and imagination, why not explore.

sorenjan
30 replies
4h12m

Changing a quote to change "his" to "theirs" seem like a very Rust community thing to do.

Updated to use modern pronouns.

https://dada-lang.org/docs/about/

OtomotO
7 replies
3h53m

Non-native-speaker take: I don't care, it just reads a bit "weird", as when I learned English, "theirs" was plural... but I am adaptable.

As long as the meaning of the quote isn't changed I couldn't care less and it seems very important to some people.

What I personally dislike though is the whole "Ask me my pronouns" thing... like "No, I don't care about your gender or sex, as long as I am not interested in a romantic relationship with you - just tell me what to call you and I'll do it, but more effort? No!"

To elaborate a bit more: I find the topic exhausting not because I hate freedom of choosing your own gender or anything like that, but because I personally do not care about your gender at all.

I don't care about your religion, your skin color, your culture, your sex, your gender... I care about individual people but I don't reduce them to a certain aspect of their existence.

Now I find the whole "Ask me my pronouns" exhausting and also rude because it puts pressure on me to ask you about a topic I am not interested in. Like: I get it, there is social pressure, I understand that you're not happy with certain social "norms" and developments. I totally get that and I guess we are on the same side for many of them, but I still do not care about your gender until I care about your gender. (And also, I don't live in your country probably, so your local politics may be of interest, but I still don't like being forced to talk about them before I can ask a genuine question on e.g. a technology topic ;))

Just write his/her/theirs... and I will respect your choice. I will not think less of you, nor will I put you on a pedestal for something I do not care about.

rob74
2 replies
3h38m

The "singular they" has been a thing in English for a long time (since the 14th century according to Wikipedia - https://en.wikipedia.org/wiki/Singular_they). I'm a non-native speaker as well, and wasn't taught about it in school either (maybe that has changed in the meantime?). The first time I consciously noticed it is probably in the Sting song If You Love Somebody Set Them Free (https://en.wikipedia.org/wiki/If_You_Love_Somebody_Set_Them_...), which was his debut solo single, so is already quite old itself (1985, although I probably heard it later).

timeon
0 replies
1h55m

There is also "singular you" in English.

moomin
0 replies
3h15m

Yeah, it's been commonly used when the object (in the grammatical sense) would typically have a gender but it is unspecified. In fact, it's so common native English speakers don't even notice they're doing it. (Which produces a steady stream of unintentional humor from those pretending it isn't a thing.) The usage as a sign of respect for specific non-binary and other gender non-conforming people is more modern (2009 is apparently the first recorded example). Although to a great extent, it doesn't matter. Language evolves over time and dictionary definitions necessarily trail usage.

The Wikipedia article is quite detailed and will probably supply more information than anyone particularly wanted. https://en.wikipedia.org/wiki/Singular_they

OtomotO
0 replies
25m

Pluralis Majestatis exists in my mothertongue too ;)

yodsanklai
0 replies
3h14m

Non-native-speaker take: I don't care, it just reads a bit "weird" as I learned English back when "theirs" was only plural...

Non-native speaker too, I find it easier to adjust in English compared to my native language (French), probably because the language is less ingrained in me. I embraced the English neutral plural - it's even convenient - but I found myself a bit more annoyed with the so-called French "écriture inclusive", such as "les étudiant.e.s sont fatigué.e.s". Not really pretty IMHO. We could find something better...

subtra3t
0 replies
1h33m

People who want others to ask them their pronouns before referring to them, what is your reason for doing so?

ursuscamp
4 replies
3h54m

As an avid fan of Rust, the Rust community is incredibly cringe about this topic.

myko
3 replies
3h46m

Seems more cringe to complain about pronouns

zarathustreal
2 replies
3h43m

I think we’re all saying the same things here. No one wants to hear complaints about pronouns

rob74
1 replies
3h34m

Actually, what I don't want is a discussion about pronouns being at the top of the comments. Aren't there more interesting topics to discuss about this project?!

cmrdporcupine
0 replies
3h26m

Indeed, far more annoying is that someone grousing about the TFA update about pronouns "seems like a very [subsection of] HN community thing to do."

And yet here I am, N levels down in this thread, griping about it. Oops.

alphazard
4 replies
3h54m

I also noticed this, along with the warnings that Dada doesn't really exist yet (which is fine, thanks for the heads up).

I predict this project will have its priorities backwards. There's a group of people who want to govern a programming language project, and inject their ideology into that structure, and maybe there's another group of avid language designers in there too. I think there are more of the first.

herval
2 replies
3h47m

How do you “inject ideology” in a programming language?

Compiler error if the variable name is sexist?

alphazard
1 replies
3h37m

How do you “inject ideology” in a programming language?

I was just talking about the project community and governance. It would be hard to imagine injecting ideology into the language itself.

Oh wait, nevermind...

https://doc.rust-lang.org/beta/nightly-rustc/tidy/style/cons...

steveklabnik
0 replies
3h32m

This is part of rustc’s test suite. It affects nobody but rustc.

travisgriggs
0 replies
3h32m

Pournelle’s Iron Law of Bureaucracy

umvi
2 replies
3h54m

Is it still a quote in that case?

tirpen
0 replies
3h48m

Yes, it's just a different translation of the original French quote.

Hamuko
0 replies
3h51m

I've understood that you can modify quotes but you have to indicate the bits that you've modified. So "It enforces memory safety" becomes "[Rust] enforces memory safety" if you want to modify a quote to make more sense out of context.

javier_cardona
2 replies
3h54m

Just for context, the quoted manifesto was originally written in French (https://monoskop.org/images/3/3b/Dada_3_Dec_1918.pdf). In that version, that particular sentence is gender neutral: "tout le monde fait son art a sa façon".

I would say that their updated quote is a more accurate translation of the original than the English translation they initially used.

sorenjan
1 replies
3h26m

That does make it a lot better, but at the same time makes the footnote even more of a deliberate statement that could have been left out.

gr4vityWall
0 replies
2h44m

That does make it a lot better

Why?

winwhiz
0 replies
3h12m

It is also a very Dadaist thing to do. Da!

twic
0 replies
3h30m

Changing it is perfectly reasonable, but specifically advertising that you've done it in a footnote is extremely Rust community.

timeon
0 replies
1h51m

modern pronouns.

In my native language it is quite old-school. Really polite form.

tialaramex
0 replies
3h43m

I mean, Tristan Tzara is Romanian and so it seems likely that this thought isn't originally English anyway, so it's reasonable for a modern writer to choose to translate it in a more inclusive way. I expect that an early English Bible and a modern one likewise make different choices about whether a text that's clearly about people generally and isn't concerned with sex or gender - should say "He" or "They" / "His" or "Their" and so on.

mplanchard
0 replies
2h11m

The policing of what other people should or shouldn't care about or advertise they care about is very boorish to me, but of course here I am doing the same thing.

everybodyknows
0 replies
4h5m

On iPad, I see a back-link from the footnote, but no forward link to it from the corrupted quotation.

mihaic
12 replies
3h47m

I've written a bit of Rust, and I was left with mixed feelings that seem to be still the same here:

- loved the memory safety patterns when compared to the horrible things that you can do with C++

- found almost everything where it was different to have a harder-to-parse syntax that I could never get used to. The implicit return at the end of a statement, for instance, makes it harder for me to visually parse what's being returned, since I really depend on that keyword.

Code in general is hard for me to mentally read. I know it sounds nitpicky, but to me all keywords should be obviously pronounceable, so something like "func" instead of "fn" would be mandatory. Also, using the permission keywords where I'd expect the type to be also seems a bit strange, as I'd imagine that keyword to prefix the variable -- that's just how I think though.

It does seem like less decorator magic and symbol-based syntax would make it easier for beginners to grasp.

I may sound like a curmudgeon, but I'd prefer only one type of language innovation at a time.

bmitc
2 replies
3h33m

The implicit return at the end of a statement, for instance, makes it harder for me to visually parse what's being returned, since I really depend on that keyword.

Cutting my teeth on Schemes and MLs and now working in Python, I have the complete opposite experience. It's jarring to have to specify return. What else would I want to do at the end of an expression? It seems tautological. The real reason it's there in Python is early return, which is even more dangerous and jarring.

mihaic
1 replies
1h55m

I know it's not very FP, but you might explicitly not want to return anything and just modify the data.

bmitc
0 replies
1h47m

That's perfectly acceptable and expected. F# supports OOP and imperative just as much as it does functional programming. In the case of such functions and expressions, the value returned is of type `unit` with a single value of `()`. In F#, expressions that return `unit` have the value explicitly ignored if they are not the last expression in a code block. Other expressions returning non-`unit` values that aren't at the end of an expression will generate a warning. In such cases, for example where a function performs a required side effect and returns a value other than `()` but you don't need that value, you can use `|> ignore` to get rid of the warning since it says you are explicitly wanting to ignore the returned value.

ordu
1 replies
2h52m

> I know it sounds nitpicky, but to me all keywords should be obviously pronounceable, so something like "func" instead of "fn" would be mandatory.

Keywords only? How about function names like strspn or sbrk? And how do you feel about assembly language, using mnemonics like fsqrt or pcmpeqd?

BTW, thinking about it, I notice that I need all these lexemes to be pronounceable too, and I have my own ways to pronounce sbrk or pcmpeqd. If I said them aloud, probably no one would understand me, but it doesn't matter because these pronunciations are for internal use only.

mihaic
0 replies
1h53m

I'm not sure how many codebases started after 2010 I've seen that have "pcmpeqd" as a method name. This is something I think makes sense only in highly optimized code, but in business logic it's a pain to read.

naasking
1 replies
3h26m

Code in general is hard for me to mentally read. I know it sounds nitpicky, but to me all keywords should be obviously pronounceable,

Have you tried Ada?

so something like "func" instead of "fn" would be mandatory.

What about no keywords, like:

    x => ...func body

mihaic
0 replies
2h13m

Have you tried Ada?

I have tried Pascal in that sphere, which was on the too verbose side.

Arrow notations like in JS/Typescript are fine to parse for me. Some clear symbols are actually easier to read than an unpronounceable alphanumeric.

haswell
1 replies
3h39m

I’m in the middle of working through The Rust Book, and I haven’t written any serious code with it yet, so interpret this through that lens.

When I looked at rust code before, it all seemed a bit weird. I couldn’t immediately understand it, but I’ve since come to realize this was because the dozen or so languages I can read well don’t really resemble rust, so my pattern matching was a bit off.

The more I learn about the syntax and core concepts, the more I’m learning that my brain absolutely loves it. Once I started to understand matches, lifetime syntax and the core borrowing mechanics, things clicked and I’m more excited about writing code than I’ve been since I taught myself GW-BASIC 25 years ago.

Just sharing this anecdote because I find it interesting how differently people experience languages. I also have an ongoing friendly debate with a friend who absolutely hates Python, while I rather enjoy it. I’ve tried to understand why he hates it, and he’s tried to understand why I like it. And it all seems to come down to hard-to-define things that just rub us in different ways.

I hope the benefits of rust find their way into more types of languages in the future.

mihaic
0 replies
1h45m

Yeah, I think at some point we all have some internal wiring that is hard to change, while other parts are flexible.

For instance, I'm fine to write C++, Javascript or Python (with types at least). Ruby or Rust for some reason do rub me the wrong way, no matter how much I try to tough it out.

4star3star
1 replies
3h31m

all keywords should be obviously pronounceable

I hear you. Internally, I always pronounced "var" as rhymes with "care", but then a colleague pronounced it "var" as rhymes with "car". I think the same guy pronounced "char" like char-broiled, whereas I had thought of it like "care". And he would say "jay-SON" for json, which I pronounced like Jason.

How would you feel about a notation that is not meant to be pronounced at all?

+Employee {

}

where + indicates a class definition.

:Rename() where : indicates a class method.

~DoStuff() where ~ indicates a static function

mihaic
0 replies
1h48m

Interesting, for all the examples you gave, I'd prefer to see a keyword, since you'd need to use a lot of symbols for a lot of different things in this way. I do find arrow notation or other symbol for lambdas fine, since it's a unique case and not a generic type of using symbols for keywords.

mattgreenrocks
0 replies
3h32m

Give it time. The syntax differences are real, but not insurmountable. I grew to prefer the location of the return type in function syntax despite having 15 years of C++ under my fingers.

brabel
11 replies
6h50m

If the claim that its performance will be similar to Rust's when you add type annotations holds up, this could become a really attractive language!

As easy as JavaScript to write, as fast as Rust when the extra effort to write it justifies it.

usrusr
6 replies
6h20m

Still super weird, because the garbage-collector tax that's avoided by keeping the borrow checker (which is decidedly not gone here) isn't all that big to begin with.

But perhaps it's a viable "training wheels" approach for getting used to borrow-checker friendly patterns? And I guess a scripting interpreter option that is fully rust-aware in terms of lifetimes could be truly golden for certain use cases, even if it turns out to be completely hostile to users not fully in tune with the underlying Rust. Sometimes "no recompile" is very important.

I wonder if the genesis story of the project might be hidden in "Dada has a required runtime": perhaps it started with the what-if of "how nice could we make Rust if we abandoned our strict "no runtime!" stance and went for making it runtime-heavy like e.g. Scala"? Then the runtime pulls in more and more responsibility until it's easier to consume a raw AST and from there it's not all that far to making types optional.

speed_spread
5 replies
5h13m

Garbage collection is actually faster than generic malloc for allocating memory because it can work as a simple bump allocator. And there are ways to handle collection efficiently. Malloc is also not entirely deterministic in performance because the heap can get fragmented. Either way, if latency matters you end up having to care about (de)allocation patterns at the app level.
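For intuition, here's a toy bump allocator in Rust - not any particular collector's implementation, just an illustration of why the allocation fast path can be a pointer bump:

    // Toy bump allocator: allocating is "align, check capacity, advance a cursor".
    // Real GCs add per-thread allocation buffers and trigger a collection
    // (rather than failing) when the nursery fills up.
    struct Bump {
        heap: Vec<u8>,
        next: usize,
    }

    impl Bump {
        fn new(capacity: usize) -> Self {
            Bump { heap: vec![0; capacity], next: 0 }
        }

        // Returns the offset of a freshly "allocated" block, or None if full.
        fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
            let start = (self.next + align - 1) & !(align - 1); // round up to alignment
            let end = start.checked_add(size)?;
            if end > self.heap.len() {
                return None; // a real collector would run a GC cycle here
            }
            self.next = end;
            Some(start)
        }
    }

    fn main() {
        let mut nursery = Bump::new(1024);
        let a = nursery.alloc(24, 8).unwrap();
        let b = nursery.alloc(40, 8).unwrap();
        println!("allocated at offsets {a} and {b}");
    }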

nu11ptr
3 replies
4h42m

Agreed, it seems weird to me to avoid garbage collection in a high-level language. It is one thing to use escape analysis to avoid creating garbage, but mallocing every object is going to be slower than a well-tuned GC.

couchand
1 replies
4h18m

mallocing every object

So don't do that then? Put most things on the stack. It's far faster than any allocation.

kaba0
0 replies
19m

That’s also true of a GCd language though.

estebank
0 replies
1h53m

I would say that in practice 80% of the values go on the stack, 18% in Box and 2% in an Arc/Rc. That's why Rust code tends to be fast: the common cases are really easy to represent with the borrow checker, so a hypothetical GC doesn't have to perform escape analysis to see if it is ok to specialise those allocations, while the more uncommon cases can still be represented, albeit more verbosely than a GCd language would need.
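To make that split concrete, a small illustrative sketch of the three tiers (the percentages above are an estimate, not measured data):

    use std::rc::Rc;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // The common case: a plain value on the stack, no heap allocation at all.
        let point = (1.0_f64, 2.0_f64);

        // Unique heap ownership: freed deterministically when `buffer` goes out of scope.
        let buffer: Box<[u8; 4096]> = Box::new([0; 4096]);

        // Shared ownership within one thread: cheap, non-atomic reference count.
        let name = Rc::new(String::from("shared name"));
        let name_again = Rc::clone(&name);

        // Shared ownership across threads: atomic reference count.
        let config = Arc::new(vec![1, 2, 3]);
        let config_for_worker = Arc::clone(&config);
        let worker = thread::spawn(move || config_for_worker.len());

        println!("{:?} {} {} {} {}",
                 point, buffer.len(), name, name_again, worker.join().unwrap());
    }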

couchand
0 replies
4h19m

I think most people's concern with GC is not the allocation side of it? And in any case alloca blows them all out of the water.

if latency matters you end up having to care about (de)allocation patterns at the app level.

Yes, and you want tools that allow you to precisely describe your needs, which might be more difficult if a lumbering brute is standing between you and your data.

wokwokwok
3 replies
6h42m

There’s no claim to be as easy as javascript to write.

Rust's “difficulty” stems from its single-ownership model, and this model is “different”, not “easier”.

https://dada-lang.org/docs/dyn_tutorial/permissions

DinaCoder99
1 replies
6h23m

I personally find the semantics of javascript a lot harder to internalize than rust due to its scoping and very unintuitive object system. I can't imagine this is any more difficult than that.

jokethrowaway
0 replies
5h27m

I agree but most modern JS doesn't use prototypal inheritance.

JS has plenty of bad parts you shouldn't use. Classes are the main one.

rubyfan
0 replies
6h35m

From what I can see, it looks more expressive and seems intuitive to me.

color_me_not
4 replies
4h42m

I don't understand the comment in the method print_point in the class Point of the tutorial.

    [...]
    # This function is declared as `async` because it
    # awaits the result of print.
    async fn print_point(p) {
        # [...]
        print("The point is: {p}").await
    }

    [...]
From the first page of the tutorial:

Dada, like JavaScript, is based exclusively on async-await. This means that operations that perform I/O, like print, don't execute immediately. Instead, they return a thunk, which is basically "code waiting to run" (but not running yet). The thunk doesn't execute until you await it by using the .await operation.

So, what it boils down to is that async/await are like lazily computed values (they work a bit like the lazy/force keywords in Ocaml for instance, though async seems to be reserved for function declarations). If that is the case, that method "print_point" is forcing the call to print to get that thunk evaluated. Yet, the method itself is marked async, which means that it would be lazily evaluated? Would it be the same to define it as:

    fn print_point(p) {
        print("The point is: {p}")
    }
If not, what is the meaning of the above? Or with various combinations of async/await in the signature & body? Are they ill-typed?

I wish they'd provide a more thorough explanation of what await/async means here.

Or maybe it is a dadaist[0] comment?

[0] https://en.wikipedia.org/wiki/Dada

nialv7
2 replies
4h4m

I think they didn't do a very good job explaining it. Await doesn't just mean "please run this thunk", it means "I am not going to deal with this thunk, can someone come and take over, just give me the result in the end".

What this means, concretely, in Rust, is that `.await` will return the thunk to the caller, and the caller should resume the async function when the result is ready. Of course the caller can await again and push the responsibility further back.

The most important thing here is that `.await` yields control of execution. Why does this matter? Because IO can block. If control weren't given up, IO would block the whole program; if it is, then something else has a chance to run while you wait.
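If it helps, here's a minimal hand-rolled future in Rust that shows what "yielding control" looks like under the hood (the `NotYet` type is made up; the `futures` crate is assumed only for its tiny `block_on` executor):

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // A future that isn't ready the first time it is polled. Returning
    // Poll::Pending hands control back to the executor, which is then free
    // to run other tasks until this one is woken again.
    struct NotYet {
        polled: bool,
    }

    impl Future for NotYet {
        type Output = u32;

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
            if self.polled {
                Poll::Ready(42)
            } else {
                self.polled = true;
                // A real I/O future would register this waker with the OS
                // reactor; here we just ask to be polled again right away.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }

    async fn demo() -> u32 {
        // `.await` desugars into a loop that polls `NotYet`; each
        // Poll::Pending suspends `demo` and returns control to its caller.
        NotYet { polled: false }.await
    }

    fn main() {
        // futures = "0.3" assumed, purely for a single-threaded executor.
        let answer = futures::executor::block_on(demo());
        println!("{answer}");
    }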

color_me_not
1 replies
3h32m

So, you mean that this thunk is produced by the async function, and the await keyword will run it asynchronously?

In other words, print produces a thunk, and print_point also produces a thunk, and when await is used on the latter, it is executed asynchronously, which executes the print asynchronously as well. So we end up with 3 different execution contexts: the main one, plus one for each "await"?

What is the point of this, as opposed to executing the thunk asynchronously right away? Also, how does one get the result?

ordu
0 replies
2h35m

> the await keyword will run it asynchronously?

From the point of view of print_point, await executes the thunk synchronously: print_point's execution stops and waits for print to finish its work. But a caller of print_point might want to run print_point asynchronously, so print_point is an async fn, and the caller can do something more creative than just awaiting it.

couchand
0 replies
4h23m

I suspect they're heavily relying on intuition coming from Rust, where both of those forms are okay. The one from TFA is sugar for your version. This works fine as long as there is only a single await point, otherwise you have to transform the syntax into a bind, which you might not be able to legally do manually (in Rust at least) if you hold a borrow across the await point.

bananapub
4 replies
6h40m

what does "creators of rust" mean? Graydon? niko? pcwalton?

bananapub
1 replies
6h37m

ah, thank you! I couldn't find anything on the website itself, should have thought to look at the code.

peterhull90
0 replies
4h37m

Also Brian Anderson (brson) was/is a significant rust contributor

Alifatisk
4 replies
6h33m

Why the name and the logo? Couldn't find info about it.

Otherwise, the idea of creating something close to rust but without the complexity sounds interesting. I just hope they don't stick to that name.

yawpitch
1 replies
6h29m

They’re both referencing the Dada artistic movement.

Alifatisk
0 replies
6h29m

Thx

w-m
0 replies
5h30m

I was thoroughly confused reading the Dada Manifesto[0], starting with the non-existent German meanings of the word dada and getting much stranger from there. Until I found out at the very bottom that it's a riff on a 1916 dada manifesto.

[0]: https://dada-lang.org/blog/manifesto

owenbrown
3 replies
3h33m

Every time I see a new language, I immediately check if it uses significant white space like Python. If it doesn’t, I sigh sadly and dismiss it.

diggan
0 replies
2h37m

Curiously, I have the very opposite reaction. I tolerate Python, but only for the massive amount of libraries and the huge community. But as a language? Meh

What makes you so reliant on significant white space that any language without it is an automatic dismissal?

csjh
0 replies
3h30m

Why?

RamtinJ95
0 replies
2h49m

This is such a weird take... I just want to know: why is that SO important to you? For me that is one of the things that I like the least about Python.

nu11ptr
3 replies
4h54m

I like the idea, but please no "async/await". In a higher-level language, green threads like Go's are the correct answer IMO (and I'm not a Go fan, but I feel they got this part right).

Gradual typing is interesting, but I wonder if it's necessary. Static typing doesn't have to feel like a burden, and gradual typing could make it hard to reason about performance. I think more type inference (like OCaml/ML) would be better than gradual typing.

jerf
1 replies
3h42m

"Gradual typing is interesting, but I wonder if necessary."

Open question: Are there any languages that can be used in a (decent [1]) REPL, that are strongly typed, but do not have Hindley–Milner-based type inference?

We have multiple concrete proofs that you can have a REPL with Hindley-Milner inference, but I'm curious if this is perhaps a concession to the difficulty of a strongly-typed REPL without a deeply inferable type system. But it's just an idle musing I'm throwing out to see the response to.

[1]: That is, for example, multiple people have put a Go REPL together, but anyone who has used a "real" REPL from the likes of Lisp, Haskell, Erlang, O'Caml, Python, etc., will not find it a "decent" REPL, as Go just can't have one for various reasons.

naasking
0 replies
3h30m

Scala has a REPL. It uses HM, but has limitations on type inference due to subtyping.

mplanchard
0 replies
2h5m

Personally I love explicit coroutines for their flexibility. It's great to be able to multiplex a bunch of IO-bound operations on a single thread, defining some in chains and others to execute in parallel. It's great to be able to easily decide when I want to wait for them all to finish, to do something like `select`, or to spin them off into the background. Rust's ownership occasionally makes this a bit more of a challenge than I would like it to be, but I certainly wouldn't trade the ability for a more "convenient" syntax.
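For instance, a rough sketch of that flexibility with Rust and tokio (tokio 1.x with the "full" feature set assumed; `fetch` is a stand-in for real I/O):

    use std::time::Duration;
    use tokio::time::sleep;

    async fn fetch(name: &str, ms: u64) -> String {
        sleep(Duration::from_millis(ms)).await; // pretend this is network I/O
        format!("{name} done")
    }

    #[tokio::main]
    async fn main() {
        // Run two operations concurrently on one task and wait for both.
        let (a, b) = tokio::join!(fetch("a", 30), fetch("b", 10));
        println!("{a}, {b}");

        // Race two operations and take whichever finishes first.
        tokio::select! {
            r = fetch("fast", 5) => println!("winner: {r}"),
            r = fetch("slow", 50) => println!("winner: {r}"),
        }

        // Spin one off into the background and keep going.
        let bg = tokio::spawn(fetch("background", 20));
        println!("doing other work...");
        println!("{}", bg.await.unwrap());
    }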

bmitc
3 replies
3h59m

I thought the creators of Rust were creator, singular, in Graydon Hoare. Are they involved with this?

estebank
1 replies
1h49m

Niko has been involved with Rust since before they had conceived of the borrow checker and the entire committer list could be fed with a pizza.

bmitc
0 replies
1h45m

I would distinguish creator(s) from early key contributors and developers. I'm not aware of the full history of Rust but was under the understanding that Graydon Hoare is the creator of the language.

steveklabnik
0 replies
2h26m

Graydon does not have any commits in the repository.

Rucadi
3 replies
6h34m

no upfront types, for me this is unusable sadly

fweimer
1 replies
6h19m

Gradual lifetimes could be interesting, though.

leiroigh
0 replies
5h0m

^this!

In garbage-collected languages, please give me gradual / optional annotations that permit deterministic fast freeing of temps, in code that opts in.

Basically to relieve GC pressure, at some modest cost of programmer productivity.

This unfortunately makes no sense for small bump-allocated objects in languages with a relocating GC, say typical Java objects. But it would make a lot of sense even in the JVM for safe eager deterministic release of my 50 MB giant buffers.

Another gradual lifetime example is https://cuda.juliagpu.org/stable/usage/memory/ -- GPU allocations are managed and garbage collected, but you can optionally `unsafe_free!` the most important ones, in order to reduce GC pressure (at significant safety cost, though!).

diggan
0 replies
4h54m

Literally every single program you ever created has needed types?

I've probably written 100s of tiny little utility programs that are a couple of lines at most, and wouldn't need types for any of those, it would just add extra verbosity for no gain.

pxeger1
1 replies
3h2m

I think there isn't enough research into languages with affine/linear typing (the property of some types that they can't be copied - which is partly what the borrow checker ensures in Rust). I'm super sold on it for enhancing safety. Vale with its "Higher RAII"[0] is the only other example I was aware of until seeing this.

Rust is great but being an early adopter has made its usability imperfect in places. Combining substructural typing with gradual typing and OOP is interesting here. Others in this thread have also mentioned wanting a higher-level Rust, like Go. I'd like to see a purely functional Rust. Haskell has experimental support for linear typing[1], but I suspect a language built with it from the ground up would be very different.

[0]: https://verdagon.dev/blog/higher-raii-7drl

[1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/line...
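To show the affine part Rust already gives you today - a value the compiler guarantees is consumed at most once - here's a toy sketch (the `CommitToken` type is made up; "exactly once", i.e. true linearity or Vale-style Higher RAII, needs more than this):

    // No Copy or Clone, so the affine discipline is enforced by the compiler.
    struct CommitToken {
        tx_id: u64,
    }

    fn commit(token: CommitToken) {
        // Consumes the token; after this call nothing can commit it again.
        println!("committed transaction {}", token.tx_id);
    }

    fn main() {
        let token = CommitToken { tx_id: 7 };
        commit(token);
        // commit(token); // error[E0382]: use of moved value: `token`
    }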

jokethrowaway
1 replies
5h31m

The main idea is that leases are an easier concept to understand than borrowing and lifetimes?

I don't think it will be, it sounds like a concept of similar complexity and it won't make it an "easy language".

People are scared of TypeScript, so a typed language with an extra ownership concept will sound exactly like Rust in terms of difficulty.

Not that I get Rust's reputation for being hard: even as a complete novice I was able to fight a bit with the compiler and get things working.

The gradually typed approach is nice but it just sounds like smarter type inference would get you 99% there while keeping the performance (instead of using runtime checks).

Not having unsafe code is both interesting and limiting. I keep all my code safe for my own mental sanity but sometimes having bindings to some big library in c/c++ is convenient (eg Qt or OpenCV).

turnsout
0 replies
2h37m

Yeah, it's not clear who this is for. If you can handle ownership, this doesn't seem to have many benefits over Rust. If you can't handle ownership, and don't mind a runtime, just use Swift, which seems to be the main inspiration for Dada's syntax.

jgilias
1 replies
6h46m

but one that was meant to feel more like Java or JavaScript

Those are two very different feelings though!

pas
0 replies
4h49m

compared to Scala and TS they are the same 'ewww' :S

emporas
1 replies
5h51m

The absence of GC makes embedded Rust a joy. It can be easily attached to other programs: Erlang with NIFs, JavaScript and web pages with WebAssembly, and Emacs with command line execution. Microcontrollers as well, of course.

I do consider the lightning start-up speed of a program to be one of the killer features of Rust. Rust with garbage collection throws away one of its biggest advantages compared to every other language around.

zem
0 replies
1h52m

don't think of it as rust with garbage collection, think of it as a GC language with features borrowed from rust

dist-epoch
1 replies
5h17m

Dada is object-oriented, though not in a purist way

Are classes cool again?

speed_spread
0 replies
5h12m

Only the upper ones.

couchand
1 replies
5h29m

As of right now, Dada doesn't really exist, though we have some experimental prototypes...

OK, from here on out I'm going to pretend that Dada really exists in its full glory.

This is a brilliant trick I only recently discovered in another context: write the docs first, to validate the user experience of a novel system.

Twirrim
0 replies
5h7m

During architectural reviews, I'm often the annoying person grilling the team on the customer experience. If you don't start from how the customer will interact with it, how are you going to create anything ergonomic?

All too often, the engineering has started at "customers want to be able to do $x", and that's the last time the customer was part of the consideration. The solutions are great, but often miss out on what it'd be like to actually use it, as a customer. Lots of foot guns, and expectations of knowledge that a customer couldn't possibly have unless they had as much understanding of what happens under the hood as the engineers did, etc.

bsimpson
0 replies
5h42m

Very cool art movement: it's essentially a prototypical form of Photoshop/meme culture as protest against the Nazis. John Heartfield is my favorite Dada artist.

Perhaps his most famous piece is a photo of Hitler captioned "millions stand behind me," showing a donor passing him stacks of cash.

actionfromafar
1 replies
5h32m

What is Dada?

m0llusk
0 replies
5h8m

exactly

VMG
1 replies
6h20m

the contrast of the links against the light background is pretty poor

anentropic
0 replies
3h12m

don't know why you were downvoted - this is totally correct, the site only looks right in dark mode

Doctor_Fegg
1 replies
5h35m

What if we were making a language like Rust, but one that was meant to feel more like Java or JavaScript, and less like C++?

That would be Swift?

Interesting experiment. But it does seem like there are increasing numbers of languages trying to crowd into the same spaces.

nu11ptr
0 replies
4h43m

Yes, but languages don't compose well. For example, you can't take Swift because you like all the things the language does and then add in first class support for Linux and Windows. Thus, anytime a language doesn't align with EVERY thing you need it to do... a new language evolves.

udev4096
0 replies
3h13m

Not this again. How many languages do we need? I am having a good time with Go and Python!

ubj
0 replies
2h52m

Interesting, but the intent seems similar to Chris Lattner's new Mojo language which arguably has similar characteristics and is further along in its development.

https://docs.modular.com/mojo/

talkingtab
0 replies
3h56m

My opinionated opinion: programming languages have three goals. 1) Be safe: don't make mistakes. 2) Be expressive: the Sapir-Whorf hypothesis. 3) Be easy to use.

JavaScript (new) is +++2, and ++3 (to me). Java is +++1 & --2, -3.

Personally I like OO ("has a") but think Class-ification ("is a") is a big mistake. Take a truck and a car. Start replacing the pieces of the car with pieces from the truck. When is a car not a car? Arbitrary. When does the car have a tail gate, a flat bed?

That is not a joke. Classes and Types are a way to think (Sapir Whorf) that makes you do strange things.

The interesting thing about Dada is the "borrow", "share" etc and seems very good. But then instead of wrapping it in a class can't we just use an Object?

stephen
0 replies
2h33m

If you're cloning parts of TypeScript, please bring along mapped & conditional types!

Feel free to experiment on the syntax, but the concept is amazing, especially if you're planning on being dynamic-ish.

snarfy
0 replies
32m

It's a bit frustrating that I have to click around hunting for an example of the syntax.

If you are making a new programming language, please do us a favor and put your Hello World syntax example right on the landing page.

richrichie
0 replies
4h16m

It was only a matter of time before even the creators of Rust grew tired of it being another C++.

p0w3n3d
0 replies
1h31m

It sounds like using mockups to design a programming language

melodyogonna
0 replies
2h44m

Mojo but for Javascript

luke-stanley
0 replies
5h6m

I was posting that "a_print" (an auto running async printer) might be better for one of the most common features a programmer uses.

I'm coming from Python, and for situations where people reach for C/C++ levels of performance and control, I think people are aware of the need for high-performance, memory-safe languages that are easier to use than Rust but with many of Rust's benefits still at least possible. So I am quite excited by the thinking from Dada and the people behind Rust, and I'm also intrigued by SerenityOS's Jakt language project. I hope the insecure "C code problem" has a smooth migration path that lets C/C++ devs, TypeScript devs, and others make progress quickly in a powerful way. What other sorts of alternative languages are there with aspirations like Dada's? Jakt? Vale (I understand a lead dev is poorly, so it's slowed a bit lately)? D? Go? Obviously AI will have a big impact. What language is going to have a big impact in this space?

jamsterion
0 replies
4h49m

Dada looks "almost" great! I especially like that it targets wasm; I believe wasm is the future of frontend and also backend with wasi. However, I believe that being gradually typed is a mistake. Dart started being optionally typed and then they made it fully statically typed for very good reasons. I hope they learn from Dart's experience there.

imjonse
0 replies
5h9m

Not new, launched in 2021 apparently.

greenie_beans
0 replies
38m

love this quote

I speak only of myself since I do not wish to convince, I have no right to drag others into my river, I oblige no one to follow me and everybody practices their art their own way.

Tristan Tzara, "Dada Manifesto 1918”
BobbyTables2
0 replies
4h20m

Sounds a bit like Python but with actual & optional runtime type checking?