
A Tour of the Lisps

whartung
41 replies
1d20h

Regarding Guile, and, mind, I'm on macOS, but I found it not easy to get started with.

I'm looking for a compiled Scheme or Lisp. By that I mean, I want "prog.ext" to create the executable "prog".

I want this because I want to make some command line utilities, and I would like the actual code compiled (vs some p-code bundled with an interpreter).

I had tried to get started with Guile, as all of its packages looked attractive, but was stymied. It's been awhile, so I can't express details. It had to do with a combination of things like packages, modules, and I know I ran into a version issue (I think I wasn't running the latest version from my package manager, so it didn't have something in the documentation).

Anyway, for such an extensive system, I found it surprisingly frustrating.

I've been trying Gambit, which seems like it would be a really nice fit, but it's fighting with my Mac, because macOS doesn't put stuff in /usr/include (wchar.h in this case), and all of my efforts to fix that have failed (save I have not tried upgrading to whatever the latest macOS is).

Maybe I can try ECL, maybe that will work.

But, anyway, Guile, while attractive, just surprised me at how I wasn't really able to make it work for me enough to make me look elsewhere.

7thaccount
12 replies
1d20h

What about chicken scheme or chez scheme?

nerdponx
9 replies
1d20h

Chicken kind of felt like a dying ecosystem when I tried it, and it's also not very fast. The standard library itself is kind of limited, so if I couldn't find an egg for something I wanted to do, I felt very stuck.

Chez is good because it's supported by Akku, but I'm not sure if it will ever support R7RS. It does have a really nice FFI though, and the docs are very good.

hajile
8 replies
1d18h

R7RS isn't worth supporting right now.

R5RS is too small for real work unless you add tons of SRFIs, so they created R6RS for people who want to get stuff done. But the R5RS people got ticked off because "it's too big".

R7RS was supposed to be a compromise with a tiny R7RS-small, but later adding most of the R6RS features with R7RS-large. It's now been a decade and R7RS-large seems to be completely dead.

zilti
7 replies
1d17h

R7RS is very much worth supporting; there'll shortly be a Chicken Scheme 6 with full support for R7RS-Small.

R7RS-Large development is slow, but not at all dead.

hajile
6 replies
1d16h

It's been 10 years since R7RS launched and I believe around 15 years since R7RS-large began work. That goes a bit beyond "slow".

pjmlp
5 replies
1d5h

To put it in perspective, that is the number of years it has taken C++ to adopt modules and concepts (still ongoing), Java to research and slowly start deploying Valhalla, .NET to get something better than NGEN / .NET Native across all workloads and beyond Windows, and the Python ecosystem to migrate to Python 3...

hajile
3 replies
1d4h

We're talking the 2008 timeframe. When George Bush reigns and the biggest films are The Dark Knight and Iron Man. Halo 3, CoD4, and Crysis are new. Our latest generation of devs are just toddlers.

Stackoverflow is created.

The world is running on Windows Vista (XP actually, but that's another story).

Github is created.

Facebook pulls ahead of MySpace in user count for the first time.

Blu-ray finally beats HD-DVD.

People think the Large Hadron Collider will end the world when it turns on.

Randy Pausch's book "The Last Lecture" becomes a New York Times best seller.

First Android phone and iPhone 3g release this year with ARM11 CPUs. The App Store is brand new.

JS is still slow and Chrome releases to change that.

Amazon buys Audible.

Rust essentially doesn't exist.

AirBnb is founded.

Bitcoin doesn't exist yet (though the paper drops this year).

Memristor is finally proven to be possible.

The 46th Mersenne Prime is found.

USB-3 spec drops for companies to start working on.

Core i7, and Atom on 45nm are the latest CPUs (AMD also launched Phenom on 65nm). GTX280 and HD4870x2 are the fastest GPUs around and people are discussing how overpriced top-end GPUs are at $450-550 and complaining that these GPUs use 200-250w of power.

Tesla Roadster releases for about $100,000 (that first car, promised to the actual founder, is the one Musk later launches into space out of spite).

pjmlp
2 replies
1d3h

That wall of text doesn't change the point of my comment.

Not everything evolves at the speed of TL;DR attention span folks wish for.

hajile
1 replies
21h14m

We aren't talking about the implementations, we're simply discussing the spec. Can you name another language spec that took over 15 years from conception to completion?

Even Common Lisp which languished for a terribly long time due to infighting didn't take that long despite being a much more comprehensive and difficult spec.

pjmlp
0 replies
10h46m

If you had read my original comment carefully, you would have understood that specs were part of the description.

Just as an example, C++ concepts were originally presented in 2005 [0], dropped in 2009 [1], redesigned as Concepts Lite in 2013 [2], graduated to technical specification in 2015 [3], added to C++20 roadmap in 2020 [4].

That makes 15 years to fully work out a language feature and its related library functions, and even what came out after 15 years is a subset of the original proposal; I'm not going to bother counting the specification refinements after C++20.

Everything would go faster if it were us doing it, right?

[0] - https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n17...

[1] - https://isocpp.org/wiki/faq/cpp0x-concepts-history

[2] - https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n37...

[3] - https://en.cppreference.com/w/cpp/experimental/constraints

[4] - https://www.iso.org/standard/79358.html

shakow
0 replies
8h30m

And in the same time span, we have seen Elixir, Rust, Go, Julia & Kotlin come from the void into the mainstream.

timbit42
0 replies
1d18h

Also, Larceny (R7RS, UTF-8, compiles to x86, x64, ARM, C).

rcarmo
0 replies
1d20h

I've got similar interests to the OP and the parent comment. I had some fun with both Chicken and Chez, but building standalone/static binaries was a bit of a pain.

The interesting thing for me (a while back) was realizing that even doing something as simple as an HTTPS request would be a bit of a challenge: https://taoofmac.com/space/blog/2019/06/20/2310

nerdponx
3 replies
1d20h

I suggest Gauche as an easier-to-use Guile alternative. I had similar annoyances with actually getting Guile to just do what it says it's supposed to be able to do.

I've advertised Gauche frequently here on HN (including in another comment in this thread). I have no affiliation with the project, but I like it a lot and I think it's under-appreciated. Its author hosts its documentation under the domain name "practical-scheme.net" (https://practical-scheme.net/gauche/man/gauche-refe/index.ht...) and I think the name is very well deserved.

Paul-Craft
1 replies
1d12h

Easier to use in what sense?

nerdponx
0 replies
23h5m

Easier to just install the interpreter using any package manager, open up the docs, and start writing useful scripts. All of the batteries included in the standard library go a long way towards helping with that as well.

I remember fighting with Guile a bit to achieve the same goal, although I don't remember the details at this point, and my experience might be outdated now. I do remember that some things which are included in Gauche (e.g. JSON) are 3rd-party libraries in Guile. Also I don't understand why some of their libraries are called "ice-9". What's with the Cat's Cradle reference? Is it a pun? Is it someone's username? Is it a "contrib" library of some kind that was absorbed into the standard library? Why didn't the name get changed?

Zambyte
0 replies
1d13h

Guile is really designed with embedding into a program in mind. Gauche is definitely more geared towards their use case, but I think their issues with Gambit can probably be resolved with a few environment variables or command line options.

matheusmoreira
3 replies
1d8h

By that I mean, I want "prog.ext" to create the executable "prog".

An interpreted lisp could fulfill this requirement. I created one such lisp. I came up with an ELF code embedding method that lets me add lisp code to the interpreter executable itself. I can add lisp modules and they become loadable. I can add a main function and it gets executed automatically.

I wrote about it here:

https://www.matheusmoreira.com/articles/self-contained-lone-...

The lisp itself is not ready to be used for anything serious at the moment. I'd really like to see this ELF embedding show up in other languages though. It's the best solution I found to the perennial self-contained executable problem.

lispm
2 replies
1d7h

This has nothing to do with interpreted or compiled. SBCL compiles all code to native code - it has optional interpreter functionality, which is not needed most of the time.

Creating an SBCL executable for Linux is simply a call to dump the application: add the runtime and save it to some place. Then there is an executable on disk. The only catch is that the executable is not small or tiny, but it includes the compiler/loader/REPL/..., which is useful.

Many other Lisps also can create executables. A LispWorks executable application on my Linux/ARM64 starts with roughly 7MB and it has unused stuff removed by a "treeshaker".
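
To sketch how simple that dump call is (the file and function names here are my own, not from the comment above):

```lisp
;; build.lisp -- hypothetical example; run as: sbcl --load build.lisp
(defun main ()
  (format t "hello from a compiled image~%"))

;; Dumps the running image plus runtime as the executable "prog",
;; which calls MAIN on startup.
(sb-ext:save-lisp-and-die "prog"
                          :executable t
                          :toplevel #'main)
```

The resulting `prog` runs standalone, carrying the whole SBCL runtime inside it.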

matheusmoreira
1 replies
1d4h

This has nothing to do with interpreted or compiled.

I agree with you. That's what I tried to express in my post. They asked for a compiled lisp but the actual requirement was self contained executables which interpreted languages could also fulfill.

lispm
0 replies
1d3h

Yes. Many Lisp implementations can even mix modes in one executable: one, two, or three of source interpreter, byte-code interpreter, and native code.

djha-skin
3 replies
1d20h

Janet is the perfect lisp for cli tools. https://janet.guide. It is my favorite.

nerdponx
1 replies
1d19h

I'm still not sure what the Janet value proposition is compared to an R7RS Scheme. Not that it shouldn't exist! I just don't quite get the use case either.

ianthehenry
0 replies
1d17h

Janet’s value proposition is pretty similar to Lua — easy embedding, simple C API, minimalist runtime. But Janet improves on some weird Lua warts (block-scoped variables by default, 0-indexed collections, separate types for sequential and associative arrays). Plus it bundles a pretty nice standard library.

I’ve seen this comparison before, so clearly Janet isn’t doing a good job of explaining itself, but I think the only thing Janet and Scheme have in common are a few parentheses. Different core data structures, different feelings about mutability, completely different macro system…

Guile and Janet share PEGs (sorta) and embeddability but I didn’t think those were standardized at all. (I don’t really know any schemes.)

sph
0 replies
1d19h

, should be (unquote) and ,@ should be (unquote-splicing) while ; is for comments, yet Janet completely disregards this convention. It's stupid, but I dislike Janet because of this.
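
For reference, here is the Scheme convention in question (a quick sketch; `x` is a placeholder of mine):

```scheme
(define x 42)
'x                  ; reader shorthand for (quote x)
`(a ,x)             ; (quasiquote (a (unquote x)))     => (a 42)
`(a ,@(list 1 2))   ; (unquote-splicing ...) splices   => (a 1 2)
; and this is a comment, per the convention Janet breaks
```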

uticus
2 replies
1d15h

Guile always surprised me not because of anything LISPy, but because of its relationship with TCL

https://vanderburg.org/old_pages/Tcl/war/

Pinus
1 replies
1d10h

In RMS’ message that started that discussion, he stated that the GNU project intended to provide two languages — one Lisp-like (which I assume eventually became Guile), and one with a more algebra-like syntax. Did anything ever come out of the latter?

Y_Y
0 replies
1d9h

The current advice from GNU is that Guile is the one true language for the project[0]. There are a number of languages that are currently part of GNU[1], but none that seem to meet the description of algebraic and blessed.

[0] https://www.gnu.org/prep/standards/html_node/Source-Language... [1] https://www.gnu.org/manual/blurbs.html

coliveira
2 replies
1d19h

My understanding is that guile is an extension language. Create a prog.c and link to guile, that's how I think you can create an executable.

mst
0 replies
1d4h

I originally got a PAUSE id to upload to CPAN to add support for returning continuations from Guile code to Perl and being able to resume them as a function call on the Perl side later.

That let me write linear async-await-ish logic code in Guile and then have all the I/O and grungy stuff handled by an event driven layer in Perl space.

(this was about 20 years ago, give or take, but I still periodically get accused of writing lisp in whatever language I'm implementing things in at the time ;)

davexunit
0 replies
1d19h

Creating a C program that links to libguile is kind of a legacy use-case at this point. It was the original purpose of Guile back when it was but a simple interpreter. The trajectory for the past decade or so has been to build up Guile as a platform for writing your entire application. Rather than embedding an interpreter in a C program, the recommended approach is to write a Scheme program that uses the C FFI if and when necessary. The interpreter was once written in C but is now written in Scheme (a minimal C interpreter is kept around for bootstrapping purposes.) There's a sophisticated optimizing compiler that emits bytecode for the Guile VM as well as a JIT compiler. A new garbage collector and a Wasm compiler backend are currently being developed. The big missing piece is ahead-of-time native compilation, but the work on the Wasm backend will help that along as it needs to solve a lot of the same problems.

armchairhacker
1 replies
1d18h

Try Racket. It’s easy to set up (from experience on macOS), easy to learn, batteries included, and a compiled Scheme (create binaries via `raco`, https://docs.racket-lang.org/raco/exe.html).

Its real highlights are very powerful macros and the ability to override the reader, so you can effectively create arbitrary languages (examples include reimplementations of Java, Lua, and Datalog, and a documentation generator with embedded Racket called Scribble). IMO it’s a research language first and foremost. But it has unusually good production support and online resources (I mean it when I say it’s easy to set up and learn), so I think it fits everything you asked for.
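
A minimal sketch of that workflow (the file and output names are placeholders of mine):

```racket
;; prog.rkt -- hypothetical example program
#lang racket/base
(displayln "hello from a compiled Racket binary")

;; Build from a shell:
;;   raco exe -o prog prog.rkt        ; standalone executable
;;   raco distribute prog-dist prog   ; bundle the runtime for other machines
```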

mark_l_watson
0 replies
1d3h

+1 for the Racket on macOS recommendation. It is really easy to make your own local libraries (though different from small local libraries in Common Lisp using Quicklisp), and it lives nicely on macOS. I wrote a Racket book, which I am still actively adding to, that you can read online: https://leanpub.com/racket-ai/read

zilti
0 replies
1d17h

I can highly recommend Chicken Scheme, it does exactly what you want :) It also has a nice, friendly community, and a couple hundred extensions.

zem
0 replies
1d9h

chicken is a very pleasant compiled scheme, with a friendly community and a decent set of packages available.

wglb
0 replies
1d19h

I'm looking for a compiled Scheme or Lisp. By that I mean, I want "prog.ext" to create the executable "prog".

sbcl compiles and generates an executable quite nicely.

tmtvl
0 replies
1d14h

The creator of Programming Language Benchmarks 2 is on Apple Silicon, and apparently SBCL compilation works out of the box: <https://github.com/attractivechaos/plb2>.

spit2wind
0 replies
23h22m

Guile itself is cool. Its documentation, however, sucks. It's really quite terrible. The manual is a haphazard mix of several reference documents, a half-hearted tutorial, and a whole lot of jargon that's not explained.

It seems there was a period where Guile was considered a serious contender for the space now occupied by Python. My impression is that GNU lost mindshare and the opportunity because the Guile documentation takes a simple, elegant language and makes it inaccessible for newcomers. Python was known for being well-documented and easy to approach.

I'm curious what others' take on the history is.

mark_l_watson
0 replies
1d3h

Years ago I used Gambit Scheme heavily on macOS. Back then I always liked to build it from source. The source repo also has examples that are useful to see how to do common things. I found Gambit was really cool for building little command line utilities, but now I get good mileage from just making small text command line utilities in SBCL, or using LispWorks Pro.

davexunit
0 replies
1d20h

I'm not a mac user but I got someone who had never used guile before set up using homebrew to get guile and emacs, and then the guile homebrew tap (https://github.com/aconchillo/homebrew-guile) for guile libraries.

EuAndreh
0 replies
3h54m

I would like the actual code compiled (vs some p-code bundled with an interpreter).

What's wrong with p-code bundled with an interpreter?

Some libc+cc combos add code to do things the target architecture doesn't support natively, like ints of certain sizes. Between that and the example you gave, where on this spectrum do you consider it not to be compiled anymore?

Unless you're debugging the emitted binary instructions, why does the compiled output matter if it reaches the desired CPU and memory requirements?

sph
36 replies
1d20h

I believe this article is selling Guile short. I think it is the most pragmatic Lisp around.

I love Lisp's ideas but I find all modern implementations terrible. I would go so far as to say Lisps aren't popular because what we have today is not good enough. Common Lisp is the most advanced one, but it is like C++: it does everything and the kitchen sink. Design by committee. You want functional programming AND imperative? You want documentation that reads like an IBM mainframe manual? You want a standard library with names as cryptic as ANSI C? We got all that. But frankly, it is the one to build serious production software with.

Racket is the best for a beginner, but it keeps having that academic, "we made it for the kids" feel of being easy to understand but not very pragmatic for a seasoned programmer. And it is one of a kind, so a good Racket developer might never know Lisp itself.

Schemes are fun, but the only standard everyone agrees on relegates them to small embeddable languages, almost like Lua; they are almost not made to live on their own. Since the standard is very simple, there are half a million implementations that are not very practical. But among these there is Guile, which has pretty decent documentation AND a standard library (even a built-in PEG parser!), is actively developed, and in my opinion is very underrated and the only Lisp worth my time these days.

To be honest, the most popular Lisp in the world is Emacs Lisp bar none. How's that Guile port coming along anyway?

velcrovan
11 replies
1d20h

Racket is the best for a beginner, but it keeps having that academic, "we made it for the kids" feel of being easy to understand but not very pragmatic for a seasoned programmer. And it is one of a kind, so a good Racket developer might never know Lisp itself.

As someone trying to write definitively about cold hard reasons Racket remains in relative obscurity, I would be interested to know specifics you can provide for this view. Is it just academia vibes? Or are there specific aspects of Racket that in your view repel “seasoned programmers”?

I notice you praise Guile for having decent documentation and standard library. How do you believe Racket’s documentation and standard library compare to Guile’s?

a1369209993
8 replies
1d18h

Or are there specific aspects of Racket that in your view repel [competent programmers]?

This is anecdotal and over a decade ago, but quite specific: my first exposure to Racket was with a version of the language (I think it was some pragma-type thing like `#lang`, but it's been a while) allegedly intended for CS classes, and included something along the lines of [exact spelling and phrasing almost certainly differ]:

  > (cons 'foo 'bar)
  error: cons: 'bar is not a list
I immediately deleted the Racket installation and added it to the same set of blacklists as the Java Virtual Machine (and hypothetically any COBOL implementations I ever encounter). That any version of the language would behave that way, much less one purportedly intended for people who are only just learning LISP, is an insult to everyone who ever bothered to actually learn LISP in the first place.

This is the sort of "Dangling by a Trivial Feature"[0] thing that there's just no excuse for, like trying out a new text editor and discovering that pressing backspace inserts the text "^H", except it was clearly deliberately aimed at people who didn't know any better and would 'hopefully' not realise there was a problem, rather than just being a local idiosyncrasy[1], because cons did work correctly under other dialect settings.

Actively trying to fuck over newcomers who are only just learning LISP (or programming in general) deeply offends me.

0: https://prog21.dadgum.com/160.html

1: In which case I still wouldn't use Racket myself, but I wouldn't be actively opposed to anyone else doing so.

drekipus
6 replies
1d18h

I've dabbled in a few lisps and definitely a noobie but I have absolutely no idea what you're talking about

logicprog
5 replies
1d14h

That language behavior fundamentally misrepresents how a proper cons should actually work. A cons is a link between any two values, represented as essentially two joined pointers, one to each value. It represents a linked list when cons cells are nested, but that is just something that falls out of the far more beautiful and fundamental core axiom of the cons cell, not the whole of what cons cells are. Requiring the second argument of cons to be a list (i.e. more nested cons cells or nil) bastardizes what a cons cell is, reducing it to a menial operation for producing lists, one that means the same thing as "prepend", instead of an elegant core abstraction you can derive further things from. Doing that, especially in a language for teaching CS, in the name of talking down to students, is unforgivable. I learned Common Lisp at 12, partially from Land of Lisp and partially from a Lisp 1.5 manual and experimentation. Properly learning the core abstractions was vital.
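
Concretely, what a standard cons permits (sketched in Scheme; any conventional implementation behaves this way):

```scheme
(cons 'foo 'bar)       ; => (foo . bar), a "dotted pair" -- not a list
(cons 1 (cons 2 '()))  ; => (1 2), a proper list is just nested pairs
(car (cons 'foo 'bar)) ; => foo
(cdr (cons 'foo 'bar)) ; => bar
```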

soegaard
2 replies
1d6h

Just to be clear: the teaching languages aren't Racket. The teaching languages consist of a series of small languages designed for beginners. The idea is that a beginner gets more precise error messages.

Analogy: On a calculator, what should the result of sqrt(-4) be?

Depends: If the user is a third grader, then maybe "error, can't take square root of -4". If the user knows complex numbers, then "2i".

The teaching languages do just that. They are linked to the book, which tells you when to "level up".

Conflating the teaching languages and Racket is not helpful.

a1369209993
1 replies
19h22m

"error, can't take square root of -4"

That has nothing to do with whether the user knows better than to trust such an answer; it's because they (likely implicitly, which is admittedly unfortunate) asked for a real-valued result, which indeed can't be done. E.g., in Python:

  >>> import math,cmath
  >>> math.sqrt(-4)
  ValueError: math domain error
  >>> cmath.sqrt(-4)
  2j

velcrovan
0 replies
16h5m

I remember the first time I ever used Python, the very first thing I did was type that exact thing. When I saw that error I immediately deleted Python, wiped my hard drive and drop-kicked my Dell Vostro computer all the way into the sun. It was so infuriating to me that that any programming language lie to people about basic math, deliberately keeping them ignorant and stupid, and actively preventing them from understanding the simple beauty of complex numbers.

kazinator
1 replies
1d14h

requiring the second argument of cons to be a list

A language called ISLisp (ISO Standard Lisp) also does this nasty thing.

a1369209993
0 replies
1d12h

To be scrupulously fair, that seems to fall under

just being a local idiosyncrasy

albeit a stupid one; whereas Racket actively targets beginners in particular.

Also, this is ISO we're talking about here; we probably shouldn't expect anything technically competent from them that isn't just a rubber-stamped version of someone else's specification, and they didn't call it "ISO Standard Common Lisp" or "ISO Standard Scheme", presumably for good reason.

samth
0 replies
1d15h

It is indeed the case that the student languages in Racket use cons only for lists. Improper lists are an extra complexity that people just learning to program don't need.

sph
1 replies
1d19h

Sorry, I don't have anything more concrete than "general vibes."

Like, there are thousands of libraries and modules in Racket, sometimes competing with each other, that feel like they were developed for a school project and kept around. And it is no secret that Matthew Flatt, one of the lead developers, is actually a professor at the University of Utah.

In my experience, it is pretty easy to tell when a language and ecosystem is developed in academia, or developed by software engineers by trade. They have two very different goals and approaches to the same problem. The former might focus on educational purpose for newbies, the latter on shipping production-ready software for professionals.

(In my humble opinion, this is one of the reasons Smalltalk never went very far in the real world. Too much focus on the educational aspect of it.)

igouy
0 replies
1d

Is "Smalltalk never went very far in the real world" also based on nothing more concrete than "general vibes"?

"A Shipping Industry Case Study"

https://seaside.gemtalksystems.com/docs/OOCL_SuccessStory.pd...

How do you know what corporations used to develop custom business systems?

nerdponx
11 replies
1d19h

The problem with CL is not that it does everything, but that it does everything with a mish-mash of inconsistent idioms, thick layers of jargon, and implementation-specific behavior in places where you wouldn't really expect, leading to a combination of implementation lock-in and dependence on 3rd-party libraries that is stronger than one might expect at first.

My (least) favorite example:

  (nth needle haystack)
  (aref haystack needle)
Ugh! But then again, what other language provides both AREF and ROW-MAJOR-AREF?
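
To spell out the inconsistency (standard Common Lisp; the results follow from the spec):

```lisp
(nth 1 '(a b c))                     ; => B   (index first, list second)
(aref #(a b c) 1)                    ; => B   (array first, index second)
;; ROW-MAJOR-AREF addresses any array through one flattened index:
(row-major-aref #2A((1 2) (3 4)) 3)  ; => 4
```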

And of course CL tooling is just weird if you aren't used to it.

Quicklisp is amazing! But it has no CLI and doesn't use HTTPS for package downloads.

Swank and Slynk are amazing! But clients other than Slime and Sly are second-class citizens.

SBCL can generate a compiled binary, cool! But the routine is called "save-lisp-and-die".

ASDF expects you to either put all of your Common Lisp projects in a single directory, or hard-code project locations in a config file. Does Go still do that too?

Ancient alien technology for sure, but at least moderately damaged upon crash-landing. Ironically, the only piece of CL tooling that feels mostly not-weird is called Roswell.

Of course all of this stuff is completely free-as-in-beer and developed almost entirely by volunteers in their spare time. Hard to criticize: they built it, and I didn't. But there is definitely a cumulative oddball feeling to it.

sph
3 replies
1d19h

You hit the nail on the head. And I like the comparison with alien technology: in some ways it feels centuries ahead of our mainstream languages, but in others it feels like they never developed stuff we take for granted in 2024, like first-class hashmaps, Unicode and decent date/time functions.

gmfawcett
2 replies
1d13h

SBCL has excellent Unicode support, for one, and hash tables are first class objects in all CL implementations. Won't disagree about date and times.

tmtvl
1 replies
1d5h

By 'first-class hashmaps' I believe SPH means a type of quasiquoted syntax for hash table initialisation. Something like...

  (let ((my-map `#M((foo ,foo) (bar ,bar) (baz ,baz))))
    (gethash 'foo my-map))
EDIT: almost forgot, thank you for pointing out SBCL's support for Unicode, reading its documentation I found a couple of functions I was in need of.

gmfawcett
0 replies
1d

Thanks. I may have taken "first-class" too literally. :)

Constructor syntax is nice-to-have, but I don't think the lack of it is evidence of alien technology. Java has the same problem, for example, and Java is about as non-alien as you can get. Overall, hash maps are quite well done in CL, and good surface syntax is easily found in popular libraries, if that's a pain-point (serapeum, etc.).

I'm happy to help you find sb-unicode. :) I rarely use CL these days, but when I do, I'm so grateful to the SBCL team for their amazing and ongoing work.

floren
1 replies
1d19h

Some day I'll do something useful with the debugger in SLIME but so far it's mostly "look at the error message, find the one frame in the stack that tells me something useful, hit either continue or abort"

vindarel
0 replies
1d19h

Quick help: look at Emacs/your editor menu for commands, in Emacs type "v" to go to the buggy line, fix your bug, compile the function (C-c C-c), type "r" on the buggy frame to restart it from where it failed, and voilà. If you were processing something costly, you didn't have to restart it from zero.

vindarel
0 replies
1d19h

Qlot works well for project-local dependencies. https://qlot.tech/

https for QL: https://github.com/rudolfochrist/ql-https

another package manager: https://github.com/ocicl/ocicl ("built on the world of containers")

Can't really refute the weird feeling. But there's often a reason! (like, it makes totally more sense to install a library from within the Lisp REPL and not from the terminal, we can use it right away)

schemescape
0 replies
1d17h

Good list! I, too, have many complaints about Common Lisp.

But for a language whose standard hasn’t been updated in roughly 30 years, it holds up impressively well!

mark_l_watson
0 replies
1d3h

While you speak the truth about CL, I have been using CL for 40+ years and I can be happy by using my own preferred subset of the language. BTW, I do the same with Haskell. I am enthusiastic about Haskell, but a novice, and I use a shockingly small subset of the language, language extensions, and available libraries.

kazinator
0 replies
1d19h

what other language provides both AREF and ROW-MAJOR-AREF

C? :)

Capricorn2481
0 replies
1d18h

SBCL can generate a compiled binary, cool! But the routine is called "save-lisp-and-die"

...And?

evdubs
2 replies
1d15h

Racket is the best for a beginner, but it keeps having that academic, "we made it for the kids" feel of being easy to understand but not very pragmatic for a seasoned programmer. And it is one of a kind, so a good Racket developer might never know Lisp itself.

I find Racket to be more pragmatic than Java, Python, and JavaScript. So much so that I trade options with it. Edit: "pragmatic" not for all things, but for many things, including GUIs and charts. Second edit: whoops. I see I've already pointed out this Racket program to you.

https://github.com/evdubs/renegade-way

nequo
1 replies
1d14h

This is a very cool project!

Have you considered using Typed Racket for it? If yes, then how do you see the tradeoffs?

evdubs
0 replies
1d11h

I sort of evaluated Typed Racket before I started writing lots of Racket code, and my experience was that "Contract" Racket (regular non-Typed Racket) was more ergonomic and idiomatic. I think the contract system is great and I use it, for example, when sending messages to Interactive Brokers:

https://github.com/evdubs/interactive-brokers-api/blob/maste...

This file just includes request message definitions and to-string (->string) implementations.

The overall program's performance is adequate for me, so if the contract system is causing overhead, I don't particularly care about it. Maybe that's a scalability concern for more performance-demanding programs which could benefit from Typed Racket.

hajile
1 replies
1d18h

It's here that I'll again get on my soapbox about Scheme.

SRFIs suck really badly. The language isn't useful without them, but every implementation uses a different subset. You have to find out which SRFI does what you want (there may be more than one), then find out what number it is, then look up the SRFI itself, because the spec is pretty much the only documentation you're going to get.

This entire situation sucks for experienced devs and simply kills most beginners before they even do anything.

Things were supposed to get better with R7RS. It was supposed to have a tiny core language to keep the R5RS crowd happy, but add the R6RS features in the large edition that are needed to get real work done.

R7RS-small released in 2013. That's TEN YEARS ago and it STILL doesn't have a standardized library because apparently nobody wants to work on R7RS-large.

The whole thing is an unusable mess for no good reason and it's killing the language.

zilti
0 replies
1d17h

People are working on R7RS-Large, a lot of the standard is decided upon by now. I suspect, though, it'll take another five years...

https://github.com/johnwcowan/r7rs-work/blob/master/WG2Docke...

nsm
0 replies
1d16h

I would like to disagree with this view of Racket.

I'm a fairly experienced professional programmer with a lot of systems experience, and I find Racket an extremely well thought out, competently implemented and high performance (particularly with the switch to Chez Scheme) language.

I wrote a bit about it at https://nikhilism.com/post/2023/racket-beyond-languages/ and Bogdan Popa and Alex Harsányi have done amazing stuff with the language https://defn.io/ https://alex-hhh.github.io/index.html including a sophisticated web framework, a Kafka GUI client and so on.

I think the only deficiency for a "pragmatic for a seasoned programmer" (beyond the standard "no FAANG is sponsoring it" related lack of resources and presence in popular discussion) is the lack of good editor integration beyond emacs. That said, I've found DrRacket perfectly reasonable for my uses, although clunky. I'm not saying it can replace something like Python, but that is more because of the smaller ecosystem and contributor base than anything wrong with the language itself.

neilv
0 replies
1d19h

Racket is the best for a beginner, but it keeps having that academic, "we made it for the kids" feel of being easy to understand but not very pragmatic for a seasoned programmer. And it is one of a kind, so a good Racket developer might never know Lisp itself.

I know why you got that reasonable impression, but there's more to it, which can make it much more interesting to practitioners...

The gang-of-professors reasons for Racket are to have a research platform and an education platform.

However, at the same time, Racket (then called PLT Scheme) attracted a disproportionate share of a user community of high-powered programmers and software engineers, like you also see with Common Lisp and some other languages. And one of the professors, Matthew Flatt, happens to be a great systems programmer, with good software engineering sensibility.

There's some really solid stuff in Racket, and I've used it on important systems with hyper-productive teams (you'd think they had a hundred engineers, when it was only a few).

The impression you get might come from two things:

1. The gang-of-professors decides most of the customer-facing image for Racket, in various ways, such as writing some introductory books, promoting their zero-prior-experience-student-oriented IDE as the Racket IDE, odd language on the Web site, etc.

2. At times they've also made direction decisions that suit either their education&research goals or, secondarily, their idea of what professional practice wants. On the latter, of course, people who've been professors for decades aren't going to have all the insights of the best people who've been practicing in industry that same period.

If you want to approach Racket as a serious and skilled practitioner, one way is to focus on the Guide and Reference books (I wish these were consolidated), and expect that you'll have to creatively build much of the ecosystem bits you need from scratch. That can actually be a great situation to be in, if you're up for it.

The governance model, last I checked, is a benevolent dictatorship by the gang-of-professors, and the industry practitioner user community is currently small, so just be aware of that going in. But, there's an open ecosystem for packages, more empowering than most languages, due to all the language-extension mechanisms of Racket, so you can "build out" Racket in a decentralized way, to some extent. If you do that, you'll want to keep on top of what the gang-of-professors are doing, and give them a heads-up on things you're doing, to minimize unpleasant conflicts.

I'm not going to make a sales pitch for it; just wanted to add some info for serious practitioners who're already interested, on what to expect, and how to approach it.

natrys
0 replies
1d9h

How's that Guile port coming along anyway?

It's just not. It was someone's GSoC project, and they basically stopped working on it in 2015 and nobody picked up the slack since: https://git.hcoop.net/?p=bpt/emacs.git

We only got Nativecomp in elisp because a very talented and persistent compiler engineer from ARM saw it through. With the basics done, I think they want to tackle doing more aggressive optimisations with libgccjit. But best not to take things for granted when the bus factor is 1.

mark_l_watson
0 replies
1d3h

I also gave up on Guile on macOS a long while ago after having fun with it on Linux. However, now you can work around the lack of Guix on macOS by using this brew tap: https://github.com/aconchillo/homebrew-guile

davexunit
0 replies
1d19h

Yeah Guile should get more love! Very practical lisp that punches above its weight. I've had the fortune of writing Guile as my full-time job for the past year and I've been a user for a lot longer so I can attest that you can get stuff done with it.

a-french-anon
0 replies
1d6h

How's that Guile port coming along anyway?

https://github.com/lem-project/lem (as a realistic climacs) is probably our only hope for a "new Emacs".

NeutralForest
0 replies
1d19h

Lol, a bit scathing but mostly fair from my experience. I haven't used Lisps a lot but I'm a pretty big Emacs user and I've played around with the languages you mentioned and I feel somewhat the same.

I'll just add that the ideas and many libraries around Racket are super cool. There's really a bunch of documentation and tutorials for a lot of things, it's fun.

netcraft
36 replies
1d20h

after decades of programming, lisp has always been that thing that intrigues me endlessly, but I haven't had a chance to actually wield it myself. but so many people whose opinions I respect love lisps, and not just for a little while. Clojure especially. Other ideas like datomic and xtdb are also high on my list of things I need to experience. I think I'm going to have to make an intentional effort to find a lisp job next time.

epgui
27 replies
1d20h

Doing Clojure at CircleCI, a few years ago, redefined who I was as an engineer… FWIW. It’s an incredible language.

Today I work with python (sigh) and every day I long for Clojure.

behnamoh
26 replies
1d19h

I keep hearing sentiments like this but then I wonder, if Clojure or <another awesome Lisp> is so much better than Python <or some other mainstream language>, then why are we still writing in those languages? If it's because of libraries, then the question is: Why do people write libraries for these languages and not the Lisp ones?

crote
8 replies
1d16h

Because it's weird enough to be off-putting, and it doesn't solve a real-world problem.

If you look at the arguments in favor of Lisp, they'll often boil down to it being "beautiful", "elegant", "flexible", or even "magical". It's a very minimal language which allows you to do absolutely anything - a lot of which would be an absolute nightmare in most other languages. You could implement just about any programming paradigm in Lisp if you want to. I believe this makes it very appealing to computer scientists, or other people with a more mathematical background. However, this flexibility is also a massive footgun: if you're not careful your junior developer might end up reinventing the wheel a dozen times, and writing completely unmaintainable code in the process.

On the other hand, most other programming languages look kind-of the same. They are all quite opinionated about how stuff is supposed to work, with a lot of hardware details leaking into the language. However, they are very easy to learn: most of it is just "do a bunch of operations in succession" taken to the extreme. Anyone who knows C# will be able to pick up a basic understanding of C, Python, or JavaScript well within a day, and a lot of people new to programming will be able to write not-completely-terrible code within a month or two when given the right guidance. They don't need to know about all the abstractions and technical details to be a functioning member of your team.

When you're running a business, you don't care about any of that beauty or flexibility. You want code which is quick and easy to write, trivial to read, and understandable by even the worst programmer in your company. In practice that means in your comparison you'll be choosing Python over Lisp. Heck, Go was developed entirely around this principle, cutting out as many language features as possible. And because all the other companies are making the same choice there will also be way more libraries for Python, making the gap even larger.

So yeah, in a stroke of irony Lisp isn't more popular because it is better.

evdubs
5 replies
1d15h

Because it's weird enough to be off-putting, and it doesn't solve a real-world problem.

Nonsense. Quoth Wikipedia, "Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, conditionals, higher-order functions, recursion, the self-hosting compiler, and the read–eval–print loop." These are all solutions to real world problems.

So many languages borrow features that were originally developed in Lisp.

However, this flexibility is also a massive footgun: if you're not careful your junior developer might end up reinventing the wheel a dozen times, and writing completely unmaintainable code in the process.

Plenty of code written in Lisp looks just like your Python, Ruby, JavaScript, Java, et al programs where you're defining structures or classes, writing and calling functions, importing useful libraries, etc. Plenty of this Lisp code is just as maintainable as the non-S-expression code.

Anyone who knows C# will be able to pick up a basic understanding of C, Python, or JavaScript well within a day

Same with Lisp. It's just:

(function arg1 arg2)

Instead of

function(arg1, arg2)

You want code which is quick and easy to write, trivial to read, and understandable by even the worst programmer in your company.

There is plenty of "lowest common denominator" code like this written in Lisp. Much Lisp code is not buried under inscrutable macros, just like not all Java code is buried under layers of inscrutable classes.

baq
4 replies
1d5h

(function arg1 arg2)

Instead of

function(arg1, arg2)

That isn't the problem of learning Lisp. The problem with learning Lisp is

   `(a ,b c)
and the infinite power thus infinite responsibility of the wizard programmer.
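Concretely, quasiquotation mixes literal structure with evaluated holes; a minimal Scheme sketch:

```scheme
;; ` (quasiquote) builds a template, , (unquote) evaluates a hole,
;; and ,@ (unquote-splicing) splices a whole list into the template.
(define b 2)
`(a ,b c)            ; => (a 2 c)
`(1 ,@(list 2 3) 4)  ; => (1 2 3 4)
```

The syntax is small, but it is the gateway to macro writing, which is where the "wizard" problem starts.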

There is plenty of "lowest common denominator" code like this written in Lisp. Much Lisp code is not buried under inscrutable macros, just like not all Java code is buried under layers of inscrutable classes.

But the problem remains: it only takes one wizard to make reading code impossible by outsiders.

The canonical way to write a Lisp system is to define a DSL (or, should I say, system-specific language) and implement the system in that. But this means no one outside of the language/system developers knows the language; Lisp tends to be write-only by design, not in the line-noise sense, but in the obscure-foreign-language sense.

You can certainly choose not to do that, but if you choose not to, why pick Lisp? You can argue that all software systems develop their internal system-specific languages, and I agree, but an overall strict language provides a rigid framework for anchoring understanding, something you have to consciously work for with Lisp. (Note that Java pre-2020 was so limited in expressivity it wasn't any better, because all the complexity ended up in the class hierarchies you reference; it was more similar to Lisp in that way than many realize.)

lispm
1 replies
1d5h

it only takes one wizard to make reading code impossible by outsiders.

One Java architect can make the code unreadable, even for himself.

canonical way to write a Lisp system is to define a DSL

It's popular, but not canonical. But even those DSL language patterns can be learned. In many non-Lisp projects, the DSL gets implemented in C/C++.

baq
0 replies
1d5h

One Java architect can make the code unreadable, even for himself.

I agree, even in my original post.

But even those DSL language patterns can be learned. In many non-Lisp projects, the DSL gets implemented in C/C++.

The friction is everything, otherwise we'd all be programming Turing machines. In Lisp, it is zero, by design; it can be good or bad, as most things in engineering. Consequences of uncontrolled growth of a DSL vs imposing artificial process limitations for the future of the software project must be considered explicitly; this is automatic when the cost of starting a DSL is non-zero.

kazinator
0 replies
1d2h

The canonical way to write a Lisp system is to define a DSL

That's just a subtle lie popularized by Paul Graham in "Beating The Averages".

evdubs
0 replies
23h35m

The problem with learning Lisp is `(a ,b c)

If you can understand "String ${interpolation}", you can understand list quasiquoting.

But the problem remains: it only takes one wizard to make reading code impossible by outsiders.

This really is a Lisp meme. There are plenty of Lisp wizards like Guy Steele, Rich Hickey, and Matthew Flatt. The wizards perform the magical act of making code legible and intelligible. I have stumbled around several Clojure and Racket code bases and never felt like "I should understand this code but the features of Lisp make it impossible to know for sure." "Infinite power" macros and whatever are really only used sparingly and generally when it's impossible to achieve a goal otherwise. No one is doing (define + -).

But this means no-one outside of the language/system developers know the language, this means Lisp tends to be write-only by design - not in the line-noise meaning, but in the obscure foreign language meaning.

I, as a Racket novice, have been able to add candlesticks [1] to the plot library without learning much about it. I have also debugged DrRacket (an IDE) to uncover that Racket GUI operations performed significantly worse if non-integer scaling was used [2]. At no point when I was going through Racket internal code did I ever feel it was write-only. In fact, it was quite convenient to modify Racket internal source code, rebuild, and test changes in a way that would be much more difficult in Java or C++.

You certainly can not do that, but if you choose to not do that, why pick Lisp?

Built in rationals.

The ergonomics of defining [XML / JSON / etc] data as S-expressions and doing things like pattern matching on that data.

Great, coherent integration between GUIs, plots, statistics functions, and all the other bits of Racket's batteries inclusions.

You still have access to all the other great features that other languages have borrowed from Lisp like REPL development, package managers (edit: maybe package managers were not a Lisp invention), good IDE tools, etc.

It is nice to learn the meta-syntax of parentheses once and know that the code will always look like that. No need to consider if some feature is implemented as a syntactically different new keyword, annotation, function call, or whatever. It'll always be a (feature).

something you have to consciously work for with Lisp.

Plenty of languages have style guides, linters, static analysis tools, etc. to make sure the code conforms to certain restrictions. Lisp feels no different in this regard.

[1] https://docs.racket-lang.org/plot/renderer2d.html#%28def._%2...

[2] https://github.com/racket/gui/commit/20e589c091998b0121505e2...
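To make a couple of those points concrete, a small Racket sketch of exact rationals and `match` over S-expression data:

```racket
#lang racket

;; Exact rationals: no floating-point rounding.
(+ 1/3 1/6)   ; => 1/2

;; Pattern matching on s-expression shaped data.
(match '(point 3 4)
  [(list 'point x y) (+ (* x x) (* y y))])  ; => 25
```

The `'(point 3 4)` datum and the squared-distance calculation are just illustrative; the point is that data written as S-expressions destructures directly.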

tharne
1 replies
1d3h

When you're running a business, you don't care about any of that beauty or flexibility. You want code which is quick and easy to write, trivial to read, and understandable by even the worst programmer in your company.

This is exactly right, I don't know why so many people still fail to grasp this concept.

nxobject
0 replies
22h51m

I find that a good analogy is the AK-47. As with AK-47s, the surrounding ecosystem is an advantage all of its own.

galaxyLogic
2 replies
1d18h

Because of habit. There is always a cost in trying to learn something new, to the level you are on with some other language and it's tooling and libraries.

Programmers who program in some language for a few years realize they are still learning more about it, all the time. Therefore they know if they switched to another language it would also take them a few years to attain the same level of mastery.

And then there is similar inertia with an organization that has lots of code in some specific language already.

colingw
1 replies
1d18h

Yes, it's more than just about the language itself, as I describe here: https://www.fosskers.ca/en/blog/software-dev-langs

For some people, community signals are very important. Massive conferences or raw number of libraries, etc., indicate some inner quality of that language's ecosystem that they value.

galaxyLogic
0 replies
1d

Nice link, holistic viewpoint!

roenxi
1 replies
1d15h

I don't think anyone is saying Clojure is better than Python in the abstract. Clojure has a different style from Python and that style can be more fun and is better for some specific tasks like application programming and anything that wants to use multiple CPU cores. Tends to have better long term prospects too, old Python code doesn't work in my experience but things on the JVM have cockroach powers.

As for why things are and aren't popular, who knows? It is quite possible that people just don't like the look of the parentheses, or there are a couple of key libraries that aren't good but nobody vocal has put their finger on which ones. The error messaging isn't quite up to a standard that people want to deal with. Or maybe popularity is really just about pure random chance; at most 4 languages get to have >20% market share, by definition.

epgui
0 replies
1d7h

I don't think anyone is saying Clojure is better than Python in the abstract

Actually, I will make that claim without hesitation.

crq-yml
1 replies
1d14h

Lisps are "wizard" languages: the runtime semantic is kept close to the syntax, which also means that you can rapidly extend the syntax to solve a problem. This quality is true of Forth as well, and shares some energy with APL and its "one symbol for one behavior" flavor. With respect to their metaprogramming, the syntactical approach hands you a great foot-gun in that you can design syntax that is very confusing and specific to your project, which nobody else will be able to ramp up on.

But Algols, including Python, are "bureaucrat" languages: rather than condensing the syntax to be the exact specification of the program, they define a right way to form the expression, and then the little man inside the compiler rubber stamps it when you press the run button. In other words, they favor defining a semantics and then adding a syntax around that, which means that they need more syntax, it's harder to explain the behavior of a syntactical construction, and it's harder to extend to have new syntax. But by being consistent in a certain form-filling way, they enable a team to collaborate and hand off relatively more code.

IMHO, a perfectly reasonable approach I'm exploring now for personal work is to have a Lisp (or something that comes close enough in size, dynamic behavior, and convenience, like Lua) targeting a Forth. The Forth is there to be a stack machine with a REPL. You can extend the Forth upwards a little bit because it can metaprogram, or downwards to access the machine. It is better for development than a generic bytecode VM because it ships in a bootstrappable form, with everything you need for debugging and extension - it is there to be the layer that talks to the machine, as directly as possible, so nothing is hidden behind a specialized protocol.

And you can use the Lisp to be the compiler, to add the rubber-stamping semantics, work through resource tracking issues, do garbage collection and complicated string parsing, and generate "dumb" Forth code where it's called for. That creates a nice mixture of legibility and configurability, where you can address the software in a nuts-and-bolts way or with "I want an algorithm generating this functionality".

Karrot_Kream
0 replies
1d10h

IMHO, a perfectly reasonable approach I'm exploring now for personal work is to have a Lisp(or something that comes close enough in size, dynamic behavior, and convenience, like Lua) targeting a Forth.

I've had this idea for a while now but never got around to actually executing it. I'd love to follow your progress if you're doing it publicly.

__MatrixMan__
1 replies
1d16h

Most Python users aren't software engineers. They're students, scientists, business analysts... We're lucky that we were able to drag them away from Excel. Asking them to learn yet another programming language might be a bit much. And if they did, it wouldn't probably be a lisp (it would probably be among: C, Javascript, Julia, Go, Nim).

I want to move on from Python, but there are so many more people that I can help if I stay.

tugberkk
0 replies
1d9h

I have the same problem. It is a good language to teach to non-cs majors. If you want to build something out of the box, use Python. However, GPT is coming and maybe we won't even use Python anymore for simpler tasks.

ParetoOptimal
1 replies
1d17h

I can say at least that Haskell is this way for me.

epgui
0 replies
1d4h

I think you'd rewire your brain "equally well" with either Haskell or Clojure.

I prefer Haskell's type system, but Clojure has better syntax IMO.

tikhonj
0 replies
1d18h

At the end of the day, things get popular through social processes, so it's far more a matter of social factors—some of which are practically random—than any intrinsic qualities of the thing itself.

kazinator
0 replies
1d15h

Why do people write libraries for these languages and not the Lisp ones?

1. They want their names to be widely recognized, so they find a popular bandwagon to hop onto.

2. Raw numbers? More people using Python means more people trying to make libraries for Python, means more libraries remaining in the race after you eliminate the crap from people who don't know how to write libraries.

Note that Python is, by now, an old language. It wasn't instantly popular, and you wouldn't have predicted it. In, say, 1999, you had to be some GNU/Linux person to even know what Python is. It was far from obvious that, of all things, it would get so popular. That Eric Raymond article in the Linux Journal around that time probably gave it a bit of a boost.

Python definitely rode on the coattails of increasing GNU/Linux popularity, too. More people using Linux started asking questions how to script this and that, and going "gack!" at shell or perl programming. It seems Python might appeal to survivors of VisualBasic shifting gears into GNU/Linux stuff.

epgui
0 replies
1d18h

For the same reason people can't change their well-established habits and opinions, particularly those that have network effects.

Language popularity has little to do with how well languages are designed or how simple they actually are. Most popular languages are popular for historical reasons.

eduction
0 replies
1d4h

This is a great talk on exactly this topic from a Clojure conference, if you want a long answer. It focuses on the “functional” feature of Clojure but all of it applies to the whole language (and probably any good lisp) IMO. https://youtu.be/QyJZzq0v7Z4?si=y3hhYaInMkRLBCJK

My brief text answer (focused on Clojure):

-python has been around about 20 more years than Clojure

-the advantages of Clojure vs python/etc probably aren’t nearly as big as python vs a non memory managed, compiled, static language like C. That whole generation of “scripting” languages had the wind at their backs in a way Clojure and its contemporaries never quite will (though Clojure et al tend to be much better at concurrency and parallelism and this will help a lot)

-unfamiliar syntax (not Algol like) and paradigm (not oop)- and the truth is many programmers back away slowly when a thing is too alien

-hasn’t found a niche as big as data science or scientific computing or CRUD website building - I think python has aggregated some great academic niches, at least one of which (ML/data science) exploded in popularity. Ruby had the rails community. Clojure seems to have some popularity in fintech but has no big single niche yet that it dominates.

coldtea
0 replies
1d9h

If it's because of libraries, then the question is: Why do people write libraries for these languages and not the Lisp ones?

Because libraries already exist for those languages, as well as support, vendors, and a big ecosystem, familiar syntax, and jobs. So they write libraries for languages that are already popular.

If you meant, "but why wasn't some Lisp the one that gain popularity back in the day, when C, C++, Python, and Java didn't exist or where still fresh?"

I think because:

1) it was too advanced for the procedural mindset at the time,

2) it was not sufficiently efficient on those primitive 16-bit machines

3) fragmentation

and most importantly, no killer app and major vendor backing or OS first-class support (like C had for UNIX, C++ for Windows, and Java got from SUN).

Zambyte
0 replies
1d18h

Social forces and economics. Lisps were winning in academia for a while. Unix started winning in the engineering world before Lisp systems really got their feet under them. Engineers made a lot of money, and academia shifted focus to the systems that were making money. Now everyone programs for Unix instead of Lisp.

karmakaze
4 replies
1d20h

My take on Lisp after going (partway) through SICP is that it's a syntax and not so much a language. The language is what you build up for the particular kinds of things you need to do. This is both the strength and the weakness of Lisp: with a tight-knit, competent team, everything is elegantly achievable. However, on a small/understaffed team or one with high turnover, each member has to onboard onto that team's language built using Lisp.

Imagine the best and worst DSLs that you've had to use. Joining a Lisp team would be somewhere on that spectrum though I hope their homegrown/app language is far better than the average/bad DSL.

Clojure is much better in that it has many 'batteries included' and opinions on things to make different codebases less different than with other Lisps.

bcrosby95
1 replies
1d20h

My take on Lisps is that people overblow the DSL aspect of it. I just write functions that call other functions, as opposed to methods that call other methods.

pfdietz
0 replies
1d19h

That's a fine thing to do in Common Lisp. If you ever change your mind, it's very easy to change a function into a generic function and split off the body into one or more methods.

tmtvl
0 replies
23h5m

SICP is a great, wonderful book. But it's even better when balanced with Seibel's Practical Common Lisp. It's like the Dean Martin to SICP's Jerry Lewis.

nerdponx
0 replies
1d19h

If it's any help, SICP is the wrong place to start for actually learning Scheme as a practical language. It's for learning about "the structure and interpretation of computer programs", which is not the same thing as "writing useful computer programs".

horeszko
2 replies
1d16h

I think im going to have to make an intentional effort to find a lisp job next time.

Does anyone have any tips on where to find a lisp job?

reikonomusha
0 replies
1d16h

Lisp jobs are sometimes advertised in Who's Hiring threads, Reddit's r/lisp, etc. There's a Lisp job advertised on Reddit [1,2] right now even. Another good source is to check out the companies in [3] and see if they have any openings on their website (or cold-emailing).

[1] https://www.reddit.com/r/ProgrammingLanguages/s/AZmouaoARl

[2] https://jobs.lever.co/dodmg/af802f7f-4e44-4457-9e49-14bc47bd...

[3] https://github.com/azzamsa/awesome-lisp-companies

john-shaffer
0 replies
1d13h
charlotte-fyi
29 replies
1d19h

Don't really get the criticism of Clojure for being hosted on the JVM, particularly relative to its status as a "productive" Lisp. Like oh, you get access to one of the biggest and most mature library ecosystems out there as well as best in class operational tooling? Obviously there are use cases where the JVM doesn't fit and all things being equal I prefer shipping statically linked binaries too, but the JVM still feels like an obvious "pro" here.

whateveracct
13 replies
1d19h

The JVM precludes general tail-call elimination though.

jordibc
10 replies
1d18h

It does preclude it, but Clojure found an arguably elegant solution using recur[1] instead. As a plus, in addition to achieving the same result as tail-call elimination, it checks that the call is indeed in tail position, and it also works together with loop[2].

For me, it made me not miss tail-call elimination at all.

[1] https://clojuredocs.org/clojure.core/recur

[2] https://clojuredocs.org/clojure.core/loop
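A minimal sketch of what that looks like (the function name here is just for illustration):

```clojure
;; loop/recur gives constant-stack iteration without relying on TCO;
;; recur must appear in tail position or the compiler rejects the form.
(defn sum-to [n]
  (loop [i 1, acc 0]
    (if (> i n)
      acc
      (recur (inc i) (+ acc i)))))

(sum-to 1000000) ; => 500000500000, with no stack overflow
```

Because recur compiles down to a jump, this runs in constant stack space even for large n.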

packetlost
7 replies
1d18h

It is, IMO, a missed opportunity to use a hard-coded identifier for `recur`ing instead of the `(let sym ((...)) ...)` form that would let you nest loops.

Aside from that, I agree. Tail-call optimization's benefits are wildly overblown.
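For comparison, the named-let form referred to here gives each loop its own name, so an inner loop can re-enter either itself or the enclosing loop; a minimal Scheme sketch:

```scheme
;; Named let: `outer` and `inner` are separate loop entry points,
;; so the inner loop can continue itself or restart the outer loop.
(let outer ((i 0) (acc '()))
  (if (= i 2)
      (reverse acc)
      (let inner ((j 0) (row '()))
        (if (= j 2)
            (outer (+ i 1) (cons (reverse row) acc))
            (inner (+ j 1) (cons (list i j) row))))))
;; => (((0 0) (0 1)) ((1 0) (1 1)))
```

Clojure's hard-coded `recur` always targets the nearest enclosing loop/fn, which is what the comment above calls a missed opportunity.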

whateveracct
6 replies
1d17h

The benefits aren't overblown if you are someone who learned Lisp with a functional approach. As in, using higher-order functions etc. You have to be careful whenever you approach a problem that way on the JVM.

xdavidliu
4 replies
1d15h

what does tail-call optimization have to do with higher-order functions? I thought the former pertains to iterative procedures written with recursive syntax, where the recursive call is the last thing the function does, so stack size is O(1). Higher-order functions means passing functions to things like map, filter, etc.

throwaway17_17
3 replies
1d14h

In the context of higher order functions, tail call elimination allows for the avoidance of building up intermediate stack frames and the associated calling costs of functions when doing things like composing functions, particularly when calling large chains of nested function calls. The benefits of TCO for something like mapping a function can also be pretty large because the recursive map can be turned into a while loop as you describe at the beginning of your comment.

The optimization of stack frame elision is pretty large for function calls on the JVM and the stack limits are not very amenable to ‘typical’ higher order function ‘functional programming’ style.

packetlost
2 replies
1d3h

tail call elimination allows for the avoidance of building up intermediate stack frames and the associated calling costs of functions when doing things like composing functions, particularly when calling large chains of nested function calls.

This is more general than what tail-call optimization can handle. This is true, but only in the context of recursive functions, and you don't actually save anything besides not needing to re-allocate the stack frames below your recursion point. Other optimizations such as inlining may perform some of this in the general case. Regardless, you get the same benefits by using `recur` in Clojure; it's just explicit, and it still uses no extra stack space.

The downside is purely stylistic. It's functionally the same as if you did `(let recur () ...)` in Scheme.
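For reference, the Scheme named-let idiom mentioned above, which plays the same role as Clojure's `loop`/`recur`: an explicit local loop whose self-calls are in tail position.

```scheme
;; Named let: `go` names a one-off looping procedure; calling it in
;; tail position re-enters the loop with fresh bindings, using no
;; extra stack -- just like Clojure's recur.
(define (count-down n)
  (let go ((i n) (acc '()))
    (if (zero? i)
        acc
        (go (- i 1) (cons i acc)))))

;; (count-down 5) => (1 2 3 4 5)
```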

throwaway17_17
1 replies
20h44m

I’m not certain, but I am pretty sure tail call optimization includes generic tail call elimination, which does not rely on the function being recursive. In effect the compiler converts all tail calls into direct jumps and allows for the reuse of stack space, limiting the total stack usage of any given chain of tail-called functions to a statically determined finite size. This same optimization also allows for the omission of instructions which manage stack frames and their associated bookkeeping data. I know the OCaml compiler does this, and I’m almost sure that GHC does as well.

I do not know if the above is included in what clojure does for tail calls, recursive or not, but on the JVM the elimination of those calls can and does have an impact.

packetlost
0 replies
15h6m

I’m not certain, but I am pretty sure tail call optimization includes generic tail call elimination.

I believe they're related, but not the same thing.

I do not know if the above is included in what clojure does for tail calls, recursive or not, but on the JVM the elimination of those calls can and does have an impact.

As far as I know the JVM doesn't allow a program to arbitrarily modify the stack, so any support would need to be baked into the JVM itself, which it might be now, but I'm not finding any indication that it is. The `loop`/`recur` construct essentially compiles to a loop in the Java sense (to my understanding), so it is as efficient as a recursive loop with TCO. The more general tail-call elimination likely isn't possible on the JVM, but you're correct that it would likely result in a speed up.

All of this is sort of beside the point: I don't think there's much in terms of higher-order functions (which is an extremely broad category of things) that you can't do in Clojure just because it lacks TCO. At least no one has been able to give me an example or even address the point directly. Speed is not really what I'm referring to.

packetlost
0 replies
1d17h

Can you provide an example?

whateveracct
0 replies
1d17h

The issue arises when you program really heavily with closures and function composition. You sadly cannot do functional programming as in "programming with functions" without care on the JVM.

lispm
0 replies
1d10h

In Scheme:

    (define (foo)
      (bar))
the call to bar is a tail call. How does recur optimize this? Well, it doesn't, since "general TCO" / "full TCO" means that any tail call gets optimized, such that the stack does not grow. Clojure recur/loop is just a loop notation.

Looping constructs in most languages provide a simple conversion of self-recursion (a function calling itself recursively) into a loop: update the loop variables, then do the next iteration.

But the general case of tail calls is not covered by a simple local loop, like what is provided by Clojure.
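The classic case a local loop construct can't express is mutual recursion, where every call is a tail call but no function calls itself (the names below are illustrative, chosen to avoid shadowing the built-in predicates):

```scheme
;; Control bounces between two procedures; neither calls itself, so
;; there is no single function to rewrite as a loop. With full TCO,
;; this still runs in constant stack space for arbitrarily large n.
(define (my-even? n)
  (if (zero? n) #t (my-odd? (- n 1))))

(define (my-odd? n)
  (if (zero? n) #f (my-even? (- n 1))))

;; (my-even? 1000000) => #t  ; no stack growth under full TCO
```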

thmorriss
1 replies
1d18h

The Clojure loop construct is often cleaner than code written to be tail-recursive.

uyrifo
0 replies
1d17h

And often faster: https://medium.com/hackernoon/faster-clojure-reduce-57a10444...

Yet it’s always flagged as a code smell associated with “inexperienced candidates” in interviews.

For that matter, first and last too: https://medium.com/hackernoon/faster-clojure-reduce-57a10444...

The number of pairing partners suggesting changing nths to firsts and lasts is demoralizing.

MathMonkeyMan
6 replies
1d17h

I hardly wrote any Clojure, but the only thing that bugged me was the startup time of the repl. It's been talked about enough. Yes, that problem goes away if I use a proper setup with a language server or whatever, and yes it doesn't matter for "situated" production applications, but it still peeved me.

What do I care if it's in the JVM? Sure, a JVM instance uses a lot of memory to help the garbage collector, but that doesn't bother me. JVM is just an old, mature ecosystem. Every runtime we work with (browser, nodejs, CPython, your decades of hand-written C++, the Go standard library) shares design tradeoffs with the JVM. Nothing inherently off-putting about it.

coffeemug
5 replies
1d17h

I haven’t touched JVM in ages, but there are two things off putting about it.

First it’s viscerally slow. They have a state of the art GC, amazing benchmarks, tons of work going into performance, but it still feels slow and laggy when you develop on it. None of the other ecosystems you mentioned have that problem (including Python).

Second, they have a bad sense of design. The class library comes from a culture of needing three classes to open a file, and that culture permeated through the entire ecosystem. Almost all the software in it feels bloated and over engineered. The modal JVM experience is spending 95% of your time dealing with “enterprise-y” boilerplate that turns out to have nothing to do with the enterprise and everything to do with bad design decisions and the culture downstream from those. C++ has its own flavor of this problem, but certainly not Python or Go.

7thaccount
3 replies
1d15h

I couldn't agree more. I'm not very knowledgeable on Java, but I was blown away every time I looked by the crazy amount of boilerplate needed to do anything. There are all these design patterns that seem to exist only because the language is so terrible. Thousands of people who aren't professional developers write millions of lines of Python each year (just a guess, but it sounds right), and the vast majority just write code and don't need 50 classes in their application to do something.

roenxi
2 replies
1d15h

You're talking about something different - Java the language is a bit ugly, but this is about the performance of the JVM (i.e., the runtime virtual machine installed to execute Java programs) with Clojure, where there is not much boilerplate to speak of.

That said, the JVM is one of the sleekest environments around, and I'm confused by the fellow saying it is "viscerally slow". Clojure loads slowly, but after that everything happens at speed.

john-shaffer
0 replies
1d13h

Clojure itself actually loads pretty quickly, but almost every project has enough libraries to make loading the project take a few seconds.

7thaccount
0 replies
1d5h

I'm aware they're different and apologize for getting off topic. It wasn't my intent.

Capricorn2481
0 replies
1d9h

First it’s viscerally slow. They have a state of the art GC, amazing benchmarks, tons of work going into performance, but it still feels slow and laggy when you develop on it. None of the other ecosystems you mentioned have that problem (including Python).

I really can't relate to this. What part of the process is noticeably slower than Python? I have a lot of Python projects and couldn't say any of them are slower than JVM apps.

outworlder
1 replies
1d17h

Don't really get the criticism of Clojure for being hosted on the JVM, particularly relative to its status as a "productive" Lisp.

If you are doing long running server side apps, it is a better fit. Even better if you are already a Java shop.

Otherwise, it's either detrimental or, at the very least, a source of very 'alien' behavior, not the least of which being the stack traces. That gets pretty obvious when you compare with the likes of Common Lisp, with its incredibly elegant system that's essentially Lisp _almost_ all the way down.

The JVM has its own advantages of course. Billions of dollars of optimization work being one of them. Being able to use Java libraries to fill the gaps is another, at the expense of the less elegant stack traces and some friction.

It can be a complete show stopper in many applications. Say, you want to interface with C libraries. Or embed some form of Lisp in your app. Browser-based apps (emscripten doesn't help you), which is why Clojurescript exists.

Or you are building something like an iOS application. I have successfully embedded (although not shipped to the App store) Chicken Scheme in a couple of different ways. The first, as a library with all the cross compilation nonsense. And the second, by simply telling the compiler to stop at C code generation, adding the C blob in the app, and compiling everything together with the rest of the app. That gave me a remote REPL which was amazing for debugging.

roenxi
0 replies
1d15h

Fair points, I say. But interfacing with C isn't a show stopper, is it? Clojure can bridge to C by exposing the C library through the JNI/JNA framework.

It isn't much fun and there are certainly situations where I'd say Clojure was a poor choice for calling C/C++ libraries, but if you need to do it then it can be done.

askonomm
1 replies
1d19h

And quite a lot of people actually do ship statically linked binaries with Clojure, using GraalVM. Clojure LSP server for example is distributed as a static binary.

colingw
0 replies
1d18h

Yes, although that solution can't yet be said to be "push button". My impression is that there are a decent number of people who want non-JVM, native Clojure. Hence efforts like Janet and Jank.

LispSporks22
1 replies
1d18h

I know of one company that dumped its clojure code base because lambda costs were too high. They even experimented with graal as a fix.

whalesalad
0 replies
1d17h

I would have dumped lambda as a runtime before dumping the codebase. Obviously we don't know the full story but that sounds a lil silly to me.

Plus lambdas will stay alive for quite a while if they are in use. So startup time is felt once, but after that would be identical to any other language or runtime.

truculent
0 replies
1d16h

It also gives you access to Babashka if you want Clojure for other use-cases where start-up time is an issue

https://babashka.org/

ska
0 replies
1d19h

I think hosted is almost tautologically a mixed bag - you get access to the host (both +ves and -ves), but you also introduce a layer and some inevitable friction.

cmrdporcupine
25 replies
1d20h

While I agree with the author that there's something about Rust (my day-job lang and go-to personal lang these days) that just lacks... elegance... I've also never found the Lisp (and many other FPs') emphasis on recursion all that compelling. Aesthetically or computationally. I find programs written this way hard to reason about and read. Personally.

For me, the underutilized gold mine feels like logic languages (Prolog et al.) Though I know recursion is used there a lot, too.

pfdietz
9 replies
1d20h

I don't think Common Lisp has an emphasis on recursion. As evidence, consider that the standard doesn't require any kind of tail-recursion optimization, and the language has multiple ways of doing iteration. Scheme is the language with the fetish about doing things with recursion.

ctrw
7 replies
1d17h

Common Lisp lets you do recursion when you need it and not when you don't. Using recursion to iterate through a list is stupid because you bury the lede in the middle of a function, doing the same thing until you stop. Using it for a tree or a more connected graph isn't, because those structures are fundamentally recursive.

xdavidliu
6 replies
1d15h

you can argue that lists are fundamentally recursive too: they are a car and a cdr, where the cdr itself is another list.

cmrdporcupine
4 replies
1d2h

This is actually the core of what I was getting at. The way Lisp lists are constructed and processed actually encourages recursive methods.

But, again, it's probably because I've been exposed to too much Scheme that this sits so prominently in my mind. I've never worked in a large scale Common Lisp code base.

pfdietz
2 replies
1d

Not really. You don't want to recurse down the tail of a list, as in practice that's a great way to overflow the stack.

In practice, you want to recurse only on the elements of a list. And that's just how you'd handle a tree whose nodes are vectors instead of lists. So the implication that lists lead to recursion is a red herring.

Scheme is misleading because of its fetish of implementing iteration with tail calls.

xdavidliu
0 replies
23h18m

Not really. You don't want to recurse down the tail of a list, as in practice that's a great way to overflow the stack.

The SICP book had a paragraph on the distinction between a recursive process and a recursive procedure. I can't remember which was which, but the point there was that a recursive procedure can be an iterative process if it's written with tail recursion in mind, in which case it would not overflow the stack.
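The SICP distinction, sketched with factorial in Scheme: both definitions are recursive procedures, but only the first generates a recursive process.

```scheme
;; Recursive process: a chain of deferred multiplications builds up
;; on the stack before any of them can be performed.
(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

;; Iterative process: all state lives in the arguments, and the
;; recursive call is in tail position, so no deferred work piles up.
(define (fact-iter n acc)
  (if (= n 0)
      acc
      (fact-iter (- n 1) (* n acc))))

;; (fact 5)        => 120
;; (fact-iter 5 1) => 120
```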

soegaard
0 replies
1d

Btw - Racket guarantees more than Scheme. In Racket non-tail calls can't lead to a stack overflow. When a "potential stack overflow" is detected, the "frames" are moved to the heap, and the computation continues.

https://stackoverflow.com/questions/49912204/why-there-is-no...

lispm
0 replies
22h3m

actually encourages recursive methods

Not really.

Lisp has imperative loops which process lists: prog/go, do, dolist, loop for in/on.

Then it has a bunch of higher order functions: mapcar, map, mapc, member, find, remove, delete, every, some, ...

For example this is the equivalent of an early 1960s procedure to map a function over a list for side-effects:

    CL-USER 5 > (defun mapc-old (x f)
                  (prog ((m x))

                   loop

                    (when (null m) (return nil))
                    (funcall f (car m))
                    (pop m)

                    (go loop)))
    MAPC-OLD
As you can see above, the above uses a PROG providing a local variable M, a tag LOOP, and is using a GO statement to pass control to the position of the LOOP tag.

We can then process a list. Here we print the squares of the elements.

    CL-USER 6 > (mapc-old '(1 2 3 4)
                          (lambda (e)
                            (print (expt e 2))))

    1 
    4 
    9 
    16 
    NIL
Code using imperative loops to implement higher-order functions was common then. Later, with macros, one hides the imperative loop implementation behind a macro transformer: here the DOLIST macro

    CL-USER 12 > (dolist (e '(1 2 3 4))
                   (print (expt e 2)))

    1 
    4 
    9 
    16 
    NIL
Then one invented complex iteration sub-languages, here the LOOP macro in a tiny example.

    CL-USER 17 > (loop for e in '(1 2 3 4)
                       do (print (expt e 2)))

    1 
    4 
    9 
    16 
    NIL
Any list-processing function which threatens to create stack overflows (even though some implementations can grow the stack on a stack overflow error) is best rewritten into a function which does not cause them. Lisp traditionally uses imperative loop constructs for that. Scheme often uses tail-recursive functions for that.

ctrw
0 replies
17h7m

Sure, the issue is that most loops are fairly simple, and putting the logic in the middle of a function, far away from the iteration, is bad.

It's the same reason why I use let instead of a function when I want to encapsulate simple state across multiple statements. Sure they are both the same fundamentally, but one is much easier to reason about.
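A small sketch of that style in Scheme (the Common Lisp version is the same idea): intermediate state scoped with `let*`, kept next to the logic that uses it rather than hoisted into a separate helper function.

```scheme
;; State for a small computation, scoped by let* instead of being
;; factored out into a separate function:
(define (quadratic-roots a b c)
  (let* ((disc (- (* b b) (* 4 a c)))
         (root (sqrt disc))
         (denom (* 2 a)))
    (list (/ (+ (- b) root) denom)
          (/ (- (- b) root) denom))))

;; (quadratic-roots 1 -3 2) => (2 1)
```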

cmrdporcupine
0 replies
1d18h

That's fair. Any time I've decided to spend time in the sexpr-world, I've gone straight to the hard stuff, into Scheme. Apart from Elisp, anyways.

I've never worked in the source tree of a large Common Lisp program. Though I've bought the books and read the tutorials etc.

epgui
5 replies
1d20h

It’s like anything else, you get used to it just as easily.

whartung
2 replies
1d20h

...or don't use it.

Scheme likes to tout it because of tail call optimization. Common Lisp can do it, but doesn't really shove it as a first principle of "Common Lisping" (and CL does not dictate tail call optimization).

I don't use it (save when it's necessary). I just iterate using the supplied iterators (do, dolist, loop, whatever).

I can't speak to other functional languages.

ska
1 replies
1d19h

CL's iteration primitives are more capable than those provided by most languages, which often makes for a very clean approach.

Recursion is sometimes the easiest way to reason about a problem though, so it's nice to have decent support and TCO.

epgui
0 replies
1d18h

Recursion is sometimes the easiest way to reason about a problem though

Yes, it's one of the things that allow FP algorithms to be more declarative. It's difficult to seriously argue that recursion is fundamentally more complex; if anything, the contrary argument is more obvious to me.

Everything that's left is the fact that it requires people to unlearn things, which is uncomfortable and requires effort, but unfamiliarity is not an indicator of complexity.

cmrdporcupine
1 replies
1d20h

I take it as: mathematicians like recursion, so they put some recursion in your recursion...

epgui
0 replies
1d18h

There's a strong argument to be made that mathematicians tend to build the simplest abstractions conceptually, especially in comparison to computer scientists. Personally, when I stumble upon some maths, I pay close attention. It has paid off.

thuuuomas
4 replies
1d20h

Have you checked out any of the minikanren logic programming environments people have implemented in scheme?

The Reasoned Schemer is an accessible introduction to that space if you’re not put off by the socratic-themed dialogue.

& the Barliman demo is still pretty exciting even after LLM codegen.

https://youtu.be/er_lLvkklsk

nerdponx
2 replies
1d19h

I'd like to see a practical use case for Minikanren other than quines. It's fun and interesting, but I'm not smart enough to go from "a tiny set of primitives" to "solving practical problems". I had trouble even implementing the basic "moses is a man" examples you see in Prolog tutorials.

tmtvl
0 replies
1d14h

MiniKanren has been used in the medical world: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9562701/

cmrdporcupine
0 replies
1d18h

Yeah, me too. I was just looking at an Elisp port (!) of minikanren the other day, and, yeah, all the examples were algorithmic puzzles rather than solutions to problems.

erichocean
0 replies
1d15h

It's also available in Clojure: https://github.com/clojure/core.logic

If you want to write one yourself, it's pretty easy: https://www.youtube.com/watch?v=y1bVJOAfhKY

mrkeen
2 replies
1d19h

I've also never found the Lisp (and many other FPs) emphasis on recursion all that compelling.

Recursion appeals to me because of immutability:

I do not want to do a summation by writing the wrong answer to memory, and then updating it in-place until it's the right answer. It makes me have to think not only about what the right answer is, but when it is.

But can't I just use stack variables? Well, I don't think summation of primitive ints using stack variables is too taxing, but what about computing a word-count instead of a summation? Then your 'stack variables' are pointers into Strings and mutable HashMaps. When you return the result, do you make a defensive copy? How deeply do you copy? Your clone of a HashMap can still reference mutable data in your original HashMap.

Do you name your variable 'result' when it does not yet contain the result?

You might have to think about the primitive/object split - when is it pass-by-value and when is it pass-by-pointer? Do you have in-values and out-values? With immutability, you only have values, and you only pass-by-value.

If you're sharing objects across threads, and you decide to lock - is that sufficient? What if the caller takes the lock, does a get(), releases the lock, and then starts mutating the gotten value, bypassing the protection of the lock. Now you're thinking about defensive copies and maybe deadlocks too. Just share an immutable value without a lock.

Sometimes recursion is aesthetically better, but usually it's just what you're left with after you take a big hammer to mutability and its surrounding issues.

And most of the time (in Haskell but I'm guessing in Lisps too) you don't write out direct recursions, but instead use maps and folds which have recursion under the hood.
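To sketch that last point: with a fold, the recursion lives inside the combinator, and the word count is built as a value rather than mutated in place. (This sketch assumes SRFI-1's `fold`; an association list stands in for a hash map to keep everything immutable.)

```scheme
;; Count word occurrences without mutating anything: each step
;; produces a new alist with one count bumped.
(define (bump word counts)
  (cond ((null? counts) (list (cons word 1)))
        ((equal? word (car (car counts)))
         (cons (cons word (+ 1 (cdr (car counts)))) (cdr counts)))
        (else (cons (car counts) (bump word (cdr counts))))))

;; SRFI-1 fold passes the element first, then the accumulator.
(define (word-count words)
  (fold bump '() words))

;; (word-count '("a" "b" "a")) => (("a" . 2) ("b" . 1))
```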

galaxyLogic
0 replies
1d18h

I think mutable variables (inside functions) are just fine, as long as the function itself always returns the same result for the same arguments. That is the essence of "pure", I think.

A function should be a "black box". I don't care what happens inside it as long its externally observable behavior is pure.

cmrdporcupine
0 replies
1d2h

Here's what I don't like about recursion and the associated world of linked lists and car/cdr: the performance characteristics of modern computing hardware. That is, missed branch predictions, and lost opportunities to apply SIMD/vector operations.

On "classical" computing hardware, it's fine. On present-day architectures we're better off writing with "wider" operations across vectors rather than depth-first-ish recursive operations on scalars.

But yes, for some domains, the performance doesn't matter.

I just also find recursion more difficult to read in some contexts. In others, things like tree transformation etc. it's easier to read, for sure.

But I am also a fan of immutable, or at least copy-on-write, data, expressed in series of functional applications, etc. etc.

colingw
0 replies
1d18h

Manual recursion often isn't needed. You can get basically all of what you want from Transducers: https://github.com/fosskers/cl-transducers

huqedato
12 replies
1d20h

After reading the article (BTW a good one), one thing intrigues me: why the author left Haskell for Lisp(s) ?

kccqzy
3 replies
1d19h

I did the same and I treat Haskell as a rite of passage. At this point it's cliche to say that Haskell makes you a better programmer in other languages, but it's true. You really immerse yourself in Haskell for a few years, and you become a better programmer even when you aren't writing Haskell. I'm referring to universal things like "making invalid states unrepresentable", "preferring to parse rather than to validate", or "functional core, imperative shell" that can work well and lead to cleaner code in any language.

(The actual reason I left Haskell was because I switched my employer. Choosing an employer is IMO a bit more important than choosing the language to code in.)

mst
1 replies
1d4h

I've bounced off haskell several times now (I'm probably due another attempt this year, I continue to be convinced the problem is one or more 'aha' moments that I just haven't managed to make click yet).

The "functional core, imperative shell" approach is -fantastic- though and I'm not sure where I picked it up from but a lot of my (especially async/event driven) code looks at least close to that.

It occurs to me that this may be why I've yet to be significantly bothered by function colouring when writing async/await based code in Perl and/or JS, but I'd probably have to give that more thought before I go beyond 'maybe' with that particular idea.

kccqzy
0 replies
1d

I agree with you. The function coloring "problem" in the Haskell world is basically the distinction between a pure function and an IO function. I put the word "problem" in quotes because it's abundantly clear to any Haskeller that it isn't a problem but rather a beneficial constraint on programming that, sadly, other languages don't enforce in their type systems. The whole point of type systems is to add constraints and prevent you from writing certain hopefully nonsensical programs (if not we might as well go back to untyped lambda calculus which is Turing complete). Haskell just goes further than most languages. When you stick with this kind of constraints, you eventually naturally lead to approaches like "functional core imperative shell" which is an improvement in code quality and clarity.

For a junior programmer, the benefit of such constraints is greater because they do need the compiler to tell them no. The benefit is somewhat reduced for senior programmers because experience leads them to naturally avoid things that a Haskell compiler would have yelled at them for.

ParetoOptimal
0 replies
1d17h

Non-mainstream languages can be a very positive signal for employers I've found.

behnamoh
3 replies
1d19h

At some point you realize Haskell and/or its ecosystem keeps breaking things too much and you move on.

ParetoOptimal
2 replies
1d17h

I love Haskell and know there are efforts to fix this, but I find it hard to disagree.

tome
1 replies
1d9h

Not just efforts, but both GHC HQ and the Core Libraries Committee making stability a top priority. Expect casual breaking changes in the Haskell world to become a thing of the past, in very short order (in fact the most recent release of GHC (9.8) had very few breaking changes).

ParetoOptimal
0 replies
1d3h

I've been out of the loop for a bit, so that's great news to hear!

colingw
2 replies
1d18h

Hi. I actually left Haskell for Rust. And when I say "left", I mean I don't start new projects in it. I still maintain my libraries.

I wrote Haskell for 10 years or so, both FOSS and professionally. I've "been around the block" so to speak and consider myself to have a decent view of the landscape. Overall, Rust lets me code in the style I want while being very resource efficient. I write Rust professionally.

Hizonner
1 replies
1d15h

The part that confused me was your leaving Haskell for Rust and then leaving Rust for Lisp. Were both of those transitions aimed at some common goal?

I mean, if I left Haskell, I think the main reason would be to shed the load of thinking simultaneously about laziness and how it interacts with optimization. OK, almost any non-Haskell language gets you out of that. But choosing to leave Haskell specifically for Rust would be about efficiency first and foremost, especially improving on the size and complexity of the RTS (and its limited platform support). I could also see wanting the concurrent-programming benefits of Rust's ownership system. And it's nice to be able to write embedded or kernel code. And there's a bandwagon to jump on.

Lisp, on the other hand, doesn't really seem like an improvement over Haskell in any of those ways. It solves different problems. Lisp feels like it's on the "opposite side" of Haskell from Rust. So why did you "reverse" and try Lisp to begin with?

I agree that Rust is ugly, by the way. Honestly I think it started with keeping the C syntax and went from there.

colingw
0 replies
1d10h

I should make it clear that I haven't left Rust; I write it every day professionally.

I transitioned from Haskell to Rust to capture efficiency and small binaries. Since I'm in the game of shipping CLI tools, this was important for me.

You're right though that Lisp is on the other end of that; we're back to bigger runtimes with no tree-shaking, since that would hinder debugging. For now I'm experimenting with the Interactive Programming paradigm because the debugging story is just too good. For long-lived programs, this may be the way to go.

Rust code can be made nice to look at, but that isn't the default, nor the trend.

crote
0 replies
1d16h

I can't speak for the author, but I do know the reason I personally left it: Haskell-the-language is amazing, Haskell-the-ecosystem is awful.

Like the other commenter I completely agree that knowing Haskell makes you a better programmer, and when I code in other languages I really miss certain parts of it. On the other hand, the entire Haskell ecosystem feels like a half-finished PhD thesis, and often-promoted libraries suffer from fundamental flaws which are "open research questions". There's a constant drive in many Haskellers to make their code as abstract, type-safe, and generic as possible, but unfortunately practicality and ease of use are often forgotten along the way.

It's a great language, but if you're trying to write production code which builds upon the wider ecosystem you're in for a world of pain. In 2024 Rust has probably rifled enough through Haskell's pockets to make it mostly irrelevant as anything but a testbed for programming language researchers.

nerdponx
8 replies
1d20h

I feel like I say this a lot, but if you like Guile you should really check out Gauche. It has more "batteries" included than Python, several of the "alien technology" features that people expect from Common Lisp, and good documentation to help you navigate it all.

Scheme itself has a reputation for being very "minimal", and while R7RS is great on its own, it also shines as a base for bigger languages like Guile and Gauche.

Clojure is pretty cool too though. I love that it really has a life of its own as a "third" dialect to complement Common Lisp and Scheme.

Can't speak for Elisp, not enough room in my brain to learn Emacs after years of Vim.

If you like Python, I'd also like to shout out Hy (https://hylang.org/), which compiles to Python, and like Fennel is to Lua, it does a reasonably good job of bolting on s-expressions while preserving the semantics of the underlying language. It's definitely a "bigger" language than Fennel, but Python is similarly "bigger" than Lua. Hy has also developed its own distinct feel, especially when you take into account the first-party Hyrule utility library (https://hyrule.readthedocs.io/en/master/index.html) which provides a lot of very interesting macros, deriving lots of useful ideas from both Common Lisp and Clojure.

drekipus
2 replies
1d18h

I tried Hy for 2023 AoC and loved it, but it broke after day 8: the solution I wrote crashed after 40 minutes, while the same logic rewritten in Python took 2 minutes to complete.

Hy is nice, but "compiling to python" isn't

nerdponx
1 replies
1d17h

That's a new one to me, I've never had a Hy program perform substantially worse than the equivalent Python. I'm curious what the offending code was!

drekipus
0 replies
1d16h

I think from memory it was to do with recursion (i.e. a stack overflow error) from having a few too many nested `lfor`s

I'll try and find it

rcarmo
1 replies
1d20h

I've been using Hy on and off. Off because it broke "let" at one point and I had a few critical things (like my entire site engine) written in it, and on because I like it a lot. But it's not a great LISP for performance (Fennel can run rings around it when using luajit), and I wish they shipped a 1.0 (I cringe every time I read the changelog for a release and check the "breaking changes" part, which is always... beefy).

Zambyte
0 replies
1d14h

Off because it broke "let" at one point

It wouldn't be very difficult to provide an implementation of `let` that behaves how you would like though, so there is that at least.

p4bl0
0 replies
1d11h

Your description of Gauche makes me think of Racket, which I really like. How would you compare the two?

at_a_remove
0 replies
1d2h

I've always been curious about trying out a Lisp, but it always seemed like I was going to have to implement everything myself. Your note on Gauche having the same batteries-included outlook as Python is very tempting indeed.

dogprez
6 replies
1d17h

[Fennel] also lacks Common Lisp's debuggability, given that it sits entirely within Lua's runtime.

I'm not sure exactly what feature the OP was referring to. It sounds like they don't think you can get a REPL for an executing Fennel process? You can. If you are only using it for AOT Fennel->Lua compilation you can't, unless you include its runtime.

schemescape
5 replies
1d17h

Last I checked, Lua was “bring your own debugger”. Assuming that hasn’t changed, a REPL is nice, but you can’t pause and inspect anything by default.

emidln
1 replies
1d16h

Why can't you just put the equivalent of `(repl)` wherever you want to debug and drop into your REPL?

schemescape
0 replies
1d16h

Short answer: I don’t know. That sounds like a good idea, but how would that access local variables in the caller (to inspect state)?

I remember the Lua C API exposes a lot of information, but I didn’t think it was accessible from scripts. Of course, it was a long time ago and I could have easily missed something at the time. Happy to be corrected!

Edit: you might also run into difficulties trying to redefine non-global functions to add the call to “repl”.

colingw
1 replies
1d15h

To be fair, Fennel 1.4 recently shipped an `assert-repl` form that opens a REPL when some assertion fails, in which you can inspect local variables, etc. That's getting closer to CL.

https://git.sr.ht/~technomancy/fennel/tree/1.4.0/item/change...

schemescape
0 replies
1d15h

Thanks for the correction! I had only used Lua, and thought Fennel had no runtime, so I assumed this was not possible.

dogprez
0 replies
1d16h

Fennel has a form `assert-repl` which will drop into the REPL wherever, if the condition fails. For writing games you can launch the REPL in the game loop if a keyboard button is pressed. But what you can't do, that I know of, is interrupt arbitrary execution and get a Fennel REPL. You'd probably need a lua debugger of some sort for that. I'm not that familiar with that though.

beepbooptheory
6 replies
1d18h

Does anyone know how to get a lisp job? Or have any experiences to share? Its been a dream for a while for me, and I think I am possibly ready (at least with common lisp, have started doing more clojure recently though). It just seems so impenetrable to me, I don't even know how to begin to search for it.

neilv
2 replies
1d18h

I got mine by contributing to some niche communities, and developing open source packages.

A great foot in the door is when someone is using and likes some software that you have written, knows it came from you, and can easily find out you're available when they're ready to hire.

Of course, that's not an option for everyone -- some people simply haven't yet had the free time or paid opportunity to make open source -- and we should try not to penalize people for that. But it's much better positive signal than a job-seeker approach of "I spent person-months memorizing Leetcode medium-difficulty answers, and rehearsing whiteboard interview stage presence like a rockstar."

Beware that good Lisp-family jobs are rare, and don't tend to be the highest-paying, outside of relative pay within categories like enterprise Java-shop programmer.

(Aside: From a hiring perspective, using some beloved niche language with few jobs available is also a great way to pick up mythical "10x" hires, and retain them for a long time. :) But, more seriously, it's ethical to make sure that prospective hires for unpopular niche keywords realize what they're getting into, in the current software job environment that often assumes fad-following & job-hopping strategy, and is skeptical/derisive of people whose resumes don't look like that.)

colingw
1 replies
1d15h

If the Stackoverflow Dev Survey is to be believed, CL and Clojure devs actually make decent money.

neilv
0 replies
1d14h

https://survey.stackoverflow.co/2023/#work-salary

This chart I'm looking at might be broken, because mouseover is showing median salary of exactly $96,381 for all of Scala, Elixir, Clojure, "Lisp", and F#. (But somehow OCaml and Haskell didn't get jumbled in with those.)

BTW, I'm keeping in mind that people into the fringe power-user tech might be more capable than your average bear. For example, your typical person who, somehow, got years of experience with CL, IME, is a lot more capable overall than your typical person with the same number of years using Python. So, someone taking home $150K doing a Lisp at a company that lets them might have comparable skills to someone making $500K at a FAANG. (Excluding the blip when Google acquired ITA Software, which I guess brought a bunch of CL people there.)

reikonomusha
0 replies
1d16h

It's been in Who's Hiring threads, r/lisp, etc. There's a Lisp job advertised on Reddit [1,2] right now even. Another good source is [3].

[1] https://www.reddit.com/r/ProgrammingLanguages/s/AZmouaoARl

[2] https://jobs.lever.co/dodmg/af802f7f-4e44-4457-9e49-14bc47bd...

[3] https://github.com/azzamsa/awesome-lisp-companies

colingw
0 replies
1d18h

The Clojure Slack has active channels for both job posters and job seekers.

Jach
0 replies
1d17h

Haven't had an exclusively lisp job, so maybe I shouldn't comment, but... I did use CL and Clojure on the job for a few smaller things at my last two places. It's easier to find Clojure companies (and them to find you) than Common Lisp ones. You might want to peruse https://github.com/azzamsa/awesome-lisp-companies from time to time and see if any have openings. There's other resources linked too and of course there's the reddit and discord community (such as there is) hubs. You can also see if there are any meetups in your area, that's how I almost ended up at a Clojure startup some years back.

I should have taken strategy notes after talking to a guy at my last job who got management buy-in to rewrite a lot of Java code (for android) to Kotlin and have all new code for android be in Kotlin (before that was considered the sensible default). I think that's in general a better approach for a lot of would-be paid lispers: don't wait for or look for the lisp job, make the lisp job. Whether that's doing work where the customer doesn't care what language the thing is made in, or introducing it (some have even snuck it in -- the original clojure.jar got a lot of early success that way) to an existing work place. What I somewhat remember from my conversation was that if you can make a good technical case and have at least one other person supporting you (ideally your entire dev team as was his case), it's a lot easier to sell. No one raised bogus concerns about increasing the hiring difficulty or effort learning the new system. (I say bogus because engineers are learning all the time, and huge swathes of the industry have already had to do things like migrate from ObjC to Swift, or the various versions of JavaScript and later TypeScript + all the framework churn, switching IDEs; learning and change are quite common and a non-issue.) From other Lisp company reports, getting a new hire up to speed to be productive with the team using Common Lisp is a matter of a week or two, a small portion of the overall onboarding time a lot of new jobs have. Mastery takes longer, of course, but that's different.

If I had stayed longer at my last job I would have continued to flesh out a better demo for interactive selenium webdriver tests for our main Java application after injecting ABCL into it, it seemed like the easiest vector to get more interest from my team and other teams. It kind of sucks when you're debugging a broken test and finally hit an exception but now you have to start over again (especially if you stepped too far in the debugger), especially with heavy webdriver tests that can take a long time. The Lisp debugging experience is so much better... And when writing the test from scratch, it's very interactive, you type code and execute it and verify the browser did what you intended. When you're done you run it again from scratch to verify.

JonChesterfield
6 replies
1d16h

I really like Kernel. John Shutt's thesis language. It throws away a bunch of incidental complexity from scheme, most notably the minor disaster of hygienic macros. It follows a minor tangent on cyclic data which I'm not convinced matters very much.

Implementations are a bit DIY.

An interesting play would be to implement R7RS Scheme in Kernel, semi-metacircular fashion.

spindle
3 replies
1d8h

Does picolisp scratch that itch for you?

mst
1 replies
1d4h

Kernel is operative rather than applicative - i.e. fexpr-based - and I don't think you can get something quite the same without it.

(arguably a macro is really just a limited fexpr that's executed and the results inlined at compile time, though I mean that as 'morally equivalent' rather than making a technical claim since I haven't attempted to implement that in my fexpr based interpreters yet)

JonChesterfield
0 replies
21h57m

Did you go with first class environments? If so defmacro builds a fexpr with an extra eval in the dynamic environment plus a flag to tell the compiler it must be inlined (and that various other things should be reported as semantic errors).

You can get somewhat close to fexpr with lazy evaluation but it turns out that's not the same thing either. Taking the arguments apart and building different things out of them is a superset of stashing them somewhere to evaluate later.

(lambda (x) ...) ; evaluate the argument in the caller, bind it to the name x

(fexpr (x) ...) ; bind the argument to the name x

(normal (x) ...) ; make a thunk out of the argument and the environment

(macro (x) ...) ; bind the argument to x then do the force inline & extra eval on call

It's interesting that macros are the one with a bunch of extra work to do after the call whereas fexpr does the bare minimum of adding stuff to the lexical environment. Lambda maps eval across the arguments, normal maps make-thunk across the arguments.
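The operative/applicative split at the heart of this taxonomy can be sketched with a toy interpreter. This is a rough illustration in Python, not Kernel; `evaluate`, `fexpr`, and the environment encoding are invented for the sketch. The key point it shows: a fexpr receives its arguments as unevaluated forms together with the caller's environment, while a lambda gets values that were already evaluated.

```python
# Toy evaluator illustrating applicative (lambda) vs. operative (fexpr)
# calls. Symbols are strings, calls are Python lists, literals evaluate
# to themselves. All names here are hypothetical, for illustration only.

def evaluate(expr, env):
    if isinstance(expr, str):               # symbol: look up in environment
        return env[expr]
    if not isinstance(expr, list):          # literal: self-evaluating
        return expr
    op = evaluate(expr[0], env)
    if getattr(op, "is_fexpr", False):      # operative: raw forms + env
        return op(expr[1:], env)
    args = [evaluate(a, env) for a in expr[1:]]   # applicative: eval first
    return op(*args)

def fexpr(f):
    f.is_fexpr = True
    return f

# A short-circuiting `if` is impossible as a plain function (both branches
# would be evaluated before the call) but trivial as a fexpr, because the
# fexpr controls evaluation of its own arguments.
@fexpr
def if_(forms, env):
    test, then, alt = forms
    return evaluate(then, env) if evaluate(test, env) else evaluate(alt, env)

env = {"if": if_, "true": True, "false": False,
       "boom": lambda: 1 / 0}               # would raise if ever evaluated

print(evaluate(["if", "true", 42, ["boom"]], env))   # 42; boom never runs
```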

JonChesterfield
0 replies
22h10m

Picolisp is excellent.

msk-lywenn
1 replies
1d5h

Could you elaborate on how hygienic macros are a minor disaster?

JonChesterfield
0 replies
22h13m

Macros achieve a limited version of a fexpr with much more complicated semantics. Hygienic macros take that a step further - you get implicit renaming of variables which is generally convenient - in exchange for even more complicated semantics.

The abstraction leaks as well. `(reduce and (list #t #t #f))` is probably an error. Calling a macro and calling a function looks the same and sometimes is the same and sometimes falls over.

Macros are sacrosanct to lisp. Hygienic macros are a crown jewel of scheme. They're still the wrong thing though.

In the beginning there was dynamic scope, possibly by accident. There were also fexpr - pass the unevaluated data and the environment, instead of passing evaluated data, which were also dynamically scoped. Sometimes called nlambda. That was difficult to program with and very difficult to compile.

Macros are easier to use than dynamically scoped fexpr. They're much easier to compile and lisp was getting a bit of a kicking for being too slow. There were some papers written, macros were the better choice, and here we are.

Lexical scoping turned up a little while after macros won the fight. Lexically scoped fexpr with first class environments are the right thing. Simpler and more capable than macros. First class environments being another thing sacrificed on the altar of performance decades ago.

Shutt noticed this and wrote about it at length. It's slightly subtle that a non-hygienic macro is a subset of fexpr. Force inline it at the caller, refuse to run various calls (e.g. pass it to functions), wrap the return value in an eval in the caller environment. A hygienic macro has to do reflective symbol renaming stuff which is also expressible as a fexpr by implementing the symbol renaming rules of scheme, which thus far I don't have the patience for.

Qualified as minor, in that the wrong thing we have is still very useful.

Disaster in the sense that we could have fexpr if history had happened in a slightly different sequence.

Opportunity in that lisp can be better. First class environments and first class macros can be done. The performance constraints of the past have been removed by hardware progress and to a lesser extent by progress in compiler design.

dannyobrien
5 replies
1d15h

As an amateur coder, this really matches my journey, though with a slightly different ordering: after the usual BASIC/Perl/Python homebase, a few years really expanding my mind in Haskell-land, followed by a "well, I've bounced off Lisp in the past, let's see what I make of it now", then having real joy in writing Clojure, exploring Guile and Scheme, and then coming finally to appreciate Common Lisp, warts and all.

I still "see the parentheses", and I wish there was a way (outside Coalton) to integrate the security that a strong typechecker can provide, but I really do enjoy working in the Lisp ecosystems now, and feel like I understand Lispers' excitement at what they have.

medo-bear
3 replies
1d8h

I wish there was a way (outside Coalton) to integrate the security that a strong typechecker can provide,

Common Lisp is optionally strongly typed. SBCL has extensive type checking facilities. Coalton is statically typed.

Security does not imply strong (or static) typing, and strong typing definitely does not imply security (thinking your code is secure just because you used strong or static typing is silly). In fact, the code complexity that static typing introduces can have detrimental effects on overall security.

Helpful starting point on state of type checking in common lisp: https://lispcookbook.github.io/cl-cookbook/type.html

dannyobrien
2 replies
1d1h

Just to be clearer, I meant security as in confidence in the code running correctly, rather than defended against vulnerabilities.

medo-bear
0 replies
8h58m

This is from update to SBCL yesterday:

   enhancement: functions with declared return types have their return values type-checked in optimization regimes with high SAFETY and (DEBUG 3).
http://sbcl.org/news.html

medo-bear
0 replies
23h22m

But software correctness is orthogonal to static typing. You most def should not have the confidence that your software is correct (ie solves the practical problem you are trying to solve) just because it compiles.

Y_Y
0 replies
1d9h

to integrate the security that a strong typechecker can provide

Have you checked out typed-racket? That's a reasonably nice way to get strong typing by tacking type-correctness contracts on top of otherwise standard Racket.

There's also Alexis King's marvellous Hackett, which seems to be finished for now, but was positive proof that you can have - and eat - all the curry-flavoured cake you want while staying in scheme.

LAC-Tech
4 replies
1d18h

The big thing I took away from LISP is that having everything be prefix solves a lot of problems. Seriously, the arguments people have in languages like Zig about whether to have "operator overloading" just seem a bit silly. Operators are just functions, it's very silly to make them syntactically different.

But I never got much into REPL driven development. Probably because I just couldn't wrap my head around emacs.

mpweiher
3 replies
1d18h

Smalltalk solves that by having everything be infix. With fewer parentheses.

¯\_(ツ)_/¯

LAC-Tech
2 replies
1d18h

Yeah that's another approach to it, one I think could definitely be borrowed (though I wouldn't do it exactly how smalltalk did). And then of course there's the "concatenative" family where everything is postfix.

mpweiher
1 replies
1d9h

How would you make it different?

One change that I made in ObjS is that I added the pipe so you don't need to parenthesize chained keyword messages.

   ((receiver msg1:arg1) chained1:arg2) chained2:arg3
vs.

   receiver msg1:arg1 | chained1:arg2 | chained2:arg3

Having colon keywords does get in the way a little, as I also need them for schemes (URIs are part of the language) and I would also like them for type-specifiers, as type-after really seems the better way.

I can probably pull all that off syntactically, not sure about aesthetically.

LAC-Tech
0 replies
15h14m

I've actually thought about this a lot!

1. no more keyword parameters

2. every message has exactly one argument, which may be a tuple.

3. "obj.msg" becomes short hand for "obj msg ()"

4. if a message expects 2 element tuple and you give it a one element one.. it returns a closure that expects the rest of the arguments. ie "add1 = 1.+"
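Rule 4 above (a two-element message given a one-element tuple returns a closure awaiting the rest) could be sketched like this. A toy Python illustration of the idea only; `message`, `add`, and the tuple convention are invented names, not any real Smalltalk or ObjS feature:

```python
# Toy sketch of auto-partial-application for a fixed-arity "message":
# if fewer arguments arrive than the message expects, return a closure
# that waits for the rest. All names here are hypothetical.

def message(arity):
    def wrap(f):
        def call(args):
            # Normalize a single value into a one-element tuple.
            args = args if isinstance(args, tuple) else (args,)
            if len(args) < arity:           # partial application
                return lambda rest: call(
                    args + (rest if isinstance(rest, tuple) else (rest,)))
            return f(*args)
        return call
    return wrap

@message(2)
def add(a, b):
    return a + b

add1 = add(1)        # like "add1 = 1.+" in the comment's notation
print(add1(2))       # 3
print(add((1, 2)))   # 3: the full tuple applies immediately
```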

im_down_w_otp
3 replies
1d19h

Why do print-line-debugging to find out what's happening at a location in code when you can just be inside your program and inspect everything live as it's running?

Yes, that's because print-effing one's way to understanding is also among the most crude methods of debugging. An excellent alternative to doing that is to use an actual debugger. That will also allow you to be "inside your program".

foobarian
1 replies
1d19h

How do you "get inside your program" on a locked down production box? Watched by SOX auditors like hawks? Any answer offered needs to be comparably easy to checking in an extra printf and letting ci/cd deploy it.

im_down_w_otp
0 replies
1d19h

I used to be an Erlang engineer, and you would have needed very explicit security controls around being able to attach to an Erlang cluster to get the "inside your program" experience. This was in FinTech, so I'm familiar with the constraint.

I would imagine you'd need the same thing for accessing the runtime of any production deployed application(s) in such an unfettered manner. Erlang, Lisp, C, or other?

ctrw
0 replies
1d17h

The difference with lisps is that the debugger is always part of the runtime and you have full access to all the languages capabilities while inside.

I've yet to see a C debugger which does more than a fancy version of printf. Imagine being able to compile functions on the go while still running your main program with all the state saved.

a2code
3 replies
1d5h

What makes Lisp (Scheme) great (for me) is that functions are first-class citizens, the prefix syntax is consistent, tail-call optimization makes recursive code practical, and metaprogramming allows for powerful abstractions. What I would also emphasize is that using lambda expressions makes you wonder when to define something and when to omit the definition. Lisp is simple to learn but difficult to master.

What I would like to do is to use Lisp to solve complex problems. The problem is that abstractions are not often efficient.

For example, consider computing the calculus product rule. An S-expression abstracts the function: (lambda (u v) (* u v)). With metaprogramming, this becomes a new S-expression (lambda (u v) (+ (* (d u) v) (* u (d v)))), where (d u) and (d v) further compute the derivative. To optimize this further, you need implementation-dependent code and replace abstractions of S-expressions with native machine code. This is where I feel that the troubles with Lisp start.
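The product-rule transformation described above can be sketched outside of Lisp as well. A rough Python illustration, where nested tuples stand in for S-expressions and the helper `d` is a hypothetical stand-in for the derivative operator:

```python
# Symbolic differentiation over tuple-encoded "S-expressions" like
# ('*', u, v). `d` implements d(u*v) = d(u)*v + u*d(v) (product rule)
# and d(u+v) = d(u) + d(v) (sum rule). Illustrative sketch only.

def d(expr, var):
    if expr == var:                     # d(x)/dx = 1
        return 1
    if not isinstance(expr, tuple):     # any other constant or symbol
        return 0
    op, u, v = expr
    if op == '*':                       # product rule
        return ('+', ('*', d(u, var), v), ('*', u, d(v, var)))
    if op == '+':                       # sum rule
        return ('+', d(u, var), d(v, var))
    raise ValueError(f"unknown operator: {op}")

print(d(('*', 'x', 'x'), 'x'))
# ('+', ('*', 1, 'x'), ('*', 'x', 1))  -- i.e. 1*x + x*1 = 2x, unsimplified
```

As the comment says, the result is a new expression tree; making it fast (simplifying, then compiling to native code) is where implementation-specific work begins.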

hayley-patton
1 replies
1d5h

To optimize this further, you need implementation dependent code and replace abstractions of S-expressions with native machine code

This can be done by a program called a compiler. There are many compilers for Lisp.

(In other words, the same argument applies to every language ever which doesn't exactly correspond to machine code.)

a2code
0 replies
1d4h

And yet to extend Python, for example, you write code once. To extend Lisp, you have to extend a multiplicity of implementations you find relevant.

JonChesterfield
0 replies
21h23m

There are properly good lisp compilers. The baseline is good and they're extensible - you can teach them things about your domain. SBCL for lisp. Maybe racket for scheme.

mustermannBB
2 replies
19h2m

Biggest issue with most Lisps is, IMHO, that you need emacs for a great developer experience when it comes to tools. And emacs is not for everyone.

colingw
1 replies
17h27m

Not so for Clojure, there's great integration for both the Vim family and VSCode.

edanm
0 replies
9h44m

Seconding this, Clojure was my introduction to Lisps, and I used only VSCode with it. One of the greatest development environments I've ever used, and about half the fun of learning Clojure for me. (The other half being learning the language itself, of course!)

galaxyLogic
2 replies
1d19h

Includes this great comment by Simon Peyton-Jones in support of incremental typing:

"I think if you try to specify all that a program should do, you get specifications that are themselves so complicated that you're no longer confident that they say what you intended."

kqr
1 replies
1d6h

I think, though, that when reading this quote it should be taken in the context of e.g. Haskell exhibiting incremental typing – because it doesn't try to specify everything. It can't.

graemep
0 replies
1d5h

Does not seem to fit with the previous bit of the quote:

"dynamic languages are still interesting and important. There are programs you can write which can't be typed by a particular type system but which nevertheless don't "go wrong" at runtime, which is the gold standard - don't segfault, don't add integers to characters. They're just fine."

thetwentyone
1 replies
1d15h

Also interesting for its size is femtolisp, from one of the co-creators of Julia:

https://github.com/JeffBezanson/femtolisp

rcarmo
0 replies
1d10h

I have a femtolisp quote at the top of my LISP resources page:

https://taoofmac.com/space/dev/lisp

onetimeuse92304
1 replies
1d6h

What's your current mental model of an "ideal Lisp"?

It would be something like a fusion of Clojure and Common Lisp, (...)

Mine would be, too. Unfortunately, that's not possible. Clojure is one of those few languages actually designed by somebody who knows what they are doing. Clojure's limitations come from the limitations of the platform it is running on and certain design criteria (interoperability) that were necessary for its success.

fulafel
0 replies
2h47m

There could be a Clojure implementation hosted on a CL.

In fact someone seems to have started on one: https://github.com/ruricolist/cloture

mark_l_watson
1 replies
1d4h

Nice article! The author captured many of my own feelings of Clojure compared to Common Lisp.

I have been happily using Lisp languages since the 1970s and CL since 1982. I retired several months ago but I still spend time coding for my own research projects and for my book examples. I have been torn between going all in using just one of Racket or Common Lisp. Racket is a fun environment and I like the community but I have so much history with CL. I don’t know if I will choose just one, but it would be a good idea!

The article didn’t mention the Hy language that compiles to Python AST. For a while I was very enthusiastic about Hy and I wrote a book about it. It used to have a “let” macro that mapped Lisp variable scoping to the jankyness of Python but that support has gone away and I feel like using “setv” and just accepting Python like scoping is a bridge too far for me. The philosophy behind Hy is great, but it is not my philosophy.

I love the Clojure language (and I donated money to the project at the beginning) but I only use Clojure when I am being paid to use it for specific projects. The Bork Dude’s Babashka language is a fast loading Clojure, and I had a lot of fun for a while setting up a nice dev environment and using it for all recreational programming for a week. Babashka is worth a serious look.

kagevf
0 replies
1d3h

For racket do you use Dr racket or emacs?

hickelpickle
1 replies
1d6h

I've had a really pleasant experience with Kawa Scheme recently, which is a Scheme that runs on the JVM, compiles to bytecode, and provides easy interop with Java. I needed a scripting language for a project, and it ended up developing into me implementing a REPL-like admin console that allows me to administer events and query data/state (the project is a game back-end, so everything was already event-based with thread-safe dispatching and an entity system accessed via a singleton).

Being able to interop with Java is great, as I can wrap Scheme procedures in functional interfaces and use them as drop-in replacements for Java functions, as my event system was already based on predicates and consumers for event handling.

I've worked through some of SICP and have always wanted to get more into Scheme/Lisp, but the barrier of starting a full project in it always kept me from getting much hands-on experience. It's been quite enlightening actually getting to work with a form of REPL-driven development and getting my hands dirty with coding some Scheme. Having access to the JVM means I can do practically anything with it without needing to bootstrap tons of code, and using it in a project with a large scope lets me solve real-world problems with it vs just toying around, which has been most of my Scheme/Lisp experience before.

SeanLuke
0 replies
1d5h

To me, Kawa is easily the best Lisp that targets the JVM. And I say that hating Scheme. I'm a Common Lisp guy. Because it is really fast, well designed, and mostly Scheme compliant (no call/cc of course). And it has first in class interoperability with Java and the JVM.

After Kawa, I'd pick ABCL. It's not as fast as Kawa and its interoperability isn't nearly as good, but it is effectively a 100% standards-compliant Common Lisp: what more could you ask for?

Only last would I pick Clojure. It's designed to feel "sort of immutable" but this is impossible if you want any degree of interoperability with the JVM. As a result, you wind up with lots of Refs and other fun stuff which in my experience make Clojure much slower than Kawa and ABCL. It's always slower by quite a lot, but in fact I've had a few extreme situations where Clojure would wind up being __literally__ three orders of magnitude slower.

hiAndrewQuinn
1 replies
1d10h

Scheme has a special place in my heart after working through SICP as a late high schooler.

I probably haven't actively written anything in Scheme in a decade, but it later inspired me to learn Haskell to a decent level of depth, just to see what the "other" functional languages feel like. So much the same, yet so much different.

eggdaft
0 replies
1d10h

Scheme was taught in my first year of CS.

One thing I noticed is that those who hadn’t coded through their teenage years found scheme much easier to learn than other languages we were being taught in the first year. I think that was because scheme felt more like mathematics and everyone had done A Level maths (in the UK), many to an advanced level.

Immutability and functions make a lot more sense than weird variables that keep changing and - what the heck are classes? We just unlearn this mathematical thinking when we start hacking Python etc.

dartharva
1 replies
1d12h

Article topic notwithstanding, I am curious about the design choice of moving the "Table of Contents" to the right side of the text and links for other blog entries to the left. Aren't the sides conventionally swapped, as with default Blogspot/Wordpress blog templates?

Also, what was the objective behind shaping the text as a Q&A-style interview when it's only a personal blog?

colingw
0 replies
1d10h

The questions were originally posed by a Doom Emacs community member, so I answered them as-is.

The site is designed as it is because I'm not a frontend guy xD

agambrahma
1 replies
1d10h

Sounds like he'd like Coalton

colingw
0 replies
1d10h

CL-embedded langs like Coalton and April are mentioned in the article.

Jeaye
1 replies
1d13h

I appreciate the shout out to jank, the native Clojure dialect on LLVM with C++ interop. :) Lisp has such a colorful history and Clojure, in my opinion, made it even more accessible and practical. People complain about Clojure's error messages, but, coming from C++, anything that doesn't corrupt memory and segfault is golden.

mark_l_watson
0 replies
1d3h

I also liked that reference since I had not heard of Jank before. It is a work in progress so I just added a calendar entry for 9 months from now to check it out. https://jank-lang.org/

FrustratedMonky
1 replies
1d19h

Not even an honorable mention for F#? Why does Haskell get a mention, since it is also just another ML-based language?

Seems like F# is trying to solve the same problems as LISP. Lot of same features.

Edit: Did like article. Is serious question

LISP-> Clojure-> F#.

There are similarities.

Edit2:

Guess it is a stretch. But not insultingly so.

Have seen articles comparing Lisp macros to F# Type Providers.

nerdponx
0 replies
1d19h

F# isn't a Lisp, so I think it's fair to consider it out of scope. I think the only reason Haskell got mentioned is because the author has experience with it and was using it as a personal point of reference.

matheusmoreira
0 replies
1d8h

How can newcomers get the most out of learning Lisp?

Make your own lisp. I chose that route even though I had never written lisp code before. Finally understood its elegance.

masfoobar
0 replies
9h53m

I have toyed with various lisps and schemes over the years. Eventually - since around 2012 (ish) my focus has been with GNU Guile.

What won me over originally was the documentation, and the C functions following the same format as my own. Being able to add Guile into my game really sped up my time "hacking" 3D scenes and features, before converting them over to C code.

Then started to use GNU Guile for building a personal website, and other toy projects on my server.

While I don't use GNU Guile for much these days, I certainly think it should be given at least half the reputation Python gets today.

I think the problem with GNU Guile (last time I checked) is that it is very Linux focused. I always had trouble with it on Windows. I don't own a Mac but assume it to be the same there as well.

Of course, I have GNU Guile on my Windows machine thanks to WSL2.

codr7
0 replies
1d9h

I spent a lot of time unsuccessfully bending Common Lisp into something I could love.

It's still one of my favorite problem solving languages, but far from trivial to fix from the inside.

Tried writing my own readers that compile to Common Lisp.

Then I ended up on a side track to write my own interpreters; mostly Forths at first, then gradually more Lisp-like and practical creations.

My latest attempt may be found here:

https://github.com/codr7/jalang

MarkG509
0 replies
1d6h

Over 40 yrs ago, I had a programming assignment in college that I wrote in Lisp. My Prof, a non-Lisp-er, asked me to rewrite it, expecting C or Pascal. But I took him through my Lisp code, and was able to argue for the 'A' on that assignment. I might have created a Lisp-er in the process.