Show HN: Marimo – an open-source reactive notebook for Python

bluish29
23 replies
22h51m

That's an interesting project. As someone who relies heavily on collaboration with people using Jupyter Notebook, the most annoying parts of reproducing their work are the environment and the hidden state of Jupyter notebooks.

This does directly address the second problem, though it does so by sacrificing flexibility. I might need to change a cell just to test a new thing (without affecting the other cells), but that's a trade-off if you focus on reproducibility.

I know that requirements.txt is the standard solution to the first problem, but generating and using it is annoying. The command pip freeze lists all the packages in a bloated way (there are better ways), but I have always hoped to find a notebook system that integrates this information natively and has a way to embed it into a notebook in a form I can share with other people. Unfortunately, I can't see support for something like that in any of the available solutions (at least to my knowledge).

akshayka
21 replies
22h42m

Yes, the second half of reproducibility is for sure packages. A solution for reproducible environments is on our roadmap (https://marimo-team.notion.site/The-marimo-roadmap-e5460b9f2...), but we haven't quite figured it out yet.

It's a bit challenging because Python has so many different solutions for package management. If you have any ideas we'd love to hear them.

aidos
16 replies
22h4m

People always complain about pip and Python packaging, but it's never been an issue for me. I create a requirements.base.txt that has the versions of things I want installed. I then:

    pip freeze -r requirements.base.txt > requirements.txt
Install is then simply:

    pip install -r requirements.txt
Updating / installing something new is a matter of adding to the base file and then refreezing.
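
For concreteness, the base file is just direct dependencies with pins (package names and versions here are purely illustrative):

    # requirements.base.txt -- direct dependencies only
    numpy==1.26.2
    pandas==2.1.4
    requests==2.31.0

pip freeze -r requirements.base.txt echoes those pins first, then appends every transitive dependency it finds in the environment, so the generated requirements.txt captures the full working set.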

bluish29
10 replies
21h50m

There are several problems with this approach. Notably, you don't get information about platform-specific packages, and you don't get information on how these packages were installed (conda, mamba, etc.).

And it does not account for dependency version conflicts, which make life very hard.

aidos
9 replies
21h37m

I don't understand the platform thing; is that something to do with running on Windows? Why wouldn't you just pip install? Why bring conda etc. into the mix?

If you have conflicts then you have to reconcile those at point of initial install - pip deals with that for you. I’ve never had a situation in 15 years of Python packages where there wasn’t a working combination of versions.

These are genuine questions btw. I see these common complaints and wonder how I’ve not ever had issues with it.

bluish29
7 replies
21h21m

I will try to summarize the complaints (mine at least) in a few simple points:

1. pip freeze will miss packages not installed by pip (e.g., by conda).

2. It includes all packages in the environment, even ones not used in the project.

3. It just dumps all packages, their dependencies, and sub-dependencies. Even without conflicts, if you happen to change a package, it is very hard to keep track of the dependencies and sub-dependencies that need to be removed. At some point, your file will be a hot mess.

4. If you install a platform-specific version of a package, that information will not be tracked.

aidos
3 replies
20h47m

Ok. I think that’s all handled by my workflow, but it does involve taking responsibility for requirements files.

If I want to install something, I pip install and then add the explicit version to the base. I can then freeze the current state to requirements to lock in all the sub dependencies.

It’s a bit manual (though you only need a couple of cli commands) but it’s simple and robust.

bluish29
1 replies
20h42m

I don't think that manual handling of requirements.txt in a collaborative environment is a robust process; it would be a waste of time and resources to handle it like that. And I don't know about your workflow, but it is obviously not standard, and it does not address the first and fourth points.

aidos
0 replies
20h24m

Haha. Ok. I think that’s where we’re just going to have to agree to disagree.

pastorhudson
0 replies
3h55m

This is my workflow too. And it works fine. I think the disconnect here is that I grew up fighting dependencies when compiling other programs from source on Linux. I know how painful it can be and I’ve accepted the pain and when I came to python/venv I thought “This isn’t so bad!”

But if someone is coming from data science and not dev-ops, then no matter how much we say "all you have to do is...", the response will be: why do I have to do any of this?

paddy_m
0 replies
19h13m

Can you name a package manager (any language) that handles #3 well?

How does it handle the problem?

graemep
0 replies
6h37m

Problems 1 and 2 can be solved by using a virtualenv/venv per project.
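
Concretely, the per-project setup is just the standard commands:

    python -m venv .venv
    source .venv/bin/activate        # on Windows: .venv\Scripts\activate
    pip install -r requirements.txt
    pip freeze                       # now lists only this project's packages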

3 is solved by the workflow of manually adding requirements and not including dependencies. It may not work for everyone. Something like pipreqs might work for many people.

I do not understand why 4 is such a problem. Can you explain further?

d0mine
0 replies
8h29m

1/4 - Ordinary `pip install` works for binary/platform-specific wheels (e.g., numpy) too, and even for non-Python utilities (e.g., shellcheck-py).

2/3 - You need to track only the direct dependencies _manually_, but for reproducible deployments you need fixed versions of all dependencies. The latter is easy to generate _automatically_ (`pip freeze`, pip-tools, pipenv/poetry/etc.).
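
With pip-tools, for example, the manual/automatic split looks roughly like this (requirements.in and requirements.txt are the pip-tools filename conventions):

    # requirements.in (edited by hand): direct dependencies only
    numpy
    pandas>=2.0

    # pip-compile requirements.in   -> writes requirements.txt with every
    #                                  transitive dependency pinned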

ShamelessC
0 replies
13h26m

Yes, there are more problems with Windows.

bmitc
2 replies
17h25m

Poetry handles all of this properly.

ShamelessC
0 replies
13h26m

Just not PyTorch apparently.

331c8c71
0 replies
8h49m

I regularly observe it stalling at the dependency resolution stage upon changing the version requirements for one of the packages (or the Python version requirements).

n8henrie
0 replies
2h54m

I follow a similar approach -- top-level dependencies in pyproject.toml and then a pip freeze to get a reproducible set for applications. I know there are edge cases but this has worked really well for me for a decade without much churn in my process (other than migrating from setup.py to setup.cfg to pyproject.toml).
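
The pyproject.toml side of that is just the standard [project] table; something like this (names and versions illustrative):

    [project]
    name = "my-app"
    version = "0.1.0"
    dependencies = [
        "numpy>=1.24",
        "pandas>=2.0",
    ]

Then `pip install .` followed by `pip freeze > requirements.txt` gives the locked set.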

After trying to migrate everything to pipenv and then getting burned, I went back to this and can't imagine I'll use another third-party packaging project (other than nix) for the foreseeable future.

actuallyalys
0 replies
5h43m

The post you’re responding to said that there are many Python packaging options, not that they don’t work. Pip freeze works reasonably well for a lot of situations but that doesn’t necessarily mean it’s the best option for their notebook tool, especially if they want to attract users who are used to conda.

bluish29
2 replies
22h32m

The link redirect does not specify which point in the list you are referring to, but I guess it is "Install missing packages from...". If so, then I really wonder whether you mean supporting something like '!pip install numpy' as in Jupyter, or something else?

I don't think this is really a solution, not to mention that it raises the question: does it support running shell commands using '!' like Jupyter Notebook does?

akshayka
1 replies
22h25m

Oh, sorry for not being more clear. That's not the one. It's "Package management: make notebooks reproducible down to the packages they use": https://marimo-team.notion.site/840c475fd7ca4e3a8c6f20c86fce...

Does that align with what you're talking about?

That page has some scrawled brainstormed notes. But we haven't spent time designing a solution yet.

bluish29
0 replies
22h16m

Thanks. That is precisely what I was talking about in my comment. It would solve the problem if we had something like that integrated natively. I understand that between pip, conda, mamba, and all the others it would be a hard problem to solve, but at least auto-generating requirements.txt would be easier. To be honest, the hard part is identifying packages and where they came from, not what to do with that information. Good luck with the development.

gcarvalho
0 replies
8h6m

The third half is data which only exists on your machine :P

And even if it’s on some shared storage, it may have been generated by another unreproducible notebook or worse, manually.

331c8c71
0 replies
8h54m

Nix is the only solution for reproducible environments that I would call rock-solid.

It comes with costs and the gpu-related stuff is especially tricky e.g. https://www.canva.dev/blog/engineering/supporting-gpu-accele...

simonw
8 replies
21h26m

This is amazing. I'm a big user of both Jupyter notebooks and Observable notebooks (https://observablehq.com/), and the thing I miss most from Observable when I'm using Jupyter is cell reactivity.

You've solved that incredibly well!

I also really like that the Marimo file format is just Python. Here's an example saved file from playing around with the intro: https://gist.github.com/simonw/e6e6e4b45d1bed9fc1482412743b8...

Nice that it's Apache 2 licensed too.

Wow, I just found the GitHub Copilot feature too!

mscolnick
3 replies
21h6m

Myles here (other core contributor) -

We are thrilled to see you have such a strong positive reaction. It means a lot coming from you - I initially learned web development using Django and landed my first contracting gig with Django.

I drifted away from writing Python and towards Typescript - but marimo has brought me back to writing Python.

mind-blight
1 replies
18h41m

Congrats Myles! Super excited that you all have finally open sourced! I'm gonna start moving my Jupyter notebooks over to this asap. I love that it's all just .py files.

Have you had anyone use Marimo to write production web app code? I've been doing a lot of AI experiments for the new venture, and it's been a pain to have to switch back and forth between .ipynb files and regular py files

mscolnick
0 replies
18h10m

People have used marimo for production web apps. They won't get you as far as writing HTML/JS. But great for internal tools or external showcases, tutorials, interactive blogs, etc.

Our friends at SLAC use marimo for their internal exploration experiments and for publishing interactive apps. Here is an example: https://marimo.io/@public/signal-decomposition

arthurwu
0 replies
19h32m

let's go!! so excited to see this get deserved attention

LoulouMonkey
3 replies
18h52m

Hi Simon, slightly unrelated question.

I'm a big fan of your work, and as I've learnt a lot from reading your blog posts over the years, I'd be curious to know a bit more about typical use cases for wanting to work with Observable notebooks.

The only reason I'm using a JavaScript notebook tool (Starboard.gg) is to be able to access cool visualisation packages like Anychart or Highcharts.

Given the hype around Observable notebooks, I feel that I'm missing something.

What makes you decide to start something in an Observable notebook rather than in Jupyter?

Thanks!

simonw
2 replies
16h55m

I primarily use Observable to build interactive tools, as opposed to Jupyter which I use more for exploratory development and analysis.

Here are some of my Observable notebooks which illustrate the kind of things I use it for:

https://observablehq.com/@simonw/search-for-faucets-with-cli...

https://observablehq.com/@simonw/openai-clip-in-a-browser

Those are both from https://simonwillison.net/2023/Oct/23/embeddings/

https://observablehq.com/@simonw/gpt4all-models provides a readable version of a JSON file on GitHub

https://observablehq.com/@simonw/blog-to-newsletter is the tool I used to assemble my newsletter

A killer feature of Observable notebooks for me is that they provide the shortest possible route from having an idea to having a public URL with a tool that I can bookmark and use later.

YousefED
0 replies
11h2m

Congrats OP on launching this, looking forward to diving further in! It's great to see people experimenting in the Reactive + Live Programming space; like you mention, I think it can bring a lot of improvements to how we build software. Did you run into any limitations adopting this model?

> A killer feature of Observable notebooks for me is that they provide the shortest possible route from having an idea to having a public URL with a tool that I can bookmark and use later

Thanks for sharing, Simon! I'm working on an open-source Notion + Observable combination (https://www.typecell.org), where documents seamlessly mix with code and can mix with an AI layer (e.g.: https://twitter.com/YousefED/status/1710210240929538447)

The code you write is pure TypeScript (instead of something custom like ObservableJS), which opens more paths to interoperability (aside from having a public URL). For example, I'm now working to make the code instantly exportable so you can mix it directly into existing codebases (or deploy on your own hosting / Vercel / whatever you prefer).

LoulouMonkey
0 replies
8h55m

Thanks for getting back to me, I'll go through the examples you shared.

SushiHippie
5 replies
22h49m

Looks cool!

Have you looked into WASM? Something like a jupyterlite [0] alternative for marimo?

And are there plans to integrate linting and formatting with ruff? [1]

[0] https://jupyterlite.readthedocs.io/en/stable/

[1] https://github.com/astral-sh/ruff (ruff format is almost 100% compatible with black formatting)

akshayka
4 replies
22h45m

We started looking into WASM this week, and did some light exploratory coding toward it. It's on our roadmap: https://marimo-team.notion.site/The-marimo-roadmap-e5460b9f2...

A ruff integration is a great idea. I'll add it to the roadmap.

SushiHippie
1 replies
22h27m

<2 cents>

I see some package management stuff on the roadmap.

Maybe you could take a look at the cargo cli, like pixi did [0]. IMO it's a nice user experience.

[0] https://prefix.dev/

</2 cents>

akshayka
0 replies
22h13m

Thanks for the suggestion. We'll definitely take a look.

prabir
0 replies
22h2m

Looking forward to the WASM integration. Being able to put it on a plain filesystem such as Nextcloud and run it there would be great. I have been trying to get JupyterLite's WASM working in the Nextcloud alternative I have been working on, so I would love to try this.

SushiHippie
0 replies
22h35m

Perfect, thank you!

hedgehog
4 replies
22h28m

This looks quite nice and it might compose well with a cache library like the one posted on HN recently (XetCache, https://news.ycombinator.com/item?id=38696631).

noahlt
3 replies
22h20m

Yeah, having worked on alternative notebooks before, one of the big implicit features of Jupyter notebooks is that long-running cells (downloading data, training models) don't get spuriously re-run.

Having an excellent cache might reduce spurious re-running of cells, but I wonder if it would be sufficient.

akshayka
2 replies
22h6m

We've thought briefly about cell-level caching; it's a topic that's come up a couple of times now with our users. Perhaps we could add it as a configuration option, at the granularity of individual cells. Our users have found that `functools.cache` goes a long way.

We also let users disable cells (and their descendants), which can be useful if you're iterating on a cell that's close to the root of your notebook DAG: https://docs.marimo.io/guides/reactivity.html#disabling-cell...
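
For example, wrapping the expensive step in `functools.cache` means reactive re-runs hit the cache instead of recomputing (a generic sketch, nothing marimo-specific):

    import functools

    @functools.cache
    def expensive_step(n: int) -> int:
        # stand-in for slow work (training, downloading, ...)
        return sum(i * i for i in range(n))

    expensive_step(10_000_000)  # slow the first time a cell runs this
    expensive_step(10_000_000)  # instant on re-runs: served from the cache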

smacke
1 replies
20h54m

ipyflow has a %%memoize magic which looks quite similar to %%xetmemo (just without specifying the inputs / outputs explicitly): https://github.com/ipyflow/ipyflow/?tab=readme-ov-file#memoi...

Would be cool if we could come up with a standard that works across notebooks / libraries!

hedgehog
0 replies
11h30m

Function-level caching is the best match for how I'd use it. Often the reason for bothering to cache is that the underlying process is slow, so some kind of future-with-progress wrapper could also be interesting. An example of how that could be used would be wrapping a file transfer so the cell can show progress and then when the result is ready unwrap the value for use in other cells. Or another example would be training in PyTorch, yield progress or stats during the run and then the final run data when complete.
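
A rough sketch of that future-with-progress idea (the shape and names here are hypothetical, not an actual marimo API):

    import concurrent.futures
    import time

    def transfer_with_progress(report):
        # stand-in for a slow file transfer or training run
        for pct in range(0, 101, 25):
            report(pct)       # stream progress for the cell to display
            time.sleep(0.1)
        return "result.bin"   # the final value other cells would consume

    with concurrent.futures.ThreadPoolExecutor() as pool:
        future = pool.submit(transfer_with_progress, lambda p: print(f"{p}%"))
        value = future.result()  # unwrap once the run completes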

warthog
3 replies
23h7m

I haven't worked a lot with Jupyter notebooks, but I think it would be good for you to put more emphasis on Jupyter vs. Marimo on your website.

pvg
1 replies
23h3m
noahlt
0 replies
22h58m

It's there, but warthog is right, it should be a toplevel section like "A reactive programming environment" — yes ideally people would read the description and understand the differences themselves, or consult the FAQ, but the fact is that most people will understand Marimo in relation to Jupyter and so you might as well optimize that path.

alsodumb
0 replies
23h4m

Copying from a reddit answer by OP: https://www.reddit.com/r/MachineLearning/comments/191rdwq/co...

marimo solves problems in reproducibility, maintainability, interactivity, reusability, and shareability:

*Reproducibility* In Jupyter notebooks, the code you see doesn't necessarily match the outputs on the page or the program state. Some cases in which this can happen: (1) if you delete a cell, its variables stay in memory, which other cells may still reference (2) users can execute cells in arbitrary order. This leads to widespread reproducibility issues. One study analyzed 1 million Jupyter notebooks and found that 36% of them didn't reproduce (https://blog.jetbrains.com/datalore/2020/12/17/we-downloaded...).

In contrast, marimo guarantees that your code, outputs, and program state are all synchronized, making your notebooks more reproducible by eliminating hidden state. marimo achieves this by intelligently analyzing your code and understanding the relationships between cells, and automatically re-running cells as needed (sort of like a spreadsheet but better).

*Maintainability* marimo notebooks are stored as pure Python programs (.py files). This lets you version them with git; in contrast, Jupyter notebooks are stored as JSON and require extra steps to sensibly version.

*Interactivity* marimo notebooks come with UI elements that are automatically synchronized with Python (like sliders, dropdowns) ... scrub a slider and all cells that reference it are automatically re-run with the new value. This is very difficult to get working in Jupyter notebooks.

*Reusability* marimo notebooks can be executed as Python scripts from the command-line (since they're stored as .py files). In contrast, this requires extra steps/effort to do for Jupyter, such as copying and pasting the code out or using external frameworks. In the future, we'll also let you import symbols (functions, classes) defined in a marimo notebook into other Python programs/notebooks, something you can't really do with Jupyter.

*Shareability* Every marimo notebook can double as an interactive web app, complete with UI elements, which you can serve using our CLI. This isn't possible in Jupyter without substantial extra effort.

You might also want to check out Joel Grus' talk on notebooks. We solve many of the problems he highlights: https://www.youtube.com/watch?v=7jiPeIFXb6U&t=1s

petters
2 replies
5h30m

Defining the same variable more than once is an error. The reason for this is obvious. But if the variable is never used in a cell that does not first write to it, reusing the variable name should be possible.

Allowing that would be good, because many notebook cells start with "fig, ax = plt.subplots(2, 2)" and this is currently not allowed more than once.
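
If I understand marimo's underscore convention correctly (leading-underscore names are local to a cell), something like this might sidestep the error, though it means renaming in every cell:

    import matplotlib.pyplot as plt

    # leading-underscore names are treated as private to the cell,
    # so each cell can reuse them without a redefinition error
    _fig, _ax = plt.subplots(2, 2)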

jkl5xx
1 replies
3h55m

Does the local underscore variables feature solve this? Or the approach outlined in the plots tutorial? IMO, not allowing redeclaration is more valuable than supporting this use case. A slight paradigm shift away from your example gives you the significant benefits of a reactive environment with fewer edge cases/quirks. I'd much rather have a notebook error out instead of silently overwriting a value. You save so much time debugging.

bluish29
0 replies
1h24m

> Does the local underscore variables feature solve this?

I tried this yesterday while converting a Jupyter notebook with a lot of fig, axs assignments, and it was very annoying converting all of them. I tried the local _ feature with fig_, ax1_, etc., but those are considered variables that cannot be reused too. Furthermore, I expected local vs. global variables to be cell-based somehow, but that was naive on my part. It does static analysis, not dynamic, so defining something like _suffix, appending it to all reused variables, and assigning different values for each cell would need dynamic analysis to work.

chris_nielsen
2 replies
15h56m

I love this, but I'm using DataSpell from JetBrains at the moment because it has 2 killer features:

1. Variable viewer, so I can see the current value of all variables in scope.

2. Interactive debugger.

Maybe the variable viewer is only important because Jupyter notebooks don't track and rerun dependencies? So I wouldn't need it with Marimo. But the interactive debugger is priceless.

Any plan to add debugging?

mscolnick
1 replies
15h49m

1. We do have a variable viewer. We have a few helper panels in the bottom left.

2. PDB support is planned and was scoped out yesterday.

Appreciate the feedback!

chris_nielsen
0 replies
12h2m

That's awesome, ok I'm going to go check it out. Great work!

zengid
1 replies
21h25m

Very cool! This is something Jack Rusher calls for in his talk "Stop Writing Dead Programs": https://www.youtube.com/watch?v=8Ab3ArE8W3s

JayCeeM
0 replies
18h57m

Also from Joel Grus: "I don't like notebooks" https://www.youtube.com/watch?v=7jiPeIFXb6U

paddy_m
1 replies
19h19m

Very exciting! I took a quick look and I have a couple of questions.

1. Can you describe your interactive widget story? I see that you integrated Altair, and there is some custom-written React code around it [0] [1]. I'd be interested in porting my table widget to your platform at some point.

2. How much, if any does this depend on the jupyter ecosystem?

3. How does this interact with the jupyter ecosystem?

[0] https://github.com/marimo-team/marimo/blob/b52faf3caf9aa73f4... [1] https://github.com/marimo-team/marimo/blob/b52faf3caf9aa73f4...

akshayka
0 replies
18h25m

1. We don't have a public plugin API yet, but we will in the future. Our (internal) plugins are represented as custom elements: Python writes the HTML (e.g., `<marimo-vega ...>`) and the frontend instantiates it. In the meantime, maybe we can help you port your table widget and make it a marimo plugin. You can reach us on our Discord (https://discord.gg/JE7nhX6mD8) or on GitHub.

2. marimo was built from scratch; it doesn't depend on Jupyter or IPython at all.

3. marimo doesn't interact with the Jupyter ecosystem. We have brainstormed the possibility of a compatibility layer that allows Jupyter widgets to be used as marimo plugins, but right now that's just an idea.

mondrian
1 replies
21h35m

Looks cool. This is kind of like streamlit, which (I think) tried to escape the limitations of notebooks by giving you an API to quickly make a shareable app with sliders/charts etc. (Yet it retains some notebook concepts like 'cells').

Marimo kind of takes the reactive widgets of streamlit and brings them back into a notebook-like UI, and provides a way to export the notebooks into shareable apps.

akshayka
0 replies
21h26m

Thanks! One way we differ from streamlit is that ML/data/experimentation work can start in marimo — i.e., you can use marimo for traditional notebooking work, without ever making an app. But you can also use marimo to make shareable apps as you've articulated.

krawczstef
1 replies
19h42m

How do you read the resulting Python files? That's what I'm struggling with -- but I guess the point is that you don't read them, you use marimo for that?

akshayka
0 replies
19h34m

Thanks for the question. Each cell is represented as a function that maps its referenced variables to the variables it defines. Cells are sorted in the order they appear on the notebook page.

If you run `marimo tutorial fileformat`, that'll open a tutorial notebook that explains the fileformat in some detail.
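
A rough sketch of what a saved notebook looks like under that scheme (cell/function names here are approximate, not the exact generated code):

    import marimo

    app = marimo.App()

    @app.cell
    def __():
        x = 1
        return (x,)       # the cell defines x

    @app.cell
    def __(x):            # this cell references x...
        y = x + 1
        return (y,)       # ...and defines y

    if __name__ == "__main__":
        app.run()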

exe34
1 replies
19h30m

That's amazing! Can I edit it in another editor, save the file and have it updated live in the browser notebook? Or does it have to recompute everything?

akshayka
0 replies
18h31m

Not yet, but that's something we do want to support.

esafak
1 replies
21h15m

Could this be used with MDX or something to embed interactive examples in documentation? That is an underserved use case.

mscolnick
0 replies
20h4m

It is not possible at the moment (we use iframes in our documentation), but once we support WASM, it should be possible.

dimatura
1 replies
21h54m

I already use jupytext to store notebooks as code but the improved state management and notebook-as-app features are pretty compelling and I'm trying it out.

Unfortunately, I'm quite used to very specific vim keybindings in Jupyter (https://github.com/lambdalisue/jupyter-vim-binding) that make it pretty hard to use anything else :/

aldanor
0 replies
6h16m

If you're a vimmer and a jupyter user, do yourself a favour and switch from browser to vscode: vim emulation is much better overall and you get proper python lsp experience, with jumping to definitions, type inference, copilot, and all that.

(Neovim user myself, as much as I dislike vscode for everything else, as of now it's hard to replace it when using jupyter)

bsdz
1 replies
21h29m

This is a great idea. I'd been planning to create something similar where cells are topologically ordered based on their dependency structure; although I was thinking perhaps to integrate with Jupyter more, eg use their existing kernel web sockets infrastructure. In my mind, one would be able to zoom out and see a graph view where hovering over a node would show its corresponding cell with content / output. Each node might be coloured according to execution status. That said, I'm not a UI expert and I never got around to it. So thanks for your efforts, I'll definitely give it a spin.

akshayka
0 replies
21h24m

That sounds really cool! marimo has a dependency graph viewer built-in, but we could definitely improve it. Coloring nodes by execution status, and annotating cells with their variable defs/refs, would be great quality-of-life improvements.

bitsrat
1 replies
16h35m

I read in a comment that Marimo is an alternative to Jupyter. Does it not depend on Jupyter Server or ipykernel? Is it a replacement for JupyterLab?

I am thinking of Jupyter as all the components in this diagram - https://docs.jupyter.org/en/latest/projects/architecture/con...

Sorry, I did not get to look into the codebase yet.

mscolnick
0 replies
15h40m

Correct, it does not depend on Jupyter. It's built from the ground up with different principles in mind.

aredox
1 replies
18h55m

Awesome!

What would be the best way to use it locally in a minimal, self-contained install?

derHackerman
0 replies
18h42m

Try using pipx!

Micoloth
1 replies
19h35m

Wow.. Really great work, finally someone is doing it!

Since I've thought about this for a long time (I've actually even made a very simplified version last year [1]), I want to contribute a few thoughts:

- Cool that you have a VSCode extension, but I was a little disappointed that it opens a full browser view instead of using VSCode's existing, good notebook interface. (I get that you want to show the whole frontend, but I'd love to be able to run the reactive kernel within the full VSCode ecosystem. The included GitHub Copilot is cool, but that's not all.)

- As other comments said, if you want to go for reproducibility, the part about Package Management is very important. And it's also mostly solved, with Poetry etc...

- If you want to go for easy deployment of notebook code to production, another very cool feature would be to extract (as a script) all the code needed to produce a given cell's output! This should be very easy since you already have the DAG. This actually existed at some point in the VSCode Python extension, then they removed it.

Again, great job

[1] https://github.com/micoloth/vscode-reactive-jupyter

smacke
0 replies
19h20m

You're probably referring to nbgather (https://github.com/microsoft/gather), which shipped with VSCode for a while.

nbgather used static slicing to get all the code necessary to reconstruct some cell. I actually worked with Andrew Head (original nbgather author) and Shreya Shankar to implement something similar in ipyflow (but with dynamic slicing and a not-as-nice interface): https://github.com/ipyflow/ipyflow?tab=readme-ov-file#state-...

I have no doubt something like this will make its way into marimo's roadmap at some point :)

yowlingcat
0 replies
19h32m

This is very cool. I think I need to play around with this a bit more to wrap my head around the reactivity element, but the basic shift of ipynb to standard Python would be such a huge workflow improvement for my team. We use jupyter notebooks when prototyping and trying to code review unwieldy python-in-JSON is miserable. Great to see an alternative that's worked its way around that.

wisty
0 replies
10h48m

Cool. On a side note, I think the old Jupytext extension is hugely underrated. It lets Jupyter run a .py file (with markdown notes as comments in the file, displayed as notes in the web page).

Both of these solve the most important problems in IPython: horrible git interaction, and a workflow that discourages the good programming practice of writing library files. Jupytext fixes most of the weird non-deterministic behaviour by forcing you to rerun the script every time you load it (rather than via reactive techniques). State is OK for power users, but it's known to be a massive pain for people who are just learning programming, and an issue in large projects or with interaction.

With this new project having reactive updates I think it's definitely going to be great for beginners, or in gnarly projects.

I wonder if it runs on Pyodide (CPython compiled to run in the browser, with matplotlib and scipy bundled).

stuaxo
0 replies
2h20m

This is good, I've been waiting for something like this to solve the issue of determinism in notebooks.

smacke
0 replies
20h57m

I'm a big fan of Marimo (and of Akshay and Myles in particular); it's great to finally see a viable competitor to Jupyter as it can only mean good things for the ecosystem of scientific tooling as a whole.

rurban
0 replies
21h14m

I'll definitely try it out tomorrow! Could fix a lot of problems with my current project.

rossjudson
0 replies
13h8m

Arrggghh. Now I have to learn Python, which I've been actively resisting and making jokes about for years.

robsh
0 replies
1h44m

It would be amazing if it could be deployed with pyodide/wasm as an alternative to a Python web server. Truly a standalone interactive notebook, hosted with plain html.

petters
0 replies
10h56m

Looks really impressive!

But state is not tracked perfectly; sometimes you have to manually re-run a cell. For example, if one cell defines a dataclass instance d and another cell sets d.x = "new value", then other cells using d.x will not know that it has changed.
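
A minimal illustration of the pitfall, written as it would be split across cells:

    from dataclasses import dataclass

    # cell 1
    @dataclass
    class Config:
        x: str = "old value"

    d = Config()

    # cell 2: attribute mutation, not a (re)definition of d,
    # so cells depending on d are not re-run automatically
    d.x = "new value"

    # cell 3: may keep showing the stale value until re-run manually
    print(d.x)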

peter_l_downs
0 replies
21h51m

Marimo are wonderful little pets, I used to have some and really liked it. I should get some more. Never failed to start a conversation when guests came over.

https://soltech.com/blogs/blog/how-to-care-for-your-marimo-m...

nnx
0 replies
11h54m

Very interesting project, a breeze of fresh air and welcome competition to Jupyter.

I guess it's still very early, but the onboarding for the Marimo VSCode extension is not great at the moment; I have no idea how to actually start writing a Marimo notebook (there's no "Create: New Marimo notebook" option like Jupyter's).

I then tried cloning the cookbook repo and got "module not found" errors that are even less friendly than when it happens in Jupyter: you have to figure out which cell the error actually comes from to even know which module is missing.

mvelbaum
0 replies
10h7m

Does this allow running a long-running task in the background, so that a user can close and reopen the tab and continue seeing all the output that has been produced thus far?

This is currently being worked on in Jupyter: https://github.com/jupyterlab/jupyterlab/pull/15448

jwilber
0 replies
15h54m

This is amazing!

j0e1
0 replies
20h53m

This is a welcome alternative to Jupyter Notebook/Lab. Great work! One thing that would be nice is the ability to see previews of Marimo notebooks on GitHub (like with Jupyter notebooks). I am not sure if this is possible, given you would have to run the code to see the output.

ingenieroariel
0 replies
15h18m

The list of dependencies seems very short; apart from tornado, it does not seem like the other ones pull in a lot of other deps.

Congrats, this looks very useful and awesome.

  dependencies = [
    # cli
    "click>=8.0,<9",
    # python 3.8 compatibility
    "importlib_resources>=5.10.2; python_version < \"3.9\"",
    # code completion
    "jedi>=0.18.0",
    # compile markdown to html
    "markdown>=3.4,<4",
    # add features to markdown
    "pymdown-extensions>=9.0,<11",
    # syntax highlighting of code in markdown
    "pygments>=2.13,<3",
    # for reading, writing configs
    "tomlkit>= 0.12.0",
    # web server
    "tornado>=6.1,<7",
    # python <=3.9 compatibility
    "typing_extensions>=4.4.0; python_version < \"3.10\"",
    # for cell formatting; if user version is not compatible, no-op
    "black",
  ]

elijahbenizzy
0 replies
21h18m

You've built observable but for python. Love it!

carterschonwald
0 replies
20h44m

Awesome! I've been wanting this sort of thing for a long time, but I've only been aware of the Julia tool Pluto.

bravura
0 replies
18h41m

I am most intrigued by the annotation demo you showed, since annotation is painful to set up for small projects.

Can you talk about it in more detail?

Can I tell who the user is so I can have multiple annotators?

Can I use gold data to determine which annotators aren't paying attention?

Where do I learn more about how to build this kind of tool?

Overall, kudos, I signed up for the waitlist.

aqader
0 replies
19h35m

this is really cool, can’t wait to try it out for some ML pipeline development. kudos myles and akshay!

ametrau
0 replies
18h23m

Thank you. Jupyter has me tearing my hair out a lot of the time. Some completely bizarre design decisions.

Onawa
0 replies
8h21m

Aren't many of the issues with Jupyter mentioned in this thread solved by Quarto? I have been advocating for its use more at work, and NIH has even started offering classes on it through the NIH library.

Beefin
0 replies
20h58m

We use the jupyter-server kernel gateway API at https://nux.ai and would love to explore using marimo's API for code execution.