Show HN: I built a non-linear UI for ChatGPT

kkukshtel
8 replies
23h59m

I built a similar demo to this but for images - IMO this is a much better structure for working with LLMs as it allows you to really riff with a machine instead of feeling like you need a deterministic "next step"

https://youtu.be/k_mJgFmdWWY

dvt
2 replies
23h34m

Sweet demo, you should do a Show HN! This is much more interesting to me, as the visual element makes much more sense here rather than just putting entire paragraphs in nodes.

serial_dev
0 replies
13h3m

The text nodes are also interesting; it's like a mind map. I can see how it could be great for learning, planning, collaboration, exploring...

kkukshtel
0 replies
2h41m

Thanks for the encouragement! I just put up a post, hope other people like it!

lukan
1 replies
10h32m

Looks good. I tried it out and it is indeed alpha in many regards (e.g. sometimes it does not save a picture on Windows, sometimes it does not show the prompt, ...), but the idea has potential. I would encourage you to keep working on it (and maybe keep in mind that if this suddenly goes viral and you have no API limits in place, you might get poor quickly).

kkukshtel
0 replies
2h40m

Yeah, the idea was mostly to put a stake in the ground for an early UX experiment (I released it last year), but it's been in the back of my mind as something to continue experimenting with, and honestly to rebuild for the web in the custom game engine I'm working on.

kkukshtel
0 replies
2h39m

What I really want to do is make it model agnostic. SDXL was an easy choice at the time, but you could just as easily swap in a local model or any hosted visual model with an endpoint. The core idea is just tying an LLM to an image model and tying those to a force-directed graph, so really anything could be an input (or an output - you could also do it with text).
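
Roughly, all the abstraction it needs is something like this (an illustrative sketch, not the demo's actual code; the names are made up):

    # Illustrative sketch only, not the demo's real code: any backend that can
    # turn a prompt into image bytes (local SDXL, a hosted endpoint, ...) can sit
    # behind a graph node.
    from dataclasses import dataclass, field
    from typing import Protocol

    class ImageBackend(Protocol):
        def generate(self, prompt: str) -> bytes:
            """Return raw image bytes for a prompt."""
            ...

    @dataclass
    class GraphNode:
        prompt: str
        image: bytes | None = None
        children: list["GraphNode"] = field(default_factory=list)

        def branch(self, prompt: str, backend: ImageBackend) -> "GraphNode":
            # Each riff is just a child node; the force-directed layout only
            # needs the parent/child edges, not anything model-specific.
            child = GraphNode(prompt=prompt, image=backend.generate(prompt))
            self.children.append(child)
            return child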

setnone
0 replies
23h17m

Great stuff! That deterministic "next step" is the last line of defense for us humans :)

btbuildem
7 replies
23h47m

Interesting take! It does seem to address a typical "intermediate" workflow; even though we prefer linear finished products, we often work by completing a hierarchy first. I've been using Gingko [1] for years, I find it eases the struggle of organizing the structure of a problem by both allowing endless expansion of levels, and easily collapsing it into a linear structure.

In your case, do you hold N contexts (N being the number of leaves in the tree)? Are the chats disconnected from each other? How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

1: https://gingkowriter.com/

Ringz
4 replies
23h13m

Slightly OT, but there was a standalone piece of software just like Gingko for the Mac. Do you know anything about it?

Edit: I think it was an old version of gingko as a desktop app. Still available at https://github.com/gingko/client/releases

ludwigschubert
1 replies
22h43m

Are you thinking of Bike?

https://www.hogbaysoftware.com/bike/

(Maybe not — this isn’t markdown first; but it is a very macOS-y, keyboard driven, hierarchical outliner that I enjoy.)

Ringz
0 replies
22h30m

Bike looks very nice and it’s built on open file formats. I will try it out. Look at my edit above: it might be an old version of Gingko. But I’m on my phone right now and can’t figure it out…

Ringz
0 replies
22h47m

Thanks, but that’s not the one. It was like a pure Markdown outliner, very keyboard driven.

setnone
0 replies
23h26m

Great questions!

> In your case, do you hold N contexts (N being the number of leaves in the tree)?

It depends; contexts are just a form of grouping.

> Are the chats disconnected from each other? How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

RAG with in-app commands. I'm working on a local RAG solution; it's early but promising. Basically, chat with all your data and apply a wide range of commands to it.
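
To give a rough sense of the shape (a simplified sketch of the idea, not the actual implementation; the scoring is a placeholder):

    # Simplified sketch of the idea, not the real code: pull the most relevant
    # chunks of canvas text for a query, then wrap them in a command prompt.
    def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
        # Word-overlap scoring as a stand-in; a real local RAG setup would use embeddings.
        words = set(query.lower().split())
        scored = sorted(chunks, key=lambda c: len(words & set(c.lower().split())), reverse=True)
        return scored[:k]

    def build_prompt(command: str, query: str, chunks: list[str]) -> str:
        # The composed prompt then goes to whatever model/key the user has configured.
        context = "\n".join(retrieve(query, chunks))
        return f"/{command}\n\nContext:\n{context}\n\nTask: {query}"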

TeMPOraL
0 replies
5h21m

> How do you propose to transition from an endless/unstructured canvas to some sort of a finished, organized deliverable?

Why would they need to, though? For me as a potential user of this (and someone who thought about building a tool like this for myself), the tree (or better, a directed graph) is the desired end result.

carlosbaraza
4 replies
9h45m

Some time ago I had an idea for a similar interface without the dragging feature; basically, just a tree visualisation. I usually discuss a tangent topic in the same conversation, but I don't want to confuse the AI afterwards, so I edit a previous message from when the tangent started. However, OpenAI would discard that tangent tree; instead it would be nice to have a tree of the tangent topics explored, without necessarily having to sort them manually, just by visualising the tree.

IanCal
3 replies
8h5m

ChatGPT keeps the full tree, doesn't it? You can swap back and forth on any particular node, last I checked.

endofreach
2 replies
6h26m

I haven't seen that, so I have actually built what the parent described.

So it seems I wasted time unnecessarily... but where exactly do I find the full tree in ChatGPT convos?

noahjk
0 replies
5h2m

I don’t think it’s available on mobile, if that’s where you are. On desktop, you can switch between previous edits.

I’d be interested in seeing what you made though because I’m really interested in the idea of a branching UI

IanCal
0 replies
3h18m

It's all kept but it's not a nice UI. When you change a question you get (on the site, maybe just desktop?) a left and right button to move between the different variations.

One thing you could do is import your data, as the exported conversations have this full tree last time I tried.

CuriouslyC
4 replies
1d

It looks like you put a lot of work into this but node based workflows are ok when they're a necessary evil and just an evil otherwise.

I'd be more interested in a tool where I can "add data" to it by drag and drop or folder import, then I can just type whatever prompt and the app's RAG system pulls relevant data/previous prompts/etc out of its store ranked by relevance, and I can just click on all the things that I want inserted into my context with a warning if I'm getting near the context limit. I waste a lot of time finding the relevant code/snippets to paste in manually.
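
Roughly, I'm picturing something like this (hand-wavy sketch, not any existing product; the scoring and token limit are placeholders):

    # Hand-wavy sketch of the tool described above: rank stored snippets against
    # the prompt, let the user pick some, and warn when the assembled context
    # gets close to the model's limit.
    from dataclasses import dataclass

    CONTEXT_LIMIT_TOKENS = 8_000  # assumed limit, purely for the example

    @dataclass
    class Snippet:
        source: str  # file path or earlier prompt the text came from
        text: str

    def rank(prompt: str, store: list[Snippet]) -> list[Snippet]:
        # Cheap word-overlap relevance as a stand-in for a real RAG retriever.
        words = set(prompt.lower().split())
        return sorted(store, key=lambda s: len(words & set(s.text.lower().split())), reverse=True)

    def assemble(prompt: str, picked: list[Snippet]) -> str:
        body = prompt + "\n\n" + "\n\n".join(s.text for s in picked)
        approx_tokens = len(body) // 4  # rough chars-per-token heuristic
        if approx_tokens > 0.9 * CONTEXT_LIMIT_TOKENS:
            print(f"warning: ~{approx_tokens} tokens, near the {CONTEXT_LIMIT_TOKENS} limit")
        return body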

setnone
0 replies
1d

For me this interface is canvas-based first, node-based second, meaning sometimes I might not even use connections to get my desired result from the LLM, but I have the place and form for the result and I know how to find it. Connections here are not set in stone like in mind-mapping software, for example; they're a tool.

setnone
0 replies
1d

> I'd be more interested in a tool where I can "add data" to it by drag and drop or folder import, then I can just type whatever prompt and the app's RAG system pulls relevant data/previous

This is something very similar to what I'm planning to add next, so stick around.

ramoz
0 replies
1d

Well, here's a somewhat limited version of your idea that really only helps mitigate the copy/paste effort with coding: https://github.com/backnotprop/prompt-tower

My original idea was a drag-and-drop (DnD) interface that works at the OS level as a HUD… and functions like your idea, but that is not so simple to develop.

durch
0 replies
22h16m

This sounds a lot like my dream setup. We've been slowly building something along those lines. I've linked a video at the bottom that shows how we did something similar with an Obsidian plugin. Hit me up if you're interested in more details; we'd be happy to get an alpha user who gets it.

We've mostly had trouble explaining to people what exactly it is that we're building, which is fine, since we're mostly building for ourselves, but it still seems like something like this would be the killer app for LLMs.

Obsidian Canvas UI demo -> https://www.youtube.com/watch?v=1tDIXoXRziA

Also linking our Obsidian plugin repo in case someone wants to dive deeper into what we're about -> https://github.com/cloud-atlas-ai/obsidian-client

teruakohatu
3 replies
20h41m

It seems to work well but a desktop app (or self hosted) is essential. I can't paste in valuable API keys to a third party website.

setnone
2 replies
20h29m

The desktop app is coming soon, and the self-host option is already available as part of the Extended License.

I have no plans to open source it at the moment, but it would be great to come up with something like 'open build' for cases like that.

teruakohatu
1 replies
20h24m

The purchase screen made me think self-hosting was coming soon for Extended. How far off is the desktop app, and will it be self-hosted or an interface to the website?

setnone
0 replies
20h19m

Not far off, a few days I would say.

Yes, it's a wrapper with opted-out Sentry and Vercel analytics, just like the self-host package.

ramoz
3 replies
1d1h

I like this and wish OpenAI or Anthropic enabled something similar in their UIs... it would actually be simple: "create a new chat from here".

Otherwise, great job! It's cool, but it's pricey, and that is a personal deterrent.

gopher_space
1 replies
23h58m

I've pegged my thinking on software purchases to local McDonald's drive-thru menu equivalencies.

diebillionaires
0 replies
23h14m

McDonald's is so overpriced, so I cannot condone this method :)

tippytippytango
0 replies
16h14m

I find editing a previous question accomplishes this well, the existing UI already keeps all your previous edits in a revision tree.

iknownthing
3 replies
1d1h

Curious why you settled on the BYOAK approach rather than a subscription approach

setnone
2 replies
1d1h

Subscription fatigue is real :)

tomfreemax
0 replies
23h29m

I have to say, I didn't realize there was no subscription until I saw this comment. That makes it much more interesting from the start.

Yes, I hate subscriptions. Love your approach.

I also love that you focus on your strength, which is the intuitive and flexible interface, rather than the LLM or prompts or whatever. This way it's also very extensible, as every good tool should be.

iknownthing
0 replies
1d

I was thinking it was because it would be easier than keeping track of usage, which I assume you would need to do with a subscription-based model, i.e. all users using your key.

altruios
3 replies
21h22m

The only feedback I would give is that I'm suspicious of (and will not buy) closed-source AI anything. With that said: thank you for sloughing off the subscription model trend! That is welcome.

But going open source so that I know "for sure" no telemetry is being sent and charging for support would be the only way to get money out of me for this. I'm probably the odd one out for this, so take that with a fair helping of salt.

This is a great idea, so much so that it's also something I could probably put together an MVP of in a weekend (or two) of dedicated work (the fancy features that I personally don't care about would probably take longer to implement, of course...).

Good work! Keep it up.

IanCal
1 replies
8h24m

> But going open source so that I know "for sure" no telemetry is being sent and charging for support would be the only way to get money out of me for this.

Is the self hosted option a workable solution for you?

https://www.grafychat.com/d/docs/selfhost

Unless it's minified I guess.

altruios
0 replies
3h49m

I would only use this (or any ai) self-hosted if it works 100% offline.

I would also not want it minified - as I would want the freedom to tinker with it to my personal specifications. Which makes me ask a question: what rights would I have to modify this software, per your license?

setnone
0 replies
13h21m

Thank you!

I would love it if we had some kind of 'open-build' methodology, so that projects not willing to open the source, but willing to allow any necessary audit against the build, could do so. Just a thought.

varispeed
1 replies
18h39m

It's like when I replaced dropbox with just a few scripts and sftp.

igor47
0 replies
3h17m

Syncthing, actually.

I think you were joking but the benefit of designing software at personal scale is often an exponential reduction in complexity.

siva7
0 replies
8h49m

"easily"? well, no except you're a techie.

shanghaikid
2 replies
10h3m

This is interesting and all, but it's a tad complex to use. AI is supposed to simplify your life, but this just ends up making things more complicated.

Ask -> answer, no more steps, that is the core value of ChatGPT or AI.

social_quotient
0 replies
9h42m

Suppose I have a conversation with ChatGPT about a macro, or better yet, a series of macros. We reach the 10th sub-module, but suddenly I find a bug in module 2 (the chat from 20 minutes ago). While I could redirect the chat back to module 2, it's a bit convoluted. Ideally, I'd want to return to an earlier point in the conversation, resolve module 2, and then continue where we left off. However, if I update my response from 20 chats ago, I risk orphaning the rest of the conversation. The response lag also complicates things, because I might move on to new ideas or debugging tasks in the meantime. I suppose I should say that because of the lag time I'm not in sync with the chat; that lag affords me the opportunity to keep doing other things. If the chat were more like Groq, maybe it would be less the case - not sure.

The other thing I find is that if I change how I replied/asked, I get a different answer. I like the idea that I can fork a node and evaluate outcomes based on my varied inputs. You're right that it's hugely more complex, but it's complexity I think I'd love to have available.
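
In data-structure terms, what I'm picturing is roughly this (purely illustrative sketch):

    # Purely illustrative: fork an earlier user turn by adding a sibling branch,
    # so fixing "module 2" doesn't orphan everything that came after it.
    from dataclasses import dataclass, field

    @dataclass
    class Turn:
        role: str  # "user" or "assistant"
        content: str
        replies: list["Turn"] = field(default_factory=list)

    def fork(parent: Turn, original: Turn, new_content: str) -> Turn:
        # The original branch (and all downstream turns) stays intact; the new
        # sibling becomes a separate path whose outcome can be compared later.
        assert original in parent.replies
        alt = Turn(role=original.role, content=new_content)
        parent.replies.append(alt)
        return alt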

setnone
0 replies
9h37m

> Ask -> answer, no more steps, that is the core value of ChatGPT or AI.

This is absolutely the ideal state of the product, I agree.

rajarsheem
2 replies
1d1h

The demo you shared shows you creating a child chat from the original parent chat. Have you tried something like connecting/merging two child chats to create a subsequent child chat? Or maybe simply creating a child chat from a previous child chat?

visarga
1 replies
22h2m

I wish there were a node to load a folder of JSON, TXT or CSV files, pipe them one by one and collect the outputs in another folder. Like an LLM pipeline / prompt editor.
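
Conceptually something like this (an illustrative sketch; the model call is just a stub):

    # Illustrative only: read every JSON/TXT/CSV file in a folder, run the same
    # prompt over each one, and collect the outputs in another folder.
    from pathlib import Path

    PROMPT = "Summarize this file:"

    def call_model(prompt: str) -> str:
        # Stub standing in for whichever LLM endpoint the node would be wired to.
        return f"[model output for {len(prompt)} chars of input]"

    def run_pipeline(in_dir: str, out_dir: str) -> None:
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        for path in sorted(Path(in_dir).iterdir()):
            if path.suffix.lower() not in {".json", ".txt", ".csv"}:
                continue
            result = call_model(f"{PROMPT}\n\n{path.read_text()}")
            (out / f"{path.stem}.out.txt").write_text(result)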

pants2
2 replies
1d1h

From watching the demo it looks interesting, but I figure I would get tired of dragging nodes around and looking for ones that I'm interested in. Does it allow searching?

It would be more interesting to me if it could use AI as an agent to create a graph view - or at least propose/highlight followup questions that self-organize into a graph.

setnone
0 replies
1d

Yes, search is one of my favorite features here; try the '/' shortcut.

setnone
0 replies
1d

> I would get tired of dragging nodes around

Me personally, I find value in taking my time to organize and drag things around, probably because I'm a visual thinker.

groby_b
2 replies
22h40m

I have to admit, I don't get it. (And I want to be clear that's a personal statement, not an overall comment on the app. It looks quite well done, and if others get value from it, awesome!)

But for me, I'm stuck with questions. What's the point of drawing connectors when there seems to be no implied data flow? Is this just for you, as a reminder of the hierarchy of your queries? Or do you actually set the upstream chat as context, and reflow the whole thing if you change upstream queries? (That one would definitely be fun to play with - still not sure about long-term value, but def. interesting.)

Good luck, and looking forward to see where you're taking this!

setnone
0 replies
21h52m

Thank you!

Like I mentioned earlier, for me the app is canvas-based first, node-based second. So connections are a tool, a visual tool to craft or manage a prompt and then feed it to the LLM. The canvas is a visual tool to organize and keep large numbers of chats.

I try to use the LLM not for the sake of chatting, but to get results, and those tools seem to help me with that.

Hope that makes sense.

jonnycoder
0 replies
16h2m

Seems like organized ChatGPT in the form of mind mapping. It's quite intuitive to me because I've had some chats where I kept scrolling back to the first GPT response. So you can map out a question and answer, then create nodes for follow-ups about specific details. Each branch of the tree structure can organize a rabbit hole of follow-ups on a specific topic.

yoouareperfect
1 replies
1d

Very nice! Thanks for sharing, will definitely give it a try. I think we settled on the chat interface for playing with LLMs, but there's nothing really holding us back from trying new ways.

x3haloed
0 replies
1d

Yeah, I'm annoyed that OpenAI has deprecated its text completion models and API. I think there's a ton of value to be had from constrained generation like what's available with the Guidance library.

tomfreemax
0 replies
23h33m

Of course, it can be done with Emacs and Org mode...

It's almost like every piece of software or library gets ported to JavaScript eventually, the difference being that Emacs and Org mode came first.

tomfreemax
1 replies
23h20m

Didn't find it in the documentation. How would I go about it if I want to self-host for a small team of around 14 people?

Should I buy licenses for 14 instances (3x Extended), or one for all, where everyone can see everyone's conversations, or are there accounts? I have a central Ollama instance running and also OpenAI API keys.

Thank you.

setnone
0 replies
23h4m

> How would I go about it if I want to self-host for a small team of around 14 people

> Should I buy licenses for 14 instances (3x Extended)

Yes, that should work. Each license comes with 5 seats/activations. Each seat has its own copy of the data.

rmbyrro
1 replies
23h11m

Thank you so much for building this, it's exactly what I was looking for!

Love the license instead of subscription model. Also loved that I can start trying right away without any hassle.

Couple suggestions:

I can't decide between Extended and Premium options. What does "premium support" mean?

Also, it only shows an upgrade option on the checkout page; perhaps it'd be interesting to include it in the FAQ and also the Pricing section.

setnone
0 replies
22h51m

Thank you!

What does "premium support" mean?

Premium option includes prioritized support and access to new features that might be unavailable for other types of licenses.

I will update the website for more clarity.

pasaley
1 replies
23h55m

Interesting choice of questions in the demo.

Are you from Nepal?

setnone
0 replies
23h29m

No, but I'm a frequent visitor; I love the mountains there!

ntonozzi
1 replies
1d1h

This is wild! What have you found it most useful for?

Have you tried a more straightforward approach that follows the ChatGPT model of being able to fork a chat thread? I could use something like this where I can fork a chat thread and see my old thread(s) as a tree, but continue participating in a new thread. Your model seems more powerful, but also more complex.

setnone
0 replies
1d1h

This is my daily GPT driver, so for almost anything from research to keeping my snippets tidy and well organized. I use voice input a lot to take my time and form my thoughts and requests, and text-to-speech to listen to the answers too.

firtoz
0 replies
22h20m

Thank you, now I really have to try Obsidian...

lIIllIIllIIllII
1 replies
8h41m

For what it's worth, one CSS line lags the HELL out of my laptop on the site. It's backdrop-filter: blur(0.1875rem) for modals, like the YouTube video popup.

wildrhythms
0 replies
6h17m

I'm a front-end dev and I refuse to apply this effect for this reason. Even on high end laptops it uses way too much power and starts blasting the fans.

joshuahutt
1 replies
21h21m

Very cool! I built a version of this [1], but balked at trying to sell it. This is the third iteration of this idea I've seen so far. Your reply popup is a smart feature and a nice touch! Love it. I love the privacy focus and BYOK, as well.

Congrats on the launch!

Really cool to see graph interfaces for AI having their moment. :)

[1] https://coloring.thinkout.app/

diebillionaires
0 replies
14h15m

Wow, this is really cool! Thanks for sharing!

freedomben
1 replies
19h34m

This looks really cool. I did not expect to see something I might actually buy but this is something that could be very nice for me :-)

Will the Self-host package include source (i.e. source available) or is it just the transpiler output?

Also, is there (or plan to be) support for postgres or other database for persistence?

setnone
0 replies
19h23m

Thank you!

> Will the Self-host package include source (i.e. source available) or is it just the transpiler output?

No sources, just a folder with compiled assets that you can run on a static server. This is already available.

> Also, is there (or plan to be) support for postgres or other database for persistence?

Yes, there are plans for local pg.

causal
1 replies
21h27m

Congrats on the launch - I love this. Organizing text is often the hard part when working with LLMs.

Only thing I don't love is heavy mouse use. Are there keyboard shortcuts for all the operations shown?

setnone
0 replies
20h14m

Thanks!

> Are there keyboard shortcuts for all the operations shown?

For now, yes. What would you like to see added?

bredren
1 replies
21h52m

Your full-stack dev graph seems to have 75 queries in it.

Please consider providing a demo video showing how this works with code work.

I get the overall behavior, but sometimes code segments can be quite long, or multiple specific sections need to be combined to create additional context.

It would be helpful to see the current baseline product behavior for interaction on a "common" coding task, solving problems in typescript and / or python.

setnone
0 replies
13h17m

Thank you for the feedback!

I'm planning to release more videos, stay tuned.

Zambyte
1 replies
1d1h

Looks cool! How can I host it?

setnone
0 replies
1d

Thanks! The self-host package comes with the Extended license.

xucian
0 replies
5h5m

Nice, something I didn't know I needed :D

You might want to increase the font weight in the pricing section; it's hard to read.

Also, in "How much does it cost?" I think you should add the Free option too (for those like me who missed the Try For Free button at the top).

whiddershins
0 replies
3h7m

Cool!

You have a typo in the word ‘presicion’

Ironically

wan888888
0 replies
22h18m

Amazing work, kudos! Love the canvas, drag'n'drop and line connectors. Did you use a library or make it yourself?

troupo
0 replies
7h29m

I wanted the same for myself but balked at the amount of work I'd need to do to implement it :)

Great job!

subhashp
0 replies
11h11m

Excellent UI! I love it.

siva7
0 replies
8h52m

Good landing page; it explained the product to me well enough. I also like your concept, as I have sometimes wished for something similar in the past.

rfc
0 replies
1d

Nice! This is really cool. Well done.

raxrb
0 replies
22h22m

Do you plan to open source it? I would love to extend it. I had similar ideas about non-linear UI.

p1esk
0 replies
17h37m

Hard to try it on my phone.

nssmeher
0 replies
22h0m

Great stuff! Interesting use cases will surely appear.

noashavit
0 replies
18h4m

Congrats on the launch! I love that you let people try it without even signing up! The mobile experience needs to work though.

nirav72
0 replies
22h53m

This is great. More importantly - I love the pricing!!

mubu
0 replies
22h55m

This seems very cool and I'd like to try it out

midnitewarrior
0 replies
12h39m

Can you go get acquired by Phind please? Brainstorming with the robots is a non-linear activity and I believe you are on the right track.

jdthedisciple
0 replies
23h35m

Looks packed with stuff; how long did it take you to build this?

entherhe
0 replies
1d1h

I always feel like whiteboarding & concept mapping are better when it comes to generative AI, especially given that we chat in a "multimodal way" these days -- just think of old plain-text SMS compared to the meme-, link- and rich-text-powered IM tools of today.

Congrats! You may also check flowith and ai.affine.pro for similar selling points.

Also, Heptabase is good and they will definitely make an AI version sooner or later.

dangoodmanUT
0 replies
23h11m

Super cool, would be great for prompt engineering and iteration

damnever
0 replies
10h42m

Awesome, this is similar to threaded conversations on Slack.

buescher
0 replies
3h6m

A tree visualization like this one would be great as a complement to tabs in web browsing, especially on a monster display.

bschmidt1
0 replies
17h58m

Powerful stuff, this is the kind of workspace I've been waiting for for AI. Excited to see how it evolves!

brunoborges
0 replies
22h19m

Can you share details of the technology stack used to build the tool?

asadalt
0 replies
23h23m

I wish Perplexity had a similar UI option, so I could lay out my research in multiple paths.

_boffin_
0 replies
1d

Yes! This is what I've been thinking about!

Wheaties466
0 replies
4h54m

Something I built as an add-on, but that would be nice to integrate into some of these front ends, is a find/replace key:value store to help avoid potentially "leaking" something.

If you could replace IPs, domains or subdomains with a filler domain like something.contoso.com and send that to ChatGPT instead of my internal domain, that would be a feature I would pay money for.

Like I said, I have an implementation written in Python for this, but it's an add-on to another frontend, which makes it extra clunky.
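
Stripped down, the idea is roughly this (an illustrative sketch, not my actual add-on; the mapped values are made up):

    # Illustrative sketch (not the actual add-on): swap real hosts/IPs for fillers
    # before sending a prompt out, then map the fillers back in the reply.
    REDACTIONS = {
        # real value (made up here) -> filler that gets sent to the model
        "internal.corp.example": "something.contoso.com",
        "10.1.2.3": "192.0.2.1",
    }

    def redact(text: str) -> str:
        for real, filler in REDACTIONS.items():
            text = text.replace(real, filler)
        return text

    def restore(text: str) -> str:
        for real, filler in REDACTIONS.items():
            text = text.replace(filler, real)
        return text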

LASR
0 replies
1d

Wow. I was so frustrated with chat that I was almost going to write something like this myself. Now I don't have to :)

Curious about the business model here though. How many sales have you had so far, if you don't mind me asking?

7734128
0 replies
1d1h

Make sure to have very tight limits on any API key you provide to someone else. They could burn through tens of thousands of dollars each day if you do not have security in place.