Accessibility in both Notion and Confluence is absolutely abysmal. Any chance you have thought about this while working on Docmost so far? This is pretty important for companies looking to adopt this, given the ADA in the US and the upcoming EAA in the EU. It'd also be nice to have a product that actually did its homework where this is concerned for once :) Let me know if you want me to give it a once-over. Disclaimer: native screen reader user, blind person, developer, accessibility auditor and all that jazz.
Hi, OP. First, congratulations on launching a product, and thank you for giving it strong copyleft! I ran it as directed, and it's pretty slick. I have some detailed comments on the database side of things that I hope you'll take seriously before trying to scale this. I've run a distributed Postgres DB at scale for a well-known company that used `yjs` for precisely the same thing you're doing here, so I have some real-world experience with this.
You do not want to run this in Postgres, or any RDBMS for that matter. I promise you. Here [0] is `y-sweet` [1] discussing (at a shallow level) why persisting the actual content in an RDBMS isn't great. At $COMPANY, we ran Postgres on massive EC2s with native NVMe drives for storage, and they still struggled with this stuff (albeit with the rest of the app also using them). Use an object store, use an LSM-tree solution like MyRocks [2], just don't use an RDBMS, and especially not Postgres. It is uniquely bad at this. I'll explain.
Let's say I'm storing RFC2324 [3]. In TXT format, this is just shy of 20 KB. Even if it's 1/5th that size, it doesn't matter for the purposes of this discussion. As you may or may not know, Postgres uses something called TOAST [4] for storing large amounts of data (by default, any time a tuple hits 2 KB). This is great, except there's an overhead to de-TOAST things. This overhead can add up on retrievals.
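You can see TOAST in action directly: `pg_column_size` reports the bytes actually stored (post-compression, post-TOAST), versus `octet_length` for the raw text. A sketch, assuming the `pages` table and `text_content` column that come up later in this thread:

```sql
-- Compare raw text size against what Postgres actually stores per value
SELECT id,
       octet_length(text_content)   AS raw_bytes,
       pg_column_size(text_content) AS stored_bytes
FROM pages
ORDER BY raw_bytes DESC
LIMIT 5;
```

Any row whose `raw_bytes` clears the ~2 KB threshold will show the compression/TOAST effect, and those are exactly the values that pay the de-TOAST cost on read.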
Then there's WAL amplification. Postgres doesn't really do an `UPDATE`, it does a `DELETE` + `INSERT`. Even worse, it has to write entire pages (8 KB) [5], not just the changed content (there are circumstances in which this isn't true, but assume it is in general). Here's a view of `pg_stat_wal`, after I've been playing with it:
docmost=# SELECT wal_fpi, wal_bytes FROM pg_stat_wal;
wal_fpi | wal_bytes
---------+-----------
1641 | 11537465
(1 row)
Now I'll change a single byte in the aforementioned RFC, and run that again:
docmost=# SELECT wal_fpi, wal_bytes FROM pg_stat_wal;
wal_fpi | wal_bytes
---------+-----------
1654 | 11656052
(1 row)
That is nearly 120 KB of WAL written to change one byte. This is of course dependent upon the size of the document being edited, but it's always going to be bad.
Now let's look at the search query [6], which I've reproduced here (mostly; I left out creator_id and the ORDER BY):
docmost=# EXPLAIN(ANALYZE, BUFFERS, COSTS) SELECT id, title, icon, parent_page_id, slug_id, creator_id, created_at, updated_at, ts_headline('english', text_content, to_tsquery('english', 'method'), 'MinWords=9, MaxWords=10, MaxFragments=10') FROM pages WHERE space_id = '01906698-1b7c-712b-8d4f-935930b03318' AND tsv @@ to_tsquery('english', 'method');
QUERY PLAN
----------------------------------------------------------------------------------------------------------
Seq Scan on pages (cost=0.00..12.95 rows=1 width=192) (actual time=13.473..48.684 rows=3 loops=1)
Filter: ((tsv @@ '''method'''::tsquery) AND (space_id = '01906698-1b7c-712b-8d4f-935930b03318'::uuid))
Rows Removed by Filter: 3
Buffers: shared hit=32
Planning:
Buffers: shared hit=1
Planning Time: 0.261 ms
Execution Time: 48.717 ms
~50 msec to do a relatively simple SELECT with no JOINs isn't great, and it comes from the use of `ts_headline`. Unfortunately, that function has to parse the original document, not just the tsvector summary, to produce results. If I remove it from the query, times plummet to sub-msec, as I would expect.
It doesn't get better if I forcibly disable sequential scans to get the planner to favor the GIN index on `tsv` (unsurprising, given the small dataset):
QUERY PLAN
----------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on public.pages (cost=106.29..110.56 rows=1 width=192) (actual time=17.983..51.424 rows=3 loops=1)
Recheck Cond: (pages.tsv @@ '''method'''::tsquery)
Filter: (pages.space_id = '01906698-1b7c-712b-8d4f-935930b03318'::uuid)
Heap Blocks: exact=1
Buffers: shared hit=41
-> Bitmap Index Scan on pages_tsv_idx (cost=0.00..106.29 rows=1 width=0) (actual time=1.231..1.231 rows=7 loops=1)
Index Cond: (pages.tsv @@ '''method'''::tsquery)
Buffers: shared hit=25
Planning:
Buffers: shared hit=1
Planning Time: 0.343 ms
Execution Time: 51.647 ms
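One standard mitigation for the `ts_headline` cost, straight out of the Postgres full-text docs: compute the headline only for the page of results you actually return, not for every row the filter touches. A sketch against the same query:

```sql
-- ts_headline runs at most LIMIT times, no matter how many rows match
SELECT id, title,
       ts_headline('english', text_content, q,
                   'MinWords=9, MaxWords=10, MaxFragments=10')
FROM (SELECT id, title, text_content
      FROM pages
      WHERE space_id = '01906698-1b7c-712b-8d4f-935930b03318'
        AND tsv @@ to_tsquery('english', 'method')
      LIMIT 20) AS hits,
     to_tsquery('english', 'method') AS q;
```

The document parse is still expensive per row, but at least it's bounded by the page size rather than the match count.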
And speaking of GIN indices, while they're great for this, they also need regular maintenance, else you risk massive slowdowns [7]. This was after having inserted a few large-ish documents similar to the RFC, and creating a few short pages organically:
docmost=# SELECT * FROM pgstatginindex('pages_tsv_idx');
version | pending_pages | pending_tuples
---------+---------------+----------------
2 | 23 | 26
Let's force an early cleanup:
docmost=# EXPLAIN (ANALYZE, BUFFERS, COSTS) SELECT gin_clean_pending_list('pages_tsv_idx'::regclass);
QUERY PLAN
--------------------------------------------------------------------------------------
Result (cost=0.00..0.01 rows=1 width=8) (actual time=16.574..16.577 rows=1 loops=1)
Buffers: shared hit=4659 dirtied=47 written=22
Planning Time: 0.322 ms
Execution Time: 16.776 ms
17 msec doesn't sound like a lot, but bear in mind this was only hitting 4659 pages, or about 37 MB. It can get worse.
You should also take a look at the DB config if you're to keep using it, starting with `shared_buffers`, since it's currently at the default value of 128 MB. That is not going to work well for anyone trying to use this for real work.
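If the pending list does become a problem in practice, Postgres exposes per-index knobs for exactly this (index name from upthread; the 512 kB value is illustrative):

```sql
-- Disable the pending list entirely: inserts pay the full GIN cost up
-- front, but reads never hit a cleanup stall:
ALTER INDEX pages_tsv_idx SET (fastupdate = off);
-- Or keep fastupdate but cap the pending list size (value is in kB):
ALTER INDEX pages_tsv_idx SET (gin_pending_list_limit = 512);
```

Either way, autovacuum settings on the table still matter, since vacuum is what normally flushes the pending list.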
You should also optimize your column ordering. EDB has a great writeup [8] on why this matters.
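The gist of that writeup, as a toy example (table names illustrative): columns are laid out in declaration order, and each type has an alignment requirement.

```sql
-- bigint needs 8-byte alignment, so the 1-byte boolean placed before it
-- costs 7 bytes of padding in every single row:
CREATE TABLE t_padded (flag boolean, n bigint, m bigint);
-- Ordering the widest-aligned columns first packs the tuple tightly:
CREATE TABLE t_packed (n bigint, m bigint, flag boolean);
-- SELECT pg_column_size(t_padded.*) vs pg_column_size(t_packed.*)
-- on a populated row makes the difference visible.
```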
Finally, I would like to commend you for using UUIDv7. While ideally I'd love (as someone who works with DBs) to see integers or natural keys, at least these are k-sortable. Oh, and foreign keys – thank you! They're so often eschewed in favor of "we'll handle it in the app", but they can absolutely save your data from getting borked.
[0]: https://digest.browsertech.com/archive/browsertech-digest-fi...
[1]: https://github.com/jamsocket/y-sweet
[2]: http://myrocks.io
[3]: https://www.rfc-editor.org/rfc/rfc2324.txt
[4]: https://www.postgresql.org/docs/current/storage-toast.html
[5]: https://wiki.postgresql.org/wiki/Full_page_writes
[6]: https://github.com/docmost/docmost/blob/main/apps/server/src...
[7]: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4...
This is excellent info.
However, do you think an application like this would be database bound? It feels like so many more things would limit throughput (and product-market fit) before Postgres.
Given it's buried a bit in their stack, they can always optimize later as well.
I think it sounds like premature optimisation too. To ever hit these problems would mean the project had been a roaring success. And they would be much easier to fix later than complicating installs early on.
I think what the OP is trying to get across is that storing text documents that are being live-edited will cause a lot of updates, and that these updates may be relatively big in the database (the size of the whole document) compared to the actual edit (a few characters at a time), and can also grow in number quickly even with a small number of people live-editing. I don't know this for any kind of fact, but I could see how this is a fundamental architectural problem, rather than a (premature) optimization. I get that we don't need to build for infinite scale right out the gate, but I could see that the OP is possibly onto something here worth verifying at the very least.
Precisely this. There’s a difference between premature optimization and choosing the correct technology. Even then, I’m willing to compromise; for example, using a DB as a queue. Is it what they’re designed to do? Absolutely not, but it’s easy to implement, and crucially, it’s a lot easier to later shift queues – by definition, ephemeral data — than your persistence model.
At scale, I can definitively say that yes, Postgres becomes the bottleneck for this. The company I worked for was using the same libraries, the same stack.
Re: optimize it later, your persistence model is the hardest thing to change once it’s built and full of data. I strongly recommend getting that right the first time.
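For anyone wanting to reproduce the write amplification per-statement, rather than from the cumulative `pg_stat_wal` counters upthread, a psql sketch (table and column names taken from the earlier comments; the WHERE clause is illustrative):

```sql
-- Capture the current WAL position into a psql variable
SELECT pg_current_wal_lsn() AS lsn_before \gset
-- Make a one-character edit to a large document
UPDATE pages SET text_content = text_content || ' '
WHERE slug_id = 'some-page';
-- WAL generated by that single statement (plus any background noise)
SELECT pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), :'lsn_before'));
```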
I truly appreciate your in-depth articulation.
Indeed, the Yjs state update can be problematic due to its growing size and constant updates.
I will have a look at MyRocks. Reference pointers sound more plausible.
I spent time analyzing UUIDv7 from different perspectives before deciding to use it. From the git logs, you can see it came in at the last minute.
As someone who:
- would love to self-host, and would much prefer keeping it to Postgres / Redis, for which there is widespread knowledge on how to host
- would like you to keep your development velocity up
I'd encourage you not to switch databases, or at least to defer this for a while. I can't imagine you'll have issues with the amount of WAL written for quite a while, and by that time, the world could be quite different (OrioleDB might be mature! https://github.com/orioledb/orioledb)
MyRocks speaks MySQL. It’s just a different storage engine behind MySQL, replacing InnoDB. The major change I can think of would be replacing the `uuid` type with `binary(16)`, and needing to do the encoding and decoding in the app.
Re: development velocity, IMO there’s a solid base product now. I feel like it’s a great time to make a solid change to help future velocity, but I’m not OP.
I should have been more explicit.
I do not intend to change the Postgres database or introduce a new one. I’m sure this won’t be an issue for the majority use-case.
However, I am open to learning more about alternate ways to efficiently handle Yjs state updates, which may be useful for a cloud version that would run at scale. If I were to go that way, it would not affect self-hosted users and would probably be via a reference pointer and not a database switch.
This is absolutely not an issue at the moment. Nothing to worry about.
Agreed that if you want to keep the relational part, using the DB to store pointers is probably the easiest, though it might require rethinking search.
I appreciate you at least considering the various options. If you do nothing else, tuning Postgres parameters, optimizing column ordering, and removing duplicated indices will be a great step forward that is completely backwards-compatible.
I'm a bit surprised. Postgres is one of the possible databases you can use with XWiki, which is quite comparable to Confluence. Postgres seems to do fine, and relational databases have strengths that are not to be discarded too early.
Confluence itself uses relational databases including Postgres, and they seem to do well too.
And I know for a fact that both handle huge wikis. (hundreds of spaces, millions of document revisions)
I’m by no means suggesting they abandon RDBMS, just that they shift large textual / binary content out of it, and store pointers instead.
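Concretely, the shape I mean is something like this sketch (column name and URI scheme are illustrative, not Docmost's actual schema):

```sql
-- Keep metadata, permissions and the tsvector in Postgres; move the
-- Yjs state blob behind a pointer to object storage:
ALTER TABLE pages ADD COLUMN content_ref text;
-- e.g. content_ref = 's3://docmost/pages/01906698-...'
-- Edits now rewrite the object, while the row only changes when the
-- pointer or metadata does, so WAL volume stops scaling with doc size.
```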
You can store essentially anything in an RDBMS, especially Postgres. That doesn’t make it the right choice. It might make it easier, but easier isn’t the same thing as correct.
I am of course biased as a DBRE, since I’m the one who gets paged about problems caused by decisions like this. Then I get to deliver the uncomfortable news that everything needs to be overhauled, and inevitably am told that’s not going to happen, and to just figure it out.
Not OP but thanks for sharing, excellent comment.
Forgot one other thing: there is at least one duplicated index, where there’s a UNIQUE constraint on a column (I think on Pages?), and then an index on the same column. The unique constraint sets up an index already, so you don’t need both.
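A quick way to surface candidates like that one (note: this groups only on the indexed columns, so it ignores expressions and partial-index predicates; verify each hit before dropping anything):

```sql
-- Indexes covering exactly the same column list on the same table
SELECT indrelid::regclass AS table_name,
       array_agg(indexrelid::regclass) AS candidate_duplicates
FROM pg_index
GROUP BY indrelid, indkey
HAVING count(*) > 1;
```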
nice of you to help OP
A little feedback: I want to try it (the website makes it look very clear and promising!), but the installation page [1] scared me and I almost left. Then I looked at the first instructions and they were for installing Docker. I know the section is named “Prerequisites,” but I was expecting just the docker-compose and some documentation on vars, given that the only way to install it is with Docker. Even the “Installation Steps” start with mkdir, cd, curl, vi, only to say “use this docker-compose.” The prerequisites can be important for many people, and there are many ways to solve this (if you think it’s a problem).
One thing to remember: devs and tech-savvy people skip everything and look directly at the terminal commands/code. It’s the reason you should never insert the “don’ts” in your repository readme too high on the page: they will be the first things we’ll cut and paste :D
This is not a criticism; it seems you did a wonderful job. Just the feedback of one of many dummy experimenters that you might lose on that page :)
IMO if you want to drive adoption you need to ship an all-in-one container. Use runit, and stick the DB, redis and your app in the same container with one nice big data directory.
Because for most small teams with the ability to run containers, that's going to be all they ever need - and it means your "let's try it" experience is just `docker run <my container image>`.
I know this is not the recommended practice, but I am working on a legacy project where we need to migrate to Docker and the first step (before decomposing in multiple containers) would be to have an all-in-one container running all services. I am looking for hints / recommendations on the best way to do this. Can you provide any pointers ?
Not the person you asked, but I've done something like that several years ago, too.
FROM alpine
RUN apk add --no-cache $yourDependencies
ADD theSoftware /bin/theSoftware
CMD rc-service yourDependency start && /bin/theSoftware
You probably want to make that CMD into a bash script, build theSoftware in a separate `FROM $yourRuntime AS builderImage` stage, and `COPY --from=builderImage` it across. I'd strongly encourage you to explore the image with dive afterwards, to visualize what's been added by each command while you're changing the Dockerfile:
https://github.com/wagoodman/dive
I'd push back on that being a good idea though. It's okay-ish, but it'd be better if the software had no dependencies, falling back to file storage / an embedded DB. But that's obviously scope creep, so probably not worth the effort if your chosen framework doesn't give you a drop-in option for that. Basically YMMV; I wouldn't touch software that nests multiple daemons into the same image. It's a hack for easier demos or transient starts on dev machines, but that's it.
Your ORM does support SQLite though; that'd probably be better than nesting a database daemon.
That’s useful, thanks.
Will probably need to start with an Ubuntu image instead of Alpine to keep changes to a minimum.
Recommendations for an “rc-service” to use in this context?
This has worked pretty well for me, both for service control and pre-execution environment setup: https://github.com/just-containers/s6-overlay
There is also good old supervisord, but I haven't personally used it in containers yet. https://docs.docker.com/config/containers/multi-service_cont...
If you use Ubuntu then just install systemd from the repos and use that imo. Less learning for you to do if you already know how services are configured in Ubuntu.
You’ll need to point the entrypoint to /usr/bin/init.
You already know this, but I would be remiss if I didn't first point out that this is bad practice for a number of reasons.
That said, I have had to do this before, and if I ever have to do it again, I will be going right for a base image that has systemd. Red Hat provides UBI images with systemd in them for reasons similar to this. Assuming that a pod or deployment using something like podman is not an option, this is what I would use personally.
Doing this as an optional extra alongside the regular app-only container is good, but it definitely shouldn't be the only way this is distributed. It's extremely inflexible.
I would remove the instructions to install docker: people can see them in the docker documentation, it doesn’t make sense to include them somewhere else.
Also I would use a .env file to manage the env variables, without requiring the user to modify the docker compose file. It’s very likely that people will version the yaml file, so it’s not a good idea to keep secrets in plaintext there.
https://docs.docker.com/compose/environment-variables/set-en...
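A minimal sketch of that split (service and image names are illustrative, not necessarily Docmost's actual ones):

```yaml
# docker-compose.yml — generic and safe to commit
services:
  app:
    image: docmost/docmost:latest   # assumed image name
    env_file: .env                  # secrets live here; add .env to .gitignore
    ports:
      - "3000:3000"
```

The `.env` file next to it then holds only `KEY=value` lines (secrets, DB URLs), so pulling an updated compose file never clobbers local configuration.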
I may be unusual, but I much prefer the environment variables in a docker compose file. A .env feels like an anti-pattern to me honestly.
This sounds right until you have to version your docker-compose file.
Storing passwords or secrets in git should be avoided; the .env file structure allows you to leave untouched the yaml file. Anybody changing it? Git pull, and you’re ready to go, since you didn’t change the yaml file and you don’t have to substitute secrets again.
I don't disagree, but I think you're conflating secrets with environment variables. Yes most secrets are (or should be at least) passed in through env vars, but there's also a ton (in some apps 80% to 90%) of configuration that aren't secrets. I also dislike when people treat every config value as a secret. Secrets require additional overhead and care, and burdening yourself (or another dev or operator) with that in order to tweak a completely non-secretive value is unnecessary and IMHO often counterproductive.
For secrets, a .env file is fine for local dev and docker-compose IMHO. The "hidden file" nature of a .env is a good fit for secrets. (For prod I prefer K8s Secrets or Vault or similar)
I would not remove them, but place them elsewhere, linked from the point you should run them at the install process.
It's very useful to have a complete 'getting started' page that get you from zero to working, without assuming that the reader understands what every intermediate step means. But as you said, the parts that are dependencies for other products can be encapsulated so that savvy users can skip them easily.
It’s better to just place links to the relevant parts of the Docker official docs if you want to help people using Docker for the first time IMO.
Focus the energy of your own docs on your own product itself, and let the docs of your dependencies cover most of the general steps regarding those.
We are still using Confluence on-prem (behind VPN). To switch, we need the following:
- An export function (PDFs).
- An integrated diagram editor like Gliffy.
- History / diffs.
Outline is the closest to this so far, but we are in no rush, so we'll watch the development of this as well. Thanks for sharing!
Hi!
I'm working at XWiki SAS [1] on tools to migrate from Confluence to XWiki [2], an open source and powerful wiki software that was born at about the same time as Confluence.
We have all of this. We offer support and consulting, including for handling your migration. Our migration tools try to keep as much of the content and its features as possible, and we work on compatibility macros for this.
Feel free to reach us, or me.
[2] http://xwiki.org
Comprehensive site!
Noticed this example from your store has some French flavor in it:
https://store.xwiki.com/xwiki/bin/view/Extension/Office365In...
Very possible, most of us are French and Romanian :-)
Please, get a designer and redesign that logo in a more trendy way. That logo may have hurt your chances since 2008.
1. PDF export will come.
2. Diagrams will come too. MermaidJs is next in line. Other diagram providers like Draw.io and Excalidraw will come once I figure out an efficient way to handle storing and retrieving their raw data.
3. There is support for page history. No diff comparison yet though.
Mermaid is cool, but you need to land a Draw.io or Gliffy or Lucid.
It would be better to provide a plugin system so the community can contribute to the project.
I think this misses the biggest need (which one might not consider if coming from Confluence):
- consolidation of wiki together with your code's READMEs and generated docs (e.g. sphinx, mkdocs, swagger, etc., anything that outputs documentation from your codebase)
Note on the "integrated diagram editor", this brings up another feature (though less critical than above):
- by standardizing on a docs-as-code abstraction like mermaid or kroki, you can then leverage (a) diffable diagrams as code, and (b) quite a few relevant OSS editors.
See VSCode extensions for a few different implementations, but that said, if you pick mermaid, then the same diagrams work in the wiki tool as on GitHub, as well as local-first open content format tools like Foam, Dendron, or Obsidian.md, which is nice.
Diagrams-as-code is not one-size-fits-all. I'd use mermaid for some technical things, but it falls down for communication/presentation purposes. You need the ability for arbitrary placement, annotations, flow. Mermaid is all fun until you want to connect two previously distant boxes and everything explodes. For long-term documentation purposes it may not even matter, but if I'm about to show it to anyone, I'm going to Excalidraw.
We use Bookstack[0] for this and I can recommend it. Free and open source.
- PDF export
- OAuth2
- Revisions / history
- Permissions
- WYSIWYG / Markdown / diagrams, etc.
I haven't tried it in Confluence, but https://www.tldraw.com/ can be embedded in Notion at least, and works very well as a diagram editor.
I’m always curious what something like this does that git, markdown, and a good text editor can’t. Going with existing open tooling has the added benefit of being completely portable.
I will admit that it would be very cool to see a client that abstracted git and markdown away for non-technical users.
I feel like obsidian with some more git polish could get it done.
Being able to quickly make new pages and see the hierarchy on the side panel is the feature I really need that doesn't seem to exist outside of Notion-likes.
Obsidian is exceptional. You can make a new page just by creating a link to it. The hierarchy is available as a file view, or as a graph view based on links. That’s just the out-of-the-box tooling. Tons more extensions for basically anything you can imagine.
Try SiYuan Note.
Makes it accessible to non-devs. Markdown may seem simple to you, but to many people it isn't. Git and its concepts are beyond many people.
The point is that you can create a rich text editor that stores the data as markdown.
Note how I said that git and markdown could be abstracted away, as in hidden from the user who can’t be bothered to learn. Use them under the hood, so at the end of the day your entire wiki is just a repo.
> The point is that you can create a rich text editor that stores the data as markdown.
Both Nextcloud and XWiki do this.
Now, why not git+markdown? I'm not sure it exists so we can't really know if it can work well or not.
I have my doubts:
About Markdown: I believe it is fine for very basic content, but you will probably want something more powerful to cover more advanced needs. HTML will be too low level for this, so you will probably need something to extend Markdown with custom macros, at which point you may as well adopt something that already exists.
For git: wikis tend to have versioning per document, not of the repository as a whole. You will want easy and efficient document-history manipulation (access to old revisions, comparison between revisions, rollback). And you may want the wiki to remain efficient with a large number of documents and revisions, even when multiple people are writing to the wiki at the same time, and the git repository might be a bottleneck.
For a single user with simple note taking needs, I believe git+markdown can have good characteristics. I'm not sold on the git+markdown thing for a multi user wiki. It would need to be proven, but should someone do this, they should not solve "How do I write wiki software based on git+markdown", the problem should be "I need to have a wiki that's efficient in such and such cases, and git+markdown is a good basis because [...]".
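For what it's worth, the per-document operations mentioned above (old revisions, comparison, rollback) do map onto plain git; whether that stays efficient at multi-user wiki scale is the open question. A throwaway sketch, with illustrative paths and messages:

```shell
# Per-document history/diff/rollback with plain git, in a temp repo
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo "v1" > page.md
git add page.md
git commit -qm "first draft"
echo "v2" > page.md
git commit -qam "revise"
git log --oneline --follow -- page.md   # history of just this page
git diff HEAD~1 -- page.md              # compare revisions of this page
git checkout HEAD~1 -- page.md          # roll this one page back
cat page.md                             # prints: v1
```

Note these are all per-file invocations of repo-wide commands, which is exactly the mismatch the comment above points at.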
Personal gripe, but there are a gazillion note taking tools, but very few of them do real-time collab.
Me and my partner currently use Apple Notes for this, simple stuff like grocery lists, todo lists, etc. But Apple Notes perf is abysmal with real-time collab. The app constantly hangs and fans are spinning non-stop. iOS is not much better.
This is why I use txt/markdown based notes.
Async collaboration happens with git, synchronous can happen with any text editor that supports collaborative editing.
> that git, markdown, and a good text editor can’t.
Probably runs on phones better.
VS Code can open, edit and commit to any repo from the browser.
Congrats on this - It looks really good. We’ve been evaluating documentation tooling for our company. We’re in a weird regulatory environment where the documentation is created by someone else, but reviewed and approved by another person.
I bring this up because a feature that could set you apart from others is the concept of a “merge request” for documentation. Where someone can make a document, another can modify it and submit changes for review.
GitBook has this but it lacks in some other key ways for us.
Curious what other features you need where merge requests for documentation is the primary requirement but Git (or some other VCS) isn't sufficient?
How would you manage a Confluence document via git?
A repository of markdown files with custom viewing software that supports the syntax and renders it for readability, or acts as a WYSIWYG editor, would work well. I do this personally with Obsidian.md, where my vault directory is a git repository. A hosted web interface should be able to do something similar.
I’ve also always wanted this, but what I’ve realized after noodling on it a while is I’d really just prefer a way to use git, and push markdown documents to the Notes System.
I dont want a different system handling edits reviews and merges.
I just want CD to send my docs from git to a system that can properly host / give me the Doc-related features I need.
> I just want CD to send my docs from git to a system that can properly host / give me the Doc-related features I need.
Material for Mkdocs does exactly this.
If this could co-exist in the backend with an easy to use browser interface for the many who write text more than markdown, would be amazing.
Being markdown centric would be great. Makes this tool a great destination for so much markdown content already existing.
This would be a great feature. We have a similar problem whereby there are official versions of documents which have been through a review process and the only way to work on the next version on Confluence is to have a separate working copy of the page which pollutes the search and gets messy very quickly.
Confluence 100% supports the situation you’ve described. The way I’m reading this is that you need to have a publishing workflow, or a document approval workflow, which Confluence can do. At one of my jobs, we wrote it with CQL.
This would indeed be nice to have.
These would be great:
- Managing pages in git/other vcs as plain text, using any editor I choose. I can commit pages using git or other vcs, don't have to use the browser to add pages.
- Writing pages in some markup language, maybe not markdown, as it is not expressive enough in some areas. Maybe markdown is possible for simple pages and the wiki knows it is markdown from the file extension, but the wiki also allows more powerful formats like reStructuredText, which can be extended by the user.
- Server-side rendering of pages that can easily be cached (since pages are files, one could easily check the shasum of the file to determine cache validity), which makes display of pages almost instant, as opposed to laggy, shitty Confluence.
Why would you need a tool like this if you’re writing markdown docs outside the browser and version controlling them with Git? Doesn’t that defeat the entire purpose?
Or is it just you want a developer-native workflow to upload docs intended for the rest of the non-developer team?
In general, I would say that's a really bad idea. If you’re dumping this self-hosted (and probably bug filled MVP, as all are) on your team, yet never having to deal with the UI layer that everyone else does…it’s a recipe for revolt and tool churn.
I’ve seen this mistake a million times from technical founders. Same thing will happen with your marketing website CMS, after you realize static site + markdown + git doesn’t scale to non-dev humans and the headless CMS you picked (but never interact with) is actually trash in daily use.
> Or is it just you want a developer-native workflow to upload docs intended for the rest of the non-developer team?
This. I am so annoyed by all the quirks and silly dysfunctional behavior of confluence, when all I need is a developer friendly workflow, that actually motivates to keep documents up to date and allows diffing easily and quickly and blaming and all that good stuff you get when you have git or other capable vcs.
> In general, I would say that's a really bad idea. If you’re dumping this self-hosted (and probably bug filled MVP, as all are) on your team, yet never having to deal with the UI layer that everyone else does…it’s a recipe for revolt and tool churn.
I don't see how. Users of the UI could, without explicitly knowing it, create commits themselves when saving pages. Maybe it could be difficult to square that with collaboratively working on a document in real time, though. If I had to choose between the two, I would pick git workflows any day of the week, and in my experience it is not that often that people really work collaboratively on wiki pages of documentation.
> I’ve seen this mistake a million times from technical founders. Same thing will happen with your marketing website CMS, after you realize static site + markdown + git doesn’t scale to non-dev humans and the headless CMS you picked (but never interact with) is actually trash in daily use.
That is why I am suggesting my points as additions, not as replacements. The non-devs can have their clicky-bunty UI, but please let me use efficient workflows as a developer, and don't create a sluggish experience in the browser that will never motivate any dev to maintain documentation inside of it.
Also, markdown does not do the job. It does not have some of the necessary building blocks. It is good for simple pages and perhaps a readme, but when it gets to proper technical documents, I would rather have something more capable. For example reStructuredText, where you can define custom directives and so on. I used it before to make a little wiki with document-interlinking functionality at render time, and I used it to write a thesis. It is very capable. But there are others, like the org-mode format, and asciidoc, and more. All more capable than standard markdown. (And yet, Confluence already has issues with standard markdown, lol.)
An alternative is, of course, not to force devs to use Confluence for documentation. Keep Confluence for the marketing and sales fluff, and let engineers use efficient tooling that they are already familiar with and that accompanies the code, instead of splitting documentation out into Confluence, where it will quickly become unmaintained and forgotten.
As the parent post observes, I've also yet to see a GUI that works with Markdown and static site generators. This doesn't mean it's a bad idea. It has to be good enough to dogfood.
It'd be nice to implement the non-technical workflow as producing a pull request via a branch. Each save (even autosave after N minutes) could be its own commit.
The user could see their changes from their branch in near realtime. Markdown isn't an issue, it's a benefit, although explicit support for images and diagrams is needed.
The GUI workflow could see if an automatic merge works, if not, it is fine to require manual intervention. Another option is to swap branches, copy main branch, rollback main branch to the place before any conflicts emerged, and then apply changes.
Bonus points for using CRDT for handling multiple users working together on a page during a conference call on the same branch.
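A minimal sketch of the save-as-commit idea above. The branch naming scheme is hypothetical, and the actual `git` execution (e.g. via `child_process`) is left out; the function just builds the command sequence a save would trigger:

```typescript
// Hypothetical sketch: map a GUI "save" onto a branch + commit, as plain
// command strings. A real backend would run these via child_process.
interface SaveEvent {
  user: string;    // e.g. "alice"
  page: string;    // e.g. "docs/onboarding.md"
  message: string; // e.g. "autosave 10:00"
}

function gitCommandsForSave(ev: SaveEvent): string[] {
  // one long-lived branch per user and page (assumed convention)
  const branch = `wiki/${ev.user}/${ev.page.replace(/[^a-z0-9]+/gi, "-")}`;
  return [
    `git checkout -B ${branch}`,     // reuse the user's working branch
    `git add ${ev.page}`,            // stage only the edited page
    `git commit -m "${ev.message}"`, // each save (or autosave) is one commit
    `git push -u origin ${branch}`,  // surfaces as a reviewable PR branch
  ];
}

const cmds = gitCommandsForSave({
  user: "alice",
  page: "docs/onboarding.md",
  message: "autosave",
});
console.log(cmds[0]); // "git checkout -B wiki/alice/docs-onboarding-md"
```

The merge-conflict handling discussed above would then happen at PR-merge time, where manual intervention is acceptable.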
Have you seen MyST? It's just as expressive as ReST and much easier to write IMHO.
How did they come up with such a SEO-hostile name?
Not quite the same use case, but I've been really enjoying using https://nextra.site/ to create a static documentation site for one of my projects.
It's managed to strike a good balance of getting out the way and letting me mostly just write plain markdown, whilst being able to fall back to react components if needed.
With CD to GitHub pages on merge to main I think it's a pretty good experience
Just another vote against Markdown. Markdown is OK for simple docs, but very poor for wiki where inter-page links are the Magic Sauce. Creole markup is a lot better for wikiing imho.
I wouldn't store the file format in the file extension; rather store metadata properly as metadata. Chances are that the application wants to hold a lot more metadata anyway, so you're going to need a metadata storage scheme anyway. (Yes, I am a lone crusader for eliminating metadata from filenames.)
We use Jekyll for this at work, build the site using GitHub actions, and host through GitHub pages. Works a treat. Supports mermaid diagrams, mathml, ...
Features I would like to see:
* markdown support (for writing/formatting)
* mermaid support (for diagrams)
1. You can use markdown shortcuts on the editor. It works.
2. Mermaid support is coming.
draw.io is excellent; it stores source code as comments in the PNG/SVG
Perhaps equivalently: use Markdown and store it directly as rendered HTML?
I guess the best way will be to store the PNG/SVG in the storage driver with a reference to it in the editor. The same PNG/SVG will get updated/replaced whenever the code is updated. I need to do more study on this.
Storing the raw data in the editor will bloat the Prosemirror JSON and Yjs state (real-time collaboration) which I want to avoid.
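To illustrate the reference-instead-of-raw-data idea: the editor document keeps only a small pointer node, while the rendered SVG/PNG and its source live in the storage driver. This is not Docmost's actual schema, just a sketch of the pattern:

```typescript
// Illustrative only (hypothetical node type and attribute names): the
// ProseMirror JSON carries a tiny reference, never the image bytes.
interface DiagramNode {
  type: "diagram";
  attrs: {
    attachmentId: string; // key of the rendered SVG/PNG in the storage driver
    sourceId: string;     // key of the editable source (e.g. mermaid text)
    format: "svg" | "png";
  };
}

function diagramRef(
  attachmentId: string,
  sourceId: string,
  format: "svg" | "png" = "svg"
): DiagramNode {
  return { type: "diagram", attrs: { attachmentId, sourceId, format } };
}

// The editor state stays tiny no matter how large the rendered file is,
// so the Yjs document doesn't bloat on every diagram re-render.
const node = diagramRef("att_123", "src_123");
console.log(JSON.stringify(node).length);
```

Re-rendering the diagram then only replaces the attachment behind `attachmentId`; the collaborative document itself never changes.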
Yep. I think you should store the source file (the .drawio file for draw.io) as an attachment of the edited document and invoke the dedicated editor whenever someone wants to edit it, but also save the rendered file to display the result when viewing the document or during regular editing.
Just to add a detail: I would also like Markdown support when editing a page. Technically Confluence supports Markdown (or at least it used to), but after you saved the page it converted the content to its own internal format and the Markdown was gone.
I would also like to be able to update a page through the API. Again, Confluence "technically" supports page editing through the API but it's so cumbersome that it's basically useless. The reason for this request is that we use our wiki to document certain activities (monthly security checks, AWS spend, etc.) and I have to manually update Confluence. It would be so much better if I could write a little Ruby (or Bash, etc.) to add content to a table in a page.
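The kind of automation wanted here could look like the sketch below. The endpoint and page-id are hypothetical (Docmost has no public API yet); the pure part, appending a row to a Markdown table, is what a monthly cron script would actually compute before PUTting the content back:

```typescript
// Append one row to a pipe-style Markdown table (assumes the table is the
// last thing on the page, as in a running log of monthly checks).
function appendTableRow(markdown: string, cells: string[]): string {
  const row = `| ${cells.join(" | ")} |`;
  return markdown.trimEnd() + "\n" + row + "\n";
}

const page = [
  "| Month | AWS spend |",
  "| ----- | --------- |",
  "| May   | $1,200    |",
].join("\n");

const updated = appendTableRow(page, ["June", "$1,350"]);
console.log(updated);

// A hypothetical API call might then be something like:
//   await fetch(`${APP_URL}/api/pages/${pageId}`,
//               { method: "PUT", body: JSON.stringify({ content: updated }) });
```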
1. You can use Markdown shortcuts.
2. Collaborative editing makes updating content outside the editor tricky. It will work, but not very well. I will consider supporting content updates via the API in a future release.
Hi,
As said in other comments, I work on XWiki. We support Markdown [1] to write documents, among other syntaxes (including (X)HTML), although it's way more limited than our own syntax.
For Mermaid, apparently there are initiatives to integrate it, but nothing finished [2, 3] I guess.
[1] https://extensions.xwiki.org/xwiki/bin/view/Extension/Markdo...
[2] https://github.com/jingkaimori/xwiki-mermaid
[3] https://dev.xwiki.org/xwiki/bin/view/GoogleSummerOfCode/Merm...
Who are you aiming this at?
Unfortunately I’d never advocate for something like this at my work. Self-hosting doesn’t make sense in terms of total cost of ownership. I’d rather engineers spent time solving problems in our core business than making sure our wiki is online.
Other engineers can do hosting just fine. Even a middle schooler copy pasting commands. Sounds like a problem somewhere between the keyboard and the chair.
I disagree with your parent comment but what you present is far from enough to host something reliably.
You would be surprised. Many companies have strict requirements around self-hosting. I worked for a couple of these companies, and a managed service was a no-go from the start. They often pay far more for self-hosting than they would for the managed service.
Being able to self host means you control your infra, so that's a very good property of the tools you use and should probably be a criterion for choosing what you use, even if you don't use the capability yourself. The ability to self-host increases the chances there's something to migrate your data in and out.
The ability to self-host also doesn't prevent anyone, including the original developers, from also providing hosted services.
And since the presented tool is open source, it's also possible for another company to provide hosting.
That's a very negative take, and seemingly unfounded.
As with a lot of modern open source, the monetisation comes from providing a hosted/supported cloud version, so your engineers can spend their time solving your core business problems rather than making sure the wiki is online.
That said, it's a Beta, and they've put 12 months into it already to get it where it is.
It's great to have open source competition in this area, so the current lack of a cloud option should put it in the "awesome, I'll check it out, then wait for a cloud option" category.
So you've never been screwed over by an online-only product becoming horribly shitty, or incredibly expensive? To me, the ability to counteract any insane company policies is a big reason that the ability to self host is incredibly important.
Even if it's just a stop gap solution while we find a better solution to migrate to.
This looks neat. Is there an online demo? I looked but couldn't see one (on mobile).
On docmost.com, pinch to zoom is disabled when viewing screenshots (Firefox Android).
I have fixed the zooming issue. Please try now.
Unfortunately, there is no public demo yet. If you email me (in bio), I can create a demo for you.
Can confirm zooming issue is fixed.
I got it running in Docker. I had to run "docker-compose up -d" rather than the instructions' "docker compose up -d" (no dash), which just gave a confusing error message about not knowing "-d".
It looks great!
I got confused at first between Workspaces and Spaces. It seems that Workspaces contain Spaces which contain Pages? I like that, but the names seem too similar.
I noticed the page title doesn't update on some locations. E.g. I go into a page, the title updates, but then when I leave the note the title remains. Similarly the "Login" title persisted after I'd logged in.
It took me a minute to figure out how to access it from other computers on my local network, and the problem was I left the APP_URL as localhost. Confusingly, that partly worked, so it might be worth putting in a warning about it. With APP_URL set to localhost, if on another computer on the same network I go to <ip address>:3000, it redirects to <ip address>:3000/home, so something is connecting, but then nothing loads and it's just a blank screen (there's an error message in the browser console about failing to load resources from "localhost").
How and where are pages stored? I would love to use this, but need robust backup/restore.
Edit: It would be great if going to a page URL when not logged in would redirect to that page after login.
Edit 2: "Copy link" doesn't work? It pops up a message saying "Link copied" but it's not in the clipboard.
1. You were using an older docker compose version, I guess? Newer versions recommend `docker compose` without the dash.
2. Your understanding of the hierarchy is correct. Workspace -> Space -> Page. I admit, the naming is similar and can be confusing. I just couldn't come up with a conclusive alternative while self-debating it. Do you have better suggestions?
3. I will look into the title issue. Thanks for pointing out.
4. I will keep the APP_URL issue in mind too.
5. The pages are stored in the pages table. The content is stored in three formats: (i) ProseMirror JSON (the default editor state), (ii) Yjs state (real-time collaboration), and (iii) raw text (for search indexing). The affected tables are json_content, ydoc and text_content.
6. I will look into the redirect issue and the "Copy link" bug.
I appreciate your feedback and thank you for trying it out.
1. I guess so. I'm using the one from Ubuntu 23.10.
2. I think something like Project would work better than Space.
4. Does the app need to know its server name? Can it just use relative URLs everywhere instead of absolute? That would simplify configuration.
5. Thanks, I've had a poke around in the DB. I think a Backup/Restore feature in the UI is important. Also good would be a way to request the data programmatically e.g. I'd put it in a cron job before my Borg backup job runs (that would store daily versions using diffs, so it would need to be uncompressed). An "export to HTML/Markdown/[something importable]" feature would help people feel comfortable trusting the app. I have thousands of pages of handwritten notes in an Android app called Squid, and regularly export from that to back it up (it gives me an SQLite DB file only readable by Squid, which is okay but not ideal); without that backup option I'd be pretty worried about losing my work in there, and I get the same feeling about Docmost.
By the way, are you allowed to distribute the Tiptap Pro extensions? E.g. the Latex support and Comments.
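The cron-before-Borg backup mentioned above could be sketched like this. The database name "docmost" and output directory are assumptions; the plain (uncompressed) `pg_dump` format is chosen deliberately so Borg's diff-based deduplication works, as requested:

```typescript
// Build a dated pg_dump command for a daily cron job. Only the command
// string is constructed here; running it (child_process) is left out.
function pgDumpCommand(db: string, outDir: string, date: Date): string {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  // -F p = plain SQL output (uncompressed), friendly to diff-based backup
  return `pg_dump -F p -d ${db} -f ${outDir}/${db}-${stamp}.sql`;
}

const cmd = pgDumpCommand("docmost", "/var/backups", new Date("2024-06-28"));
console.log(cmd);
// → pg_dump -F p -d docmost -f /var/backups/docmost-2024-06-28.sql
```

An HTML/Markdown export feature would still be the more trustworthy long-term escape hatch, since a SQL dump is only readable by the app itself.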
1. "Project" sounds good too.
2. With a small change, I can make the frontend use the window.location URL if none is set. On the backend, the catch is emails: emails with links use APP_URL to build them.
3. I do not really think "backup" belongs in the UI, but it's a possibility. I plan to work on HTML and Markdown exports; we already have what it takes from the editor.
4. Latex and Comments do not use the Tiptap Pro extensions, so it's fine. Also, the comment extension is entirely different from Tiptap's own, which depends on Tiptap Cloud to work.
Congrats on your launch.
Knowledge management is a special area. Look forward to seeing this grow.
As a heavy user of both Confluence and Notion, and in the interest in seeing alternatives like this grow:
Is there any plan to make this tool local/offline-first and mobile-first? There's a big need for this, and it's something that's best baked into the bread early. It's a big gap in Notion and ultimately why I had to ditch it.
Confluence has some ways to at least cache enough of it, or use a plugin. Confluence is also massive, lots of features (including workflows and approvals).. it might be worth clarifying which ones you're covering and planning to cover.
What are you using in place of Notion today, that's more mobile friendly?
Thanks!
Notion is excellent at collaboration, but worked only so-so on mobile the last time I used it. Mostly, if you didn't open all the pages you wanted before taking off on a flight (imagine no wifi), things fall apart.
On to the options -- currently I've been giving Anytype a pretty hard go on the collaboration side, as well as playing with Obsidian to see if it can feed into it.
Downvotes are fine, comments are even better.
Try SiYuan Note. It's offline-first and has a mobile app.
Tbh, I have no plans of making it local/offline-first, as that will be a different ball game entirely.
Mobile-first? If you are referring to mobile apps, it will probably come in the future.
For now, it’s just me building. I am focusing more on building the core features of a wiki.
Thank you for your positive words. I appreciate it.
Very cool, looks like React frontend and NestJS backend, so Typescript / Javascript for both. Curious why you chose NestJS over other options?
I mostly used Java, Python, and PHP in my previous projects. On this particular project, I had to learn Node.js. NestJS was a good choice due to my Java Spring experience.
I chose Node mainly because of Yjs, which powers the real-time collaborative editing.
Hmm. Last time I tried Yjs it wasn't suitable for production, OK for a quick demo only. But that was a few years ago, so presumably those issues have been resolved.
Yjs is a lot more stable now and production ready.
Docmost uses Hocuspocus (by Tiptap) as the websocket backend for Yjs (https://tiptap.dev/docs/hocuspocus/introduction).
Nice!
What was the thought process for AGPL instead of something else ?
AGPL is excellent for promoting the spirit of open source in this new world of web services (that are largely proprietary and SaaS).
AGPL is useful also for preventing commercial improvements that don't make it back into the general product.
It's a Free Software license, not an Open Source license.
Definitely open source too: https://opensource.org/license/agpl-v3
All free software licenses are open source licenses.
Any support for diagrams?
I use PlantUML extensively and tools like Znai and others have native support for it.
Diagrams are hugely valuable.
Even being able to embed something like diagrams.net right into the page via plugin for the time being (and save the resulting file in the system) would be great.
No diagrams yet, but I have plans of integrating Mermaidjs. I will have a look at PlantUML.
I'm a huge fan of wikis, and of particular ways of using them within a company.
(But I'm not as big a fan of certain wiki software products that seem guided by enterprise sales to customers who don't seem to understand wikis. :) )
One thing an enterprise product did do passably well, for a big win, was integration of a drawing tool. Not everyone in a company needs that integration, but some users will, and its presence can mean that a super-helpful visual is captured when it otherwise wouldn't.
Could you summarize a few nuggets of wisdom about why you like wikis? Specifically, what particular ways of use are the most effective within a company?
https://www.tldraw.com/ can be live embedded (in Notion at least, I haven't tried Confluence or others) giving you a very nice shared drawing ability within a wiki that doesn't otherwise support that functionality.
Hi!
I work on XWiki [1]. Nice to see fellows building open source alternatives, we can't have enough of this. I hope you succeed.
It takes a lot and lot of work to build something comparable to Confluence. XWiki has been there since the beginning. How do you position yourself compared to XWiki? What made you decide not to join the forces?
I would love to migrate off Confluence to something open. I tried XWiki but the user experience seemed comparatively rough. Do you have a set of extensions you’d recommend to make it more palatable to those who want a Confluence-like experience?
Both Confluence and XWiki are huge and your experience is subjective, so it really depends on which parts you'd want to see improved. General improvement should not be made through extensions, it should really be done directly in the product. Quite some customers have pushed for improvements over the last months, for instance we now have page ordering in the navigation panel as a core feature thanks to their valuable feedback.
So don't hesitate to share your feedback on the project [1] (generic feedback is interesting to read but specific stuff is more easily addressed), read previous feedback [2], ask questions on the forum [3], chat with us on our community chat [4] or even report bugs [5]. See also the roadmap to see if something important to you is already scheduled [6].
If you have money to spend, also know that XWiki SAS [7] offers consulting and support and we address customer concerns. We also sell "Pro" extensions (with free trial) that cover some features that are expected by Confluence users, among other things [8] (while we sell these features, it's not open core. It's still (truly) open source: you can get the code under LGPL and all).
[1] https://www.xwiki.org/xwiki/bin/view/Survey/ProductFeedback
[2] https://www.xwiki.org/xwiki/bin/view/Main/Feedback/DownloadF...
[4] https://dev.xwiki.org/xwiki/bin/view/Community/Chat
Is there a full demo available anywhere?
No, but I got it running in Docker pretty easily following their instructions: https://news.ycombinator.com/item?id=40834104
No public demo yet, but I can create a temporary one for you if you email (in bio) me.
Single sign-on support is essential for me. If this was compatible with Keycloak/OIDC it would solve a lot of problems. I will be keeping a close eye on this project.
I have plans for OIDC integration.
Please make it trivial to use a designated "remote user" header so an upstream reverse proxy can handle auth.
Do you have plans for a SaaS version too?
Note: Outline is another Open-Source Documentation/Wiki and Collaboration tooling option I like.
I have future plans for a cloud version. For now, my focus is on building solid software.
Outline is great software; I have tried it.
That sounds great. Outline only has SSO in the per-user paid version. If you decide on providing additional functionality in a non-free "enterprise" version, please don't use the per-user/month model for people running it on prem.
Have you checked SiYuan and Affine?
Yes, I have. I have tried the Affine demo, but not SiYuan. I think they are both great and promising.
Hi! Congratulations on the launch; the product looks fantastic. Are you experiencing any slowdowns with a paid service like Tiptap? Last I checked, some of the most basic and popular features were behind a paywall. Have you considered using an alternative like Lexical editor?
While some Tiptap Pro extensions would have made things a little easier, I appreciate that the core and majority of the extensions are open source. It has the building blocks to create custom extensions without limits.
Docmost does not depend on any Pro Tiptap extensions.
The team at Tiptap are doing something really amazing. I believe it is fair that they find avenues to make revenue from it.
I like Lexical, but I found Tiptap first and loved it.
The problem with modern wikis is stale data. No one wants to maintain someone else’s data. I won’t be investing into a new wiki until someone solves this problem.
Big issue indeed, and I think it's more generally documentation.
I believe someone needs to be in charge of the documentation and have dedicated time for it. Other things this person would need to ensure:
- keep a good structure
- avoid duplicated stuff
- avoid orphaned content
This can also be partially addressed by adding a mandatory step to update the documentation when performing tasks that require documentation.
I believe documentation with some stale stuff is better than no documentation at all, though. Imperfection needs to be tolerated.
Any tools available to import from Obsidian? It has a big user base so if you provide import from it, it will provide a quick way to extensively check feature coverage - nothing convinces like using with real world data.
Since Obsidian is Markdown, it should be pretty straightforward to write an import program for it.
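The one Obsidian-specific wrinkle in such an importer is its `[[WikiLink]]` syntax (plain Markdown otherwise). A minimal sketch of that conversion, with a simple assumed slug scheme for link targets; reading the vault is then just walking the `*.md` files:

```typescript
// Rewrite Obsidian [[Target]] and [[Target|alias]] links into standard
// Markdown links. The lowercase-hyphen slug is an assumed convention.
function convertWikiLinks(md: string): string {
  return md.replace(
    /\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/g,
    (_m: string, target: string, alias?: string) => {
      const slug = target.trim().toLowerCase().replace(/\s+/g, "-");
      return `[${alias ?? target}](${slug})`;
    }
  );
}

const out = convertWikiLinks("See [[Getting Started|the intro]] and [[FAQ]].");
console.log(out); // See [the intro](getting-started) and [FAQ](faq).
```

Embedded images (`![[image.png]]`) and block references would need similar, slightly different rules.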
When I start working on the import feature, this will be possible.
This is really cool! My big problem with most document software is:
1. Everything is locked in. I want to be able to easily export or back up my notes.
2. The pricing is so nickel and dimey. Have more than 100 nodes in the document tree? Upgrade your tier. Adding new people to projects is a buying decision every time and it’s fatiguing.
Can you tell us more about how it uses pg and redis?
Postgres is the primary database for storing all workspace and user-related data.
Redis is used for queues, collaborative editor state sync across servers, and WebSocket sync across servers. The last two functions are important when running the software on multiple nodes or replicas.
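The "sync across servers" pattern described here is essentially pub/sub fan-out: each node publishes editor updates to a per-document channel, and every other node applies what it receives. The sketch below is illustrative only; an in-memory bus stands in for Redis, whose real client would expose equivalent `subscribe`/`publish` calls:

```typescript
// Minimal stand-in for Redis pub/sub, to show the multi-replica sync shape.
type Handler = (msg: string) => void;

class Bus {
  private subs = new Map<string, Handler[]>();
  subscribe(channel: string, fn: Handler): void {
    this.subs.set(channel, [...(this.subs.get(channel) ?? []), fn]);
  }
  publish(channel: string, msg: string): void {
    for (const fn of this.subs.get(channel) ?? []) fn(msg);
  }
}

// Two "server replicas" sharing one document channel.
const bus = new Bus();
const seenByNodeB: string[] = [];
bus.subscribe("doc:42", (m) => seenByNodeB.push(m)); // node B listens
bus.publish("doc:42", "yjs-update-0xA1");            // node A broadcasts an edit
console.log(seenByNodeB); // [ 'yjs-update-0xA1' ]
```

With Redis in the middle, a WebSocket client connected to any replica sees edits made through every other replica.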
Nice!
Did you consider Nix/Guix instead of Docker as the suggested way to deploy? Docker is a harmful and very common tool, which leads to a gazillion wasted resources and a security nightmare, due to pulling anything from unknown sources and putting it in production.
As an aside, Markdown is similarly popular, but it's really a crappy set of markup that fails at the most useful productivity aspect: outlining. Org-mode is less known, being tied to Emacs, but it's far, far more featureful and immediate to use.
That aside, I wish the best of luck to all the devs. There are a gazillion webapps all suffering from the modern-stack issue (the inability to integrate anything, hence the need to reinvent the wheel every time and incorporate one feature at a time to the point of becoming monsters), but your sauce so far seems to be the most polished I've seen.
The software itself is very easy to install, even without Docker. You are set with just three commands: pnpm install, pnpm build and pnpm start.
Docker is not a hard dependency but it was the easiest way to document it and hope it works for everyone, given the Postgres and Redis requirements.
With time, I will create documentation for other platforms.
Thank you.
Will you be implementing synced blocks?
Synced blocks are interesting. It might come in the future.
Taiga is awesome, and gets you 95% of the way there. Great piece of software.
That looks like it has quite a different purpose.
This looks very interesting!
I like the focus on UI (many open source projects missing this aspect)
Thank you. It was my first time doing react things. The great Mantine UI (https://mantine.dev/) react library helped a lot.
Looks very promising, congratulations on your progress so far.
Do you have plans to offer a hosted/managed/SaaS service? As others have pointed out, not everyone wants to self-host, and offering a managed service doesn't diminish the advantages of it being Free and Open Source (assuming good data export/import features).
For comparison, the SourceHut project offers a managed service, which is well-run, well-liked, and brings them good revenue.
I consider NextCloud to be an example of what not to do. There are plenty of NextCloud providers, but (from what I can tell) none of them are closely tied to the development of NextCloud itself. Bug reports to service providers can be expected to be met with "that's a NextCloud bug, not our problem."
I have plans for a cloud-hosted version. It will come in the future.
Really wish a designer would help you out; this looks really good except for the rather dated look and feel.
Feels very Confluence-like to me. Which components look better in Confluence in your eyes?
Congrats on your launch!
This is something I’m going to keep a close eye on. My company is using confluence and I hate how slow confluence is.
On your marketing site, the menu doesn't close when clicking on an item (mobile Firefox on iOS).
Fixed. Thanks for pointing out.
Have you considered instead writing a plugin for Mediawiki that accomplishes 90% of what you want for 1% of the effort?
My current take: no matter how you slice it, the above comment strikes me as a leading question that is way off target. Tell me if I'm wrong.
1. MediaWiki plugins? Never heard of them. Do you mean MediaWiki extensions? https://www.mediawiki.org/wiki/Category:Extensions
2. Are you suggesting that MediaWiki is relatively close to Confluence or Notion? In some way? What way?
I predict that the development effort required to adapt MediaWiki to a Notion-type product would be at least hundreds of thousands of dollars. To put it bluntly: foolish and a waste of money. Am I missing something? Tell me.
To better explain my point of view: Notion is a single-page application, built (probably) with JavaScript or TypeScript. MediaWiki is rooted in an old-school, server-side web application style. It may have evolved some, but it has relatively little JS in comparison.
P.S. I realize this comment sounds grumpy. I suppose I find it rather silly and maybe even presumptuous to read a comment like the one above. It is a leading question suggesting an incredible (as in unbelievable) claim: that Docmost is somehow missing a path forward requiring only 1% of the effort. OK, that sounds appealing, if true. But the suggestion is just bonkers. MediaWiki as a jumping-off point? WAT? I have no relationship to Docmost, but I consider myself a bit protective of open-source developers, especially for a useful product with good potential. I recommend they tune out these kinds of comments for their sanity.
This is really cool! Would it be possible in the future to add OAuth authentication via AzureAD (EntraID)?
Good luck, looks great!
Looks great; I am thinking of deploying it. A killer feature would be a staging and approval process. Most systems lack this functionality.
This looks really good. I hope you keep building.
I will definitely check it out.
Very cool project! Is there a way to sponsor it?
I also noticed that the documentation is using Docusaurus - it would be awesome to use Docmost for it, so that you have both a demo environment (at least R/O) and do dogfooding
How does it compare to Outline (https://www.getoutline.com/)?
Confluence is the slowest piece of enterprise software I have ever used besides maybe an XL Jira. “Waiting for Confluence” became a thing at paypal like “my monorepo is downloading all of the dependencies for everything.”
Very long coffee breaks, maybe down the street, waiting for documentation from across town to load. We didn't attempt to update that documentation, so anything better is better. I'm barely exaggerating.
The UI rocks
And that's just for people with typical hands and eyes.
Imagine what it's like for people with disabilities.
Piggybacking to ask: I'd love to pay like $100 to a person with disabilities to use my website for a couple of hours and shit on the experience. Has anyone ever done this? Is there a service you've used and been happy with?
My experience with automated tools has been less than stellar. Oh, an image has an alt tag? Congrats, it's accessible, even if that alt tag literally just says "alt tag". I also don't think "simply turn on accessibility features yourself" is a proper solution. There's a massive difference between me using such tools to test one website and someone actually relying on them.
This feels like it should be a more concrete service, with a focus on actually interacting with the testers. Maybe this exists already, but my limited experiences have been using "accessibility certification" services where you submit a site, and days or weeks later get back a huge list of stuff to "fix" without any real guidance or ability to even check "is this what you meant?" before resubmitting and waiting for days/weeks again. It's the opposite of 'fast feedback' and 'talking to end users' - slow feedback mediated through a service preventing any interaction with the actual testers.
Generally what you want is to have people like this embedded within the team/company itself, for this reason. When I do an audit, or when I stream on Twitch about a product/website/whatever, I always make sure I remain available for questions/follow-ups. But accessibility isn't a checkbox to check; it's a standard you maintain, and that means experts need to be there to help you maintain it.
Not everyone can afford to have extra team members doing that. Some of us are working with skeletal teams to begin with, but would still like ways of validating/reviewing a11y concerns with real people, not just scripts and automated tools. And yes, on an ongoing basis, not just a one-time thing.
If you use HTML5 and alt attributes, have simple tables, and use a friendly color palette, you're 90% better than most on accessibility.
I had a couple of such sessions as part of accessibility certification when rebuilding a largish government website years back. It's definitely worth it for understanding the why of the accessibility recommendations, rather than just following a checklist. And if budget allows to do website improvements that go beyond.
The experience was also humbling in an awesome way. I think I still haven't seen anyone navigate the web as fast as this one blind person was capable of, due to the mastery of his tools.
Indeed. I am reminded of the major catalyst that got me to become serious about learning vim: I saw a senior developer flying around in his text editor, editing things like a ninja, and I had to learn that skill. I've been extremely glad I invested the time.
I've had the same feeling three times now while watching visually impaired people use accessibility tools. They can absolutely fly through things, much faster than a typical user can. Aside from the general awesomeness of taking something most people would consider a disability and turning it into a superpower, it makes me think I am really missing something by not having that skill.
Has anybody done this before and can offer some advice for how to start, and where to go with it? I am a Linux-only user, which I assume is going to matter for the tooling.
There definitely are accessibility testers who do this kind of thing professionally. The latest episode of the Linux After Dark podcast [1] had Florian Beijers [2] as a guest who livestreams accessibility testing of open source projects and does professional accessibility consulting.
[1] https://linuxafterdark.net/linux-after-dark-episode-72/ [2] https://florianbeijers.xyz/
Lol that was me :)
That is pretty much my day job :P Although I'm also a developer myself, so I can argue from both the blind person's view of "This button doesn't tell my screen reader what it's meant to accomplish", as well as the developer's point of view of "You simply forgot to add an accessible label, don't you dare tell me you don't have capacity to fix something that'd take 10 seconds to fix". ;)
There are people in Fiverr who specialise in it. (But a lot of them will just run basic tools, so watch out) A shout-out on mastodon may give you some good leads too.
Meta comment, but this made me smile. I love the vivid expression, the raw humility, and the philosophically deep but humorous and light hearted commentary on how it feels to have your code evaluated :-)
It's a great idea to be proactive in this area, but always make it easy for your users to report problems. People with different conditions, whether classified as disabilities or not, have varying capabilities. You might be surprised by the issues people encounter with your product, even if you follow all guidelines meaningfully (so not just with placeholders like alt="image" - yes, even then).
Source: I've been part of an accessibility taskforce at my company for a long time.
Whilst important, it certainly wouldn't be the top of my list of things to tackle when getting a project off the ground
And while that is a perfectly valid stance to have on this, you'll more than likely shoot yourself in the foot if you don't.
- Wayland didn't think it was important to include this from the start, and now the majority of Linux distributions have serious issues with screen readers and other assistive tech. Fedora has shipped with this broken for almost a decade. Calamares didn't think it was important to fix and has been broken for about as long.
- Particularly now, with devs grabbing a component library on top of React, a generous helping of CSS frameworks, and third-party NPM-based extras that are all tangled together: if you don't vet this stuff beforehand, you'll have to retrofit half your UI to fix things after the fact. That, right there, is why accessibility seems so hard to implement.
Fixing a native HTML select for accessibility is easy; it already is accessible. Fixing some componentized, overengineered monstrosity whose authors figured they'd get to it later, and which as a result doesn't speak to screen readers, doesn't work on phones, doesn't work when zoomed in, doesn't respond to speech recognition software, goes absolutely nuts when the user scrolls, and doesn't let you use it properly with a keyboard... yeah, that is harder :)
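To make the contrast concrete, here's a minimal sketch (the label text and options are made up for illustration):

```html
<!-- Accessible out of the box: keyboard, screen readers, zoom, mobile -->
<label for="space">Space</label>
<select id="space">
  <option>Engineering</option>
  <option>Marketing</option>
</select>

<!-- The retrofit case: a div-based widget needs ARIA roles and states,
     focus management, and keyboard handling all bolted on by hand -->
<div id="space-label">Space</div>
<div role="combobox" aria-expanded="false" aria-haspopup="listbox"
     aria-labelledby="space-label" tabindex="0"></div>
```

The native element gives you the name, role, state, and keyboard behaviour for free; the custom widget has to reimplement every piece of that correctly, or screen readers and keyboard users get nothing.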
That’s the most common approach, but a cynical rewording of that statement is: I’m not making it accessible by default.
I think it makes sense to learn about it early and do a11y as much by design as possible. I think a universal design approach will help everyone. Many power users will appreciate good keyboard-only navigation as a side effect. With a bit of a11y knowledge you might be able to catch a lot of low-hanging fruit. I wouldn't do it with public procurement and overdone rigour in mind. Just spinning up a screen reader on your app can actually be a fun experience.
Also auth. I would rather just lose all my shit than try to go through the login rigamarole on either of these sites again.
Are we talking about the login for Atlassian's Confluence? What's so bad about it? It's usually tied to your SSO provider, so you just have to sign in to your work account. In the server days, it was connected to your AD/LDAP password.
I don’t like Atlassian products very much for a lot of reasons (each iteration of the UI gets worse), but the login process has never been an issue for me, so I’m surprised to see your comment.
Not OP, but have to use the cloud version of Jira and Confluence. My biggest complaint is that they put the "Yes! Send me news and offers from Atlassian about products, events, and more." checkbox in the place where I would expect the "Remember me" checkbox.
Absolutely psychopathic behaviour.
I can’t explain how frustrating it is on Amazon where that checkbox instead reveals your password in plain text. Super easy way to dox yourself.
How important could this be to companies, really, if they're using Confluence and Notion, which already do this poorly?
I understand that in the US, the ADA can make for a nasty sting which can make them care in a hurry.
Depends on the company. In many areas companies are required not to discriminate against disabled employees. Just like your physical space must be accessible, your digital tools must be too. Otherwise you might, say, pass on a more experienced and knowledgeable blind candidate in favor of a less qualified sighted person, because your internal tools can't be used independently by a blind person.
Lots of companies are technically exposed to risks related to this kind of thing and could legitimately be sued. But they don't always recognize this aspect of their exposure.
It seems natural to me that this sort of thing will become more important over time.
More interesting to me, though, is what prompted your question. The parent requested making things accessible because that's something they, as an individual, need and benefit from. It wasn't about how important it might be to "companies". Individual people need accessibility.
I thought about it.
For example, the sidebar page tree supports keyboard navigation.
The UI library I am using, Mantine, follows accessibility best practices and has full keyboard support.
There is still a lot to do in this regard. As the project progresses, more support will come.
In the past, I built a Twitter bot (@threadvoice) to help people listen to Twitter threads in audio format ( https://twitter.com/Philipofficial9/status/11899711858004869... ). I had accessibility as one of my motivations while building it.