This is nice but Tesseract does not perform well when it comes to tables, at least when I tried it on multiple documents.
It would miss some cells from a table, or fail to recognise numbers that contain commas.
Safari already does that. Quite a useful feature.
Specifically, only Apple Silicon allows automatic OCR. Works on iOS too.
It works on Intel Safari as well.
It doesn’t work on my Intel Macs unless it’s really slow.
Maybe it is! I mostly use it to select text in social media images for which it feels reasonably responsive.
It works for me on an Intel MBP (2020) but it's probably a lot slower.
On this page: https://en.wikipedia.org/wiki/Typeface it takes about 10 seconds for the text in the first image to become selectable after page load.
With local images / PDFs in Preview it's really quick though
Neat.
You can also use macOS's OCR capability to create a shortcut that allows you to copy and paste the text out of any region on the screen -- for example, a stack trace someone is showing you in a screen share.
I was really impressed until I realized that the app is basically a wrapper around tesseract.js, which is the actually cool part. Tesseract has a WASM port that can operate inside a web worker.
Not saying that the article was being misleading about this, just saying that the LLM part is basically doing some standard interfacing and HTML/CSS/JS around that core engine, which wasn’t immediately obvious to me when scanning the screenshots.
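To make the "core engine" point concrete, here is a minimal sketch of using tesseract.js directly, assuming the v5 API and that the library has already been loaded (e.g. from a CDN script tag) so the `Tesseract` global exists. tesseract.js loads its WASM engine in a worker itself, so the call doesn't block the page:

```javascript
// Minimal tesseract.js v5 usage sketch. Assumes the library's `Tesseract`
// global is available (loaded via a <script> tag or bundler).
async function ocrImage(imageUrl) {
  // Spin up a worker with the English language model.
  const worker = await Tesseract.createWorker('eng');
  // Run recognition on an image URL, blob URL, or canvas.
  const { data } = await worker.recognize(imageUrl);
  // Release the worker's resources when done.
  await worker.terminate();
  return data.text;
}
```

Everything else in a tool like this is interface code around that one call.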
Simon - hope you don't mind me commenting on you in third person in relation to the above. Simon is a great explainer, but I wish he would credit the underlying technology or library (like tesseract.js) a bit more upfront, like you did.
It matters in this case because for Tesseract, the exact model is incredibly important. For example, v4 is pretty bad (but it's what's available on most Linux distros when run server-side) whereas v5 is decent. So I would have had a more accurate interest in this post if it had been a bit more upfront that "Tesseract.js lets you run OCR against PDFs fairly quickly now, largely because of better processors we as devs have, not because of any real software change in the last 2-3 years".
I felt this way about his NLP content too - but clearly it works, because he's such a great explainer and good at teasing upcoming content so that you do read it! I must say I've never been left confused by Simon's work.
I was pretty careful to credit Tesseract.js - it's linked at the top of the tool itself https://tools.simonwillison.net/ocr and prominently in the article I wrote: https://simonwillison.net/2024/Mar/30/ocr-pdfs-images/
What else should I have done?
It's all subjective. Reading your linked blog made it perfectly clear for me you built this using tesseract.js. No idea what the other guys are complaining about.
You act like you were misled, but the article, within the first few sentences, says he realized the tools are available to do this (including naming tesseract.js explicitly!), he just needed to glue them together. Then he details how he does that, and only then mentions he used an LLM to help him in that process. The author's article title is equally not misleading.
Was it an earlier headline or subtitle here on HN that was misleading, which was later changed?
Using the built-in browser OCR is usually much better but it is still behind an experimental API.
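For reference, the experimental API in question is the Shape Detection API's `TextDetector`; a hedged sketch (assumption: a Chromium build with the relevant feature flag enabled - most browsers don't ship this):

```javascript
// Sketch of the experimental Shape Detection TextDetector API.
// Only available behind flags in some Chromium builds.
async function detectText(source) {
  if (typeof TextDetector === 'undefined') {
    throw new Error('TextDetector not supported in this browser');
  }
  const detector = new TextDetector();
  // `source` can be an image element, canvas, ImageBitmap, or video frame.
  const detections = await detector.detect(source);
  return detections.map((d) => d.rawValue);
}
```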
The LLM part is almost irrelevant to the final result to be honest: I used LLMs to help me build an initial prototype in five minutes that would otherwise have taken me about an hour, but the code really isn't very complex.
The point here is more about highlighting that browsers can do this stuff, and it doesn't take much to wire it all together into a useful interface.
Are these paid posts?
What gave you that idea?
This quote.
The LLM part is almost irrelevant to the final result to be honest
Why would that suggest I'm being paid by anyone?
Oh I think I see. No, I'm not being paid to promote LLMs.
The point of my blog post was two-fold: first, to introduce the OCR tool I built. And second, to provide yet another documented example of how I use LLMs in my daily development work.
The tool doesn't use LLMs itself, they were just a useful speed-up in building it.
It's part of a series of posts, see also: https://simonwillison.net/tags/aiassistedprogramming/
My reasoning: There are hundreds of billions of dollars at stake getting the wider world to embrace LLMs on a long-term basis. If I were a VC, or Nvidia/OpenAI/MS marketing person, I'd be paying trusted names such as yourself to post about using LLMs. That coupled with the loose link between OCR and LLMs in your latest post created an itch I thought was worthy of a scratch.
No, I'm not being paid to promote LLMs.
This is good enough for me, thanks.
Wow, this is promising. I tried it on a few poorly scanned papers I have lying about. A few observations:
1. Pre-process PDF images to detect letters better?
2. Use LLMs to spell/grammar check and perhaps even auto-complete missing pieces?
3. Employ rich text to capture style (ex: lexical.dev)?
Unsure if it is feasible to bundle it all up for web.
See also: https://github.com/RajSolai/TextSnatcher / https://github.com/VikParuchuri/surya
I've been trying out alternative versions of this that pass images through to e.g. the Claude 3 vision models, but they're harder to share with people because they need an API key!
In case you wanted to add a pre-processing step, I found this ImageMagick script useful: https://www.fmwconcepts.com/imagemagick/textcleaner/index.ph...
Not sure how difficult it is to run it in the browser, though.
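A rough browser-side stand-in for one part of what textcleaner does, as a sketch: a global-threshold binarization over canvas pixel data (the threshold of 160 is an arbitrary assumption, and this is nowhere near the full ImageMagick pipeline):

```javascript
// Binarize RGBA pixel data (as returned by canvas getImageData) to pure
// black/white, a common OCR pre-processing step.
function binarize(rgba, threshold = 160) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    // Perceptual luminance from the RGB channels.
    const lum = 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
    const v = lum >= threshold ? 255 : 0;
    out[i] = out[i + 1] = out[i + 2] = v;
    out[i + 3] = 255; // fully opaque alpha
  }
  return out;
}
```

In a browser you would draw the image to a canvas, run this over `getImageData().data`, and `putImageData` the result back before handing it to Tesseract.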
FYI, cert is expired.
Use LLMs to spell/grammar check and perhaps even auto-complete missing pieces?
I would really want human review. Remember that copier that changed digits because it was being clever with compression?
Is there something I can run on my Mac that will systematically OCR every PDF on my drive for easy searching?
My s3-ocr tool could do that with quite a bit of extra configuration.
https://github.com/simonw/s3-ocr
You would need to upload them all to S3 first though, which is a bit of a pain just to run OCR (that's Textract's fault).
You could try stitching together a bunch of scripts to run the CLI version of Tesseract locally.
If it's S3 and the URL is configurable, you could probably run MinIO inside a Docker container locally and save a few minutes?
Sadly to use AWS Textract with PDFs you have to push to an AWS S3 bucket and then pass the bucket and file name to the Textract API.
In the same vein, I'm building a tool [1] to extract tables from PDFs (no OCR yet) and spreadsheets. The end goal is to make it easy to combine data from multiple sources by joining tables and produce some useful reports out of it.
The PDF parsing is done by the excellent PDFplumber Python library [2], the web app is built with Elixir's Phoenix framework and it is all hosted on Fly.io.
[1] https://data-tools.fly.dev [2] https://github.com/jsvine/pdfplumber
I recently built a similar tool except it's configured to use some deep learning libraries for the table extraction. I'm excited to integrate unitable, which has state-of-the-art performance, later this week.
I built this because most of the basic layout detection libraries have terrible performance on anything non trivial. Deep learning is really the long term solution here.
This is extremely cool and exactly what I've been looking for. Looking forward to trying it out.
This is timely. I just completed a few experiments and wrote a little about doing OCR on my handwritten notes.
https://notes.joeldare.com/handwritten-text-recognition
Tesseract was one of the tools I tested, although I used the CLI instead of the WASM version.
Link does not seem to work
Shameless plug: You can also try https://getsearchablepdf.com for batch OCR, it supports images and handwriting.
Does that run locally without the need to share information?
Yes. Nothing leaves your browser.
I was thinking of doing something like this for visually impaired users. The next step is to pipe it into the JavaScript web speech synthesis API.
https://mdn.github.io/dom-examples/web-speech-api/speak-easy...
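A sketch of that wiring, assuming the OCR output arrives as one string. The sentence-splitting regex and the 200-character chunk size are assumptions, chosen because some speech engines cut off very long utterances; `speechSynthesis` is browser-only:

```javascript
// Split text into utterance-sized chunks on sentence boundaries.
function chunkText(text, maxLen = 200) {
  const chunks = [];
  let current = '';
  for (const sentence of text.split(/(?<=[.!?])\s+/)) {
    if ((current + ' ' + sentence).trim().length > maxLen && current) {
      chunks.push(current.trim());
      current = sentence;
    } else {
      current = (current + ' ' + sentence).trim();
    }
  }
  if (current) chunks.push(current.trim());
  return chunks;
}

// Queue each chunk with the Web Speech synthesis API (browser only).
function speak(text) {
  for (const chunk of chunkText(text)) {
    const utterance = new SpeechSynthesisUtterance(chunk);
    window.speechSynthesis.speak(utterance); // chunks play in order
  }
}
```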
We already have our own tools for that, either integrated into screen readers or available as add-ons. Thanks for the thought, though.
The example on the Tesseract.js page shows it highlighting the rectangles of where the selected text originated. Does this level of information get surfaced through the library for consumption?
I just grabbed a two-column academic PDF, which performed as well as you would expect. If I were returned a JSON list of text + coordinates, I could do some dirty munging (e.g. footer is anything below this y index, column 1 is between these x ranges, column 2 is between these other x ranges) to self-assemble it a bit better.
Yes it does, but I've not dug into the more sophisticated parts of the API at all yet. I'm using it in the most basic way possible right now:
const {data: {text}} = await worker.recognize(imageUrl);
The amazing thing here is that this tool was almost entirely written using an LLM. This is very exciting. I have been using GPT-4 a lot lately to make tiny utilities - things I wouldn't even have tried because of how much effort it takes to get started on those simple things.
I always wanted to make a Chrome extension for one thing or another, but all the learning involved around the boilerplate always drained my motivation. With GPT I built the initial POC in an hour, then polished it and even published it on the store. Recently I put together some bash and cmd helper scripts; I don't know either of these well enough (I do know some bash) and don't have it in me to learn them, especially the Windows batch scripts. Using an LLM, it was a matter of an hour to write a script for my needs as either a Windows batch script or a bash script.
I even used GPT to write 2-3 AutoHotkey scripts. LLMs are amazing. If you know what you are looking for, you can direct them to your advantage.
Very exciting to see that people are using LLMs similarly to build things they want and how they want.
This behavior really freaks me out. It raises the potential that hostile URLs will be introduced.
Here's an EasyOCR service: https://github.com/MittaAI/mitta-community/tree/main/service.... It will run locally, or leverage GPUs if they are available.
A PDF-to-image processor is being built and should be out in a few weeks.
No docs, but happy to help anyone wanting to use either. Email is kord @ the company I'm working on.
Tesseract is way outdated though, to the point of being borderline useless when compared to alternatives. What’s the current deep learning based FOSS SOTA, does anyone know? I want something that does what FineReader does - create a high quality searchable text underlay for scanned PDFs.
This is cool! I built something similar but it's CLI based. https://github.com/lifeiswilde/textract-ai
For now, I'm still using OCRmyPDF; it may be slow, but it's incredibly useful.
The files get big, but it just works.
If an alternative is quicker / lighter I'll use it, but it must just work.
This is cool! I've also used tesseract OCR and found it to be pretty amazing in terms of speed and accuracy.
I use it for ingest of image and pdf type files for my own website chatting tool: tinydesk.ai!
I run the backend on an express js server so all js as well.
Smaller docs I do on the client side, but larger ones (>1.5 MB) I've found take forever, so those are processed in the backend.
Tables are still the big unsolved problem for me.
There are a ton of potential tools out there like Tabula and AWS Textract table mode but none of them have felt like the perfect solution.
I've been trying Gemini Pro 1.5 and Claude 3 Opus and they looked like they worked... but in both cases I spotted them getting confused and copying in numbers from the wrong rows.
I think the best I've tried is the camera import mode in iOS Excel! Just wish there was an API for calling that one programmatically.
Out of curiosity have you tried ocrs by Robert Knight? https://github.com/robertknight/ocrs
No I hadn't heard of that one!
Google and Azure have their own PDF Table extraction service but I have noticed Textract is a bit better.
I think the camera import in Excel on macOS works pretty well. You could probably call that version through an API.
Would this be helpful? https://github.com/facebookresearch/nougat
Seems like it can handle tables.
If you're on Windows try https://table2xl.com (disclosure: I'm the founder), it's more accurate than Excel's camera import. No API though.