
Running OCR against PDFs and images directly in the browser

Oras
7 replies
1d19h

This is nice but Tesseract does not perform well when it comes to tables, at least when I tried it on multiple documents.

It would miss some cells from a table, or fail to recognise numbers that contain commas.

simonw
6 replies
1d19h

Tables are still the big unsolved problem for me.

There are a ton of potential tools out there like Tabula and AWS Textract table mode but none of them have felt like the perfect solution.

I've been trying Gemini Pro 1.5 and Claude 3 Opus and they looked like they worked... but in both cases I spotted them getting confused and copying in numbers from the wrong rows.

I think the best I've tried is the camera import mode in iOS Excel! Just wish there was an API for calling that one programmatically.

simonw
0 replies
1d18h

No I hadn't heard of that one!

sumedh
0 replies
1d19h

Google and Azure have their own PDF table extraction services, but I have noticed Textract is a bit better.

maCDzP
0 replies
1d12h

I think the camera import in Excel on macOS works pretty well. You could probably call that version through an API.

f_k
0 replies
1d5h

If you're on Windows, try https://table2xl.com (disclosure: I'm the founder); it's more accurate than Excel's camera import. No API, though.

reliablereason
6 replies
1d22h

Safari already does that. Quite a useful feature.

minimaxir
4 replies
1d21h

Specifically, only Apple Silicon allows automatic OCR. Works on iOS too.

pvg
3 replies
1d21h

It works on Intel Safari as well.

minimaxir
2 replies
1d20h

It doesn't work on my Intel Macs, unless it's just really slow.

pvg
0 replies
1d18h

Maybe it is! I mostly use it to select text in social media images for which it feels reasonably responsive.

__jonas
0 replies
1d18h

It works for me on an Intel MBP (2020), but it's probably a lot slower.

On this page: https://en.wikipedia.org/wiki/Typeface it takes ~10 seconds for the text in the first image to become selectable after page load.

With local images / PDFs in Preview it's really quick, though.

Noumenon72
0 replies
1d15h

Neat.

You can also use macOS's OCR capability to create a shortcut that lets you copy and paste the text out of any region on the screen -- for example, a stack trace someone is showing you in a screen share.

https://apple.stackexchange.com/a/468362

aabhay
6 replies
1d21h

I was really impressed until I realized that the app is basically a wrapper around tesseract.js, which is the actually cool part. Tesseract has a WASM port that can run inside a web worker.

Not saying that the article was being misleading about this, just saying that the LLM part is basically doing some standard interfacing and HTML/CSS/JS around that core engine, which wasn’t immediately obvious to me when scanning the screenshots.

authorfly
2 replies
1d11h

Simon - hope you don't mind me commenting on you in the third person here. Simon is a great explainer, but I wish he would credit the underlying technology or library (like tesseract.js) a bit more upfront, as you did.

It matters in this case because for Tesseract, the exact version is incredibly important. For example, v4 is pretty bad (but is what's available on most Linux distros when run server-side) whereas v5 is decent. So I would have had a more accurate picture of this post if it had been a bit more upfront that "Tesseract.js lets you run OCR against PDFs fairly quickly now, largely because of better processors we as devs have, not because of any real software change in the last 2-3 years".

I felt this before with his NLP content too - but clearly it works, because he's such a great explainer, and you do read it! I must say I've never been left confused by Simon's work.

kuschkufan
0 replies
1d

It's all subjective. Reading your linked blog made it perfectly clear to me that you built this using tesseract.js. No idea what the other guys are complaining about.

targafarian
0 replies
1d4h

You act like you were misled, but the article, within its first few sentences, says he realized the tools are available to do this (including naming tesseract.js explicitly!); he just needed to glue them together. Then he details how he did that, and only then mentions he used an LLM to help him in that process. The author's article title is equally not misleading.

Was there an earlier headline or subtitle here on HN that was misleading, which was then changed?

spullara
0 replies
1d15h

Using the built-in browser OCR is usually much better but it is still behind an experimental API.

simonw
0 replies
1d21h

The LLM part is almost irrelevant to the final result to be honest: I used LLMs to help me build an initial prototype in five minutes that would otherwise have taken me about an hour, but the code really isn't very complex.

The point here is more about highlighting that browsers can do this stuff, and it doesn't take much to wire it all together into a useful interface.

jgalt212
4 replies
1d6h

Are these paid posts?

simonw
3 replies
20h13m

What gave you that idea?

jgalt212
2 replies
18h10m

This quote:

"The LLM part is almost irrelevant to the final result to be honest"

simonw
1 replies
15h41m

Why would that suggest I'm being paid by anyone?

Oh I think I see. No, I'm not being paid to promote LLMs.

The point of my blog post was two-fold: first, to introduce the OCR tool I built. And second, to provide yet another documented example of how I use LLMs in my daily development work.

The tool doesn't use LLMs itself, they were just a useful speed-up in building it.

It's part of a series of posts, see also: https://simonwillison.net/tags/aiassistedprogramming/

jgalt212
0 replies
4h47m

My reasoning: There are hundreds of billions of dollars at stake getting the wider world to embrace LLMs on a long-term basis. If I were a VC, or Nvidia/OpenAI/MS marketing person, I'd be paying trusted names such as yourself to post about using LLMs. That coupled with the loose link between OCR and LLMs in your latest post created an itch I thought was worthy of a scratch.

"No, I'm not being paid to promote LLMs."

This is good enough for me, thanks.

ignoramous
4 replies
1d20h

Wow, this is promising. I tried it on a few poorly scanned papers I have lying about. A few observations:

1. Pre-process PDF images to detect letters better?

2. Use LLMs to spell/grammar check and perhaps even auto-complete missing pieces?

3. Employ rich text to capture style (ex: lexical.dev)?

Unsure if it is feasible to bundle it all up for the web.

See also: https://github.com/RajSolai/TextSnatcher / https://github.com/VikParuchuri/surya
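
For point 1, a minimal sketch of the kind of pre-processing I mean -- a naive global-threshold binarisation pass over an RGBA buffer like the one canvas getImageData returns. The function name and the fixed threshold here are made up for illustration:

```javascript
// Binarise an RGBA pixel buffer: pixels brighter than `threshold`
// become white, the rest black. Alpha is passed through unchanged.
function binarize(rgba, threshold = 128) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    // Luminance approximation (Rec. 601 weights).
    const lum = 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
    const v = lum >= threshold ? 255 : 0;
    out[i] = out[i + 1] = out[i + 2] = v;
    out[i + 3] = rgba[i + 3]; // keep alpha
  }
  return out;
}
```

You would run this on the canvas pixels before handing the image to the OCR engine; real pre-processing would also want deskewing and adaptive thresholding.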

simonw
2 replies
1d20h

I've been trying out alternative versions of this that pass images through to e.g. the Claude 3 vision models, but they're harder to share with people because they need an API key!

CharlesW
0 replies
1d19h

FYI, cert is expired.

yjftsjthsd-h
0 replies
1d16h

"Use LLMs to spell/grammar check and perhaps even auto-complete missing pieces?"

I would really want human review. Remember that copier that changed digits because it was being clever with compression?

voisin
3 replies
1d17h

Is there something I can run on my Mac that will systematically OCR every PDF on my drive for easy searching?

simonw
2 replies
1d17h

My s3-ocr tool could do that with quite a bit of extra configuration.

https://github.com/simonw/s3-ocr

You would need to upload them all to S3 first though, which is a bit of a pain just to run OCR (that's Textract's fault).

You could try stitching together a bunch of scripts to run the CLI version of Tesseract locally.

xrd
1 replies
1d7h

If it's s3 and the url is configurable, you could probably run minio inside a docker container locally and save a few minutes?

simonw
0 replies
22h53m

Sadly to use AWS Textract with PDFs you have to push to an AWS S3 bucket and then pass the bucket and file name to the Textract API.

giovannibonetti
2 replies
1d18h

In the same vein, I'm building a tool [1] to extract tables from PDFs (no OCR yet) and spreadsheets. The end goal is to make it easy to combine data from multiple sources by joining tables and produce some useful reports out of it.

The PDF parsing is done by the excellent PDFplumber Python library [2], the web app is built with Elixir's Phoenix framework and it is all hosted on Fly.io.

[1] https://data-tools.fly.dev
[2] https://github.com/jsvine/pdfplumber

serjester
1 replies
1d17h

I recently built a similar tool except it’s configured to use some deep learning libraries for the table extraction. I’m excited to integrate unitable which has state of the art performance later this week.

I built this because most of the basic layout detection libraries have terrible performance on anything non-trivial. Deep learning is really the long-term solution here.

https://github.com/Filimoa/open-parse

pants2
0 replies
1d12h

This is extremely cool and exactly what I've been looking for. Looking forward to trying it out.

codazoda
2 replies
1d21h

This is timely. I just completed a few experiments and wrote a little about doing OCR on my handwritten notes.

https://notes.joeldare.com/handwritten-text-recognition

Tesseract was one of the tools I tested, although I used the CLI instead of the WASM version.

martin82
0 replies
17h38m

The link does not seem to work.

f_k
0 replies
1d17h

Shameless plug: you can also try https://getsearchablepdf.com for batch OCR; it supports images and handwriting.

zzz999
1 replies
1d18h

Does that run locally without the need to share information?

simonw
0 replies
1d18h

Yes. Nothing leaves your browser.

mwcampbell
0 replies
1d21h

We already have our own tools for that, either integrated into screen readers or available as add-ons. Thanks for the thought, though.

fbdab103
1 replies
1d20h

The example on the Tesseract.js page shows it highlighting the rectangles of where the selected text originated. Does this level of information get surfaced through the library for consumption?

I just grabbed a two-column academic PDF, which performed as well as you would expect. If I was returned a JSON list of text + coordinates, I could do some dirty munging (e.g. footer is anything below this y index, column 1 is between these x ranges, column 2 is between these other x ranges) to self-assemble it a bit better.
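
Something like this, assuming the words come back shaped like { text, bbox: { x0, y0, x1, y1 } } (I haven't verified that's exactly what tesseract.js surfaces) and hand-picked page cut-offs:

```javascript
// Dirty two-column reflow: drop the footer, split words into columns
// by x position, then read each column top-to-bottom.
function reflowTwoColumn(words, { columnSplitX, footerY }) {
  const keep = words.filter((w) => w.bbox.y0 < footerY); // drop footer
  const left = keep.filter((w) => w.bbox.x0 < columnSplitX);
  const right = keep.filter((w) => w.bbox.x0 >= columnSplitX);
  const inReadingOrder = (ws) =>
    ws.slice().sort((a, b) => a.bbox.y0 - b.bbox.y0 || a.bbox.x0 - b.bbox.x0);
  // Column 1 first, then column 2.
  return [...inReadingOrder(left), ...inReadingOrder(right)]
    .map((w) => w.text)
    .join(' ');
}
```

Real pages would need the cut-offs detected per document (e.g. from a histogram of x positions), but even hard-coded values would beat raw left-to-right output.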

simonw
0 replies
1d20h

Yes it does, but I've not dug into the more sophisticated parts of the API at all yet. I'm using it in the most basic way possible right now:

    const {data: {text}} = await worker.recognize(imageUrl);

smusamashah
0 replies
1d18h

The amazing thing here is that this tool was almost entirely written using an LLM. This is very exciting. I have been using GPT-4 a lot lately to make tiny utilities. Things I wouldn't have even tried because of how much effort it takes to get started on those simple things.

I always wanted to make a Chrome extension for one thing or another, but all the learning involved around the boilerplate always drained the motivation. But with GPT I built the initial POC in an hour and then polished it and even published it on the store. Recently I put together some bash and cmd helper scripts; I don't know either of these well enough (I do know some bash) and don't have it in me to learn them, especially the Windows batch scripts. Using an LLM, it was a matter of an hour to write a script for my need as either a Windows batch script or a bash script.

Oh, I even used GPT to write 2-3 AutoHotkey scripts. LLMs are amazing. If you know what you are looking for, you can direct them to your advantage.

Very exciting to see that people are using LLMs similarly to build things they want and how they want.

pyuser583
0 replies
1d13h

This behavior really freaks me out. It raises the potential that hostile URLs will be introduced.

kordlessagain
0 replies
1d5h

Here's an EasyOCR service: https://github.com/MittaAI/mitta-community/tree/main/service.... It will run locally, or leverage GPUs if they are available.

A PDF-to-image processor is being built and should be out in a few weeks.

No docs, but happy to help anyone wanting to use either. Email is kord @ the company I'm working on.

ein0p
0 replies
1d12h

Tesseract is way outdated though, to the point of being borderline useless when compared to alternatives. What’s the current deep learning based FOSS SOTA, does anyone know? I want something that does what FineReader does - create a high quality searchable text underlay for scanned PDFs.

_ache_
0 replies
1d10h

For now, I'm still using OCRmyPDF; it may be slow, but it's incredibly useful.

The files become big, but it just works.

If an alternative is quicker / lighter I will use it, but it must just work.

CaffeinatedDev
0 replies
1d20h

This is cool! I've also used Tesseract OCR and found it to be pretty amazing in terms of speed and accuracy.

I use it for ingesting image and PDF type files for my own website chatting tool: tinydesk.ai!

I run the backend on an Express.js server, so it's all JS as well.

Smaller docs I do on the client side, but larger ones (>1.5 MB) I've found take forever, so those are processed in the backend.
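
The routing decision is basically just a size check; a trivial sketch (names made up, cut-off as above):

```javascript
// Files at or under ~1.5 MB get OCR'd in the browser; anything bigger
// is uploaded and processed server-side instead.
const CLIENT_SIDE_LIMIT_BYTES = 1.5 * 1024 * 1024;

function chooseOcrTarget(fileSizeBytes) {
  return fileSizeBytes <= CLIENT_SIDE_LIMIT_BYTES ? 'client' : 'server';
}
```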