Show HN: Semantic Grep – A Word2Vec-powered search tool

fzeindl
5 replies
14h12m

FYI, there is already a widely used tool (and a company) called semgrep, whose name stems from "semantic grep": https://semgrep.dev/.

tracerbulletx
2 replies
13h15m

God, the amount of enterprise speak in there is overwhelming.

throwaway290
1 replies
9h44m

Because of how it is marketed. But it is real and very much works.

bitexploder
0 replies
4h39m

And you can just go to its GitHub to avoid the product side.

icholy
1 replies
13h40m

The similarities between semgrep and the linked tool start and end with the name.

fzeindl
0 replies
13h17m

Yes, you are right.

fbdab103
5 replies
21h55m

I might have a work use case for which this would be perfect.

Having no experience with word2vec, some reference performance numbers would be great. If I have one million PDF pages, how long is that going to take to encode? How long will it take to search? Is it CPU only or will I get a huge performance benefit if I have a GPU?

9dev
2 replies
21h39m

As someone working extensively with word2vec: I would recommend setting up Elasticsearch. It has support for vector embeddings, so you can process your PDF documents once, write the word2vec embeddings and PDF metadata into an index, and search that in milliseconds later on. Doing live vectorisation is neat for exploring data, but using Elasticsearch will be much more convenient in actual products!
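
For reference, a minimal sketch of that setup (assuming Elasticsearch 8.x, a hypothetical "docs" index, and 300-dimensional word2vec vectors; the query_vector is truncated for the example): first the mapping, then a kNN query.

    PUT /docs
    {
      "mappings": {
        "properties": {
          "text":      { "type": "text" },
          "embedding": { "type": "dense_vector", "dims": 300,
                         "index": true, "similarity": "cosine" }
        }
      }
    }

    POST /docs/_search
    {
      "knn": {
        "field": "embedding",
        "query_vector": [0.12, -0.03, 0.41],
        "k": 10,
        "num_candidates": 100
      }
    }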

michaelmior
1 replies
20h20m

I would personally vote for Postgres and one of the many vector indexing extensions over Elasticsearch. I think Elasticsearch can be more challenging to maintain. Certainly a matter of opinion though. Elasticsearch is a very reasonable choice.
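
For anyone weighing that option, a minimal sketch with the pgvector extension (the extension and the <=> operator are real; the table and column names are made up, and the vector literal is truncated):

    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE docs (
        id        bigserial PRIMARY KEY,
        body      text,
        embedding vector(300)  -- 300 dims to match word2vec
    );

    -- <=> is pgvector's cosine-distance operator: smallest distance first
    SELECT id, body
    FROM docs
    ORDER BY embedding <=> '[0.12, -0.03, 0.41]'::vector
    LIMIT 10;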

9dev
0 replies
11h1m

Elasticsearch is a pain to maintain, the docs are all over the place, and the API is what you end up with if developers run free and implement everything that jumps to their mind.

But there just isn’t anything comparable when it comes to building your own search engine. Postgres with a vector extension is good if all you want to do is a vector search and some SQL (not dismissing it here, I love PG); but if you want more complex search cases, Elasticsearch is the way to go.

kristopolous
0 replies
20h52m

No. Look at the implementation. A GPU isn't going to give you huge gains in this one.

SkyPuncher
0 replies
21h25m

Vector stuff doesn’t take much. It’s essentially the same computational time as regular old text search.

danieldk
5 replies
11h25m

Some small tips from superficially reading the code:

https://github.com/arunsupe/semantic-grep/blob/b7dcc82a7cbab...

You can read the vector all at once. See e.g.:

https://github.com/danieldk/go2vec/blob/ee0e8720a8f518315f35...
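
Something like this, sketched in Go (assuming the usual word2vec binary layout of one little-endian float32 row per word):

    package model

    import (
        "encoding/binary"
        "io"
    )

    // readVector reads one embedding row in a single call instead of
    // one float at a time; binary.Read fills the whole []float32.
    func readVector(r io.Reader, dim int) ([]float32, error) {
        vec := make([]float32, dim)
        if err := binary.Read(r, binary.LittleEndian, vec); err != nil {
            return nil, err
        }
        return vec, nil
    }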

---

https://github.com/arunsupe/semantic-grep/blob/b7dcc82a7cbab...

You can compute the similarity much faster by using BLAS. Good BLAS libraries have SIMD-optimized implementations. Or if you do multiple tokens at once, you can do a matrix-vector multiplication (sgemv), which will be even faster in many implementations. Alternatively, there is probably also a SIMD implementation in Go using assembly (it has been 7 years since I looked at anything in the Go ecosystem).

You could also normalize the vectors while loading. Then during runtime the cosine similarity is just the dot product of the vectors (whether it pays off depends on the size of your embedding matrix and the size of the haystack that you are going to search).
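
A sketch of that last point (hypothetical helpers; once every vector is unit length, cosine similarity reduces to a plain dot product):

    package model

    import "math"

    // normalize scales v to unit length once, at load time.
    func normalize(v []float32) {
        var sum float64
        for _, x := range v {
            sum += float64(x) * float64(x)
        }
        n := float32(math.Sqrt(sum))
        if n == 0 {
            return
        }
        for i := range v {
            v[i] /= n
        }
    }

    // dot is then all the work left per comparison at query time.
    func dot(a, b []float32) float32 {
        var s float32
        for i := range a {
            s += a[i] * b[i]
        }
        return s
    }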

neonsunset
4 replies
6h2m

The SIMD situation in Go is still rather abysmal; it's likely easier to just use FFI (though FFI is very slow, so I guess you're stuck with ugly Go asm if you are using short vectors). As usual, there's a particular high-level language that does it very well, and it has a standard vector similarity function nowadays...

bitexploder
3 replies
4h42m

Go ahead and share the language, it is good etiquette for HN :)

neonsunset
2 replies
3h39m

Actually, there are three:

C#: https://github.com/dotnet/runtime/blob/main/docs/coding-guid... (plus a full family of other types: Vector2/3/4, Matrix3x2/4x4, and the upcoming Tensor<T>). The vector similarity I was talking about is this: https://learn.microsoft.com/en-us/dotnet/api/system.numerics... It uses a highly optimized SIMD kernel; for DotProduct, just use the adjacent method.

Swift: https://developer.apple.com/documentation/swift/simd-vector-... It is also competent at portable SIMD by virtue of using LLVM, and it offers almost the same operator-based API (e.g. masked = vec1 & ~vec2) as C#.

Mojo: https://docs.modular.com/mojo/stdlib/builtin/simd which follows the above two. It too targets LLVM, so expect good SIMD codegen as long as the lowering strategy does it in an LLVM-friendly way, which I have not looked at yet.

e12e
1 replies
1h37m

Not Julia?

neonsunset
0 replies
1h10m

Julia is less "general-purpose" and I know little about the quality of its codegen (it does target LLVM but I haven't seen numbers that place it exactly next to C or C++ which is the case with C#).

Mojo team's blog posts do indicate they care about optimal compiler output, and it seems to have ambitions for a wider domain of application which is why it is mentioned.

drdeca
4 replies
20h55m

Very cool!

Do I understand correctly that this works by splitting each line into words, and using the embedding for each word?

I wonder whether it might be feasible to search by the semantics of longer sequences of text, using some language model (like one of the smaller ones, GPT2-small or something?). Like, so that if you were searching for "die", then "kick the bucket" and "buy the farm" could also match somehow? Though I'm not sure what vector you would use to do the dot product with when there is a sequence of tokens, each with associated key vectors for each head at each layer, rather than a single vector associated with a word. Maybe one of the encoder-decoder models rather than the decoder-only models?

Though, for things like grep, one probably wants things to be very fast and as lightweight as feasible, which I imagine is much more the case with word vectors (as you have here) than it would be using a whole transformer model to produce the vectors.

Maybe if one wanted to catch words that aren’t separated correctly, one could detect if the line isn’t comprised of well-separated words, and if so, find all words that appear as a substring of that line? Though maybe that would be too slow?

throwawaydummy
3 replies
19h45m

I wanna meet the person who greps die, kick the bucket and buy the farm lol

Are models like Mistral there yet in terms of tokens per second to run a grep over millions of files?

ignoramous
2 replies
19h5m

Mistral has published large language models, not embedding models? sgrep uses Google's Word2Vec to generate embeddings of the corpus and perform similarity searches on it, given a user query.

throwawaydummy
1 replies
18h53m

No, I got that. I asked because wouldn't embeddings generated by fine-tuned, transformer-based LLMs be more context-aware? Idk much about the internals, so apologies if this was a dumb thing to say.

throw156754228
3 replies
7h6m

The one-word-to-one-vector model breaks down really quickly once you introduce the context and complexity of human language. That's why we moved to contextual embeddings, but even they have issues.

Curious whether it would handle negation of trained keywords, e.g. "not urgent"?

djoldman
0 replies
5h54m

Definitely a limitation of word2vec as applied here.

Something like SBERT addresses this.

arunsupe
0 replies
6h50m

This implementation does not, because the query has to be a word. One way to extend it to phrases is to average the vectors of each word in the phrase. Another way is to have a word2vec model that embeds phrases. (The large Google News model, I think, does have phrases, with spaces converted to underscores.) But going from words to phrases opens up a whole new can of worms - how large a phrase to consider from the input stream, etc. Plus, I don't think averaging the words in the phrase is the same as learning the embedding for the phrase. Sentence embedding models are necessary for that, but they are far too slow for this use case, as pointed out by others.

To summarize, this is a simple implementation that works for the simplest use case - semantic matching of words.
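
A sketch of the averaging approach mentioned above (hypothetical names; lookup stands in for the model's word-to-vector map):

    package model

    // phraseVector averages the vectors of the words in a phrase.
    func phraseVector(words []string, dim int,
        lookup func(string) ([]float32, bool)) []float32 {

        avg := make([]float32, dim)
        var n float32
        for _, w := range words {
            v, ok := lookup(w)
            if !ok {
                continue // skip out-of-vocabulary words
            }
            for i := range avg {
                avg[i] += v[i]
            }
            n++
        }
        if n > 0 {
            for i := range avg {
                avg[i] /= n
            }
        }
        return avg
    }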

VHRanger
0 replies
3h37m

For a lot of cases, word2vec/GloVe still work plenty well. They also run much faster and lighter during development -- the FSE library [1] does 0.5M sentences/sec on CPU, whereas the fastest sentence_transformers [2] do something like 20k sentences/sec on a V100 (!!).

For the drawbacks:

Word embeddings are only good at similarity search style queries - stuff like paraphrasing.

Negation they'll necessarily struggle with. Since word embeddings are generally summed or averaged into a sentence embedding, a negation won't shift the sentence vector space around the way it would in a LM embedding.
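
To make that concrete with plain mean pooling:

    emb("not urgent") = (v_not + v_urgent) / 2

"not" is a high-frequency function word whose vector carries little specific meaning, so the average stays close to v_urgent: the negation nudges the sentence vector instead of flipping it.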

Also, things like homonyms are issues, but this is massively overblown as a reason to use LM embeddings (at least for Latin/Germanic languages).

Most people use LM embeddings because they've been told it's the best thing by other people, rather than benchmarking accuracy and performance for their use case.

1. https://github.com/oborchers/Fast_Sentence_Embeddings

2. https://www.sbert.net/docs/sentence_transformer/pretrained_m...

sitkack
2 replies
22h58m

Your post might have been flagged because of your example?

arunsupe
1 replies
22h25m

Oh. Please explain. The example was entirely arbitrary. Should I change it?

threatofrain
0 replies
22h18m

IMO no. There are enough people who will vouch.

samatman
1 replies
20h49m

This is a good idea. I'm going to offer some unsolicited feedback here:

The configuration thing is unclear to me. I think that "current directory" means "same directory as the binary", but it could mean pwd.

Neither of those is good: configuration doesn't belong where the binaries go, and it's obviously wrong to look for configs in the working directory.

I suggest checking $XDG_CONFIG_HOME, and defaulting to `~/.config/sgrep/config.toml`.

That extension is not a typo, btw. JSON is unpleasant to edit for configuration purposes, TOML is not.

Or you could use an ENV variable directly, if the only thing that needs configuring is the model's location, that would be fine as well.
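
A sketch of that lookup order in Go (the SGREP_CONFIG name is made up; the XDG fallback is the standard one):

    package main

    import (
        "os"
        "path/filepath"
    )

    // configPath resolves the config file: an explicit env var wins,
    // then $XDG_CONFIG_HOME, then the ~/.config default.
    func configPath() string {
        if p := os.Getenv("SGREP_CONFIG"); p != "" {
            return p
        }
        base := os.Getenv("XDG_CONFIG_HOME")
        if base == "" {
            home, err := os.UserHomeDir()
            if err != nil {
                return ""
            }
            base = filepath.Join(home, ".config")
        }
        return filepath.Join(base, "sgrep", "config.toml")
    }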

If that were the on ramp, I'd be giving feedback on the program instead. I do think it's a clever idea and I'd like to try it out.

SloopJon
0 replies
16h2m

I take a Show HN to be a solicitation for feedback.

It took me a few tries to get this to run. I tried passing -model_path, but it complained that it couldn't find config.json. I then made a config.json in the pwd (which contained the executable), but it couldn't expand "~". I then tried searching *.txt, but it couldn't find anything unless I specified just one file.

In terms of offering an experience similar to grep, it's a lot slower. It's not quite as slow with the SLIM file, but I wonder if there's some low-hanging fruit for optimization. Perhaps caching similarity for input tokens?
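
That caching idea might look something like this (a sketch; sim stands in for whatever cosine routine the tool uses, and it assumes one query per run):

    package main

    // simCache memoizes query-to-token similarity so tokens repeated
    // across lines are only scored once per invocation.
    type simCache struct {
        scores map[string]float64
        sim    func(token string) float64 // query is fixed for the run
    }

    func newSimCache(sim func(string) float64) *simCache {
        return &simCache{scores: map[string]float64{}, sim: sim}
    }

    func (c *simCache) score(token string) float64 {
        if s, ok := c.scores[token]; ok {
            return s
        }
        s := c.sim(token)
        c.scores[token] = s
        return s
    }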

rldjbpin
1 replies
9h32m

Document search, an LLM use case almost all companies want for their white-collar workers, currently means building a RAG pipeline.

Vector search is the first step of that process, and I'd argue that getting the top-n results and letting the user process them is probably the best of both worlds without involving "AI".

This can be enhanced with multilingual support, depending on the encoder used. But building the index is still an expensive process, and I wonder how that could be done fast for a local user.

onli
1 replies
21h38m

That's totally clever and sounds really useful. And it's one of those ideas where you go "Why didn't I think of that?" when stumbling over the materials, word2vec in this case.

infruset
1 replies
8h7m

Very nice! How might one go about adapting this to other languages? Does a downloadable version of the model exist somewhere?

arunsupe
0 replies
7h38m

I would think translation to other languages would be trivial. The model is just a map of word to vector. Every word is converted to its vector representation; then the query word is compared to the input words using cosine similarity.
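
In sketch form (hypothetical names; lookup and cosine stand in for the model map and similarity routine):

    package main

    import "strings"

    // lineMatches reports whether any word on the line is semantically
    // close enough to the query's vector.
    func lineMatches(line string, queryVec []float32, threshold float32,
        lookup func(string) ([]float32, bool),
        cosine func(a, b []float32) float32) bool {

        for _, w := range strings.Fields(line) {
            if v, ok := lookup(w); ok && cosine(queryVec, v) >= threshold {
                return true
            }
        }
        return false
    }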

_flux
1 replies
12h16m

I wonder if it would be possible to easily add support for multiple CPUs? It seems to be taking at most 150% CPU, so on my workstation it could be (assuming high parallelism) 10 times as fast.
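
One way to get there, sketched (a goroutine worker pool; note this does not preserve output order, which a real patch would have to handle):

    package main

    import (
        "bufio"
        "os"
        "runtime"
        "sync"
    )

    // scanParallel fans lines out to one matching worker per CPU.
    func scanParallel(match func(string) bool, emit func(string)) error {
        lines := make(chan string, 1024)
        var wg sync.WaitGroup
        var mu sync.Mutex // serializes emit

        for i := 0; i < runtime.NumCPU(); i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for line := range lines {
                    if match(line) {
                        mu.Lock()
                        emit(line)
                        mu.Unlock()
                    }
                }
            }()
        }

        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            lines <- sc.Text()
        }
        close(lines)
        wg.Wait()
        return sc.Err()
    }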

Alas the word2vec repository has reached its quota:

    fetch: Fetching reference refs/heads/master
    batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.
    error: failed to fetch some objects from 'https://github.com/mmihaltz/word2vec-GoogleNews-vectors.git/info/lfs'

So here are other sources I found for it: https://stackoverflow.com/a/43423646

I also found https://huggingface.co/fse/word2vec-google-news-300/tree/mai... but I'm unsure if that's the correct format for this tool. The first source, from Google Drive, seems to work, and there's little chance of it being malicious.

arunsupe
0 replies
11h19m

The model in Google drive is the official model from Google and will work.

Haven't tried the Hugging Face model, but it looks very different. Unlikely to work.

tgw43279w
0 replies
7h6m

I really like how simple the implementation is!

sam_perez
0 replies
16h10m

This tool seems really cool, want to play with it for sure.

Just in general, semantic search across text seems like a much better ux for many applications. Wish it was more prevalent.

randcraw
0 replies
20h51m

This would be really useful if it could take a descriptive phrase or a compound phrase (like SQL 'select X and Y and Z') and match against the semantic cluster(s) that the query forms. IMO that's the greatest failing of today's search engines -- they're all one-hit wonders.

pgroves
0 replies
21h21m

How fast is it?

gunalx
0 replies
21h57m

Really cool. Often I just want to fuzzy search for a word, and this would be useful. Can it do filenames as well? Or do I need to pipe something like ls first?

danielmarkbruce
0 replies
15h25m

damn, great thinking.