
Show HN: Turn any website into a knowledge base for LLMs

muggermuch
14 replies
2d23h

I like this a lot!

But: I feel that the more of these services come into being, the more likely it is that every website starts putting up gates to keep the bots away.

Sort of like a weird GenAI take on Cixin Liu's Dark Forest hypothesis (https://en.wikipedia.org/wiki/Dark_forest_hypothesis).

(Edited to add a reference.)

throw10920
7 replies
2d15h

I feel that the more of these services come into being, the more likely it is that every website starts putting up gates to keep the bots away

That's why we need microtransactions, because I'd rather be able to have both nice AI services and useful data repositories that they pull from, than have to choose just one. (and that one would be AI services, because you can't stop all the scrapers, so data sources will just keep tightening their restrictions)

pjc50
6 replies
2d8h

We're never going to have microtransactions because of microfraud - and AI makes this problem worse rather than better.

throw10920
5 replies
2d7h

What is this thing you've invented, "microfraud", and where's the evidence it exists?

pjc50
4 replies
2d7h

All financial transactions have a fraud risk. Microtransactions are no different. But any microtransaction system faces a choice: continually pop up payment confirmations (unusably annoying), or automatically accept charges (vulnerable to fraud).

Click fraud on adverts is a form of microfraud, and pay-per-click is the existing form of microtransaction.

throw10920
3 replies
2d7h

There's zero evidence that this exists, if only because there are very few examples of working microtransaction systems at all. Adverts are not microtransactions. (I don't care how you want to use that word - nobody else uses it like that.)

But, all of the systems that I've seen (Blendle, video games) have had no problem at all with fraud, and a very small amount of annoyance relative to the value delivered.

There's simply no reason to believe that this will be a problem, either empirically or theoretically.

pjc50
2 replies
2d6h

"I'm going to build a system for transferring money, and I am confident that nobody will ever try to defraud it" -- someone who is about to lose all their money

Previously: https://news.ycombinator.com/item?id=15592192

Unsolved, difficult problems of micropayments:

- pay before viewing: how do you know that the thing you're paying for is the thing that you're expecting? What if it's a rickroll or goatse?

- so do you give refunds a la steam?

- pay and adverts: double-dipping is very annoying

- pay and adverts: how do you know who you're paying? A page appears with a micropayment request, but how do you know you've not just paid the advertiser to view their ad?

- pay and frame: can you have multiple payees per displayed page? (this has good and bad ideas)

- pay and popups: it's going to be like those notification or app install modals, yet another annoyance for people to bounce off

- pay limits: contactless has a £30 limit here. Would you have the same payment system suitable for $.01 payments and $1000 payments? How easy is it to trick people into paying over the odds (see refunds)?

- pay and censors: who's excluded from the payment system? Why?

If it was that easy, it would have been done.

Part 2: business model problems!

- getting money into the system is plagued by usual fraud problems of card TX for pure digital goods

- nobody wants to build a federated system; everyone wants to build a Play/Apple/Steam store where they take 30%

- winner-take-all effects are strong

- Play store et al already exist, why not use that?

- Free substitute goods are just a click away

- Consumers will pirate anything no matter how cheap the original is

- No real consumer demand for micropayments

=> lemma from previous 3 items: market for online goods is efficient enough to drive all marginal prices to zero

- existing problem of the play store letting your kid spend all the money

- friction: it would be great if you didn't have to repeatedly approve things, such as a micropayment for every page of a webcomic archive. But blanket approval lets bad actors drain the jar or inattentive users waste it and then feel conned

- first most obvious model for making this work is porn, which is inevitably blacklisted by the payment processors, has a worse environment for fraud/chargebacks, and is toxic to VCs (see Patreon and even Craigslist)

- Internet has actually killed previously working micro-ish payment systems such as Minitel, paid ringtones (anyone remember the dark era of Crazy Frog?); surviving ones like premium SMS and phone have a scammy, seedy feel.

- accounting requirements: do you have to pay VAT on that micropayment? do you have to declare it? Is it a federal offence to sell something to an Iranian or North Korean for one cent?

throw10920
1 replies
2d5h

"I'm going to build a system for transferring money, and I am confident that nobody will ever try to defraud it" -- someone who is about to lose all their money

You seem to have conjured the impression that micropayment systems have to be radically different from current payment models, which is wildly mistaken.

You can build an effective micropayment system using only currently available tools (digital wallets, microcurrencies, digital storefronts, review systems) that have most/all of the nice properties of existing platforms, which invalidates almost every single point you make.

Few of these points seem very well thought-out - they're mostly relatively easily refuted by using logic and/or pointing to what the industry is already doing.

pay before viewing: how do you know that the thing you're paying for is the thing that you're expecting?

In the exact same way as current digital storefronts.

How easy is it to trick people into paying over the odds

What are "the odds"? Are we betting now?

so do you give refunds a la steam?

Yes, exactly like current digital storefronts.

- pay and adverts: double-dipping is very annoying

Exactly like current digital platforms (e.g. Spotify, YouTube premium).

- pay and adverts: how do you know who you're paying?

What does this even mean - how do "adverts" factor into "how do you know who you're paying"??

pay and frame: can you have multiple payees per displayed page?

What does this mean??

- pay and popups: it's going to be like those notification or app install modals, yet another annoyance for people to bounce off

A theory that is trivially dispelled by empirical evidence of the tens of billions of dollars in microtransactions that US players spend on free-to-play games. You create a microtransaction wallet currency that is roughly equivalent to normal money, and then you pay for things by clicking on them, like in a normal game with microtransactions. Empirically, people get used to this very quickly and the friction becomes unnoticeable.

- pay limits: contactless has a £30 limit here

What does any of this have to do with contactless payments???

Would you have the same payment system suitable for $.01 payments and $1000 payments?

With a few seconds of thought it's pretty easy to come up with systems that handle both of those cases well. For instance, you could require holding down a button to purchase something with your microcurrency, with the duration of the hold (nonlinearly) proportional to the cost of the item.
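As a toy illustration (every constant here is invented), the hold time could grow with the logarithm of the price, so a one-cent purchase is near-instant while a $1000 one forces a deliberate pause:

    import math

    def hold_seconds(price_usd: float) -> float:
        # ~0.1s for a $0.01 item, ~5s for a $1000 item, logarithmic in between
        price = max(price_usd, 0.01)
        return 0.1 + 0.98 * math.log10(price / 0.01)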

- pay and censors: who's excluded from the payment system? Why?

Exactly the same as current platforms - the platforms/wallet providers determine that.

> If it was that easy, it would have been done.

Objectively false. There are many good ideas that have failed because of market factors or poor marketing. In this case, the prevalence of ads, and people's generosity at a time before scraping had truly taken off, are subsidizing the market. The generosity will decrease and doesn't scale, and I shouldn't have to point out the problems with ads.

- getting money into the system is plagued by usual fraud problems of card TX for pure digital goods

So you handle that exactly the same way as current platforms - you use a payment processor.

- nobody wants to build a federated system; everyone wants to build a Play/Apple/Steam store where they take 30%

I have to pay 30% already. I'd rather pay directly than with my eyeballs and brain (through ads). This is a problem, but it's better to implement a solution, and then lobby for regulation requiring an open, interoperable payment protocol.

- winner-take-all effects are strong

Sure? We have the same problem for ads and platforms currently.

- Play store et al already exist, why not use that?

This was already answered by vlehto in a response[1] to the comment you linked above, which you conveniently left out here. I'll quote:

Play store does not have the content I want. And it seems overly difficult to post the content I want to make. And if I want to pay for engagement with the content, that's not an option.

Patreon is a lot closer to what suits my needs. But even that is too inflexible on how to use payments and to what you should pay. So far everybody seems to be making their micro-transaction payment models "flexible" by making the amount paid flexible. But that's exactly the one thing I want to be fixed. I'd like to host my entirely own webpage and patreon just to handle the money from exactly the kind of transaction I want.

Patreon is obviously not a micropayment platform and grossly inadequate for, well, almost anything - it's run by bad people who take large cuts and screw over creators, the friction to use it is incredibly high, and the payment model (subscriptions) does not scale well and isn't fair to smaller creators.

- Free substitute goods are just a click away

...and yet, somehow people still pay for things online. This is quite the non-argument.

- Consumers will pirate anything no matter how cheap the original is

Piracy is obviously evil and I'm doing my best to fight it. But this isn't an argument. In fact, the commonly-touted line "piracy is a service problem" logically implies that low-friction micropayments will make piracy less prevalent, not more.

- No real consumer demand for micropayments

See above points about the market being subsidized by ads.

=> lemma from previous 3 items: market for online goods is efficient enough to drive all marginal prices to zero

...and yet people still pay money for things.

- existing problem of the play store letting your kid spend all the money

This has nothing to do with microtransactions at all. This is just a platform permissions problem.

- friction: it would be great if you didn't have to repeatedly approve things, such as a micropayment for every page of a webcomic archive. But blanket approval lets bad actors drain the jar or inattentive users waste it and then feel conned

It's trivial for someone to break up their webcomic into chapters and have users pay for the bundle. If a particular comic creator doesn't, then they'll very quickly implement that as their readers get incredibly annoyed and leave. And the vast majority of comic creators will use an existing platform to host instead of rolling the microtransaction system themselves. As for being conned? We handle that in exactly the same way as current digital storefronts.

- first most obvious model for making this work is porn

And? I don't see how this is relevant.

- Internet has actually killed previously working micro-ish payment systems

See above points about the market being subsidized by ads, systems being launched before their time, etc.

surviving ones like premium SMS and phone have a scammy, seedy feel

That's purely a function of those things, and is not intrinsic to microtransactions, as evidenced by F2P games.

- accounting requirements: do you have to pay VAT on that micropayment? do you have to declare it? Is it a federal offence to sell something to an Iranian or North Korean for one cent?

We handle this in exactly the same way as current platforms/payment processors.

[1] https://news.ycombinator.com/item?id=15592626

ben_w
0 replies
1d22h

You seem to have conjured the impression that micropayment systems have to be radically different from current payment models, which is wildly mistaken.

His claim is that the existing system has fraud, therefore microtransactions will have an analogous thing he named "microfraud" — so you agree with him now?

marcellus23
4 replies
2d20h

Responding just because it's a pet peeve of mine: Cixin Liu did not invent the dark forest hypothesis. People were discussing it, and writing science fiction books about it, for decades before the 3BP books were published. Nothing against him, and he definitely helped popularize the concept, but I think it's incorrect to refer to it as "Cixin Liu's hypothesis".

sooheon
1 replies
2d11h

I was curious, as a lover of the 3BP series, and Google gave me this:

"We've been sitting in our tree chirping like foolish birds for over a century now, wondering why no other birds answered. The galactic skies are full of hawks, that's why." (The Forge of God, Legend edition, 1989, pg 315).

Yeah, same concept and even the same imagery.

Source: https://warwick.ac.uk/fac/sci/physics/research/astro/people/...

marcellus23
0 replies
2d3h

The Forge of God and its sequel, Anvil of the Stars, are amazing books for anyone interested in the dark forest theory, by the way. A bit slow and contemplative, so you have to be in the right mood, but they're among my favorite reads of the last few years.

I think there's a passage that even uses an analogy of a forest, though I'm not sure.

dankwizard
1 replies
2d17h

But he is responsible for the name, not the concept. So yes, it is Cixin Liu's Dark Forest hypothesis.

diggan
0 replies
2d17h

Just like Amerigo Vespucci put the name "America" on a map and people started referring to the New World as such, although he didn't discover it himself.

tremarley
0 replies
2d18h

This would be amazing

jakubsuchy
10 replies
2d16h

I feel like this is unethical. You built yet another bot scraper. It would only be an ethical tool if it validated that I own the website being scraped before it starts.

realusername
4 replies
2d11h

Well, Google itself is just an unethical bot scraper then...

elric
3 replies
2d9h

Several lawsuits have confirmed that. Google regurgitating articles from French newspaper sites comes to mind.

This is not an easy problem to solve. In my naive take, authors get to decide how their work is used, not scrapers.

fkyoureadthedoc
1 replies
2d5h

In my naive take, authors get to decide how their work is used, not scrapers.

Inasmuch as they've put it on the public web they've already made a decision on who gets to see it, and you really can't stop people from doing what they want with it on a personal level.

If that's print it out and put it on a wall in my house, or use whatever tools I have at my disposal to consume it in any way I please, there's not really anything the author can do about it.

kmoser
0 replies
2d3h

Copyright law says otherwise. As for enforcing the law, you're right, it may be difficult for individual authors to move the needle. But that doesn't mean it's OK for scrapers to violate the law.

As to what constitutes fair use, that's a whole other story: some scraping may be found to be legal while other scraping may not. Benefiting monetarily from legally dubious scraping only makes that scraping look more infringe-y. Of course, nothing is settled law until a court decides.

realusername
0 replies
2d7h

The French newspapers blatantly lied about how metadata tags work during the EU debates, so I wouldn't trust them on this subject.

That was actually a big enlightening moment for me: as soon as money was involved, the so-called ethics went out of the window instantly. From the far-left newspapers to the far-right ones, they all lied on this topic. Only a handful of tech blogs and newspapers told the truth.

corn13read2
2 replies
2d16h

Yes, only big conglomerates can scrape pages now. If you're not Google stealing the info, then... right?

jakubsuchy
1 replies
2d7h

I didn't say that, but a site owner should have the right to decide.

In addition, this scraper doesn't even identify itself (I checked). It pretends to be a normal browser, without saying it's a scraper.

talldatethrow
0 replies
2d1h

I think a website owner can decide. They can take the site down, or they can put it behind some kind of user wall.

visarga
1 replies
2d11h

This is probably a losing direction - protecting your little island of content in the sea of internet and LLM outputs. Get more value by exposure. This is the trend of open source, Wikipedia and open scientific publication. LLMs double down on the same collaborative approach to intelligence.

You can of course decouple from the big discussion and isolate your content with access restrictions, but the really interesting activity will be outside. Look, for example, at llama.cpp and the other open source AI tools we have gotten recently. So much energy and enthusiasm, so much collaboration. Closed stuff doesn't get that level of energy.

I think IP laws are in for a reckoning; protecting creativity by restricting it is not the best idea in the world. There are better models. Copyright is anachronistic: it was invented in the era of the printing press, when copying first became easy. LLMs remix, they don't simply copy; even the name is unfitting for the new reality. We need to rename it remixright.

pjc50
0 replies
2d8h

Get more value by exposure

The LLM era doesn't give credit or attribution to its sources. It erases exposure. So there's a disincentive to collaborate with it, because it only takes.

I think IP laws are in for a reckoning, protecting creativity by restricting it is not the best idea in the world.

We've been having this discussion for over 20 years since the Napster era, or even the era of elaborate anti piracy measures for computer games distributed on tapes 40 years ago.

I've reached the conclusion that the stable equilibrium is "small shadow world": enough IP leakage for piracy and preservation, but on a noncommercial scale. We sit with our Plex boxes and our adblockers, knowing that 90% of the world isn't doing that and is paying for it. Too much control is an IP monopoly stranglehold where it costs multiple dollars to set a song as your phone ringtone or briefly heard background music gets your video vaporised off social media. Too _little_ control and eventually there is actually a real economic loss from piracy, and original content does not get made.

AI presents a third threat: unlimited pseudo-creative "slop", which is cheap and adequate to fill people's scrolling time but does not pay humans for its creation and atrophies the creative ecosystem.

MattDaEskimo
7 replies
2d17h

In my opinion this is a transitional niche.

Soon websites/apps whatever you want to call them will have their own built-in handling for AI.

It's inefficient and rude to be scraping pages for content. Especially for profit.

temporary_name
3 replies
2d14h

I agree that this niche is DOA. No offence to OP but the barrier to entry for this stuff is low. I built basically the same thing over a weekend for personal use: React frontend, Python server, Chroma for embeddings, SQLite cache, switching between OpenAI and Anthropic (I want to add Llama for full local execution when I get a better PC). I have a local SPA with named "projects", can configure crawl depth from a start page, can set my crawl rate, don't have to pay to use it, and can choose any provider I want... I'm just one guy and that took a day to get working, plus a bit of polish.
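Roughly, the core loop looks like this (a simplified sketch, not my exact code; Chroma's default embedding function handles the vectors):

    import chromadb
    import requests
    from bs4 import BeautifulSoup

    client = chromadb.PersistentClient(path="./kb")
    docs = client.get_or_create_collection("project-docs")

    def ingest(url: str, chunk_size: int = 1000) -> None:
        # Fetch, strip to text, chunk naively, and embed into Chroma
        html = requests.get(url, timeout=10).text
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        docs.add(documents=chunks, ids=[f"{url}#{i}" for i in range(len(chunks))])

    ingest("https://example.com/docs")
    hits = docs.query(query_texts=["how do I configure logging?"], n_results=3)
    print(hits["documents"][0])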

I would guess the hardest thing by far in developing the advertised product would be user management, authentication, payments and wrapping the subscription model's business logic around the core loop. And probably scaling, as running embeddings over hundreds of scraped pages adds up quickly when free tier users start hammering you.

My question when deciding to sell something I've built is, if building the service model is harder than building the actual service, where is the value add?

My take on the natural evolution is that collating and caching documents, websites, etc. for search (ideally with source attribution) is a problem that will, I think, ultimately be solved by OS vendors. Why sign up for SaaS and expose all your content to untrustworthy 3rd parties, when it's built right in and handled by your "trusty" OS?

In the meantime, I reckon someone more dedicated than me will (or probably already has) open source something like I built but better, probably as a CLI tool, which will eventually reach maturity and be stolen cough I mean adopted by the top end of town.

Ethically I think nothing's changed for centuries in regards to plagiarism and attribution. It gets easier to copy work and thinking, but it also ultimately gets easier to acknowledge sources. Good folk will do the right thing as they always have done.

Regarding efficiency, I think tools like this have a place in making access to relevant, summarised knowledge during general research more efficient - doing the broad strokes to find areas of interest to zoom in on, at which point more traditional approaches take over.

Interesting times anyway. I have to give credit to people that try, but I'm taking a back seat in thinking of ideas to productise in this space, as by the time I've thought it through, something new comes along that instantly makes it obsolete.

purple-leafy
0 replies
2d12h

God. Some people on hackernews suck.

This isn’t “niche”, it’s a pretty cool thing OP has built.

How about instead of commenting and trivialising what people have done, you say something positive

hluska
0 replies
2d4h

Your last paragraph really says it all. You haven’t accomplished anything in the space and you’re not willing to try. So you’re just going to hate on everyone who does.

Nobody cares how you would build it because you haven’t. At least not in any form that we can see.

CPLX
0 replies
2d8h

the barrier for entry to this stuff is low. I built basically the same thing over a weekend for personal use. React frontend, python server, chroma for embeddings, sqlite cache

Lmfao. God bless HN for keeping this meme going for decades by now.

obilgic
1 replies
2d15h

I think each website/data-source having its own built-in AI is also transitional.

It is like every website having a search engine vs Google.

sooheon
0 replies
2d11h

We already have the Google analogue (an LLM that's seen all the websites), so are we going in circles?

muratsu
0 replies
2d16h

I doubt it. For larger players, data is valuable, so they are preventing scraping already (e.g. Reddit, LinkedIn). For smaller websites there's also not much of an incentive. Maybe hosting providers will help with preventing scraping, like DDoS protection?

samuria
6 replies
4d12h

Interesting, I wanted to do this for a personal use case (mostly learning), but with PDFs. What's the tech stack? I have explored using the AWS AI tools, but they seem a bit overkill for what I want to do.

tompec
2 replies
4d9h

Tech stack is a mix of serverless Laravel, with Cloudflare and AWS functions, and some Pinecone for vector storage. Still experimenting on a few things but don't want to over-engineer unless I know where I'm going.
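The Pinecone side boils down to upserts and queries; generic client usage looks like this (an illustration with an invented index name and dimension, not the actual embedding.io code):

    from pinecone import Pinecone

    pc = Pinecone(api_key="YOUR_KEY")
    index = pc.Index("website-chunks")  # index name invented for this sketch

    # Upsert one embedded chunk, keeping its source URL as metadata
    index.upsert(vectors=[{
        "id": "example.com/docs#0",
        "values": [0.0] * 1536,  # embedding from your model of choice
        "metadata": {"url": "https://example.com/docs", "text": "chunk text"},
    }])

    # Retrieve the closest chunks for a query embedding
    result = index.query(vector=[0.0] * 1536, top_k=5, include_metadata=True)
    for match in result.matches:
        print(match.score, match.metadata["url"])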

stevenicr
1 replies
3d

Given that Cloudflare spies on traffic and reports its findings to multiple agencies, perhaps a breakdown of the chain and the privacy implications of each block in the stack would be beneficial?

stevenicr
0 replies
2d20h

Ya know, a downvote on this pre-Aug 2019 would be fine.

People still being ignorant about their publicly posted policies 5 years later is annoying.

lou1306
0 replies
4d7h

If the PDFs are textual or have OCR, then pdftotext from the Poppler suite ought to be enough? If not, add Tesseract/ocrmypdf to the pipeline?
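Something like this sketch, assuming the poppler-utils and ocrmypdf CLIs are installed:

    import subprocess

    def pdf_to_text(pdf_path: str) -> str:
        # Try Poppler's pdftotext first ("-" writes to stdout)
        out = subprocess.run(["pdftotext", pdf_path, "-"],
                             capture_output=True, text=True)
        if out.stdout.strip():
            return out.stdout
        # No text layer: OCR into a new PDF, then extract again
        subprocess.run(["ocrmypdf", pdf_path, "ocr.pdf"], check=True)
        return subprocess.run(["pdftotext", "ocr.pdf", "-"],
                              capture_output=True, text=True).stdout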

cranberryturkey
5 replies
4d17h

how does this work?

tompec
2 replies
4d17h

Give it URLs or domains, and it will crawl and extract their content, embed them in a vector database, and give you an endpoint that you can then query when doing RAG stuff or semantic search.
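From your side, querying the endpoint then looks something like this (the URL, fields, and response shape below are illustrative only - see the docs for the real API):

    import requests

    resp = requests.post(
        "https://api.example.com/collections/my-docs/query",  # hypothetical URL
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        json={"query": "how do I rotate API keys?", "top_k": 5},
        timeout=10,
    )
    for hit in resp.json()["results"]:  # hypothetical response shape
        print(hit["score"], hit["url"], hit["text"][:80])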

xiconfjs
1 replies
4d12h

But how does it work in the background? What's the tech stack?

ramon156
0 replies
4d8h

In another comment:

Tech stack is a mix of serverless Laravel, with Cloudflare and AWS functions, and some Pinecone for vector storage. Still experimenting on a few things but don't want to over-engineer unless I know where I'm going.

kordlessagain
0 replies
4d3h

I do this with https://mitta.ai by using a Playwright container that does a callback to a pipeline, which either uses metadata from the PDF or sends it to an EasyOCR deployment on a GPU instance on Google for text extraction. Then I use a custom chunker and instructor/xl embeddings.

All of that code is open source, and works well for most sites. Some sites block Google IPs, but the Playwright container can run locally, so it should be able to work around that with minimal effort.
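For reference, the OCR and embedding steps are standard library usage, roughly like this (a simplified sketch, not the exact pipeline):

    import easyocr
    from InstructorEmbedding import INSTRUCTOR

    # Pull text out of a rendered page image (downloads models on first run)
    reader = easyocr.Reader(["en"])
    text = " ".join(res[1] for res in reader.readtext("page.png"))

    # INSTRUCTOR takes [instruction, text] pairs; the instruction is up to you
    model = INSTRUCTOR("hkunlp/instructor-xl")
    vecs = model.encode([["Represent the document chunk for retrieval:", text]])
    print(vecs.shape)  # one row per input pair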

pryelluw
4 replies
2d23h

Does this respect robots.txt?

tompec
1 replies
2d20h

It does respect robots.txt when crawling. I'll add more details about this in the docs.
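The check itself is essentially what Python's standard library provides (a sketch; the user agent string here is made up):

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()
    if rp.can_fetch("ExampleBot/1.0", "https://example.com/docs/page"):
        print("allowed to crawl")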

pryelluw
0 replies
2d19h

I appreciate the reply. As someone who runs multiple CMSs, it's painful to deal with the AI crawlers these days. Especially the ones that don't respect my terms.

srameshc
0 replies
2d22h

Valid question and I am sure it doesn't.

danirod
0 replies
2d22h

I hope this gets answered.

Also, I've checked their docs to see if there is any mention of the user agents or IP ranges they use for scraping, with no luck.

kaycebasques
4 replies
2d18h

I spent a lot of time thinking about how to manage embeddings for docs sites. This is basically the same solution that I landed on but never got around to shipping as a general-purpose product.

A key question that the docs should answer (and perhaps the "How it works" page too): chunking. Do you generate an embedding for the entire page, or do you generate embeddings for sections? And what's the size limit per page? Some of our docs pages have thousands of words per page. I'm doubtful you can ingest all of that, and even if you can, I doubt the embedding would be that useful in practice.

tompec
3 replies
2d18h

I chunk pages and generate embeddings for each chunk. So there's no real size limit per page.

kaycebasques
2 replies
2d17h

The more detail, the better. If `<section>` elements are found you chunk those? Do you do it recursively or do you stop after a certain level? And when section elements don't exist, you use `<h1>`, `<h2>`, etc. to infer logical chunks?

tompec
1 replies
2d15h

Having looked at a lot of HTML, I noticed that sections are not really the default. I rely on headings (h1, h2, ...) to chunk each page. Each chunk has its heading hierarchy attached to it. There are a lot of optimizations that could be done at that level.
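In spirit it's something like this (a simplified sketch with BeautifulSoup, not the production code):

    from bs4 import BeautifulSoup

    def chunk_by_headings(html: str) -> list[dict]:
        soup = BeautifulSoup(html, "html.parser")
        chunks, stack, buffer = [], [], []

        def flush():
            # Emit the buffered text with the current heading hierarchy
            if buffer:
                chunks.append({"headings": [h for _, h in stack],
                               "text": " ".join(buffer)})
                buffer.clear()

        for el in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6", "p", "li"]):
            if el.name.startswith("h"):
                flush()
                level = int(el.name[1])
                # Pop headings at the same or deeper level, then push this one
                while stack and stack[-1][0] >= level:
                    stack.pop()
                stack.append((level, el.get_text(strip=True)))
            else:
                buffer.append(el.get_text(strip=True))
        flush()
        return chunks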

chasd00
0 replies
2d5h

I'm just guessing, but I would think that whatever semantics lead to the highest search rank in Google's algorithm would be what you're most likely to find out in the wild.

jeanloolz
3 replies
2d6h

I built a similar thing as a Python library that does just that: https://github.com/philippe2803/contentmap

Blog post that explains the rationale behind the library: https://philippeoger.com/pages/can-we-rag-the-whole-web

Just submit your XML sitemap to a Python class, and it will do the crawling, chunking, vectorizing, and storage in an SQLite file for you. It uses the SQLiteVSS integration with LangChain, but I'm thinking of moving away from that and doing an integration with the new sqlite-vec instead.

xrd
1 replies
2d3h

I know sqlite-vss has been upgraded lately, but it was unstable for a while before that. Are you having good experiences with it?

jeanloolz
0 replies
2h37m

Actually, sqlite-vss has been untouched for quite some time, and the creator has officially communicated that it is deprecated and replaced by sqlite-vec, which has recently seen its first non-alpha release (v0.1.0). So I would embrace sqlite-vec now if I were you.

I have not used sqlite-vec much because until now it was only in alpha, but it finally came out a few days ago. I'm looking into integrating it and using it to make SQLite more of my go-to RAG database.
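The sqlite-vec API itself is small; the basic flow looks like this (the 384-dim embeddings are an assumption for the example):

    import sqlite3
    import sqlite_vec
    from sqlite_vec import serialize_float32

    db = sqlite3.connect(":memory:")
    db.enable_load_extension(True)
    sqlite_vec.load(db)
    db.enable_load_extension(False)

    # vec0 virtual table holding float32 vectors
    db.execute("CREATE VIRTUAL TABLE chunks USING vec0(embedding float[384])")
    db.execute("INSERT INTO chunks(rowid, embedding) VALUES (?, ?)",
               (1, serialize_float32([0.1] * 384)))

    # KNN query: closest rows to the query vector
    rows = db.execute(
        "SELECT rowid, distance FROM chunks "
        "WHERE embedding MATCH ? ORDER BY distance LIMIT 5",
        (serialize_float32([0.1] * 384),),
    ).fetchall()
    print(rows)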

samstave
0 replies
16h5m

This is part of a dream of a tool I would like:

A relational crawler on a particular subject, with nuanced, opaque, seemingly temporally unrelated connections that show a particular MIC conduction of acts:

"Follow all the congress members who have been a part of a particular committee, track their signatory/support for particular ACTs that have been passed, and look at their investment history from open data, quiver, etc - and show language in any public speaking talking about conflicts and arms deals occurring whereby their support of the funding for said conflicts are traceable to their ACTs, committee seat, speaking engagements, investment profit and reporting as compared to their stated net worth over each year as compared to the stated gains stated by their filings for investment. Apply this pattern to all congress, and their public-profile orbit of folks, without violating their otherwise private-related actions."

And give it a series of URLs with known content from which these nuances may be gleaned.

Or have a trainer bot that constantly consumes this context from the open internet, such that you can just have a graph of the data over time...

Python: run it all through txtai / your library's nodes and ask questions of the data in real time?

(And it reminds me of the work of this fine person/it:

https://mlops.systems/#category=isafpr

https://mlops.systems/#category=afghanistan

ancras
3 replies
3d8h

This is interesting. Can it work with any website, even, say, document repositories hosted on standard platforms like GitBook?

tompec
2 replies
3d8h

It works with pretty much any website, and yes, it works well with docs hosted on GitBook; I have embedded a website that's hosted there.

webappguy
1 replies
3d

The confirmation email doesn't work, so I cannot try it. I attempted twice and checked spam.

tompec
0 replies
2d21h

Apologies, please email me at support at embedding.io. If you have something you'd like embedded, please also mention it so I can set it up for you.

23B1
3 replies
2d23h

I tried it out. This would be extremely useful to me, to the point that I'd happily pay for it, as it's something I would otherwise have had to spend a long time hacking together.

1) The returned output from a query seems pretty limited in length and breadth.

2) No apparent way to adjust my prompts to improve/adjust the output - e.g. it's not really "conversational" (not sure if that is your intent)

Otherwise keep developing and be sure to push update notifications to your new mailing list! ;-)

dmje
1 replies
2d22h

Agree with this. I also think the emphasis here (to OP) should be on "I'd be willing to happily pay for it" - i.e. I'd rather pay a reasonable amount each month for something that is going to remain active than have the large (current) disparity between "free" and "enterprise". I'd say make some middle tiers of (I don't know?) $5 / $10 / $20 a month for reasonable numbers of queries or whatever. Keep the "enterprise" offering there for the biggies, but offer us small players some hope that this will be sufficiently funded / supported.

Brilliant idea, btw, I like it :-)

tompec
0 replies
2d20h

Thanks! I'm still figuring things out about pricing, but there will be small plans available.

tompec
0 replies
2d21h

Thanks! The chat demo is actually just a small thing I put together as a preview of what can be done; the main product is the API. But seeing that most users seem to like the chat, there's probably something there... If you want to email me at support at embedding.io with some requirements, I can see how to make that work for you.

suyash
2 replies
2d3h

Can it get content that is gated/behind login ?

onemoresoop
1 replies
1d23h

How would you expect it to do that??

suyash
0 replies
1d22h

There are ways to do that; just wondering if this tool has the capability or not.

replete
2 replies
2d6h

Does anyone know of a way to do this locally with Ollama? The 'chat with documentation' thing is something I was thinking of a week ago when dealing with hallucinating cloud AI. I think it'd be worth the energy to embed a set of documentation locally to help with development

navbaker
0 replies
2d4h

Yes, LangChain has tooling specifically for connecting to Ollama that can be chained with other tooling in their library to pull in your documents, chunk them, and store them for RAG. See here for a good example notebook:

https://github.com/langchain-ai/langchain/blob/master/cookbo...
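The short version, assuming a running Ollama daemon with the models already pulled, looks roughly like:

    from langchain_community.document_loaders import WebBaseLoader
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.llms import Ollama
    from langchain_community.vectorstores import Chroma
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    # Pull in the docs, chunk them, and embed everything locally
    docs = WebBaseLoader("https://example.com/docs").load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    store = Chroma.from_documents(splitter.split_documents(docs),
                                  OllamaEmbeddings(model="nomic-embed-text"))

    # Answer questions against the retrieved chunks only
    llm = Ollama(model="llama3")
    question = "How do I configure X?"
    context = "\n\n".join(d.page_content
                          for d in store.similarity_search(question, k=4))
    print(llm.invoke(f"Answer using only this context:\n{context}\n\nQ: {question}"))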

is_true
2 replies
2d4h

Do you plan on doing revenue sharing with the site owners?

mmkos
1 replies
2d1h

Do OpenAI and all the other LLM behemoths?

is_true
0 replies
1d5h

This is a completely different product.

have_faith
2 replies
2d5h

Looks cool! Anything about how it compares to similar RAG-as-a-service products? It's something I've been researching a little.

FWIW, the pricing model of jumping from free to "contact us" is slightly ominous.

crngefest
1 replies
2d3h

With these early stage startups it often means they haven’t really figured out how to price their product and will cut you a very generous deal if you push a bit

notatoad
0 replies
2d

But what does "a generous deal" even mean? They have no pricing listed at all, so there's no reference point. If I want to index ten pages more than the free tier allows, is it going to cost me $10/mo or $10,000/mo?

boredemployee
2 replies
2d16h

Does it embed images as well? If not, do you plan to do so?

tompec
1 replies
2d14h

It doesn't embed images, no. But that's a great idea for the roadmap!

boredemployee
0 replies
1d16h

Great, I really want a feature like that! I'd like to query my knowledge base about images as well!

blackeyeblitzar
2 replies
4d11h

Is there a way to deal with websites where you need to login? Like subscription based sites?

tompec
1 replies
4d9h

Unless you own those sites, I'm afraid that's not going to be possible.

brianfryer
0 replies
2d14h

But if I do own those sites, there’s a way?

Cynddl
2 replies
2d21h

I find it interesting that as an (edit: UK) academic researcher, I would likely be forbidden to use tools like this, which fail basic ethics standards, regulations such as GDPR, and practical standards such as respecting robots.txt [given there's no information on embedding.io, it's unlikely I can block the crawler when designing a website].

There's still room for an ethical development of such crawlers and technologies, but it needs to be consent-first, with strong ethical and legal standards. The crazy development of such tools has been a massive issue for a number of small online organisations that struggle with poorly implemented or maintained bots (as discussed for OpenStreetMap or Read The Docs).

popcorncowboy
1 replies
2d4h

I'm less convinced. Are you saying it's unethical to automate browsing a site?

Because if you save the pages you browse on some site, they're yours (authors don't own your cache).

Perhaps you're arguing that if you wrote a lightweight script/browser (which is just your user agent) to save some website for offline use, that'd be unethical and GDPR violating? Again, I don't think so but maybe I'm missing something. But perhaps this turns on what defines a "user agent".

Perhaps this becomes a "depth of pre-fetch" question. If your browser prefetches linked pages, that's "automated" downloading, akin to the script approach above. Downloading. To your cache. Which you own. (Where I struggle to see an ethical violation)

Genuinely curious where the line is, or what exactly here is triggering ethics, GDPR and practical standards?

Cynddl
0 replies
2d3h

Maybe a good illustration would be Clearview AI. They are scraping websites, extracting information (images), and training ML models to learn embeddings (distances between faces). They indiscriminately collect personal data without opt-in, with only a limited opt-out mechanism.

In this case, if this tool is used to scrape a website, there are two direct issues: 1/ no immediate way for the website owner to exclude this particular scraper (what is the user agent?); 2/ no way for data subjects (whose data is present on the website) to check whether the scraper learned their personal data in the embeddings. Data being available publicly doesn't mean it can be widely used [at least outside the US, where we have much stricter rules on scraping].

vladde
1 replies
2d11h

The example API key on the page decodes to "WOW YOU'RE A HACKER" :)

soundblaster
0 replies
23h3m

That == at the end looked like a base64-encoded string ;)

rvz
1 replies
2d19h

Enterprise: Contact Us

If there are no certifications or compliance information, then I don't think there is anything to discuss about any enterprise plan.

tompec
0 replies
2d19h

Gotta start somewhere :)

rcarmo
1 replies
2d23h

How do I feed it a sitemap?

tompec
0 replies
2d21h

It currently tries to find a sitemap on its own, but letting users add their own is on the roadmap.

oars
1 replies
2d7h

Interested in seeing whether this will be widespread in 5 years or whether sites will have fought back.

dartos
0 replies
2d5h

Sites are already fighting back.

Twitter and Reddit locked down their APIs. Soon enough, you’ll need an account to even access any content

michaelmior
1 replies
3d

This looks interesting, but I get a 404 on the iframe when I try to go into the chat.

tompec
0 replies
2d20h

Sorry about that, a bit too much load at the moment

khanan
1 replies
2d23h

Can this be deployed on-prem or is it a cloud-toy?

tompec
0 replies
2d20h

Currently just a cloud-toy.

danirogerc
1 replies
2d22h

Can I query multiple vectorized websites at once? Can I export vectorized websites and host them myself? Any chance to export them to a no-code format, like PDF?

tompec
0 replies
2d21h

You can group as many websites as you want into a collection. Then query that collection. Not sure what you mean by exporting; you would like to export the vectors themselves? Or just the chunks of text from the websites?

vulture916
0 replies
2d3h

Experiencing many Internal Server Errors.

suyash
0 replies
2d3h

Any open source tools for doing just this?

r0b05
0 replies
3d

Does it hallucinate much?

olalonde
0 replies
2d18h

Which LLM is it using?

nashashmi
0 replies
2d2h

Would you share the source? I want to use this for a private internal network of pages. How would that work?

mattfrommars
0 replies
2d14h

So I provide a URL, and your service does the crawling of the site?

lua-steve
0 replies
1d23h

How does this handle changes to the website? Does it re-crawl the whole site periodically and regenerate the embeddings? Or is there some sort of diff-checker that only picks up pages that have been changed, added, or deleted?

hluska
0 replies
2d4h

I like the concept, the documentation is very good and I even enjoy the domain name. This is an excellent launch and congratulations on getting it out.

dmitrygr
0 replies
2d1h

  > Turn any website into a knowledge base for LLMs
I would pay for the opposite product: make your website completely unusable/unreadable by LLMs while readable by real humans, with low false positive rates.

dazbradbury
0 replies
3d

Nice! What's the underlying model / RAG approach being used? It'd be good to understand that part, as presumably it will have a big impact on the performance / usability of the results.

crowcroft
0 replies
2d23h

I like this. Abstracting away the management of embeddings and vector database is something I desperately want, and adding in website crawling is useful as well.

ckluis
0 replies
2d4h

How much does it cost?

breck
0 replies
2d16h

#1. Gratuitous self promotion (but also my honest best advice): The future of knowledge bases is ScrollSets: https://sets.scroll.pub/

#2. If you are interested in knowledge bases, see #1

barrenko
0 replies
2d5h

Will this work for forums?

ashwinnair99
0 replies
5h47m

How are you deciding on the best RAG configuration for your app? How do you decide the chunking strategy, embeddings, and retrievers? Check out our open source tool, RAGBuilder, which allows developers to get to the top-performing RAG setup for their data: https://news.ycombinator.com/item?id=41145093

alok-g
0 replies
2d6h

Would be great to use for developer documentation for various languages, frameworks and libraries.